patent_id (string, 7-8 chars) | description (string, 125-2.47M chars) | length (int64, 125-2.47M) |
---|---|---|
11861016 | While each of the drawing figures depicts a particular embodiment for purposes of depicting a clear example, other embodiments may omit, add to, reorder, and/or modify any of the elements shown in the drawing figures. For purposes of depicting clear examples, one or more figures may be described with reference to one or more other figures, but using the particular arrangement depicted in the one or more other figures is not required in other embodiments. DETAILED DESCRIPTION In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention. Embodiments are described herein according to the following outline:1.0 Introduction2.0 Structural and Functional Overview3.0 Process Overview4.0 Hardware Overview 1.0 INTRODUCTION The embodiments disclosed herein are related to exploit prediction based on machine learning. One or more machine learning computers may be used to generate a prediction of whether an exploit will be developed for a particular vulnerability and/or a prediction of whether an exploit to be developed for a particular vulnerability will be used in an attack. As used herein, a prediction of “whether” an event will occur may also include more specific information about the event, such as when the event will occur, how many times the event will occur, the probability or likelihood that the event will occur, and/or the like. A separate system may interact with the one or more machine learning computers to provide training and input data as well as to receive output data comprising predictions. The system comprises storage media, one or more processors, and one or more programs stored in the storage media and configured for execution by the one or more processors. The system provides, to the one or more machine learning computers, training data with which to generate one or more predictive models. The training data may comprise one or more features corresponding to vulnerabilities that have been selected for training the one or more machine learning computers. The one or more predictive models may include a classification model, a linear regression model, and/or the like. Thus, the one or more predictive models may establish a correlation between the one or more features and whether an exploit will be developed for a particular vulnerability and/or whether an exploit to be developed for a particular vulnerability will be used in an attack. In some embodiments, the correlation is established using a subset of the training data that corresponds to vulnerabilities for which exploits have already been developed. In an embodiment, the system provides first training data with which to generate a first predictive model. The training data comprises a first plurality of vulnerabilities that have been selected for training the first predictive model. First output data is generated based on applying the first predictive model to the first training data. Based on the first output data, a selected set of the first training data is provided to one or more machine learning computers to train a second predictive model. 
The selected set may comprise vulnerabilities of the first plurality of vulnerabilities that were indicated by the first output data to be likely (a) to have an exploit developed for them and/or (b) that an exploit to be developed will be used in an attack. The system also provides, to the one or more machine learning computers, input data that comprises the one or more features. The one or more features correspond to a second plurality of vulnerabilities that do not yet have exploits developed for them. In some embodiments, the input data also comprises one or more predictions generated by the one or more machine learning computers. For example, the input data may comprise a prediction that a particular vulnerability will have an exploit developed for it, a prediction that an exploit will be developed for a particular vulnerability within a particular number of days of publishing the particular vulnerability, and/or the like. In an embodiment, based on output data generated by applying a first predictive model to a set of input data, a subset of the input data is selected. The second predictive model is applied to the selected subset of input data. The system receives, from one or more machine learning computers, output data generated based on applying one or more predictive models to the input data. For example, the system receives output data generated by applying the first predictive model and/or the second predictive model. The output data indicates which of the second plurality of vulnerabilities is predicted to have exploits developed for them; when, if ever, exploits are predicted to be developed for them; and/or which of the second plurality of vulnerabilities is predicted to be attacked. In some embodiments, the output data comprises predicted values of one or more of the aforementioned features, such as the developed exploit feature, the exploit development time feature, and/or the successful/unsuccessful attack features. 2.0 STRUCTURAL AND FUNCTIONAL OVERVIEW Referring to the example embodiment ofFIG.1, machine learning computer(s)100are communicatively coupled to a system comprising risk assessment computer(s)102and database(s)104. Although not explicitly depicted inFIG.1, a network connection typically separates machine learning computer(s)100from the system. Machine learning computer(s)100and the system may reside on the same network or on different networks. For example, machine learning computer(s)100may provide a cloud-based service, such as a machine learning product provided by the Amazon Web Services™ cloud computing platform. Each of the logical and/or functional units depicted in the figures or described herein may be implemented using any of the techniques further described herein in connection withFIG.3. While the figures include lines that indicate various devices and/or logical units being communicatively coupled, each of the systems, computers, devices, storage, and logic may be communicatively coupled with each other. As used herein, a “computer” may be one or more physical computers, virtual computers, and/or computing devices. For example, a computer may be a server computer; a cloud-based computer; a cloud-based cluster of computers; a virtual machine instance or virtual machine computing elements such as a virtual processor, storage, and memory; a data center; a storage device; a desktop computer; a laptop computer; a mobile device; and/or the like. A computer may be a client and/or a server. 
Any reference to “a computer” herein may mean one or more computers, unless expressly stated otherwise. 2.1 Machine Learning Computer(s) As mentioned above, machine learning is used to generate a plurality of prediction models that are used to predict whether an exploit will be developed for a particular vulnerability and/or whether an exploit to be developed for a particular vulnerability will be used in an attack. Machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. Machine learning explores the study and construction of algorithms that can learn from and make predictions based on data. Such algorithms operate by building a model from an example training set of input observations in order to make data-driven predictions or decisions expressed as outputs, rather than following strictly static program instructions. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms is infeasible. Example applications include spam filtering, optical character recognition (OCR), search engines, and computer vision. Within the field of data analytics, machine learning is a method used to devise complex models and algorithms that lend themselves to prediction. These analytical models allow researchers, data scientists, engineers, and analysts to produce reliable, repeatable decisions and results as well as to uncover hidden insights through learning from historical relationships and trends in the data. Any machine learning technique may be used to generate the one or more prediction models. Examples of machine learning algorithms include random forest, decision tree learning, association rule learning, artificial neural network, support vector machines, and/or Bayesian networks. Embodiments are not limited to any particular type of machine learning technique or algorithm. Referring to FIG. 1, machine learning computer(s) 100 comprise modeling logic 106 and prediction logic 108. Machine learning computer(s) 100 receive training data 110 and input data 112 from risk assessment computer(s) 102, and machine learning computer(s) 100 send output data 114 to risk assessment computer(s) 102. 2.1.1 Modeling Logic Modeling logic 106 processes training data 110 and implements one or more machine learning techniques to generate one or more prediction models. Training data 110 corresponds to a plurality of software vulnerabilities referred to herein as a “training set” of software vulnerabilities. More specifically, training data 110 comprises a number of features for each software vulnerability in the training set. Any of a variety of prediction models can be used. Example prediction models include a binary classification model, a logistic regression model, a multiclass classification model, a multinomial logistic regression model, and/or a linear regression model. In some embodiments, modeling logic 106 generates a prediction model for determining whether and/or when an exploit will be developed for a particular software vulnerability. Training data 110 may comprise a developed exploit feature and/or a developed exploit time feature for each software vulnerability in the training set. Training data 110 may further comprise one or more other features, such as one or more prevalence features, attack features, and/or the like. This enables modeling logic 106 to generate the prediction model based on the one or more other features. 
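A minimal sketch of that training step follows, assuming a scikit-learn logistic regression as the binary classification model and made-up feature values (prevalence in millions of instances plus an assumed severity score); none of these specifics are mandated by the description.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training data 110: one row per software vulnerability in the training set.
# Columns: prevalence feature (millions of instances), assumed severity score.
X_train = np.array([
    [2.0,   9.8],
    [0.15,  5.3],
    [3.5,   7.5],
    [0.012, 4.0],
])
# Developed exploit feature: 1 = exploit was developed, 0 = it was not.
y_train = np.array([1, 0, 1, 0])

# Binary classification model correlating the features with the label.
model = LogisticRegression().fit(X_train, y_train)

# Score an unseen vulnerability: probability that an exploit will be developed.
print(model.predict_proba([[0.8, 8.1]])[0, 1])
```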
In some embodiments, modeling logic106generates a prediction model for determining whether an exploit to be developed for a particular software vulnerability will be used in an attack. Training data110may comprise a developed exploit feature/developed exploit time feature and an attack feature. Training data110may further comprise one or more other features, such as one or more prevalence features. This enables modeling logic106to generate the prediction model based on the one or more other features. 2.1.2 Prediction Logic Prediction logic108applies one or more prediction models to at least some of input data112to generate output data114. Input data112corresponds to a plurality of software vulnerabilities that have yet to have an exploit developed for them. Output data114comprises predictions regarding the plurality of software vulnerabilities. In some embodiments, the predictions serve as features used to generate other predictions. In some embodiments, the predictions are used to adjust the risk scores of the plurality of software vulnerabilities. For example, input data112may comprise a prevalence feature, but not a developed exploit feature/developed exploit time feature, for each software vulnerability of a plurality of software vulnerabilities. Prediction logic108may apply a prediction model for determining whether and/or when an exploit will be developed for a particular software vulnerability. Thus, values of a developed exploit feature/developed exploit time feature may be predicted. These values may be sent to risk assessment computer(s)102as output data114or at least some of these values may be used as input data for predicting values of an attack feature. If predicted values of a developed exploit feature/developed exploit time feature are used as input data, prediction logic108may apply a prediction model for determining whether an exploit to be developed for a particular software vulnerability will be used in an attack. For example, if the predicted value of a developed exploit feature corresponds to “No”, then the predicted value of an attack feature would also correspond to “No”; however, if the predicted value of a developed exploit feature corresponds to “Yes”, then the predicted value of an attack feature may correspond to “Yes” or “No” depending on the values of other features, such as a prevalence feature. Thus, values of an attack feature may be predicted. These values may be sent to risk assessment computer(s)102as output data114. 2.2 Risk Assessment System In the example ofFIG.1, a risk assessment system comprises risk assessment computer(s)102and database(s)104. Risk assessment computer(s)102is (are) communicatively coupled to database(s)104. 2.2.1 Risk Assessment Computer(s) Risk assessment computer(s)102comprise vulnerability selection logic116and score adjustment logic118. Vulnerability selection logic116generates training data110and input data112. Score adjustment logic118processes output data114. 2.2.1.1 Vulnerability Selection Logic Vulnerability selection logic116may generate training data110based on interacting with database(s)104. More specifically, vulnerability selection logic116may determine which of the software vulnerabilities stored in database(s)104are to be included in a training set. 
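A sketch of that selection, assuming the stored vulnerability records are available as Python dictionaries with illustrative identifiers and field names: labelled records are eligible for the training set, and records with no exploit developed yet become input data.

```python
# Stand-ins for rows of vulnerability data; identifiers and fields are made up.
records = [
    {"id": "VULN-A", "prevalence": 2_000_000, "exploit_developed": True,  "attacked": True},
    {"id": "VULN-B", "prevalence": 150_000,   "exploit_developed": False, "attacked": False},
    {"id": "VULN-C", "prevalence": 40_000,    "exploit_developed": None,  "attacked": None},  # recently published
]

# Labelled vulnerabilities can be included in training data 110; vulnerabilities
# with no exploit developed yet are candidates for input data 112.
training_set = [r for r in records if r["exploit_developed"] is not None]
input_data = [r for r in records if r["exploit_developed"] is None]

print([r["id"] for r in training_set])  # ['VULN-A', 'VULN-B']
print([r["id"] for r in input_data])    # ['VULN-C']
```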
For example, to cause generation of a prediction model for determining whether and/or when an exploit will be developed for a particular software vulnerability, vulnerability selection logic 116 may include, in the training set, a plurality of software vulnerabilities, wherein each software vulnerability in the training set has a value for a developed exploit feature and/or a value for a developed exploit time feature. Additionally or alternatively, to cause generation of a prediction model for determining whether an exploit to be developed for a particular software vulnerability will be used in an attack, vulnerability selection logic 116 may include, in the training set, a plurality of software vulnerabilities, where each software vulnerability in the training set has values for a developed exploit feature/developed exploit time feature and an attack feature. Vulnerability selection logic 116 also generates input data 112. In some embodiments, vulnerability selection logic 116 determines which of the software vulnerabilities stored in database(s) 104 do not yet have an exploit developed for them and includes one or more features for them in input data 112. For example, input data 112 may include recently published software vulnerabilities. When a prediction model for determining whether and/or when an exploit will be developed for a particular software vulnerability is applied to input data 112, machine learning computer(s) 100 generate(s) predictions as to whether and/or when exploits will be developed for the software vulnerabilities of input data 112. When a prediction model for determining whether an exploit to be developed for a particular software vulnerability will be used in an attack is applied to input data 112, machine learning computer(s) 100 generate(s) predictions as to whether exploits to be developed for the software vulnerabilities of input data 112 will be used in attacks. In some embodiments, vulnerability selection logic 116 generates input data based on a subset of predictions generated by machine learning computer(s) 100. For example, at time T1, vulnerability selection logic 116 may include features of software vulnerabilities A-C in input data 112 to a first prediction model. At time T2, vulnerability selection logic 116 may receive output data 114 comprising predictions indicating that software vulnerabilities A and B, but not C, will have exploits developed for them. At time T3, vulnerability selection logic 116 may include features of software vulnerabilities A and B, but not C, in input data 112 to a second prediction model that is different from the first prediction model. Input data 112 may include predicted values for a developed exploit feature/developed exploit time feature. At time T4, risk assessment computer(s) 102 may receive output data 114 comprising predictions indicating whether software vulnerabilities A and B will have exploits developed for them that will be used in attacks. 2.2.1.2 Score Adjustment Logic In an embodiment, score adjustment logic 118 modifies risk scores for software vulnerabilities based on output data 114. Modified risk scores may be stored in database(s) 104. For example, software vulnerability A may be a recently published vulnerability having a risk score of seventy out of one hundred. If software vulnerability A is predicted to have an exploit developed for it, then the risk score may be increased to eighty. If the exploit is predicted to be used in an attack, the risk score may be increased to ninety. 
Additionally or alternatively, if no exploit is predicted to be developed for it, the risk score may be decreased to sixty. 2.2.2 Database(s) Database(s) 104 may be implemented on any storage medium, including volatile or non-volatile storage media. Database(s) 104 store vulnerability data 120. FIG. 2 illustrates example vulnerability data 120. The vulnerability data may correlate a plurality of vulnerabilities with features of software vulnerabilities. In FIG. 2, example features 216-224 correspond to a plurality of software vulnerabilities 200-214. In the illustrated example, the features include prevalence feature 216 (how prevalent the software vulnerability is), developed exploit feature 218 (whether an exploit was developed for the software vulnerability), exploit development time feature 220 (amount of time taken to develop an exploit for the software vulnerability), attack feature 222 (whether an exploit for the software vulnerability was used in an attack), and score feature 224 (a score corresponding to the software vulnerability). Each software vulnerability corresponds to a respective set of feature values. For example, software vulnerability 200 has a value of “2,000,000” for prevalence feature 216, indicating, for example, that there have been 2,000,000 instances of software vulnerability 200; a value of “7” for exploit development time feature 220, indicating, for example, that an exploit for software vulnerability 200 was developed after 7 days; a value of “YES” for developed exploit feature 218, indicating that an exploit has been developed for software vulnerability 200; a value of “25” for attack feature 222, indicating, for example, that an exploit for software vulnerability 200 was used in 25 attacks; and a value of “95” for score feature 224, indicating a risk score of 95 for software vulnerability 200. For the purpose of illustrating a clear example, FIG. 2 depicts example features 216-224 as being organized in a structured format. However, some features may exist as unstructured data that may or may not undergo feature transformation to enable organization in a structured format. Non-limiting examples of feature transformation involve tokenization, n-grams, orthogonal sparse bigrams, quantile binning, normalization, and Cartesian products of multiple features. 3.0 PROCESS OVERVIEW FIG. 3 is a flow diagram that depicts an example approach for exploit prediction based on machine learning. In some embodiments, the approach is performed by risk assessment computer(s) 102. At block 300, first training data is provided to one or more machine learning computers. The training data comprises one or more features for each software vulnerability in a training set. The one or more machine learning computers generate a first model for determining whether an exploit will be developed for a particular software vulnerability based on a plurality of features of the particular software vulnerability. Additionally or alternatively, the first model determines whether an exploit to be developed for a particular software vulnerability will be used in an attack. In an embodiment, the first model determines a score, probability, or other data value that indicates a likelihood of whether an exploit will be developed for the particular software vulnerability and/or whether an exploit to be developed for a particular software vulnerability will be used in an attack. 
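A sketch of the score adjustment logic 118 of section 2.2.1.2 follows, hard-coding the seventy/eighty/ninety/sixty worked example above; the fixed ten-point steps outside that example are an assumption.

```python
def adjust_risk_score(base_score, exploit_predicted, attack_predicted):
    """Adjust a vulnerability risk score based on prediction output data 114."""
    if not exploit_predicted:
        return max(base_score - 10, 0)   # e.g. 70 -> 60
    adjusted = base_score + 10           # e.g. 70 -> 80
    if attack_predicted:
        adjusted += 10                   # e.g. 80 -> 90
    return min(adjusted, 100)

print(adjust_risk_score(70, True, True))    # 90
print(adjust_risk_score(70, True, False))   # 80
print(adjust_risk_score(70, False, False))  # 60
```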
As an example, the first model may determine that, for a particular software vulnerability, there is a 35% chance that an exploit will be developed for the particular software vulnerability. At block302, the first model is applied to the first training data. Block302involves providing the first training data to one or more machine learning computers, which apply or execute the first model to generate predictions for each training instance in the first training set. Referring to the above example, the first model may determine that, for a particular training instance, there is a 35% chance that an exploit will be developed for the corresponding software vulnerability. Based on the predictions generated by the first model, one or more training instances of the first training data are added to second training data. In an embodiment, the one or more training instances are added to second training data if they are predicted to be likely to have an exploit developed for the corresponding software vulnerability and/or an exploit to be developed for the corresponding software vulnerability will be used in an attack. In some embodiments, the first model may indicate a ‘Yes’ or a ‘No’ as to whether an exploit will be developed and/or used in an attack. The training instance may be added to the second training data if the first model predicts a ‘Yes’ exploit and/or attack. In other embodiments, the first model indicates a data value that indicates the likelihood, such as a percentage or a probability. The training instance may be added to the second training data if it exceeds a threshold value. Referring to the above example, the training instance may be added to the second training data if there is over 10% chance of an exploit and/or attack. The selected threshold value may be a different value depending on the embodiment. A threshold value may be selected to reduce the number of false positives and/or false negatives generated by the first model. The first model may be tuned such that the number of false positives and/or false negatives is under a threshold amount. At block304, the second training data is provided to the one or more machine learning computers. The second training data is a strict subset of the first training data. The one or more machine learning computers generate a second model for determining whether an exploit will be developed for a particular software vulnerability based on a plurality of features of the particular software vulnerability. Additionally or alternatively, the second model determines whether an exploit to be developed for a particular software vulnerability will be used in an attack. In an embodiment, the second model uses the same plurality of features as the first model. In other embodiments, the plurality of features is different than the first model. In an embodiment, the second model is trained to make the same type of determination as the first model. Additionally, the second model may generate the same type of output as the first model. At block306, first input data is provided to the one or more machine learning computers. The input data comprises one or more features for a plurality of software vulnerabilities that do not yet have an exploit developed for them. Thus, the one or more machine learning computers apply the first model to generate predictions for the input data based on the one or more features. 
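A sketch of the two-stage training selection at blocks 302-304 described above, under stated assumptions: scikit-learn logistic regression as the model type, made-up feature values, and the 10% threshold taken from the worked example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# First training data: made-up features (prevalence in millions, severity score)
# and the developed exploit feature as the label.
X_first = np.array([[2.0, 9.8], [0.15, 5.3], [3.5, 7.5], [0.01, 4.0],
                    [0.9, 8.0], [0.005, 2.1]])
y_first = np.array([1, 0, 1, 0, 1, 0])

first_model = LogisticRegression().fit(X_first, y_first)
p_exploit = first_model.predict_proba(X_first)[:, 1]

# "Over 10% chance" from the example; in practice the threshold would be tuned
# to trade false positives against false negatives.
THRESHOLD = 0.10
keep = p_exploit > THRESHOLD

# The second training data is drawn as a subset of the first training data.
X_second, y_second = X_first[keep], y_first[keep]

# Guard: the selected subset must still contain both outcomes to be trainable.
if len(np.unique(y_second)) == 2:
    second_model = LogisticRegression().fit(X_second, y_second)
```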
The predictions indicate whether and/or when an exploit will be developed for each software vulnerability of the plurality of software vulnerabilities. At block308, the one or more machine learning computers return output data indicating a prediction of whether an exploit will be developed for each software vulnerability of the plurality of software vulnerabilities. For example, referring toFIG.2, the output data may comprise a predicted value for the developed exploit feature218, i.e., whether an exploit is likely to be developed, and/or a predicted value of the exploit development time feature220, i.e., an amount of time taken to develop an exploit, for each software vulnerability of the plurality of software vulnerabilities. Based on the predictions generated by the first model, one or more software vulnerabilities of the plurality of software vulnerabilities are added to second input data. In an embodiment, a software vulnerability is added to the second input data if it is predicted to be likely to have an exploit developed and/or predicted to be likely to have an exploit used in an attack. In some embodiments, the first model may indicate a ‘Yes’ or a ‘No’ as to whether an exploit will be developed and/or used in an attack. The software vulnerability may be added to the second input data if the first model predicts a ‘Yes’ exploit and/or attack. In other embodiments, the first model indicates a data value that indicates the likelihood, such as a percentage or a probability. The software vulnerability may be added to the second input data if it exceeds a threshold value. The selected threshold value may be a different value depending on the embodiment. A threshold value may be selected to reduce the number of false positives and/or false negatives generated by the second model. The second model may be tuned such that the number of false positives and/or false negatives is under a threshold amount. Additionally, the selected threshold value may be different from the threshold value used for selecting the second training data. The first model may be tuned to reduce the number of false positives and the second model may be tuned to reduce the number of false negatives, or vice versa. At block310, the second input data is provided to the one or more machine learning computers. The second input data is a strict subset of the first input data. The one or more machine learning computers apply the second model to generate predictions for the second input data based on the one or more features of the second model. The predictions indicate whether and/or when an exploit will be developed for each software vulnerability of the plurality of software vulnerabilities. At block312, the one or more machine learning computers return output data indicating a prediction of whether, according to the second model, an exploit will be developed for each software vulnerability of the plurality of software vulnerabilities. For example, the output data may comprise predicted values of a developed exploit feature/developed exploit time feature for each software vulnerability of the plurality of software vulnerabilities. In an embodiment, a first and second model determine whether an exploit will be developed for each software vulnerability of a plurality of software vulnerabilities, and a third and fourth model determine whether an exploit to be developed for the software vulnerability will be used in an attack. 
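A minimal sketch of that chaining, using simple callables as stand-ins for the exploit and attack models; the dict-based features and the decision rules are illustrative. As noted in section 2.1.2, a predicted “No” for the developed exploit feature forces a “No” for the attack feature.

```python
def predict_exploit_and_attack(exploit_model, attack_model, features):
    """Chain the exploit-development and attack predictions for one vulnerability."""
    if not exploit_model(features):
        # No developed exploit predicted, so no attack is predicted either.
        return False, False
    # The predicted developed exploit value is fed to the attack model together
    # with the other features, such as the prevalence feature.
    return True, attack_model({**features, "exploit_developed": True})

# Stand-in models with made-up decision rules:
exploit_model = lambda f: f["prevalence"] > 500_000
attack_model = lambda f: f["exploit_developed"] and f["prevalence"] > 1_000_000

print(predict_exploit_and_attack(exploit_model, attack_model, {"prevalence": 2_000_000}))  # (True, True)
print(predict_exploit_and_attack(exploit_model, attack_model, {"prevalence": 100_000}))    # (False, False)
```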
A subset of the first and/or second input data may be provided to the one or more machine learning computers. The subset may be limited to software vulnerabilities that are predicted to have exploits developed for them. Determination of the subset of the input data may be based on the output data of block308and/or block312. More specifically, the subset of the input data may be limited to software vulnerabilities that correspond to a subset of the output data of block308and/or block312. The subset of the output data may include software vulnerabilities that are predicted, based on the first and/or second model, to have exploits developed for them. Accordingly, the one or more machine learning computers apply the third model to generate a prediction for each software vulnerability included in the subset of the plurality of software vulnerabilities. The prediction indicates whether an exploit to be developed for the software vulnerability will be used in an attack. Additionally, based on the predictions generated by the third model, a subset of the software vulnerabilities is provided to the one or more machine learning computers for applying the fourth model. In an embodiment, one or more software vulnerabilities of the subset are selected if an exploit to be developed for the corresponding software vulnerability is predicted to be used in an attack. In some embodiments, the output data of the models discussed above are used to adjust a risk score for one or more software vulnerabilities. Risk scores may be used to prioritize remediation of software vulnerabilities. For example, remediation may be prioritized in the following order: (1) software vulnerabilities predicted to have exploits developed for them, where the exploits are predicted to be used in attacks; (2) software vulnerabilities predicted to have exploits developed for them, where the exploits are predicted not to be used in attacks; and (3) software vulnerabilities predicted not to have exploits developed for them. Furthermore, software vulnerabilities predicted to have exploits developed for them may be prioritized according to when exploits are predicted to be developed and/or when attacks are predicted to occur. The multi-stage machine training (and application) of machine learning models described herein provides several benefits. One example is the ability to train a machine learning model using a more precise set of test data. This may be particularly useful when events are rare, or in unbalanced datasets where down sampling is necessary to limit overfitting. Typically, the number of software vulnerabilities that will have exploits developed for them is low relative to the overall number of software vulnerabilities. Thus, the training data may include a large number of software vulnerabilities that result in a ‘no’ determination, and a smaller number that result in a ‘yes’ determination. When the first machine learning model is applied to the training data set, a subset of the training data is selected based on the output of the first machine learning model. The subset includes software vulnerabilities that the first model determined were at least likely to have exploits relative to other software vulnerabilities. In other words, training data that the first model determined will not (or are relatively unlikely to) have exploits are filtered out. The second machine learning model is trained on the subset of training data, which has a higher percentage of software vulnerabilities that might have exploits. 
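A small numerical illustration of that effect, with made-up first-model probabilities and labels:

```python
# (first-model probability of an exploit, actual developed exploit label)
training = [
    (0.02, 0), (0.03, 0), (0.01, 0), (0.04, 0), (0.65, 1),
    (0.08, 0), (0.30, 1), (0.05, 0), (0.12, 0), (0.55, 1),
]
THRESHOLD = 0.10

# Keep only instances the first model scored above the threshold.
selected = [(p, y) for p, y in training if p > THRESHOLD]
rate_before = sum(y for _, y in training) / len(training)
rate_after = sum(y for _, y in selected) / len(selected)
print(f"positives before filtering: {rate_before:.0%}")  # 30%
print(f"positives after filtering:  {rate_after:.0%}")   # 75%
```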
Thus, the second machine learning model is trained on more precise training data. The two (or more) stage approach yields more accurate results than if the training data were filtered manually (i.e., not using the first machine learning model). In addition, the techniques described above may be applied to areas other than software vulnerabilities and exploits. The techniques may be used in any situation where a binary decision (e.g., yes or no) is desired, and one option has a greater number of results than the other. 4.0 HARDWARE OVERVIEW According to one embodiment, the techniques described herein are implemented by at least one computing device. The techniques may be implemented in whole or in part using a combination of at least one server computer and/or other computing devices that are coupled using a network, such as a packet data network. The computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as at least one application-specific integrated circuit (ASIC) or field programmable gate array (FPGA) that is persistently programmed to perform the techniques, or may include at least one general purpose hardware processor programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the described techniques. The computing devices may be server computers, workstations, personal computers, portable computer systems, handheld devices, mobile computing devices, wearable devices, body mounted or implantable devices, smartphones, smart appliances, internetworking devices, autonomous or semi-autonomous devices such as robots or unmanned ground or aerial vehicles, any other electronic device that incorporates hard-wired and/or program logic to implement the described techniques, one or more virtual computing machines or instances in a data center, and/or a network of server computers and/or personal computers. FIG. 4 is a block diagram that illustrates an example computer system with which an embodiment may be implemented. In the example of FIG. 4, a computer system 400 and instructions for implementing the disclosed technologies in hardware, software, or a combination of hardware and software, are represented schematically, for example as boxes and circles, at the same level of detail that is commonly used by persons of ordinary skill in the art to which this disclosure pertains for communicating about computer architecture and computer system implementations. Computer system 400 includes an input/output (I/O) subsystem 402, which may include a bus and/or other communication mechanism(s) for communicating information and/or instructions between the components of the computer system 400 over electronic signal paths. The I/O subsystem 402 may include an I/O controller, a memory controller and at least one I/O port. The electronic signal paths are represented schematically in the drawings, for example as lines, unidirectional arrows, or bidirectional arrows. At least one hardware processor 404 is coupled to I/O subsystem 402 for processing information and instructions. Hardware processor 404 may include, for example, a general-purpose microprocessor or microcontroller and/or a special-purpose microprocessor such as an embedded system or a graphics processing unit (GPU) or a digital signal processor or ARM processor. 
Processor404may comprise an integrated arithmetic logic unit (ALU) or may be coupled to a separate ALU. Computer system400includes one or more units of memory406, such as a main memory, which is coupled to I/O subsystem402for electronically digitally storing data and instructions to be executed by processor404. Memory406may include volatile memory such as various forms of random-access memory (RAM) or other dynamic storage device. Memory406also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor404. Such instructions, when stored in non-transitory computer-readable storage media accessible to processor404, can render computer system400into a special-purpose machine that is customized to perform the operations specified in the instructions. Computer system400further includes non-volatile memory such as read only memory (ROM)408or other static storage device coupled to I/O subsystem402for storing information and instructions for processor404. The ROM408may include various forms of programmable ROM (PROM) such as erasable PROM (EPROM) or electrically erasable PROM (EEPROM). A unit of persistent storage410may include various forms of non-volatile RAM (NVRAM), such as FLASH memory, or solid-state storage, magnetic disk or optical disk such as CD-ROM or DVD-ROM, and may be coupled to I/O subsystem402for storing information and instructions. Storage410is an example of a non-transitory computer-readable medium that may be used to store instructions and data which when executed by the processor404cause performing computer-implemented methods to execute the techniques herein. The instructions in memory406, ROM408or storage410may comprise one or more sets of instructions that are organized as modules, methods, objects, functions, routines, or calls. The instructions may be organized as one or more computer programs, operating system services, or application programs including mobile apps. The instructions may comprise an operating system and/or system software; one or more libraries to support multimedia, programming or other functions; data protocol instructions or stacks to implement TCP/IP, HTTP or other communication protocols; file format processing instructions to parse or render files coded using HTML, XML, JPEG, MPEG or PNG; user interface instructions to render or interpret commands for a graphical user interface (GUI), command-line interface or text user interface; application software such as an office suite, internet access applications, design and manufacturing applications, graphics applications, audio applications, software engineering applications, educational applications, games or miscellaneous applications. The instructions may implement a web server, web application server or web client. The instructions may be organized as a presentation layer, application layer and data storage layer such as a relational database system using structured query language (SQL) or no SQL, an object store, a graph database, a flat file system or other data storage. Computer system400may be coupled via I/O subsystem402to at least one output device412. In one embodiment, output device412is a digital computer display. Examples of a display that may be used in various embodiments include a touch screen display or a light-emitting diode (LED) display or a liquid crystal display (LCD) or an e-paper display. Computer system400may include other type(s) of output devices412, alternatively or in addition to a display device. 
Examples of other output devices412include printers, ticket printers, plotters, projectors, sound cards or video cards, speakers, buzzers or piezoelectric devices or other audible devices, lamps or LED or LCD indicators, haptic devices, actuators or servos. At least one input device414is coupled to I/O subsystem402for communicating signals, data, command selections or gestures to processor404. Examples of input devices414include touch screens; microphones; still and video digital cameras; alphanumeric and other keys; keypads; keyboards; graphics tablets; image scanners; joysticks; clocks; switches; buttons; dials; slides; and/or various types of sensors such as force sensors, motion sensors, heat sensors, accelerometers, gyroscopes, inertial measurement unit (IMU) sensors; and/or various types of transceivers such as wireless (e.g. cellular or Wi-Fi™ technology) transceivers, radio frequency (RF) transceivers, infrared (IR) transceivers, and Global Positioning System (GPS) transceivers. Another type of input device is a control device416, which may perform cursor control or other automated control functions such as navigation in a graphical interface on a display screen, alternatively or in addition to input functions. Control device416may be a touchpad, a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor404and for controlling cursor movement on display412. The input device may have at least two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), which allow the input device to specify positions in a plane. Another type of input device is a wired, wireless, or optical control device such as a joystick, wand, console, steering wheel, pedal, gearshift mechanism or other type of control device. An input device414may include a combination of multiple different input devices, such as a video camera and a depth sensor. In another embodiment, computer system400may comprise an internet of things (IoT) device in which one or more of the output device412, input device414, and control device416are omitted. Or, in such an embodiment, the input device414may comprise one or more cameras, motion detectors, thermometers, microphones, seismic detectors, other sensors or detectors, measurement devices or encoders and the output device412may comprise a special-purpose display such as a single-line LED or LCD display, one or more indicators, a display panel, a meter, a valve, a solenoid, an actuator or a servo. When computer system400is a mobile computing device, input device414may comprise a global positioning system (GPS) receiver coupled to a GPS module that is capable of triangulating to a plurality of GPS satellites, determining and generating geo-location or position data such as latitude-longitude values for a geophysical location of the computer system400. Output device412may include hardware, software, firmware and interfaces for generating position reporting packets, notifications, pulse or heartbeat signals, or other recurring data transmissions that specify a position of the computer system400, alone or in combination with other application-specific data, directed toward host424or server430. 
Computer system400may implement the techniques described herein using customized hard-wired logic, at least one ASIC or FPGA, firmware and/or program instructions or logic which when loaded and used or executed in combination with the computer system causes or programs the computer system to operate as a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system400in response to processor404executing at least one sequence of at least one instruction contained in main memory406. Such instructions may be read into main memory406from another storage medium, such as storage410. Execution of the sequences of instructions contained in main memory406causes processor404to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operation in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage410. Volatile media includes dynamic memory, such as memory406. Common forms of storage media include, for example, a hard disk, solid state drive, flash drive, magnetic data storage medium, any optical or physical data storage medium, memory chip, or the like. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise a bus of I/O subsystem402. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media may be involved in carrying at least one sequence of at least one instruction to processor404for execution. For example, the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a communication link such as a fiber optic or coaxial cable or telephone line using a modem. A modem or router local to computer system400can receive the data on the communication link and convert the data to a format that can be read by computer system400. For instance, a receiver such as a radio frequency antenna or an infrared detector can receive the data carried in a wireless or optical signal and appropriate circuitry can provide the data to I/O subsystem402, for example, by placing the data on a bus. I/O subsystem402carries the data to memory406, from which processor404retrieves and executes the instructions. The instructions received by memory406may optionally be stored on storage410either before or after execution by processor404. Computer system400also includes a communication interface418coupled to bus402. Communication interface418provides a two-way data communication coupling to network link(s)420that are directly or indirectly connected to at least one communication networks, such as a network422or a public or private cloud on the Internet. 
For example, communication interface418may be an Ethernet networking interface, integrated-services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of communications line, for example an Ethernet cable or a metal cable of any kind or a fiber-optic line or a telephone line. Network422broadly represents a local area network (LAN), wide-area network (WAN), campus network, internetwork or any combination thereof. Communication interface418may comprise a LAN card to provide a data communication connection to a compatible LAN, or a cellular radiotelephone interface that is wired to send or receive cellular data according to cellular radiotelephone wireless networking standards, or a satellite radio interface that is wired to send or receive digital data according to satellite wireless networking standards. In any such implementation, communication interface418sends and receives electrical, electromagnetic or optical signals over signal paths that carry digital data streams representing various types of information. Network link420typically provides electrical, electromagnetic, or optical data communication directly or through at least one network to other data devices, using, for example, satellite, cellular, Wi-Fi™, or BLUETOOTH® technology. For example, network link420may provide a connection through a network422to a host computer424. Furthermore, network link420may provide a connection through network422or to other computing devices via internetworking devices and/or computers that are operated by an Internet Service Provider (ISP)426. ISP426provides data communication services through a world-wide packet data communication network represented as internet428. A server computer430may be coupled to internet428. Server430broadly represents any computer, data center, virtual machine or virtual computing instance with or without a hypervisor, or computer executing a containerized program system such as the DOCKER™ computer software or the KUBERNETES® container system. Server430may represent an electronic digital service that is implemented using more than one computer or instance and that is accessed and used by transmitting web services requests, uniform resource locator (URL) strings with parameters in HTTP payloads, API calls, app services calls, or other service calls. Computer system400and server430may form elements of a distributed computing system that includes other computers, a processing cluster, server farm or other organization of computers that cooperate to perform tasks or execute applications or services. Server430may comprise one or more sets of instructions that are organized as modules, methods, objects, functions, routines, or calls. The instructions may be organized as one or more computer programs, operating system services, or application programs including mobile apps. 
The instructions may comprise an operating system and/or system software; one or more libraries to support multimedia, programming or other functions; data protocol instructions or stacks to implement TCP/IP, HTTP or other communication protocols; file format processing instructions to parse or render files coded using HTML, XML, JPEG, MPEG or PNG; user interface instructions to render or interpret commands for a graphical user interface (GUI), command-line interface or text user interface; application software such as an office suite, internet access applications, design and manufacturing applications, graphics applications, audio applications, software engineering applications, educational applications, games or miscellaneous applications. Server430may comprise a web application server that hosts a presentation layer, application layer and data storage layer such as a relational database system using structured query language (SQL) or no SQL, an object store, a graph database, a flat file system or other data storage. Computer system400can send messages and receive data and instructions, including program code, through the network(s), network link420and communication interface418. In the Internet example, a server430might transmit a requested code for an application program through Internet428, ISP426, local network422and communication interface418. The received code may be executed by processor404as it is received, and/or stored in storage410, or other non-volatile storage for later execution. The execution of instructions as described in this section may implement a process in the form of an instance of a computer program that is being executed, and consisting of program code and its current activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently. In this context, a computer program is a passive collection of instructions, while a process may be the actual execution of those instructions. Several processes may be associated with the same program; for example, opening up several instances of the same program often means more than one process is being executed. Multitasking may be implemented to allow multiple processes to share processor404. While each processor404or core of the processor executes a single task at a time, computer system400may be programmed to implement multitasking to allow each processor to switch between tasks that are being executed without having to wait for each task to finish. In an embodiment, switches may be performed when tasks perform input/output operations, when a task indicates that it can be switched, or on hardware interrupts. Time-sharing may be implemented to allow fast response for interactive user applications by rapidly performing context switches to provide the appearance of concurrent execution of multiple processes simultaneously. In an embodiment, for security and reliability, an operating system may prevent direct communication between independent processes, providing strictly mediated and controlled inter-process communication functionality. In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. 
The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. | 51,345 |
11861017 | Like reference numerals are used in the drawings to denote like elements and features. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS In an aspect, the present disclosure describes a computing system for processing requests from a third-party application to access a protected data resource. The computing system includes a communications module communicable with an external network, a memory, and a processor coupled to the communications module and the memory. The processor is configured to: receive, from a first application, a request to obtain first account data for a user account associated with a protected data resource; generate fake data for at least a portion of the requested first account data; provide, to the first application, a first data set in response to the request, the first data set including at least the generated fake data; monitor use of the first data set by the first application; detect a trigger condition indicating misuse of account data based on monitoring use of the first data set by the first application; in response to detecting the trigger condition, generate a notification identifying the misuse of account data; and transmit the notification to a computing device associated with an application user. In some implementations, the first data set may include fake historical transactions data associated with the user account, the fake historical transactions data including data for at least one fake transfer operation of transferring value to or from the user account. In some implementations, the at least one fake transfer operation may include a first set of transfer operations representing transfer of a first cumulative value from the user account and a second set of offsetting transfer operations representing transfer of a second cumulative value to the user account, the second cumulative value being equal to the first cumulative value. In some implementations, the data for the at least one fake transfer operation may include one or more of: a value transferred by the at least one fake transfer operation; a date associated with the at least one fake transfer operation; a transfer type of the at least one fake transfer operation; and a transfer identifier of the at least one fake transfer operation. In some implementations, fake data may be generated for only a subset of the requested first account data and the first data set may comprise account data for a real user account and the generated fake data for the subset of the requested first account data. In some implementations, generating the fake data may comprise: generating a first subset of fake account data representing control data; and generating a second subset of fake account data representing test data, the second subset being different from the first subset in only a single data parameter, and detecting the trigger condition indicating misuse of account data may comprises detecting that a first output of the first application which is based on use of the first subset does not differ from a second output of the first application which is based on use of the second subset. In some implementations, monitoring the use of the first data set may comprise: obtaining output data generated by the first application; and evaluating the output data of the first application to determine whether the generated fake data affects the output data. 
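One possible illustration of the fake historical transactions described above: a sketch that generates fake debits and a single offsetting credit whose cumulative values match, so the injected history nets to zero. Field names, the random amounts, and the use of a single offsetting credit rather than a second set of operations are assumptions.

```python
import random
from datetime import date, timedelta

def generate_fake_transfers(n, start=date(2023, 1, 1), max_amount=500.0):
    """Return n fake debits plus one offsetting credit of equal cumulative value."""
    debits = [{
        "transfer_id": f"FAKE-OUT-{i}",
        "type": "debit",
        "date": (start + timedelta(days=i)).isoformat(),
        "value": round(random.uniform(5.0, max_amount), 2),
    } for i in range(n)]
    first_cumulative = round(sum(op["value"] for op in debits), 2)
    # The offsetting credit returns the same cumulative value to the account,
    # so the fake history leaves the reported balance unchanged.
    credit = {
        "transfer_id": "FAKE-IN-0",
        "type": "credit",
        "date": (start + timedelta(days=n)).isoformat(),
        "value": first_cumulative,
    }
    return debits + [credit]

fake_history = generate_fake_transfers(3)
print(sum(op["value"] if op["type"] == "credit" else -op["value"]
          for op in fake_history))  # approximately 0.0 (floating point)
```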
In some implementations, obtaining the outputs of the first application may comprise retrieving data presented in application pages of a graphical user interface associated with the first application. In some implementations, monitoring the use of the first data set may comprise performing a keyword search of resources in a network based on a search query including the generated fake data. In some implementations, the notification may include a risk score indicating a level of risk associated with the first application. In another aspect, the present disclosure describes a processor-implemented method for processing requests from a third-party application to access a protected data resource. The method includes: receiving, from a first application, a request to obtain first account data for a user account associated with a protected data resource; generating fake data for at least a portion of the requested first account data; providing, to the first application, a first data set in response to the request, the first data set including at least the generated fake data; monitoring use of the first data set by the first application; detecting a trigger condition indicating misuse of account data based on monitoring use of the first data set by the first application; in response to detecting the trigger condition, generating a notification identifying the misuse of account data; and transmitting the notification to a computing device associated with an application user. In another aspect, the present disclosure describes a computing system for evaluating a security level of a third-party application. The computing system includes a communications module communicable with an external network, a memory, and a processor coupled to the communications module and the memory. The processor is configured to: launch, in an automated test environment, a test instance of a first application; detect at least one data retrieval operation by the first application of retrieving data from a protected data resource; for each of the at least one data retrieval operation, identify an application state of the first application at a time of detecting the at least one data retrieval operation; determine a data access pattern for the first application of accessing the protected data resource based on the at least one data retrieval operation and application states of the first application associated with the at least one data retrieval operation; and present the data access pattern for the first application on a client device associated with a user. In some implementations, the processor may be further configured to create a test user account associated with the protected data resource, the test user account including fake user account data, and detecting the at least one data retrieval operation may comprise: receiving, from the first application, a request to obtain account data for a user account associated with the protected data resource; and providing, to the first application, a first data set in response to the request, the first data set including at least the fake user account data of the test user account. In some implementations, the first data set may include fake historical transactions data associated with the test user account, the fake historical transactions data including data for at least one fake transfer operation of transferring value to or from the test user account. 
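A sketch of the keyword-search style of monitoring described earlier in this passage: fake (“canary”) values, such as the generated transfer identifiers, are looked for in fetched resources, and a hit is treated as the trigger condition. The plain substring search stands in for a real search integration.

```python
def detect_leaked_canaries(canary_values, documents):
    """Return the canary values found in any of the fetched documents."""
    return {value for value in canary_values
            for doc in documents if value in doc}

canaries = {"FAKE-OUT-0", "FAKE-OUT-1"}           # e.g. fake transfer identifiers
documents = ["...transfer FAKE-OUT-1 listed for sale...", "an unrelated page"]

leaked = detect_leaked_canaries(canaries, documents)
if leaked:
    print("trigger condition detected; notify the application user:", leaked)
```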
In some implementations, identifying an application state of the first application may comprise determining an execution state of the first application at a time of detecting a data retrieval operation. In some implementations, determining the execution state of the first application may comprise determining that the first application is not being executed, and the processor may be further configured to determine a frequency of data retrieval by the first application from the protected data resource. In some implementations, determining the execution state of the first application may comprise determining that the first application is being executed, and detecting the at least one data retrieval operation by the first application may comprise determining that the at least one data retrieval operation is performed by the first application only in response to a user-initiated action in the first application. In some implementations, the user-initiated action in the first application may comprise a user selection of a functionality associated with the first application. In some implementations, the processor may be further configured to cause the first application to perform a plurality of predetermined operations, and detecting the at least one data retrieval operation by the first application may comprise determining that the at least one data retrieval operation is performed by the first application in response to select ones of the plurality of predetermined operations. In some implementations, the processor may be further configured to assign, to the first application, a risk score that is based on the data access pattern for the first application. In some implementations, the automated test environment may comprise an emulator for an operating system associated with the first application. In another aspect, the present disclosure describes a processor-implemented method for evaluating a security level of a third-party application. The method includes: launching, in an automated test environment, a test instance of a first application; detecting at least one data retrieval operation by the first application of retrieving data from a protected data resource; for each of the at least one data retrieval operation, identifying an application state of the first application at a time of detecting the at least one data retrieval operation; determining a data access pattern for the first application of accessing the protected data resource based on the at least one data retrieval operation and application states of the first application associated with the at least one data retrieval operation; and presenting the data access pattern for the first application on a client device associated with a user. In another aspect, the present disclosure describes a computing system for evaluating a security level of a third-party application. The computing system includes a communications module communicable with an external network, a memory, and a processor coupled to the communications module and the memory. 
The processor is configured to: in an automated test environment: launch a test instance of a first application; and obtain a data access signature of the first application based on identifying at least one application state of the first application and account data retrieved by the first application from a user account at a protected data resource in the at least one application state; receive, from a client device associated with the user account, an indication of access permissions for the first application to access the user account for retrieving account data; detect a change in the data access signature of the first application; and in response to detecting the change in the data access signature of the first application, notify the user of the detected change. In some implementations, the processor may be further configured to store the data access signature in association with the access permissions for the first application to access the user account. In some implementations, the at least one application state of the first application may comprise an execution state of the first application. In some implementations, the data access signature may indicate, for the at least one application state, one or more first types of account data which are accessed by the first application in the application state. In some implementations, detecting a change in the data access signature may comprise detecting that, in the at least one application state, the first application retrieves a type of account data that is different from the one or more first types. In some implementations, the data access signature may indicate, for the at least one application state, a first frequency of retrieval of account data from the user account. In some implementations, detecting a change in the data access signature may comprise detecting that, in the at least one application state, the first application retrieves account data from the user account more frequently than the first frequency. In some implementations, the processor may be further configured to: identify an application category for the first application; and assign, to the first application, a risk score that is based on the data access signature for the first application. In some implementations, the processor may be further configured to determine a ranking of the first application relative to one or more other applications of the application category based on the risk score. In some implementations, notifying the user of the detected change may comprise notifying the user of the determined ranking of the first application. In another aspect, the present disclosure describes a processor-implemented method for evaluating a security level of a third-party application. The method includes: in an automated test environment: launching a test instance of a first application; and obtaining a data access signature of the first application based on identifying at least one application state of the first application and account data retrieved by the first application from a user account at a protected data resource in the at least one application state; receiving, from a client device associated with the user account, an indication of access permissions for the first application to access the user account for retrieving account data; detecting a change in the data access signature of the first application; and in response to detecting the change in the data access signature of the first application, notifying the user of the detected change. 
Other example embodiments of the present disclosure will be apparent to those of ordinary skill in the art from a review of the following detailed descriptions in conjunction with the drawings. In the present application, the term “and/or” is intended to cover all possible combinations and sub-combinations of the listed elements, including any one of the listed elements alone, any sub-combination, or all of the elements, and without necessarily excluding additional elements. In the present application, the phrase “at least one of . . . or . . . ” is intended to cover any one or more of the listed elements, including any one of the listed elements alone, any sub-combination, or all of the elements, without necessarily excluding any additional elements, and without necessarily requiring all of the elements. Access control is an essential element of database security. Various security controls may be implemented for a database to safeguard the data and any operations within the database from unauthorized access. An access control system for a database typically performs functions of authentication and access approval to ensure that only authorized users can gain access to the database. For example, a private database may store account data for a plurality of user accounts, and an access control system for the database may enforce security policies to restrict access to the user account data. An access control system may enable users to define permissions for others to access their data. In particular, users may specify which subjects are allowed to access their data and what privileges are given to those subjects. For example, account data for user accounts in a database may be accessible to only those entities that have been assigned access rights by the users associated with the accounts. The access control system for the database may limit the scope of permitted access operations based on the permissions that are defined by the users. In some contexts, users may wish to allow third-party applications access to their data in a protected database. For example, a user may provide consent for third-party applications on their device to gain direct access to their account data. The concept of “open banking” is an example of a secure, standardized release of private user data to third-parties. Open banking allows users to grant third-party developers access to their banking data. Banks that allow such third-party access may benefit from having a larger ecosystem of applications and services that customers can use to access a wide range of banking functions. In particular, banks would not have to assume all responsibility for applications development by themselves; instead, third-party developers that are granted access to user data can develop applications that are suited for use by the banks' customers. Generally, delegating access of user account data to third-party applications raises concerns about the security of the data and the safety level of the applications. For example, where a third-party application requests to access highly sensitive user data or to perform database operations that result in permanent changes to the user data, a balance between security of the user data and ease of control of third-party access will be desired. As different applications generally have different demands for and use of private user data, users that provide consent for third-party applications to access their private data may not fully appreciate the risks involved in granting such access. 
For example, it may be difficult for users to gauge the risks of security threats, such as data leakage and unauthorized transactions, or redundant collection of data arising from third-party access of user account data. To address such security concerns relating to third-party applications, it can be useful to evaluate the data security of the applications. The present disclosure provides techniques for assessing the security of third-party applications (or services) that are granted access, or request to gain access, to private user data stored in a protected data resource. Specifically, systems and methods are described for testing third-party applications to identify potential security risks associated with the consumption and handling of private user data by the applications. In some embodiments of the present disclosure, fake account data is generated and used by an application evaluation system in assessing third-party applications. When the system receives, from a third-party application, a request to obtain account data for a user account associated with a protected data resource, the system generates fake data for at least a portion of the requested account data and provides a first data set containing the generated fake data to the third-party application. The system then monitors use of the first data set by the third-party application. If a trigger condition indicating misuse (or other data security threat) of the account data is detected, the system generates a notification identifying the misuse/threat and transmits the notification to a computing device associated with an application user. In some embodiments of the present disclosure, a data access pattern associated with a third-party application is obtained by an application evaluation system. The system may perform automated test processes on a third-party application to obtain the data access pattern. For example, the test processes may be performed in an isolated testing environment, or sandbox. The system monitors for data access requests by the third-party application and identifies application states of the third-party application associated with such data access requests. A data access pattern is derived based on the data access operations by the third-party application and the associated application states, as determined by the system. The data access pattern for the third-party application is presented on a client device associated with a user. For example, the data access pattern may be displayed on a client device associated with a user when the user consents to or is requested to consent to data sharing with the third-party application. In some embodiments of the present disclosure, techniques are described for detecting changes in a third-party application's behavior in accessing private user data. In an automated test environment, an application evaluation system launches a test instance of a third-party application and obtains a data access signature of the third-party application. The data access signature is a representation of the behavior of the third-party application in accessing data that a user has consented to share with the third-party application. In particular, the data access signature is determined based on application states of the third-party application and account data that is retrieved by the third-party application from a user account of a protected data resource in those application states. 
When the system receives, from a client device associated with the user account, consent for sharing data with the third-party application, the system begins to monitor the third-party application in order to identify any changes to the data access signature. If a change in the data access signature of the third-party application is detected, the user associated with the user account is automatically notified of the detected change. FIG.1is a schematic diagram of an exemplary operating environment in accordance with embodiments of the present disclosure.FIG.1illustrates a system100for evaluating third-party applications and their behavior in accessing private user data that is stored in a protected data resource150. As illustrated, an access control server140and client device110communicate via the network120. The client device110is a computing device that may be associated with an entity, such as a user or client, having resources associated with the access control server140and/or the protected data resource150. The access control server140is coupled to the protected data resource150, which may be provided in secure storage. The secure storage may be provided internally within the access control server140or externally. The secure storage may, for example, be provided remotely from the access control server140. The protected data resource150stores secure data. In particular, the protected data resource150may include records for a plurality of accounts associated with particular entities. That is, the secure data may comprise account data for one or more specific entities. For example, an entity that is associated with the client device110may be associated with an account having one or more records in the protected data resource150. In at least some embodiments, the records may reflect a quantity of stored resources that are associated with an entity. Such resources may include owned resources and/or borrowed resources (e.g. resources available on credit). The quantity of resources that are available to or associated with an entity may be reflected by a balance defined in an associated record. For example, the secure data in the protected data resource150may include financial data, such as banking data (e.g. bank balance, historical transactions data identifying transactions such as debits from and credits to an account, etc.) for an entity. In particular, the access control server140may be a financial institution (e.g. bank) server and the entity may be a customer of the financial institution which operates the financial institution server. The financial data may, in some embodiments, include processed or computed data such as, for example, an average balance associated with an account, an average spending amount associated with an account, a total spending amount over a period of time, or other data obtained by a processing server based on account data for the entity. The secure data may include personal data, such as personal identification information. The personal identification information may include any stored personal details associated with an entity including, for example, a home, work or mailing address, contact information such as a messaging address (e.g. email address), and/or a telephone number, a government-issued identifier such as a social insurance number (SIN) and/or driver's license number, date of birth, age, etc. In some embodiments, the protected data resource150may be a computer system that includes one or more database servers, computer servers, and the like. 
In some embodiments, the protected data resource150may be an application programming interface (API) for a web-based system, operating system, database system, computer hardware, or software library. The client device110may be used, for example, to configure a data transfer from an account associated with the client device110. More particularly, the client device110may be used to configure a data transfer from an account associated with an entity operating the client device110. The data transfer may involve a transfer of data between a record in the protected data resource150associated with such an account and another record in the protected data resource150(or in another data resource such as a database associated with a different server, not shown, provided by another financial institution, for example). The other record is associated with a data transfer recipient such as, for example, a bill payment recipient. The data involved in the transfer may, for example, be units of value and the records involved in the data transfer may be adjusted in related or corresponding manners. For example, during a data transfer, a record associated with the data transfer recipient may be adjusted to reflect an increase in value due to the transfer whereas the record associated with the entity initiating the data transfer may be adjusted to reflect a decrease in value which is at least as large as the increase in value applied to the record associated with the data transfer recipient. The system100includes at least one application server180. The application server180may be associated with a third-party application (such as a web or mobile application) that is resident on the client device110. In particular, the application server180connects the client device110to a back-end system associated with the third-party application. The capabilities of the application server180may include, among others, user management, data storage and security, transaction processing, resource pooling, push notifications, messaging, and off-line support of the third-party application. As illustrated inFIG.1, the application server180is connected to the client device110and the access control server140via the network120. The application server180may provide a third-party application that utilizes secure data associated with the protected data resource150. For example, the application server180may provide a personal financial management (PFM) application that utilizes financial data stored in a protected database. When the third-party application requires access to the secure data for one or more of its functionalities, the application server180may communicate with the access control server140over the network120. For example, the access control server140may provide an application programming interface (API) or another interface which allows the application server180to obtain secure data associated with a particular entity (such as a user having an account at the protected data resource150). Such access to secure data may only be provided to the application server180with the consent of the entity that is associated with the data. For example, the client device110may be adapted to receive a signal indicating a user's consent to share data with the application server180and may, in response, send an indication of consent to the access control server140. The access control server140may then configure data sharing with the application server180. 
For example, the access control server140may provide an access token to the application server180. The access token may be configured to allow the application server180to access secure data (e.g. through the API) associated with the entity that provided consent. The indication of consent may specify a sharing permission, such as type(s) of data that the application server is permitted to access. For example, the protected data resource150may store various types of secure data (e.g., account balance, transactions listing, personal identification data, etc.) and the indication of consent may specify the type(s) of data that the application server180is to be permitted to access. The access control server140may configure data sharing in accordance with the sharing permission. The access token may be issued by the access control server140or may be issued by a separate system (referred to as a token service provider, or TSP), which may issue tokens on behalf of the access control server140. The access token represents the authorization of a specific third-party server to access specific parts of the secure data. The access token may, for example, be an OAuth token or a variation thereof. OAuth is an open standard for token-based authentication and authorization on the Internet. The OAuth 1.0 protocol was published as RFC 5849 and the OAuth 2.0 framework was published as RFC 6749 and bearer token usage as RFC 6750. All of these documents are incorporated herein by reference. The system100also includes an application evaluation server170. The application evaluation server170is configured to assess the behavior of third-party applications. In particular, the application evaluation server170may implement automated tests for assessing the security of third-party applications that are granted access to the protected data resource150. As shown inFIG.1, the application evaluation server may communicate with the application server(s)180via the network120. In the example embodiment ofFIG.1, the application evaluation server170is shown as interposed between the application server(s)180and the access control server140. That is, the application evaluation server170may serve as an intermediary between the application server(s)180and the access control server140. A request for access to the protected data resource150from an application server180may be routed first to the application evaluation server170prior to being transmitted to the access control server140. The application evaluation server170may perform tests to assess the behavior of third-party applications which request to interface with the access control server140prior to forwarding, to the access control server140, any requests by the third-party applications to access the protected data resource150. FIG.14shows example components which may be implemented in an application evaluation server170. The application evaluation server170may include a plurality of data stores, including a test results database1342, an applications database1344, and a test scripts database1346. The test results database1342may store results of various tests performed by the application evaluation server170on third-party applications. The applications database1344may store data associated with one or more third-party applications which are to be tested or have already been tested by the application evaluation server170. 
For example, the applications database1344may contain, for each application that requests access to the protected data resource150, application data such as unique identifier of application, type of application, account data requested by the application, provider or developer of the application, etc. The test scripts database1346may store various scripts to be executed in testing third-party applications. Such test scripts may, for example, be predefined and stored in the application evaluation server170. In some embodiments, the application evaluation server170may locally host one or more sandbox testing environments, such as a sandbox1330. Alternatively, or additionally, the sandbox1330may be at least partially hosted at a site external to the application evaluation server170. For example, the sandbox1330may be executed within a cloud computing architecture. The sandbox1330represents a testing environment which allows for isolated testing of software, such as a third-party application. In particular, the application evaluation server170may implement a sandbox model of testing to evaluate the behavior of third-party applications in accessing user data at the protected data resource150. As shown inFIG.14, the sandbox1330may include resource components, such as API simulators1332and an operating system emulator1334, which provide various resources for executing one or more third-party applications and simulating a run-time environment. For example, an API simulator1332may simulate a real API associated with an access control server140and/or protected data resource150. The sandbox1330may also include a plurality of test accounts1336, which may be created for testing purposes. The test accounts1336may, for example, contain fake or partially fake account data which may be provided to third-party applications that request access to account data from the protected data resource150. The application evaluation server170may be configured to create test instances of third-party applications that are being evaluated. As illustrated inFIG.14, the sandbox1330may include test instances1338of one or more third-party applications. In at least some embodiments, the test instances1338may be executed concurrently, to facilitate concurrent testing of multiple different applications that request access to the protected data resource150. In some embodiments, the access control server140and the application evaluation server170may be implemented by a single computing system. In particular, a single server may implement both functions of evaluating third-party application behavior and controlling access to a protected data resource150. For example, when a new third-party application requests to access user account data at a protected data resource150, a server associated with the protected data resource150may first test the behavior of the requesting application. If the requesting application's behavior is determined to be satisfactory based on the test results, the server may grant, to the requesting application, access to user account data at the protected data resource150. The client device110, the access control server140, the application evaluation server170, and the application server180may be in geographically disparate locations. Put differently, the client device110may be remote from one or more of the access control server140, the application evaluation server170, and the application server180. 
The client device110, the access control server140, the application evaluation server170, and the application server180are computer systems. The client device110may take a variety of forms including, for example, a mobile communication device such as a smartphone, a tablet computer, a wearable computer such as a head-mounted display or smartwatch, a laptop or desktop computer, or a computing device of another type. The network120is a computer network. In some embodiments, the network120may be an internetwork such as may be formed of one or more interconnected computer networks. For example, the network120may be or may include an Ethernet network, an asynchronous transfer mode (ATM) network, a wireless network, or the like. FIG.2is a high-level operation diagram of the example computing device105. In some embodiments, the example computing device105may be exemplary of one or more of the client device110, the access control server140, and the third-party application server180. The example computing device105includes a variety of modules. For example, as illustrated, the example computing device105may include a processor200, a memory210, an input interface module220, an output interface module230, and a communications module240. As illustrated, the foregoing example modules of the example computing device105are in communication over a bus250. The processor200is a hardware processor. The processor200may, for example, be one or more ARM, Intel x86, or PowerPC processors, or the like. The memory210allows data to be stored and retrieved. The memory210may include, for example, random access memory, read-only memory, and persistent storage. Persistent storage may be, for example, flash memory, a solid-state drive or the like. Read-only memory and persistent storage are each a computer-readable medium. A computer-readable medium may be organized using a file system such as may be administered by an operating system governing overall operation of the example computing device105. The input interface module220allows the example computing device105to receive input signals. Input signals may, for example, correspond to input received from a user. The input interface module220may serve to interconnect the example computing device105with one or more input devices. Input signals may be received from input devices by the input interface module220. Input devices may, for example, include one or more of a touchscreen input, keyboard, trackball or the like. In some embodiments, all or a portion of the input interface module220may be integrated with an input device. For example, the input interface module220may be integrated with one of the aforementioned exemplary input devices. The output interface module230allows the example computing device105to provide output signals. Some output signals may, for example, allow provision of output to a user. The output interface module230may serve to interconnect the example computing device105with one or more output devices. Output signals may be sent to output devices by an output interface module230. Output devices may include, for example, a display screen such as, for example, a liquid crystal display (LCD) or a touchscreen display. Additionally, or alternatively, output devices may include devices other than screens such as, for example, a speaker, indicator lamps (such as, for example, light-emitting diodes (LEDs)), and printers. In some embodiments, all or a portion of the output interface module230may be integrated with an output device. 
For example, the output interface module230may be integrated with one of the aforementioned example output devices. The communications module240allows the example computing device105to communicate with other electronic devices and/or various communications networks. For example, the communications module240may allow the example computing device105to send or receive communications signals. Communications signals may be sent or received according to one or more protocols or according to one or more standards. For example, the communications module240may allow the example computing device105to communicate via a cellular data network, for example, according to one or more standards such as Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), Evolution Data Optimized (EVDO), Long-term Evolution (LTE) or the like. The communications module240may allow the example computing device105to communicate using near-field communication (NFC), via Wi-Fi™, using Bluetooth™ or via some combination of one or more networks or protocols. In some embodiments, all or a portion of the communications module240may be integrated into a component of the example computing device105. For example, the communications module may be integrated into a communications chipset. Software comprising instructions is executed by the processor200from a computer-readable medium. For example, software may be loaded into random-access memory from persistent storage of memory210. Additionally, or alternatively, instructions may be executed by the processor200directly from read-only memory of memory210. FIG.3depicts a simplified organization of software components stored in memory210of the example computing device105. As illustrated, these software components include an operating system280and an application270. The operating system280is software. The operating system280allows the application270to access the processor200, the memory210, the input interface module220, the output interface module230and the communications module240. The operating system280may be, for example, Apple's iOS™, Google's Android™, Linux, Microsoft's Windows™, or the like. The application270adapts the example computing device105, in combination with the operating system280, to operate as a device performing particular functions. For example, the application270may cooperate with the operating system280to adapt a suitable embodiment of the example computing device105to operate as the client device110, the access control server140, the application evaluation server170, and the application server(s)180. While a single application270is illustrated inFIG.3, in operation, the memory210may include more than one application270, and different applications270may perform different operations. For example, in embodiments where the computing device105is functioning as a client device110, the application270may comprise a value transfer application which may, for example, be a personal banking application. The value transfer application may be configured for secure communications with the access control server140and may provide various banking functions such as, for example, the ability to display a quantum of value in one or more accounts (e.g., display balances), configure transfers of value (e.g., bill payments and other transfers), and other account management functions. The value transfer application may allow data sharing with third-party application servers to be configured. 
The client device110may also include a PFM application, which may configure the client device110for communication with at least one application server180. Reference is made toFIG.4, which shows, in flowchart form, an example method400for evaluating the data security of a third-party application. Specifically, the method400allows for assessing the data security of a third-party application that has been granted access, or attempts to gain access, to a protected data resource. For example, the operations of the method400may be performed in testing various third-party applications on a user's device for potential data security vulnerabilities. Operations402and onward may be performed by one or more processors of a computing device such as, for example, the processor200(FIG.2) of one or more suitably configured instances of the example computing device105(FIG.2). The method400may be performed, for example, by a server system that is communicably connected to application servers associated with various third-party applications. More generally, an application evaluation system (such as the application evaluation server170ofFIG.1) may be configured to perform all or some of the operations of method400. In operation402, the server receives, from a first application, a request to obtain first account data for a user account associated with a protected data resource. The request may be received directly from the first application (e.g. a third-party application resident on a client device), or it may be received from an application server, such as application server180ofFIG.1, that is associated with the first application. For example, the request from the first application may be in the form of a request to an API (“API call”) for interfacing with the protected data resource. As a user interacts with the first application, the user may select one or more functionalities of the first application that require account data for an account of the user. For example, a user may input a request on a PFM application to aggregate financial transactions data for a bank account associated with the user. In response to receiving the user input, the first application may generate a request to retrieve certain account information (e.g. historical listing of transactions) for the user's bank account, and transmit the request to the server (or an access control entity for the bank database). The request by the first application to obtain first account data for a user account may include, at least, identifying information for the user account, an identifier of the first application, login credentials of the user requesting to obtain the first account data, and the type(s) of account data that is requested. The request may be transmitted directly from the user's client device, or it may be formatted at an application server associated with the first application and transmitted to the application evaluation server and/or access control server. In operation404, the server generates fake data for at least a portion of the requested first account data. Specifically, the server generates data that is to be used for evaluating the first application. In at least some embodiments, the type(s) of data that is generated by the server depends on the specific account data that is requested by the first application. 
In particular, the server may generate fake data that is of the same type as parts or all of the requested first account data, such that the generated data can be provided to the first application in place of real account data. For example, if the first application requests to obtain transactions data associated with an account of the user, the server may generate fake transactions data that can be provided to the first application. The fake transactions data may, for example, include a historical listing of fake transactions. Each fake transaction may be associated with a transaction amount and a timestamp (e.g. date, time). The transaction amount may be positive for transactions that increase a balance of the user's account and negative for transactions that decrease the balance. The fake transaction data may also include other information, such as a transaction type (e.g. wire transfer, check deposit, withdrawal, cash deposit, etc.), a transaction identifier, and identities of any recipients. Additionally, or alternatively, the fake data generated in operation404may include other fake account information such as personal data (e.g. name, date of birth, social insurance number, address, contact information, account number, etc.) and account balance. In some embodiments, the server may generate fake data for only a subset of the requested first account data. That is, fake data may be generated for only a portion of the account data that is requested by the first application. For example, the server may identify a certain subset of the first account data (e.g. account data fields) for which fake data is to be generated, and generate fake data for only the identified subset. In at least some embodiments, the fake data generated by the server for account data requested by the first application may be uniquely associable with the first application. In particular, the server may generate different sets of fake account data for different third-party applications that are evaluated by the server. For example, for any two different applications that are tested by the application evaluation server, the sets of fake account data which are generated for the two applications may be distinct. The unique mapping of fake data sets to third-party applications may allow for detection, by an application evaluation entity, of a source of a security breach (e.g. data leak). The sets of fake data which are generated for various different third-party applications may be stored by the server, either locally or at a remote hosting site. In operation406, the server provides, to the first application, a first data set in response to the request. The first data set includes, at least, the fake data that is generated by the server in operation404. For example, the first data set may include fake historical transactions data associated with the user account. The fake historical transactions data may include data for at least one fake transfer operation of transferring value to or from the user account. The data for a fake transfer operation may include, among others, a value transferred by the fake transfer operation, a date associated with the fake transfer operation, a transfer type of the fake transfer operation, and a transfer identifier associated with the fake transfer operation. In at least some embodiments, the first data set may comprise both real and fake account data. 
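As an illustration of the kind of fake transaction records described above, the following sketch (written in Python purely for explanation; the field names, helper function, and seeding scheme are assumptions and not part of the disclosed system) shows one way fake records could be generated so that they are reproducible for, and unique to, a given application identifier:

import hashlib
import random
from datetime import date, timedelta

TRANSFER_TYPES = ["wire transfer", "check deposit", "withdrawal", "cash deposit"]

def generate_fake_transactions(app_id: str, count: int = 5) -> list:
    """Generate fake transaction records that are unique to one application.

    The application identifier seeds the random generator, so different
    applications receive distinct fake records (a per-application watermark),
    while repeated runs for the same application are reproducible.
    """
    seed = int(hashlib.sha256(app_id.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    transactions = []
    for i in range(count):
        amount = round(rng.uniform(5.0, 500.0), 2)
        # Negative amounts decrease the apparent balance, positive amounts increase it.
        signed_amount = amount if rng.random() < 0.5 else -amount
        transactions.append({
            "transaction_id": "FAKE-%08d-%04d" % (seed % 10**8, i),
            "amount": signed_amount,
            "date": (date.today() - timedelta(days=rng.randint(1, 365))).isoformat(),
            "type": rng.choice(TRANSFER_TYPES),
            "recipient": "merchant-%d" % rng.randint(1000, 9999),
        })
    return transactions

# Two different applications receive distinct, traceable fake records.
assert generate_fake_transactions("app-A") != generate_fake_transactions("app-B")

Because each record is derived from the application identifier, encountering one of these values outside the application's expected outputs points back to that application as the likely source. The same records can also be mixed into real account data, as described next. 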
Specifically, the first data set that is provided to the first application may include both real account data for a user that has elected to share account information with the first application and fake account data generated by the server. In particular, fake account data may be combined with real account data for the user. For example, the first data set may include one or more fake transactions which are inserted into a historical listing of real transactions for a user account. The fake transactions may be uniquely associated with the first application. That is, the set of fake transactions generated by the server and included in the first data set may be unique to the first application. Thus, the fake transactions generated for the first application may not be provided to any other third-party application. In this way, the fake transactions (and more generally, fake account data) which are provided to third-party applications may effectively act as a watermark on real account data that would allow for determining the source of a security breach, such as data leak or misuse. In operation408, the server monitors use of the first data set by the first application. Based on the monitoring, the server may determine when and how the data items of the first data set are utilized by the first application. In at least some embodiments, monitoring use of the first data set may include obtaining output data generated by the first application and evaluating the output data to determine whether different portions of the first data set have an effect on the output data. Specifically, the server may collect various data that is output by the first application and determine whether at least some of the fake data items of the first data set have an effect on the output data. For example, the output data may include display data generated by the first application. The display data may include, for example, data presented in application pages of a graphical user interface associated with the first application. The server may “crawl” the application pages to search for the fake data items of the first data set. In particular, the server may search the display data associated with one or more of the application pages to try to locate different ones of the fake data items. The search of display data may span across all application pages of the first application or only a subset of the application pages. For example, the server may iterate through all possible application pages or only a predetermined subset of all application pages, and search the display data associated with said application pages for the fake data items. As another example, the output data may include output code generated by the first application. That is, the server may obtain output code associated with the first application and evaluate the output code to determine whether any of the fake data items affects the output code. In particular, the server may obtain code that is outputted by the first application based on utilizing the first data set. For example, the output code of the first application may be searched to determine whether it contains one or more of the fake data items of the first data set. In at least some embodiments, monitoring use of the first data set may include performing a search of resources in a network based on a search query that includes portions of the first data set. 
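The page-crawling and output-evaluation steps described above can be pictured with the short sketch below (the function and variable names are assumptions; the collected text would in practice come from crawling application pages, screen-scraping, or capturing output code):

def scan_outputs_for_fake_items(collected_outputs, fake_items):
    """Classify injected fake values as found or unused in the collected output.

    collected_outputs: list of text blobs gathered from application pages or output code.
    fake_items: list of fake data values that were provided to the application.
    """
    combined = "\n".join(collected_outputs).lower()
    found = [item for item in fake_items if item.lower() in combined]
    unused = [item for item in fake_items if item.lower() not in combined]
    return {"found": found, "unused": unused}

# Example with made-up page text and fake transaction identifiers.
pages = [
    "Recent activity: FAKE-12345678-0001 wire transfer 42.10",
    "Spending summary for March",
]
result = scan_outputs_for_fake_items(pages, ["FAKE-12345678-0001", "FAKE-12345678-0002"])
# result["found"]  -> fake values that appear in the application's output
# result["unused"] -> fake values that never surface, candidates for redundant retrieval

The same fake values can also be searched for beyond the application's own outputs, as elaborated next. 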
For example, the server may perform keyword searches of a network, such as the Internet or the dark web, using search queries that include one or more of the fake data items. That is, the server may “crawl” the network to search for any of the fake data items which were provided to the first application. The server may, for example, search the content of web pages using keywords associated with the fake data items. The search may be a generic web search, or it may be a directed search of predetermined resources (e.g. web pages associated with certain entities, web pages where application data was previously leaked, etc.). In operation410, the server detects a trigger condition indicating a misuse of account data. In particular, based on monitoring use of the first data set by the first application, the server may detect an indication that a data security breach (e.g. data leak) and/or redundant data access has occurred. For example, the server may perform searches of a network for one or more of the fake data items when monitoring use of the first data set by the first application, in operation408. If the server locates at least one of the fake data items, it may determine that data has been leaked to the network. As explained above, the fake data provided to the first application may be unique to that application. If a fake data item that is unique to the first application is found during a search of the network, the server may identify the first application as being the source of the data leak. In at least some embodiments, a trigger condition may be associated with one or more predefined thresholds. By way of example, the server may determine that a data breach has occurred if the server detects at least a threshold quantity of fake data in a search of a network. A threshold may be defined as, for example, a percentage (e.g. certain percentage of the first data set or the fake data portion of the first data set) value or a frequency (e.g. number of times that parts of the fake data are located in the search of the network). If the percentage of the fake data detected during the search is greater than a threshold percentage, or if the number of times that the fake data is detected during the search exceeds a threshold frequency, the server may determine that a security breach (and/or misuse of account data) has occurred. As another example, the server may evaluate output data generated by first application, in operation408. If the server detects that one or more of the fake data items does not affect the output data, it may determine that the first application is retrieving more data than is required for its operations. The server may, for example, collect display data for various different application pages of the first application to determine whether the display content of the first application is affected by the fake data items. For example, the server may perform crawling, screen-scraping, optical character recognition, etc. on the application pages of the first application. Alternatively, or additionally, the server may obtain output code generated by the first application. The output code may be program code that is generated when the first application is executed using the first data subset as input. The output data of the first application can then be searched for the fake data items. In particular, search queries that include the fake data items can be used to search the output data. 
For example, the output data may be searched for each fake data item that is included in the first data set. If the server detects that one or more fake data items is not being utilized by the first application in generating output data, it may identify a misuse of the retrieved data (e.g. retrieval of redundant data). For example, if a certain fake data item is not found in a search of the output data of the first application, the fake data item may be determined to be redundant data, or data that is not required to be retrieved by the first application. When monitoring the output data of the first application, the server may be configured to evaluate only certain subsets of the output data. For example, the server may collect display output data for a predetermined set of application pages of the first application and determine, based on the collected output data, whether the fake data, or portions thereof, is affecting the outputs of the first application. If, after searching output data associated with the predetermined set of application pages, the server determines that certain fake data values are not affecting the outputs of the first application, the server may conclude that the fake data values represent redundant data which ought not to be retrieved by the first application. In other words, the evaluation of whether certain fake data retrieved by the first application are redundant is limited to a subset of all possible permutations of outputs of the first application. In response to detecting a trigger condition indicating misuse of account data, the server generates a notification identifying the misuse of the account data, in operation412. For example, the notification may indicate a type of data security threat associated with the first application, such as a data leak or retrieval of redundant data. The notification may identify the first application as being a potential source of the data security threat, a timestamp (e.g. date, time) associated with the detection, a network in which the fake data was detected, and any portions of the fake data that are determined to not be utilized by the first application. In some embodiments, the notification may also provide a risk score indicating a level of risk associated with the first application. A risk score for a third-party application may be obtained based on, for example, a frequency of detection of data security threat, types and quantity of data that is determined to have been leaked, type and quantity of account data that is accessed by the application, identity of the network(s) in which leaked data was detected, etc. The notification generated in operation412may also provide suggestions on how to remedy the detected data security vulnerabilities. For example, the notification may provide information about alternative applications having similar functionalities as the first application, suggest modifications to user consent for the first application to access the account data, or recommend revoking data access privileges granted to the first application. In operation414, the server transmits the notification to a computing device associated with an application user. That is, the application evaluation server may issue a notification regarding any data security threat or misuse of user account data. For example, the notification may be sent to the client device from which the third-party request to obtain account data originated. 
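A simplified sketch of assembling such a notification is given below; the risk-scoring weights and field names are illustrative assumptions rather than values taken from the disclosure:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MisuseNotification:
    app_id: str
    threat_type: str              # e.g. "data leak" or "redundant data retrieval"
    detected_at: str
    networks: list                # networks in which fake data was located, if any
    unused_items: list            # fake data items not utilized by the application
    risk_score: float
    suggestions: list = field(default_factory=list)

def illustrative_risk_score(leaked_items: int, redundant_items: int, detections: int) -> float:
    """Toy heuristic combining the factors mentioned above; the weights are arbitrary."""
    return min(100.0, 10.0 * leaked_items + 2.0 * redundant_items + 5.0 * detections)

def build_notification(app_id, threat_type, networks, unused_items, leaked_items, detections):
    return MisuseNotification(
        app_id=app_id,
        threat_type=threat_type,
        detected_at=datetime.now(timezone.utc).isoformat(),
        networks=networks,
        unused_items=unused_items,
        risk_score=illustrative_risk_score(leaked_items, len(unused_items), detections),
        suggestions=["Review or revoke the data access permissions granted to this application"],
    )

note = build_notification("app-A", "data leak", ["dark web forum"],
                          ["FAKE-12345678-0002"], leaked_items=1, detections=1)
# The notification object would then be serialized and pushed to the user's client device.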
In this way, the user of the client device may be notified of potential security vulnerabilities associated with granting the first application access to the user's account data. In at least some embodiments, the server may perform other actions based on detecting misuse of account data. For example, the server may automatically revoke access to all or parts of the user account data for the first application. That is, the access permissions for the first application to access user account data may be modified. As another example, the server may update a risk score associated with the first application to reflect the misuse of account data. If the server detects a data leak or redundant retrieval of data by the first application, a risk score of the first application may be updated by the server (e.g. risk score is increased) to indicate an elevated risk associated with allowing the first application to access user account data. Reference is now made toFIG.5, which shows, in flowchart form, an example method500for generating fake data for use in evaluating the data security of a third-party application. The method500may be implemented by one or more processors of a computing device, such as an application evaluation server. In at least some embodiments, the operations of method500may be implemented as part of method400ofFIG.4. In particular, the operations502to506may be performed as sub-routines of operation404of method400. As described above, an application evaluation server may generate fake data for use in testing the data security of third-party applications. In particular, the fake data may be included in a data set that is provided to a third-party application in response to a request by the application to obtain user account data. If the fake data portion is easily recognizable, the third-party application server may be configured to identify and remove the fake data portion from the data set. The operations of method500are designed to prevent detection of fake data by third-party applications. In particular, these operations may make it difficult for a third-party to determine which portions of the account data set that is provided to the third-party application are fake and which portions correspond to real user account data. In operation502, the server generates a first set of fake transfer operations representing transfer of a predetermined first amount from the user account. The first set may include one or more fake credit transactions for the user account, totaling a value equal to the first amount. That is, the total transfer value represented by the first set of fake transfer operations is equal to the first amount. In some embodiments, the first set may include at least a threshold number of fake credit transactions. For example, the first set may be required to include more than one credit transaction. In operation504, the server generates a second set of fake transfer operations, with a total transfer value that is equal to the first amount. The second set may include one or more fake debit transactions for the user account. The second set represents fake transfer operations which are designed to offset the effect of the first set of fake transfer operations on a balance associated with the user account. That is, the first and second sets of fake transfer operations are generated such that the net effect of the fake transfer operations on the actual balance of the user account is zero. 
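The offsetting construction of operations502and504can be sketched as follows; the amounts, splitting scheme, and labels are assumptions made for illustration, and the only property demonstrated is that the two sets of fake transfer operations cancel out so the apparent balance is unchanged:

import random

def generate_offsetting_fake_transfers(total_amount: float, per_set: int = 3, seed: int = 7):
    """Build two sets of fake transfer operations whose net effect on the balance is zero.

    The first set represents transfers out of the user account and the second set
    offsets them with transfers into the account of the same cumulative value.
    """
    rng = random.Random(seed)

    def split(total, parts):
        # Random cut points; the last part absorbs rounding so each set sums exactly to total.
        cuts = sorted(round(rng.uniform(0, total), 2) for _ in range(parts - 1))
        bounds = [0.0] + cuts + [total]
        amounts = [round(bounds[i + 1] - bounds[i], 2) for i in range(parts)]
        amounts[-1] = round(total - sum(amounts[:-1]), 2)
        return amounts

    outgoing = [{"amount": -a, "type": "fake transfer out"} for a in split(total_amount, per_set)]
    incoming = [{"amount": +a, "type": "fake transfer in"} for a in split(total_amount, per_set)]
    return outgoing, incoming

out_set, in_set = generate_offsetting_fake_transfers(250.00)
net = round(sum(t["amount"] for t in out_set) + sum(t["amount"] for t in in_set), 2)
assert net == 0.0  # the fake operations leave the actual balance unaffected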
In some embodiments, the second set may include at least a predefined number of fake debit transactions. For example, the second set may be required to include more than one debit transaction. Once the first and second sets of fake transfer operations have been generated, the server may insert the fake transfer operations in the transactions data for a real user account, in operation506. For example, the fake transfer operations may be included in a historical listing of transactions for a user account. The transfer operations may be associated with different transaction types, amounts, and/or timestamps (e.g. date, time). Reference is made toFIG.6, which shows, in flowchart form, another example method600for evaluating the data security of a third-party application that has been granted access, or attempts to gain access, to a protected data resource. Specifically, the method600allows for testing third-party applications to determine whether they collect more data than is required when obtaining account data for a user account. The method600may be implemented by one or more processors of a computing device. Specifically, the operations602and onward may be performed by an application evaluation server for evaluating the behavior of a third-party application in collecting and utilizing user account data. The operations of method600may be performed as alternatives to, or in addition to, the operations of method400described above. In operation602, the server receives, from a first application, a request to obtain account data for a user account associated with a protected data resource. The operation602may be performed in a similar manner as operation402of method400. In particular, the request may include, at least, identifying information for the user account, an identifier of the first application, login credentials of the user requesting to obtain the first account data, and the type(s) of account data that is requested. Upon receiving the request, the server generates a first subset of fake account data and provides the first subset to the first application, in operation604. The server then generates a second subset of fake account data and provides the second subset to the first application, in operation606. The first subset and the second subset contain almost identical data but vary in one aspect. In particular, the first subset may represent control data and the second subset may represent test data, where the control data and the test data differ in a single data parameter. For example, the second subset may be generated by simply changing one data value of the first subset. The server may be configured to automatically monitor outputs of the first application, in operation608. For example, after the two subsets of fake account data have been provided to the first application, the server may perform crawling, screen-scraping, optical character recognition, or other forms of monitoring of application output data associated with the first application. In particular, the server monitors outputs of the first application that are based on using the first subset of fake account data and compares those outputs with outputs of the first application that are based on using the second subset of fake account data. That is, the server obtains two sets of output data corresponding to the first and second subsets of fake account data. The server then determines, in operation610, whether the output data sets are different. 
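The comparison performed in operation610can be illustrated with the following sketch, in which a control data set and a test data set differ in exactly one field and the corresponding outputs are compared; the callable standing in for the application under test, and the toy data, are assumptions made purely for explanation:

def differs_in_single_field(control: dict, test: dict) -> bool:
    """True when the two fake data subsets share keys and differ in exactly one value."""
    if control.keys() != test.keys():
        return False
    return sum(control[k] != test[k] for k in control) == 1

def varied_field_is_redundant(app_under_test, control: dict, test: dict) -> bool:
    """Flag the varied field as redundant if changing it leaves the application output unchanged."""
    assert differs_in_single_field(control, test)
    return app_under_test(control) == app_under_test(test)

# Toy stand-in for the first application: its output only ever reflects the balance.
def toy_app(data):
    return "Your balance is %.2f" % data["balance"]

control = {"balance": 1000.00, "date_of_birth": "1990-01-01"}
test = {"balance": 1000.00, "date_of_birth": "1985-06-15"}  # only the birth date is varied
print(varied_field_is_redundant(toy_app, control, test))  # True: the birth date never affects output

In an actual test, the two outputs would be gathered by crawling or scraping the application's pages rather than by calling the application directly. 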
Specifically, the server attempts to determine whether the data item that was changed between the two subsets of fake data is of a type that is actually being used by the first application. If the two subsets of fake account data produce different output data sets, it may be an indication that the data item that was varied actually affects output of the first application. If, however, the two subsets of fake account data produce the same output data sets, the server may determine that the output of the first application is not affected by a change of the data item. In other words, the server may identify the data item as representing redundant data that is collected by the first application. Upon determining that the output data sets do not differ, the server may notify an application user (or other entity associated with the first application) that at least some account data collected by the first application is not used. In operation612, the server generates a notification that at least a portion of the data retrieved by the first application is redundant, and in operation614, the server transmits the notification to an application user (e.g. a client device associated with the application user). By iteratively changing data items in the control data set, the application evaluation server may identify data that is actually used by the first application (i.e. affects output of the first application) and data that is not used by the first application. In particular, if the server determines that a certain test data set produces an application output that is different from an output corresponding to the control data set, operations606to610may be repeated for evaluating a different test data set to determine whether another data item may be redundant. Reference is made toFIG.7, which shows, in flowchart form, an example method700for evaluating the data security of a third-party application, based on determining a pattern of data access by the third-party application. The method700may be implemented by one or more processors of a computing device, such as an application evaluation server. The operations702and onward may be performed as alternatives or in addition to the operations of methods400and600described above. When user data is shared with third-party applications, different applications may access the data in different ways. For example, some applications may access the user data immediately when the application is run on a client device, while other applications may only access user data when a particular functionality that requires user data is selected. Some applications may access data even when the application is not in use. For example, an application may periodically (e.g. daily, monthly, or at other scheduled intervals) retrieve user data from a remote data resource. While such periodic retrieval of data may allow the application to process user data more efficiently, frequent sharing of user data may increase the risk of data security breaches—transmission of data, even in an encrypted format, can elevate the risk of interception of the data by malicious parties. Further, fresh data is inherently riskier and more vulnerable than stale data, as stale data (e.g. old transaction data, etc.) may be of lesser value than updated data if such data were to fall into the wrong hands. As described below, an application evaluation server may determine a data access pattern for a third-party application. 
A data access pattern describes a pattern of access operation, performed by a third-party application, for accessing account data of a user account in a protected data resource. Accordingly, a data access pattern for a third-party application is associated with one or more user accounts that the application accesses in order to retrieve account data. Understanding the pattern of data access by a third-party application may enable an application evaluation entity to assess risk levels associated with the application and its various operations, anticipate potential data security vulnerabilities, and provide recommendations for mitigating security risks or alternatives for the application. An application evaluation server may determine a data access pattern by running automated tests on the application. The tests may be performed, for example, in a sandboxed testing environment. The sandbox may, for example, include one or more emulators including an operating system (e.g. iOS, Android, etc.) emulator. In operation702, the server launches, in an automated test environment, a test instance of a first application. For example, the server may execute a virtual machine which simulates the operations of the first application. In particular, the test instance of the first application is configured to perform data access operations of accessing account data of at least one user account in a protected data resource. In operation704, the server detects at least one data retrieval operation by the first application of retrieving data from a protected data resource. Specifically, the server receives, from the first application, one or more requests to retrieve account data from the user account. The requests may be received directly from a client device on which the first application resides, or from a third-party server associated with the first application. A request for data retrieval may include, for example, identity of the first application, type(s) of account data that is requested, and identifying information for the user account. For each of the at least one data retrieval operation, the server identifies an application state of the first application at a time of detecting the at least one data retrieval operation, in operation706. When the server receives a request for data retrieval from the first application, the server determines a corresponding application state of the first application at a time of detecting the data retrieval request. For example, for each data retrieval operation, a mapping of the data retrieval operation to an application state may be determined by the server. In some embodiments, the server may determine the type(s) of account data that are obtained by the at least one data retrieval operation. An application state may, for example, be an execution state of the first application. That is, the application state may be an indicator of whether the first application is being executed (i.e. application is in use) or not executed (non-use). The server may determine, in operation706, whether or not the first application is being executed when the at least one data retrieval operation is detected. Additionally, or alternatively, an application state may indicate a particular application page, feature, or function that is accessed on the first application. For example, an application state may identify an application page of the first application that a user is currently viewing or otherwise interacting with. 
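A minimal sketch of the bookkeeping behind operation706is shown below. It is illustrative only and assumes hypothetical record and method names rather than any particular test-environment API; each detected data retrieval operation is recorded together with the application state observed at the time of detection, which is the raw material from which a data access pattern can later be determined.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class RetrievalObservation:
    data_types: List[str]       # type(s) of account data requested
    executing: bool             # execution state: application in use vs. non-use
    app_page: Optional[str]     # page/feature active at detection time, if any
    detected_at: datetime

@dataclass
class AccessLog:
    observations: List[RetrievalObservation] = field(default_factory=list)

    def record(self, data_types, executing, app_page=None):
        """Map one detected data retrieval operation to the current application state."""
        self.observations.append(RetrievalObservation(
            data_types=data_types,
            executing=executing,
            app_page=app_page,
            detected_at=datetime.utcnow(),
        ))
```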
In operation708, the server determines a data access pattern for the first application of accessing the protected data resource. The data access pattern is determined based on monitoring the at least one data retrieval operation by the first application and application states of the first application associated with the at least one data retrieval operation. The data access pattern may be used in categorizing the first application based on its data retrieval behavior. By way of example, the first application may be categorized as: an application that only retrieves data when the application is executed; an application that periodically retrieves data, but at a low retrieval frequency; an application that periodically retrieves data at a high retrieval frequency; or an application that only retrieves data when certain functionalities of the application that require user data are selected, for example, by user input. In some embodiments, the server may be configured to put the test instance of the first application through various different operations to determine whether the first application retrieves user data in response to performing such operations. In particular, the server may cause the test instance to perform a plurality of predetermined operations of the first application and monitor for data retrieval activities. A data access pattern for the first application may be generated based on the monitoring of operations which trigger data retrieval by the first application. For example, the server may perform “crawling” operations on the first application (i.e. navigating to various screens or pages of the first application) and monitor whether the first application retrieves user data in response to one or more of the navigation operations. In operation710, the data access pattern for the first application is presented on a client device associated with a user. For example, the data access pattern may be presented as part of data security evaluation results of the first application. Reference is made toFIG.8, which shows, in flowchart form, another example method800for evaluating the data security of a third-party application, based on determining a pattern of data access by the third-party application. Specifically, the operations of method800may be performed in determining a data access pattern of accessing, by a first application, account data for a user account in a protected data resource. The method800may be implemented by one or more processors of a computing device, such as an application evaluation server. In some cases, the evaluation of data access pattern information for a third-party application may be performed using fake data in place of real account data. In particular, a test user account may be created for use in evaluating a data access pattern. The server creates a test user account associated with the protected data resource, in operation802. The test user account includes fake data. For example, the fake data may comprise fake user account data, such as fake historical transactions data and fake personal data (e.g. fake name, address, etc.). The fake historical transactions data may include, for example, data for at least one fake transfer operation of transferring value to or from the test user account. In some embodiments, the fake historical transactions data may comprise a plurality of fake credit and debit transactions that are generated by the server. 
In operation804, the server receives, from the first application, a request to obtain account data for a user account associated with the protected data resource. As a response to the request, the server may provide, to the first application, a first data set that includes at least the fake data of the test user account, in operation806. In particular, the first data set may include fake user account data that is generated by the server. The server may then determine, in operation808, a data access pattern for the first application based on requests, by the first application, to obtain account data and one or more application states of the first application associated with such requests. For example, the server may determine mappings of account data retrieval requests to application states for the first application. The server then presents the data access pattern for the first application on a client device associated with a user, in operation810. Operations808and810may be performed in a similar manner as operations708and710of method700, respectively. Reference is made toFIG.9, which shows, in flowchart form, another example method900for evaluating the data security of a third-party application, based on determining a pattern of data access by the third-party application. The operations of method900may be performed by one or more processors of a computing system, such as an application evaluation server. In operation902, the server launches a test instance of a first application in an automated test environment. The server then monitors for requests, from the first application, to retrieve user data from a protected data resource, in operation904. The operations902and904may be performed in a similar manner as operations702and704of method700, respectively. Upon receiving a data retrieval request from the first application, the server determines an application execution state of the first application. That is, in operation906, the server determines whether the first application is being executed or not being executed. If the first application is being executed at the time of the data retrieval request, the server may be configured to identify those operations which cause the first application to retrieve user data, in operation908. For example, the server may identify one or more functionalities of the first application which initiate data retrieval operations by the first application. In some embodiments, the data retrieval operations may be performed in response to a user-initiated action in the first application. For example, the first application may receive a user selection of a functionality associated with the first application that requires user data. In response to receiving such user input, the first application may generate a request to obtain certain user data, such as account data for a user account in a protected data resource. In operation912, the server may monitor data access by the first application to track application usage. If an application is categorized as one that accesses data when the application is being executed (i.e. in use), the server may determine whether the application is being used or has recently been used. Usage data may be useful, for example, for notifying a user of an application that is no longer being used but that still has access to the user's data. If a user is no longer using an application, it may be preferred to cancel that application's access privileges for accessing the user's data. 
For example, the server may detect non-use of a third-party application by confirming that the application has not accessed user data for at least a predefined threshold time period. Based on the monitoring, the server may provide, to the user, a notification indicating application usage data for the first application, or take an action regarding the first application's data access privileges. For example, the server may revoke or modify the access permissions for the first application to access account data for the user. If, on the other hand, the server determines that the first application is not being executed at the time of the data retrieval request, the server may determine a frequency of data retrieval. Upon determining that the first application accesses user data during periods of non-use, the server may obtain a retrieval frequency or period (e.g. whether the first application retrieves user data daily, weekly, monthly, etc.). The server then determines a data access pattern for the first application of accessing the protected data resource, in operation914, and presents the data access pattern on a client device associated with a user. In at least some embodiments, the data access pattern may include data relating to, for example, the operations which cause the first application to retrieve user data (identified in operation908), application usage data for the first application (obtained in operation912), and/or a frequency of data retrieval by the first application during periods of non-use (determined in operation910). Reference is made toFIG.10, which shows, in flowchart form, an example method1000for ranking third-party applications based on their respective data access patterns. The operations of method1000may be performed by one or more processors of a computing system, such as an application evaluation server. The method1000may be implemented as part of or in addition to the methods described above. In operation1002, the server determines a data retrieval pattern for a first application. The data retrieval pattern may be obtained according to techniques described above with respect to methods700,800and900. Based on the data retrieval pattern, the first application may be associated with a risk score. In operation1004, the server assigns a risk score for the first application. Generally, the server assigns a better (e.g. lower) risk score for applications that are deemed to have safe data access/retrieval patterns. For example, an application that only retrieves data when a user selects a certain functionality that requires user data would be assigned a better risk score than an application that retrieves user data periodically, even during periods of non-use, at a high frequency. The risk score determined for the first application may be displayed to a user. The risk score information may be displayed, for example, when a user is prompted to consent to data sharing with the first application. In some embodiments, the server may compare the data retrieval pattern of the first application with the patterns of other applications. Specifically, the server may generate a ranking of applications based on the "risk level" associated with the applications' data retrieval patterns. Such ranking may be useful for a device user in deciding whether to use a certain application or whether to opt for an alternative application that serves similar functions and has a lower risk level for its data access behavior. 
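The scoring and ranking of operations1002and1004could be reduced to a simple lookup over data-access-pattern categories, as in the hedged sketch below. The category names and numeric scores are illustrative assumptions only; following the convention described above, a lower score is treated as better (safer).

```python
# Illustrative scores: lower score = safer data access/retrieval pattern (assumed convention).
PATTERN_RISK = {
    "on_user_selected_function_only": 1,  # retrieves data only when a data-requiring function is selected
    "only_while_executing": 2,            # retrieves data only when the application is in use
    "periodic_low_frequency": 3,
    "periodic_high_frequency": 4,         # retrieves data frequently, even during periods of non-use
}

def assign_risk_score(access_pattern_category: str) -> int:
    """Unknown categories default conservatively to the worst known score."""
    return PATTERN_RISK.get(access_pattern_category, max(PATTERN_RISK.values()))

def rank_applications(apps):
    """apps: iterable of (app_id, access_pattern_category); best (lowest) score first."""
    return sorted(apps, key=lambda app: assign_risk_score(app[1]))
```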
When a user consents to share their data with a third-party application, it is desirable to monitor for any changes to the application's (or associated third-party server's) behavior. If the data access behavior of the application changes after user consent is given, the user should be notified of the changes such that they can be kept up-to-date on the various parameters of access (e.g. frequency, type(s) of data retrieved, etc.) of user data by the application. In this way, the user can exercise informed control of access to the user's data by third-parties. Reference is made toFIG.11, which shows, in flowchart form, an example method1100for evaluating the data security of a third-party application. Specifically, the method1100allows for monitoring changes in data access behavior by a third-party application. The operations of method1100may be performed by one or more processors of a computing system, such as an application evaluation server. In operation1102, the server launches a test instance of the first application in an automated test environment, such as a sandbox. The server then obtains a data access signature of the first application, in operation1104. The data access signature is a representation of the behavior of the first application in accessing data that the user has consented to share with the application. The data access signature is obtained based on identifying at least one application state of the first application and user account data that is retrieved by the first application in the at least one application state. In at least some embodiments, the at least one application state of the first application comprises an execution state of the first application. That is, the application state indicates whether the first application is being executed (i.e. application has been launched or is in use) or not being executed (non-use). The data access signature may thus indicate, for at least one application execution state of the first application, account data which is retrieved by the first application. In some embodiments, the data access signature may indicate the types of account data which are accessed by the first application in various application execution states. By way of example, certain data may be accessed when the first application is not running, certain data may be accessed when the first application is launched, and certain data may be accessed when a particular graphical user interface (e.g. display of an application page) of the application is displayed. In some embodiments, the data access signature may indicate, for one or more application states of the first application, a frequency of retrieval of account data by the first application. For example, the server may determine a schedule of data retrievals by the first application, which may include, for example, determining period of time between consecutive data retrieval operations, any predefined times for data retrievals, etc. In at least some embodiments, the data access signature may include a data access pattern, as described above with reference to methods700to1000inFIGS.7to10. Generally, a data access pattern may be determined by performing automated test processes on the application in a sandboxed test environment. For example, fake user data may be provided to an application, and an application evaluation entity may monitor the application state(s) (e.g. execution state) together with data retrieval operations by the application. 
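As a rough illustration of what a data access signature might capture, and of how a later change could be detected, consider the hypothetical structure below. The field names and comparison rules are assumptions made for illustration, not a definitive signature format.

```python
from dataclasses import dataclass
from typing import Dict, Set

@dataclass
class StateSignature:
    data_types: Set[str]        # types of account data retrieved in this application state
    retrievals_per_day: float   # observed retrieval frequency in this application state

# A signature maps an application state name to what data is retrieved in that state and how often.
Signature = Dict[str, StateSignature]

def detect_signature_change(consented: Signature, observed: Signature) -> bool:
    """True if a new data type or a higher retrieval frequency appears in any application state."""
    for state, current in observed.items():
        baseline = consented.get(state)
        if baseline is None:
            return True  # retrieval in a state not observed at consent time (treated here as a change)
        if current.data_types - baseline.data_types:
            return True  # retrieval of a type of account data not retrieved at the time of consent
        if current.retrievals_per_day > baseline.retrievals_per_day:
            return True  # more frequent retrieval than at the time of the user's consent
    return False
```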
In some embodiments, the data access signature may be determined using artificial intelligence. For example, in a test environment, inputs and outputs of the first application may be monitored in order to obtain a data access signature. In operation1106, the server receives, from a client device associated with the user account, an indication of access permissions for the first application to access a user account for retrieving account data. That is, the server receives consent from the user for the first application to access user account data. When the data access signature for the first application is determined, it may be stored together with the access permissions, i.e. user's consent. The access permissions for the first application may be stored in association with the data access signature by the server. The server then continues to monitor the first application to determine whether its data access behavior changes. In operation1108, the server detects a change in the data access signature of the first application.FIG.12shows an example method for monitoring for changes to the data access signature of an application. Operations1202to1208may, for example, be performed as sub-routines of the operation1108of method1100, by an application evaluation server. When monitoring, in operation1202, data retrieval operations by the first application, the server may detect that the first application retrieves a type of account data that it did not retrieve at the time of the user's consent. In particular, for at least one of the application states of the first application, the server may determine that the first application retrieves data that is of a different type than the type(s) of data which were accessed at the time of consent. In response to detecting such retrieval to a new type of account data, the server may notify a user of a change in access signature of the first application. Additionally, the server may detect that the data access signature has changed upon detecting that, in at least one of the application states, the first application retrieves account data from the user account more frequently than a frequency at which data was retrieved (or proposed to be retrieved) at the time of user consent. Thus, in response to detecting the change in the access signature, the server notifies the user of the detected change, in operations1110and1208. In some embodiments, the notification may indicate, among other information, the nature of the change in the data access signature (or data access behavior of the first application) and a time of detection of the change. Reference is made toFIG.13, which shows, in flowchart form, an example method1300for ranking third-party applications based on their risk scores. The risk scores are determined based on, for example, automated test processes for evaluating the data security of the third-party applications. The operations of method1300may be performed by one or more processors of a computing system, such as an application evaluation server. In operation1302, the server launches a test instance of a first application in an automated test environment. The server then obtains a data access signature of the first application, in operation1304. The data access signature is based on at least one application state and account data retrieved by the first application in the at least one application state. 
For example, the data access signature may indicate mappings of data retrieval operations by the first application to the application states in which those data retrieval operations are performed. The server identifies an application category for the first application, in operation1306. The categorization may, for example, be based on the application's purpose or features (e.g. “personal financial manager” may be one category). The server may then assign, to the first application, a risk score that is based on the data access signature for the first application, in operation1308. Further, in some embodiments, the applications of a given category may be ranked based on their respective risk scores, in operation1310. Applications that access less data or that access data only when needed (e.g. when a functionality requiring user data is selected) or that access data less frequently may generally receive more favorable scores. In operation1302, the server notifies a user of the determined ranking of the first application relative to other applications of its own category. For example, the ranking of the first application may be included in a notification of a detected change in data access signature for the first application. The ranking information may be useful for a user in deciding whether to replace the first application with an alternative in the same category with a better rank. The example embodiments of the present application have been described above with respect to third-party applications that are resident on a user's client device. It should be noted, however, that the disclosed systems and methods may be applicable more generally for managing user account access requests by various different types of third-party applications or services. For example, the third-party applications may be cloud-based applications that are available to users on-demand via a computer network (e.g. Internet), or web-based applications that are hosted on the web and run in a web browser. The various embodiments presented above are merely examples and are in no way meant to limit the scope of this application. Variations of the innovations described herein will be apparent to persons of ordinary skill in the art, such variations being within the intended scope of the present application. In particular, features from one or more of the above-described example embodiments may be selected to create alternative example embodiments including a sub-combination of features which may not be explicitly described above. In addition, features from one or more of the above-described example embodiments may be selected and combined to create alternative example embodiments including a combination of features which may not be explicitly described above. Features suitable for such combinations and sub-combinations would be readily apparent to persons skilled in the art upon review of the present application as a whole. The subject matter described herein and in the recited claims intends to cover and embrace all suitable changes in technology. | 92,809 |
11861018 | DETAILED DESCRIPTION Methods and systems provided herein advantageously enable a networked, cloud-based server device to dynamically access, diagnose and assess security attributes, including resilience and vulnerability attributes, of a software application that is under execution. Solutions herein provide dynamic application security testing by subjecting the software application, while under execution, to directed attack vectors from a scanning application, identifying vulnerabilities, and generating a dynamic security vulnerability score. As referred to herein, a software application includes web-based application programs as deployed, software as a service (SaaS), a cloud managed service provided application program. In particular, methods and systems herein assess a dynamic security vulnerability during execution of software application or program in its running state. As used herein, the term “security vulnerability” means a programming error, feature or attribute that produces unintended behavior(s) and results in an application which may enable malicious code to bypass security features built into the application, whereupon, once the application's security features are bypassed, the malicious code can use the application as a gateway for appropriating or corrupting sensitive, protected, or confidential data. The term “dynamic” as used herein refers to actions performed during real-time application program execution in one or more processors of a computing device for its intended purpose. Dynamic security vulnerability or risk can be diagnosed and scored or ranked by utilizing various inputs, in some embodiments attack vectors as provided herein, to induce unexpected execution results in order to quantify a security risk associated with a particular aspect of a software product, such as a security risk associated with exploitation of a security vulnerability that is inherent to the software application. In this manner, dynamic assessment and security risk scoring associated with exploitation of a security vulnerability for a software application can contribute to more effectively identifying, prioritizing, managing and pre-empting security risks to an enterprise organization. Furthermore, a dynamic security vulnerability score as proposed herein may be used to determine whether and to what extent to trust a web-based software application including software as a service (SaaS) applications, a website or similar infrastructure and software components. In other embodiments, the system can identify which of the various factors used in generating the security reliance score would have the most impact on the security vulnerability diagnostic score, thus assisting and directing administrators or others to evaluate and improve the impact of changes within an enterprise. In accordance with a first example embodiment, a method of dynamic testing and diagnostic assessment of security vulnerability of cloud- or web-based enterprise software applications is provided. The method comprises directing, to a software program under execution, a series of attack vectors; diagnosing a set of results associated with the software execution as constituting one of a security vulnerability and not a security vulnerability, the set of results produced based at least in part on the attack vectors; and assessing a dynamic security vulnerability score for the software program based at least in part on the diagnosing. 
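For concreteness, the three operations of the first example embodiment, namely directing attack vectors, diagnosing the results, and assessing a score, could be sketched roughly as follows. This is a hedged, minimal illustration: the `AttackVector` fields reflect the attack data set contents described herein (an identifier of a class or type of attack, data values, and a reference to or copy of an attack data set), while `send_to_application` and `diagnose_response` stand in for functionality an actual scanning application would provide.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AttackVector:
    attack_class: str                   # identifier of a class/type of attack (e.g. "sql_injection")
    payload_values: List[str]           # data value(s) included within the attack data set
    data_set_ref: Optional[str] = None  # reference to, or copy of, a predetermined attack data set

def assess_dynamic_vulnerability(send_to_application, diagnose_response, vectors):
    """Direct each attack vector at the running application, diagnose each result,
    and return the results diagnosed as security vulnerabilities."""
    findings = []
    for vector in vectors:                       # direct the series of attack vectors
        response = send_to_application(vector)
        if diagnose_response(vector, response):  # diagnose: security vulnerability or not
            findings.append((vector, response))
    return findings                              # input to the scoring step
```

A dynamic security vulnerability score would then be assessed from these findings, for example by the weighted aggregation discussed below.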
In general, a higher dynamic security vulnerability score may be calculated or assessed in instances where the assessment indicates lower security risk in terms of higher resilience to potential dynamic security threats. On the other hand, a lower score may be merited where the assessment indicates an increased security risk or a lessened resilience to potential software security threats. In some embodiments, the dynamic security vulnerability score may be an aggregation of the set of results that constitute a security vulnerability that is attributable to the series of attack vectors. In one variation, the dynamic security vulnerability score may be based on a weighted aggregation of the set of results constituting the security vulnerability that is attributable to the respective ones of the series of attack vectors. In this variation, reported vulnerabilities from different attack vectors would be weighted differently when assessing the score, since errors from certain attack vectors might be considered as having more serious potential and consequences for security violations than others. In some practical uses of the methods and systems herein, results of the diagnostic assessment and scoring may be used to certify a web-based software application, or a provider of such application, under prevailing and pre-established proprietary, industry or governmental standards pertaining to software security vulnerability. In accordance with a second example embodiment, a non-transitory medium storing instructions executable in a processor of a server computing device is provided. The instructions are executable to assess a dynamic security vulnerability score for a software application under execution by directing, to the software program under execution, a series of attack vectors; diagnosing a set of results associated with the software execution as constituting one of a security vulnerability and not a security vulnerability, the set of results produced based at least in part on the attack vectors; and assessing a dynamic security vulnerability score for the software program based at least in part on the diagnosing. In accordance with a third example embodiment, a server computing system for dynamic testing and diagnostic assessment of security vulnerability of cloud- or web-based enterprise software applications is provided. The system comprises a server computing device that includes a memory storing instructions and one or more processors for executing the instructions stored thereon to direct, to a software program under execution, a series of attack vectors; diagnose a set of results associated with the software execution as constituting one of a security vulnerability and not a security vulnerability, the set of results produced based at least in part on the attack vectors; and assess a dynamic security vulnerability score for the software program based at least in part on the diagnosing. One or more embodiments described herein provide that methods, techniques, and actions performed by a computing device are performed programmatically, or as a computer-implemented method. Programmatically, as used herein, means through the use of code or computer-executable instructions. These instructions can be stored in one or more memory resources of the computing device. 
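A minimal sketch of the weighted aggregation described above might look like the following. The weight table and the sign convention (a raw weighted total that is inverted into a resilience-oriented score, so that a higher score reflects fewer or less serious diagnosed vulnerabilities) are purely illustrative assumptions.

```python
# Illustrative per-attack-class weights: attack vectors whose exploitation is considered more
# serious contribute more heavily to the aggregated result (assumed values).
ATTACK_CLASS_WEIGHTS = {
    "sql_injection": 5.0,
    "cross_site_scripting": 4.0,
    "denial_of_service": 3.0,
    "path_disclosure": 2.0,
}

def weighted_vulnerability_total(findings, default_weight=1.0):
    """findings: iterable of attack-class names whose results were diagnosed as vulnerabilities."""
    return sum(ATTACK_CLASS_WEIGHTS.get(cls, default_weight) for cls in findings)

def dynamic_security_vulnerability_score(findings, max_total=100.0):
    """Map the weighted total onto a score where a higher value reflects greater resilience."""
    total = min(weighted_vulnerability_total(findings), max_total)
    return max_total - total
```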
Furthermore, one or more embodiments described herein may be implemented through the use of logic instructions that are executable by one or more processors of a computing device, including a server computing device. These instructions may be carried on a computer-readable medium. In particular, machines shown with embodiments herein include processor(s) and various forms of memory for storing data and instructions. Examples of computer-readable mediums and computer storage mediums include portable memory storage units, and flash memory. A server computing device as described herein utilizes processors, memory, and logic instructions stored on computer-readable medium. Embodiments described herein may be implemented in the form of computer processor-executable logic instructions or programs stored on computer memory mediums. System Description FIG.1illustrates, in an example embodiment, cloud-based system100for dynamic security diagnostic assessment of web-based enterprise software applications currently under execution. Server computing system or device101includes software security dynamic assessment module105embodied according to computer processor-executable instructions stored within a non-transitory memory. Server101is in communication via communication network104with computing device102. Computing device102, which may be a server computing device in some embodiments, may host enterprise software program or application106for execution thereon. Software program106in another embodiment may be a web-based application program. Database103, for example storing enterprise data accessible to software application106under execution, is communicatively accessible to computing device102. FIG.2illustrates, in an example embodiment, architecture200of server computing system101hosting software security dynamic assessment module105for security diagnostic assessment of enterprise software applications. Server computing system or device101, also referred to herein as server101, may include processor201, memory202, display screen203, input mechanisms204such as a keyboard or software-implemented touchscreen input functionality, and communication interface207for communicating via communication network104. Memory202may comprise any type of non-transitory system memory, storing instructions that are executable in processor201, including such as a static random access memory (SRAM), dynamic random access memory (DRAM), synchronous DRAM (SDRAM), read-only memory (ROM), or a combination thereof. Software security dynamic assessment module105includes processor-executable instructions stored in memory202of server101, the instructions being executable in processor201. Software security dynamic assessment module105may comprise portions or sub-modules including attack vectors module210, dynamic vulnerability diagnostic module211and dynamic vulnerability scoring module212. Processor201uses executable instructions of attack vectors module210to direct, to a software program under execution, a series of attack vectors. In an embodiment, the software program comprises a cloud based software program that is communicative accessible to the security assessing server during the execution. The scanning application at server101directing the attack vectors may have no foreknowledge of the execution attributes of the software application under execution. 
For example, the scanning application may not have, nor does it need, access to source code of the application under execution, but is configured by way of the attack vectors to detect vulnerabilities by actually performing attacks. Identifying and targeting the application may be based partly on having acquired no prior knowledge of execution attributes and source code of the software application. The terms “application” and “program” are used interchangeably herein. A series of attack descriptions, or an attack vectors as referred to herein, constituted of script code in some embodiments, can be accessed from a data store such as a database or from memory202of server device101. The attack description may be constituted of a data set, constituted of script code in some embodiments, that encodes an attack or attempt to exploit or otherwise compromise a security vulnerability of the software program106under execution. For example, in embodiments, the attack description can include an identifier of a class or type of attack, a data value or group of data values that will be included within the attack data set, a reference to a particular attack data set, or a copy of an attack data set. In an embodiment, one or more attack vectors of the series comprises a data set that encodes an attempt to exploit a security vulnerability aspect of the software application under execution. In some variations, the data set may include one or more of an identifier of a class and a type of attack, a data value, a group of data values, a reference to a predetermined attack data set, and a copy of an attack data set. Processor201uses executable instructions stored in dynamic vulnerability diagnostic module211to diagnose a set of results associated with the software execution as whether respective ones of the results constitute a dynamic security vulnerability or not, the set of results being produced based at least in part on the attack vectors as directed to the software program during execution. In some aspects, the security vulnerability may relate to one or more of a cross-site scripting, a SQL injection, a path disclosure, a denial of service, a memory corruption, a code execution, a cross-site request forgery, a PHP injection, a Javascript injection and a buffer overflow. In some embodiments, diagnosing a security vulnerability comprises the software application providing an error response indicating that at least one attack vector in the series of attack vectors successfully exploited a security vulnerability of the application. In some cases, based on a result of the dynamic testing, a scanner in accordance with server101deploying the attack vectors may not report a dynamic security vulnerability for the application. In such cases, the application would have nullified the attack data set, thus pre-empting or preventing a security vulnerability, and accordingly provided an error response to indicate that a requested service or operation could not be executed because some input, for instance the attack data set, was improper. The dynamic security vulnerability diagnosis in this case would not report a security vulnerability for the application because the application did not use the attack data set in a manner that would allow exploitation of the targeted security vulnerability. Processor201uses executable instructions stored in dynamic vulnerability scoring module212to assess a dynamic security vulnerability score for the software program based at least in part on the diagnosing. 
In some embodiments, the dynamic security vulnerability score may be an aggregation of the set of results constituting a security vulnerability that is attributable to the series of attack vectors. In one variation, the dynamic security vulnerability score may be based on a weighted aggregation of the set of results constituting the security vulnerability that is attributable to the respective ones of the series of attack vectors. In this variation, reported vulnerabilities from different attack vectors would be weighted differently when assessing the score, as some errors from different attack vectors might be considered as having more serious potential and consequences for security violations than others. In embodiments, a higher security vulnerability diagnostic score may be determined or assigned in instances where the particular attribute contributes to, or indicates, a lower security risk or greater resilience to potential security threats. On the other hand, a lower score may be merited where assessment of a given attribute contributes to, or indicates, an increased security risk or a lessened resilience to potential software security threats. It is contemplated that a security vulnerability score or similar security assessment may be applied to, and associated with a particular software provider, or even a SaaS enterprise user, in accordance with dynamic testing techniques as provided herein. In some aspects, security performance indicators may be assigned or determined for a given corps of programmers, or even for individual programmers, who deploy the web-based software, or contributed in definable ways to development of the software application. Such performance indicators may be assigned or derived at least in part based on the software security dynamic testing and assessment techniques disclosed herein. Software security performance indicators may be tracked and updated, for example using key performance indicator (KPI) measurements of dynamic security vulnerability instances. In some aspects, the dynamic vulnerability scores may be correlated with performance criteria in accordance with pre-established proprietary, industry or governmental standards. Where such pre-established standards provide for certifications, such certifications may be applied or awarded to those software applications that merit, in accordance with the pre-established standards, requirements for software security vulnerability or resilience attributes based on the dynamic testing and scoring techniques disclosed herein. In such certification context, assigning a certification status to the software program may be based at least in part on the dynamic security vulnerability score in conjunction with the pre-established certification standard. In embodiments, a higher dynamic security vulnerability diagnostic score may be determined or assigned in instances where the particular attribute contributes to, or indicates, a lower security risk or greater resilience to potential security threats. On the other hand, a lower score may be merited where assessment of a given attribute contributes to, or indicates, an increased security risk or a lessened resilience to potential software security threats. 
In related embodiments, higher dynamic security vulnerability scores may be correlated with a higher potential for compromise of sensitive enterprise data by way of data corruption or unauthorized appropriation, a level of control ceded to an attacker, an amount of financial damage caused to an enterprise using, selling or distributing the software program, and a level of commercial integrity harm to an enterprise using, distributing or selling the software program. Based on such correlating, monetary premiums of a risk insurance policy may be assessed for an enterprise using, selling or distributing the web-based software program, commensurate with the potential harm to the enterprise, including monetary and commercial reputation or integrity harm considerations. In certain aspects, dynamic security vulnerability scores as proposed herein may be used to determine whether and to what extent to trust an enterprise web-based software application, website or similar infrastructure and software components. In related embodiments, the techniques disclosed herein may be used to identify which of the various factors used in generating the dynamic security score would have the most critical software security impact, thus assisting and directing system administrators and others evaluate and improve the impact of changes. Methodology FIG.3illustrates, in an example embodiment, method300of operation of a server computing system101for dynamic security diagnostic assessment of web-based software applications, method300being performed by one or more processors201of server computing device101. In describing the example ofFIG.3, reference is made to the examples ofFIG.1andFIG.2for purposes of illustrating suitable components or elements for performing a step or sub-step being described. Examples of method steps described herein relate to the use of server101for implementing the techniques described. According to one embodiment, the techniques are performed by software security dynamic assessment module105of server101in response to the processor201executing one or more sequences of software logic instructions that constitute software security dynamic assessment module105. In embodiments, software security dynamic assessment module105may include the one or more sequences of instructions within sub-modules including attack vectors module210, dynamic vulnerability diagnostic module211and dynamic vulnerability scoring module212. Such instructions may be read into memory202from machine-readable medium, such as memory storage devices. In executing the sequences of instructions contained in attack vectors module210, dynamic vulnerability diagnostic module211and dynamic vulnerability scoring module212of software security dynamic assessment module105in memory202, processor201performs the process steps described herein. In alternative implementations, at least some hard-wired circuitry may be used in place of, or in combination with, the software logic instructions to implement examples described herein. Thus, the examples described herein are not limited to any particular combination of hardware circuitry and software instructions. At step310, processor201executes instructions of attack vectors module210to direct, from security assessing server101, a series of attack vectors to software program under execution106at computing device102. In an embodiment, the software program comprises a cloud based software program that is communicative accessible to the security assessing server during the execution. 
The scanning application at server101directing the attack vectors may have no foreknowledge of the execution attributes of the software application under execution. For example, the scanning application may not have access to source code of the application under execution, but is configured by way of the attack vectors to detect vulnerabilities by actually performing attacks. Identifying and targeting the application may be based partly on having acquired no prior knowledge of execution attributes and source code of the software application. In some embodiments, a series of attack descriptions, or attack vectors as referred to herein, constituted of script code, can be accessed from a data store such as a database or from memory202of server device101. The attack description may be constituted as a data set that encodes an attack or attempt to exploit a security vulnerability of the software program106under execution. For example, in embodiments, the attack description can include an identifier of a class or type of attack, a data value or group of data values that will be included within the attack data set, a reference to a particular attack data set, or a copy of an attack data set. In an embodiment, one or more attack vectors of the series may include a data set that encodes an attempt to exploit a security vulnerability aspect of the software application under execution. In some variations, the data set may include one or more of an identifier of a class and a type of attack, a data value, a group of data values, a reference to a predetermined attack data set, and a copy of an attack data set. At step320, processor201of server computing device101executes instructions included in dynamic vulnerability diagnostic module211to diagnose a set of results associated with the software execution as to whether respective ones of the results constitute a security vulnerability or not, the set of results being produced based at least in part on the attack vectors. In some aspects, the security vulnerability may relate to one or more of a cross-site scripting, a SQL injection, a path disclosure, a denial of service, a memory corruption, a code execution, a cross-site request forgery, a PHP injection, a Javascript injection and a buffer overflow. In some embodiments, diagnosing a security vulnerability comprises the software application providing an error response indicating that at least one attack vector in the series of attack vectors successfully exploited a security vulnerability of the application. In some cases, based on a result of the dynamic testing, a scanner in accordance with server101deploying the attack vectors may not report a dynamic security vulnerability for the application. In such cases, the application would have nullified the attack data set, thus pre-empting or preventing a security vulnerability, and accordingly provided an error response to indicate that a requested service or operation could not be executed because some input, for instance the attack data set, was improper. The dynamic security vulnerability diagnosis in this case would not report a security vulnerability for the application because the application did not use the attack data set in a manner that would allow exploitation of the targeted security vulnerability. 
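The diagnosis of step320, deciding whether a result produced under a given attack vector constitutes a security vulnerability or not, could be approximated as in the sketch below. This is a simplified, assumed heuristic (a production scanner would apply far richer checks) that reuses the hypothetical attack vector descriptor sketched earlier: a reflected payload or an exploit-indicative response is diagnosed as a vulnerability, while a clean error response rejecting the improper input is not.

```python
def diagnose_result(vector, response_status, response_body):
    """Return True if the observed result is diagnosed as a security vulnerability.

    Assumed heuristic for illustration only:
      * payload echoed back unsanitized   -> possible injection/XSS vulnerability
      * exploit marker present in output  -> the attack data set was used by the application
      * well-formed error rejecting the improper input -> no vulnerability reported
    """
    payload_reflected = any(value in response_body for value in vector.payload_values)
    exploit_marker = "syntax error" in response_body.lower()  # e.g. leaked database error text
    clean_rejection = response_status in (400, 422) and not payload_reflected

    if clean_rejection:
        return False  # the application nullified the attack data set
    return payload_reflected or exploit_marker
```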
At step330, processor201executes instructions included in dynamic vulnerability scoring module212, to assess a dynamic security vulnerability score for the software program based at least in part on the diagnosing. In some embodiments, the dynamic security vulnerability score may be an aggregation of the set of results constituting a security vulnerability that is attributable to the series of attack vectors. In one variation, the dynamic security vulnerability score may be based on a weighted aggregation of the set of results constituting the security vulnerability that is attributable to the respective ones of the series of attack vectors. In this variation, reported vulnerabilities from different attack vectors would be weighted differently when assessing the score, as some errors from different attack vectors might be considered as having more serious potential and consequences for security violations than others. In embodiments, a higher security vulnerability diagnostic score may be determined or assigned in instances where the particular attribute contributes to, or indicates, a lower security risk or greater resilience to potential security threats. On the other hand, a lower score may be merited where assessment of a given attribute contributes to, or indicates, an increased security risk or a lessened resilience to potential software security threats. It is contemplated that a security vulnerability score or similar security assessment may be applied to, and associated with, a particular software provider, or even a SaaS enterprise user, in accordance with dynamic testing techniques as provided herein. In some aspects, security performance indicators may be assigned or determined for a given corps of programmers, or even for individual programmers, who deploy the web-based software, or contributed in definable ways to development of the software application. Such performance indicators may be assigned or derived at least in part based on the software security dynamic testing and assessment techniques disclosed herein. Software security performance indicators may be tracked and updated, for example using key performance indicator (KPI) measurements of dynamic security vulnerability instances. In some embodiments, the dynamic vulnerability scores may be correlated with performance criteria in accordance with pre-established proprietary, industry or governmental standards. Where such pre-established standards provide for certifications, such certifications may be applied or awarded to those software applications that merit, in accordance with the pre-established standards, requirements for software security vulnerability or resilience attributes based on the dynamic testing and scoring techniques disclosed herein. In such certification context, assigning a certification status to the software program may be based at least in part on the dynamic security vulnerability score in conjunction with the pre-established certification standard. In related embodiments, higher dynamic security vulnerability scores may be correlated with a higher potential for compromise of sensitive enterprise data by way of data corruption or unauthorized appropriation, a level of control ceded to an attacker, an amount of financial damage caused to an enterprise using, selling or distributing the software program, and a level of commercial integrity harm to an enterprise using, distributing or selling the software program. 
Based on such correlating, monetary premiums of a risk insurance policy may be assessed for an enterprise using, selling or distributing the web-based software program, commensurate with the potential harm to the enterprise, including monetary and commercial reputation or integrity harm considerations. It is contemplated that embodiments described herein extend to individual elements and concepts described herein, as well as for embodiments to include combinations of elements recited anywhere in this application. Although embodiments are described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to only such example embodiments. As such, many modifications and variations will be apparent to practitioners skilled in the art. Accordingly, it is intended that the scope of the invention be defined by the following claims and their equivalents. Furthermore, it is contemplated that a particular feature described either individually or as part of an embodiment can be combined with other individually described features, or parts of other embodiments, even if the other features and embodiments make no mention of the particular feature. Thus, the absence of describing combinations should not preclude the inventors from claiming rights to such combinations. | 28,654 |
11861019 | DETAILED DESCRIPTION Introduction Events can occur on computer systems that may be indicative of security threats to those systems. While in some cases a single event may be enough to trigger detection of a security threat, in other cases individual events may be innocuous on their own but be indicative of a security threat when considered in combination. For instance, the acts of opening a file, copying file contents, and opening a network connection to an Internet Protocol (IP) address may each be normal and/or routine events on a computing device when each act is considered alone, but the combination of the acts may indicate that a process is attempting to steal information from a file and send it to a server. Digital security systems have accordingly been developed that can observe events that occur on computing devices, and that can use event data about one or more event occurrences to detect and/or analyze security threats. However, many such digital security systems are limited in some ways. For example, some digital security systems only execute locally on individual computing devices. While this can be useful in some cases, local-only digital security systems may miss broader patterns of events associated with security threats that occur across a larger set of computing devices. For instance, an attacker may hijack a set of computing devices and cause each one to perform events that are innocuous individually, but that cause harmful results on a network, server, or other entity when the events from multiple computing devices are combined. Local-only security systems may accordingly not be able to detect a broader pattern of events across multiple computing devices. Some digital security systems do cause event data to be reported to servers or other network elements, such that network and/or cloud processing can be used to analyze event data from one or more computing devices. However, many such cloud-based systems can become overloaded with event data reported by individual computing devices, much of which may be noise and thus be irrelevant to security threat detection. For example, many systems do not have ways of limiting the event data that is initially reported to the cloud. Many systems also do not provide indications to the cloud about reasons why specific event data has been sent to the cloud. Additionally, many systems only hold reported event data for a certain period of time before it is deleted for storage space and/or other reasons. However, that period of time may be too long or too short depending on how relevant the data is to detection of security threats. As an example, a web server may have a temporary parent process that spawns one or more child processes that then run for months or years. Many existing digital security systems may delete event data about that parent process after a threshold period of time, even though event data about the parent process may continue to be relevant to understanding how a child process was spawned on the web server months or years later. As another example, many existing systems would store event data that is likely to be noise for the same amount of time as other event data that may be much more likely to be relevant to security threat detection. It can also be difficult to keep local components and network components of a digital security system synchronized such that they use the same data types, and/or are looking for the same types of events or patterns of events. 
For example, in many systems a locally-executing security application is coded entirely separately from a cloud processing application. In some cases, the two may use different data types to express event data, such that a digital security system may need to devote time and computing resources to conversion operations that allow a cloud processing application to operate on event data reported by a local application. Additionally, because locally-executing applications are often coded separately from cloud processing applications, it can take significant time and/or resources to separately recode and update each type of component to look for new types of security threats. Further, if local applications report event data to cloud elements of a network in a first format but are later updated to report similar event data in a second format, the cloud elements may need to be specially coded to maintain compatibility with both the first format and the second format to be able to evaluate old and new event data. Additionally, many digital security systems are focused on event detection and analysis, but do not allow specialized configurations to be sent to components that change how the components operate for testing and/or experimentation purposes. For example, in many systems there may be no mechanism for instructing local components on a set of computing devices to at least temporarily report additional event data to the cloud about a certain type of event that an analyst suspects may be part of a security threat. Described herein are systems and methods for a distributed digital security system that can address these and other deficiencies of digital security systems. Distributed Security System FIG.1depicts an example of a distributed security system100. The distributed security system100can include distributed instances of a compute engine102that can run locally on one or more client devices104and/or in a security network106. As an example, some instances of the compute engine102can run locally on client devices104as part of security agents108executing on those client devices104. As another example, other instances of the compute engine102can run remotely in a security network106, for instance within a cloud computing environment associated with the distributed security system100. The compute engine102can execute according to portable code that can run locally as part of a security agent108, in a security network106, and/or in other local or network systems that can also process event data as described herein. A client device104can be, or include, one or more computing devices. In various examples, a client device104can be a work station, a personal computer (PC), a laptop computer, a tablet computer, a personal digital assistant (PDA), a cellular phone, a media center, an Internet of Things (IoT) device, a server or server farm, multiple distributed server farms, a mainframe, or any other sort of computing device or computing devices. In some examples, a client device104can be a computing device, component, or system that is embedded or otherwise incorporated into another device or system. In some examples, the client device104can also be a standalone or embedded component that processes or monitors incoming and/or outgoing data communications. For example, the client device104can be a network firewall, network router, network monitoring component, a supervisory control and data acquisition (SCADA) component, or any other component. 
An example system architecture for a client device104is illustrated in greater detail inFIG.15, and is described in detail below with reference to that figure. The security network106can include one or more servers, server farms, hardware computing elements, virtualized computing elements, and/or other network computing elements that are remote from the client devices104. In some examples, the security network106can be considered to be a cloud or a cloud computing environment. Client devices104, and/or security agents108executing on such client devices104, can communicate with elements of the security network106through the Internet or other types of network and/or data connections. In some examples, computing elements of the security network106can be operated by, or be associated with, an operator of a security service, while the client devices104can be associated with customers, subscribers, and/or other users of the security service. An example system architecture for one or more cloud computing elements that can be part of the security network106is illustrated in greater detail inFIG.16, and is described in detail below with reference to that figure. As shown inFIG.1, instances of the compute engine102can execute locally on client devices104as part of security agents108deployed as runtime executable applications that run locally on the client devices104. Local instances of the compute engine102may execute in security agents108on a homogeneous or heterogeneous set of client devices104. One or more cloud instances of the compute engine102can also execute on one or more computing elements of the security network106, remote from client devices104. The distributed security system100can also include a set of other cloud elements that execute on, and/or are stored in, one or more computing elements of the security network106. The cloud elements of the security network106can include an ontology service110, a pattern repository112, a compiler114, a storage engine116, a bounding service118, and/or an experimentation engine120. As described further below, local and/or cloud instances of the compute engine102, and/or other elements of the distributed security system100, can process event data122about single events and/or patterns of events that occur on one or more client devices104. Events can include any observable and/or detectable type of computing operation, behavior, or other action that may occur on one or more client devices104. Events can include events and behaviors associated with Internet Protocol (IP) connections, other network connections, Domain Name System (DNS) requests, operating system functions, file operations, registry changes, process executions, hardware operations, such as virtual or physical hardware configuration changes, and/or any other type of event. By way of non-limiting examples, an event may be that a process opened a file, that a process initiated a DNS request, that a process opened an outbound connection to a certain IP address, that there was an inbound IP connection, that values in an operating system registry were changed, or be any other observable or detectable occurrence on a client device104. 
In some examples, events based on other such observable or detectable occurrences can be physical and/or hardware events, for instance that a Universal Serial Bus (USB) memory stick or other USB device was inserted or removed, that a network cable was plugged in or unplugged, that a cabinet door or other component of a client device104was opened or closed, or any other physical or hardware-related event. Events that occur on client devices104can be detected or observed by event detectors124of security agents108on those client devices104. For example, a security agent108may execute at a kernel-level and/or as a driver such that the security agent108has visibility into operating system activities from which one or more event detectors124of the security agent108can observe event occurrences or derive or interpret the occurrences of events. In some examples, the security agent108may load at the kernel-level at boot time of the client device104, before or during loading of an operating system, such that the security agent108includes kernel-mode components such as a kernel-mode event detector124. In some examples, a security agent108can also, or alternately, have components that operate on a computing device in a user-mode, such as user-mode event detectors124that can detect or observe user actions and/or user-mode events. Examples of kernel-mode and user-mode components of a security agent108are described in greater detail in U.S. patent application Ser. No. 13/492,672, entitled “Kernel-Level Security Agent” and filed on Jun. 8, 2012, which issued as U.S. Pat. No. 9,043,903 on May 26, 2015, and which is hereby incorporated by reference. When an event detector124of a security agent108detects or observes a behavior or other event that occurs on a client device104, the security agent108can place corresponding event data122about the event occurrence on a bus126or other memory location. For instance, in some examples the security agent108may have a local version of the storage engine116described herein, or have access to other local memory on the client device104, where the security agent108can at least temporarily store event data122. The event data122on the bus126, or stored at another memory location, can be accessed by other elements of the security agent108, including a bounding manager128, an instance of the compute engine102, and/or a communication component130that can send the event data122to the security network106. The event data122can be formatted and/or processed according to information stored at, and/or provided by, the ontology service110, as will be described further below. The event data122may also be referred to as a “context collection” of one or more data elements. Each security agent108can have a unique identifier, such as an agent identifier (AID). Accordingly, distinct security agents108on different client devices104can be uniquely identified by other elements of the distributed security system100using an AID or other unique identifier. In some examples, a security agent108on a client device104can also be referred to as a sensor. In some examples, event data122about events detected or observed locally on a client device104can be processed locally by a compute engine102and/or other elements of a local security agent108executing on that client device104. 
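As a rough illustration of the local flow just described, the following Python sketch shows an event detector placing event data, tagged with an agent identifier (AID), on a local bus for other components of the security agent to consume. The sketch is hypothetical: the class names, field names, and the in-memory bus are illustrative assumptions rather than structures defined by this description.

```python
import time
import uuid
from collections import deque


class EventBus:
    """Minimal in-memory stand-in for a security agent's local bus (hypothetical)."""
    def __init__(self):
        self._queue = deque()

    def publish(self, event_data):
        # Other agent components (e.g., a bounding manager, a local compute engine,
        # or a communication component) would read event data from this queue.
        self._queue.append(event_data)

    def consume(self):
        return self._queue.popleft() if self._queue else None


class EventDetector:
    """Sketch of an event detector that tags event data with the agent's AID."""
    def __init__(self, agent_id, bus):
        self.agent_id = agent_id
        self.bus = bus

    def observe_dns_request(self, process_id, domain):
        event_data = {
            "aid": self.agent_id,          # unique agent identifier
            "event_type": "dns_request",
            "process_id": process_id,
            "domain": domain,
            "timestamp": time.time(),
        }
        self.bus.publish(event_data)


bus = EventBus()
detector = EventDetector(agent_id=str(uuid.uuid4()), bus=bus)
detector.observe_dns_request(process_id=4242, domain="example.com")
print(bus.consume())
```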
However, in some examples, event data122about locally-occurring events can also, or alternately, be sent by a security agent108on a client device104to the security network106, such that the event data122can be processed by a cloud instance of the compute engine102and/or other cloud elements of the distributed security system100. Accordingly, event data122about events that occur locally on client devices104can be processed locally by security agents108, be processed remotely via cloud elements of the distributed security system100, or be processed by both local security agents108and cloud elements of the distributed security system100. In some examples, security agents108on client devices104can include a bounding manager128that can control how much event data122, and/or what types of event data122, the security agents108ultimately send to the security network106. The bounding manager128can accordingly prevent the security network106from being overloaded with event data122about every locally-occurring event from every client device104, and/or can limit the types of event data122that are reported to the security network106to data that may be more likely to be relevant to cloud processing, as will be described further below. In some examples, a bounding manager128can also mark-up event data122to indicate one or more reasons why the event data122is being sent to the security network106, and/or provide statistical information to the security network106. The bounding manager128, and operations of the bounding manager128, are discussed further below with respect toFIGS.6and7. Cloud elements such as the compiler114, the bounding service118, and/or the experimentation engine120can generate configurations132for other elements of the distributed security system100. Such configurations132can include configurations132for local and/or cloud instances of the compute engine102, configurations132for local bounding managers128, and/or configurations132for other elements. Configurations132can be channel files, executable instructions, and/or other types of configuration data. The ontology service110can store ontological definitions134that can be used by elements of the distributed security system100. For example, rules and other data included in configurations132for the compute engine102, bounding manager128, and/or other elements can be based on ontological definitions134maintained at the ontology service110. As discussed above, a piece of event data122that is generated by and/or processed by one or more components of the distributed security system100can be a “context collection” of data elements that is formatted and/or processed according to information stored at, and/or provided by, the ontology service110. The ontological definitions134maintained at the ontology service can, for example, include definitions of context collection formats136and context collection interfaces138. The ontology service110can also store interface fulfillment maps140. Each interface fulfillment map140can be associated with a specific pairing of a context collection format136and a context collection interface138. An ontological definition134of a context collection format136can define data elements and/or a layout for corresponding event data122. For example, an ontological definition134of a context collection format136can identify specific types of information, fields, or data elements that should be captured in event data122about a type of event that occurs on a client device104. 
For example, although any number of attributes about an event that occurs on a client device104could be captured and stored in event data122, an ontological definition134of a context collection format136can define which specific attributes about that event are to be recorded into event data122for further review and processing. Accordingly, event data122can be considered to be a context collection associated with a particular context collection format136when the event data122includes data elements as defined in an ontological definition134of that particular context collection format136. As an example, if a buffer on a client device104includes information about four different processes associated with an event, and the four processes were spawned by a common parent process, an ontological definition134of a context collection format136associated with that event may indicate that only a process ID of the common parent process should be stored in event data122for that event, without storing process IDs of the four child processes in the event data122. However, as another example, an ontological definition134of a different context collection format136may indicate that a set of process IDs, including a parent process ID and also a set of child process IDs, should be stored in event data122to indicate a more complex structure of parent-child process relationships associated with an event. A context collection format136may also, or alternately, indicate other types of data elements or fields of information that should be captured about an event, such as a time, event type, network address or other network-related information, client device104information, and/or any other type of attribute or information. Various client devices104and/or other elements of the distributed security system100may capture or process event data122based on the same or different context collection formats136. For example, a first security agent108on a first client device104that detects a network event may capture event data122about the network event including an associated process ID according to a first context collection format136for network events. However, a second security agent108on a second client device104may detect the same type of network event, but may capture event data122about the network event including an associated process ID as well as additional attributes such as an associated time or network address according to a second context collection format136for network events. In this example, the first security agent108and the second security agent108may transmit event data122for the same type of network event to the security network106based on different context collection formats136. However, a cloud instance of the compute engine102, or other elements of the distributed security system100, may nevertheless be configured to process event data122based on different context collection formats136when the event data122satisfies the same context collection interface138. An ontological definition134of a context collection interface138can indicate a set of one or more data elements that a component of the distributed security system100expects to be present within event data122in order for the component to consume and/or process the event data122. 
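Before turning to context collection interfaces in more detail, the two format variants in the process example above can be pictured concretely. The Python sketch below is hypothetical; the format names and field lists are illustrative assumptions, not definitions taken from the ontology service110.

```python
# Hypothetical ontological definitions of two context collection formats for
# the same kind of process event. Each format lists the data elements that
# should be recorded into event data for that event.
PROCESS_EVENT_SIMPLE = {
    "format_name": "process_event.simple",
    "fields": ["event_type", "parent_process_id"],
}

PROCESS_EVENT_DETAILED = {
    "format_name": "process_event.detailed",
    "fields": ["event_type", "parent_process_id", "child_process_ids",
               "timestamp", "client_device_id"],
}


def capture_event(raw_attributes, context_collection_format):
    """Record only the attributes named by the format into event data."""
    return {field: raw_attributes.get(field)
            for field in context_collection_format["fields"]}


raw = {
    "event_type": "process_spawn",
    "parent_process_id": 100,
    "child_process_ids": [101, 102, 103, 104],
    "timestamp": 1700000000.0,
    "client_device_id": "device-7",
    "command_line": "notepad.exe",   # not captured by either format
}

print(capture_event(raw, PROCESS_EVENT_SIMPLE))
print(capture_event(raw, PROCESS_EVENT_DETAILED))
```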
In particular, an ontological definition134of a context collection interface138can define a minimum set of data elements, such that event data122that includes that minimum set of data elements may satisfy the context collection interface138, although additional data elements beyond the minimum set may or may not also be present in that event data122. As an example, if an ontological definition134of a context collection interface138specifies that data elements A and B are to be present in event data122, a first piece of event data122that includes data elements A and B may satisfy the context collection interface138, and a second piece of event data122that includes data elements A, B, and C may also satisfy the context collection interface138. However, in this example, a third piece of event data122that includes data elements A and C would not satisfy the context collection interface138, because the third piece of event data122does not include data element B specified by the ontological definition134of the context collection interface138. The ontology service110can also generate and/or maintain interface fulfillment maps140. In some examples, an interface fulfillment map140may also be referred to as a context collection implementation. An interface fulfillment map140can be provided in the ontology service110for individual pairs of context collection formats136and context collection interfaces138. An interface fulfillment map140associated with a particular context collection format136and a particular context collection interface138can indicate how event data122, formatted according to the particular context collection format136, satisfies the particular context collection interface138. Accordingly, event data122formatted according to a particular context collection format136may satisfy a particular context collection interface138if the event data122includes the data elements specified by the ontological definition134of the particular context collection interface138, and if an interface fulfillment map140exists at the ontology service110that is associated with both the particular context collection format136and the particular context collection interface138. For example, when an ontological definition134of a particular context collection interface138specifies that data elements A and B are to be present for event data122to match the particular context collection interface138, the ontology service110can have a first interface fulfillment map140associated with the particular context collection interface138and a first context collection format136, and a second interface fulfillment map140associated with the particular context collection interface138and a second context collection format136. The first interface fulfillment map140can indicate that a specific first portion, such as one or more specific bits, of event data122formatted according to the first context collection format136maps to data element A of the context collection interface138, and that a specific second portion of that event data122maps to data element B of the context collection interface138. The second interface fulfillment map140may indicate that a different portion of event data122formatted according to the second context collection format136maps to data element A of the context collection interface138, and that a different second portion of that event data122maps to data element B of the context collection interface138. 
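A minimal sketch of the data elements A and B example above is shown below, in hypothetical Python; the map layout and helper names are assumptions. A context collection interface lists required data elements, and an interface fulfillment map for a given pairing of format and interface tells a consumer where each required element lives within event data of that format.

```python
# Hypothetical context collection interface requiring elements A and B.
INTERFACE_AB = {"interface_name": "needs_a_and_b", "required": ["A", "B"]}

# Hypothetical interface fulfillment maps, one per (format, interface) pair,
# mapping each required interface element to the field that holds it in
# event data of that format.
FULFILLMENT_MAPS = {
    ("format_1", "needs_a_and_b"): {"A": "alpha", "B": "beta"},
    ("format_2", "needs_a_and_b"): {"A": "a_val", "B": "b_val"},
}


def satisfies_interface(event_data, fmt, interface):
    """Return the extracted required elements if the event data satisfies the
    interface, or None if it does not (or no fulfillment map exists)."""
    fmap = FULFILLMENT_MAPS.get((fmt, interface["interface_name"]))
    if fmap is None:
        return None  # no fulfillment map exists for this pairing
    extracted = {elem: event_data.get(field) for elem, field in fmap.items()}
    if any(value is None for value in extracted.values()):
        return None
    return extracted


event_fmt1 = {"alpha": 1, "beta": 2}            # satisfies the interface
event_fmt2 = {"a_val": 3, "b_val": 4, "c": 5}   # an extra element is still fine
print(satisfies_interface(event_fmt1, "format_1", INTERFACE_AB))
print(satisfies_interface(event_fmt2, "format_2", INTERFACE_AB))
```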
The ontology service110can provide interface fulfillment maps140to compute engines102, bounding managers128, and/or other elements of the distributed security system100. As discussed above, an element of the distributed security system100may consume or process event data122according to a context collection interface138. For example, elements of the distributed security system100can be configured, for instance via configurations132, to process event data122based in part on whether event data122satisfies particular context collection interfaces138. Accordingly, when an element, such as a compute engine102or a bounding manager128, receives event data122formatted according to a particular context collection format136, the element can use an interface fulfillment map140that corresponds to that particular context collection format136and the context collection interface138to determine whether the received event data122satisfies the context collection interface138, and/or to locate and identify specific portions of the event data122that match the data elements specified by the ontological definition134of the context collection interface138. For example, a configuration132for a compute engine102can be based on a context collection interface138that specifies that a process ID for a network event should be included in event data122. The compute engine102can accordingly use that configuration132and corresponding interface fulfillment maps140to process event data122that the compute engine102receives for network events that is formatted according to any context collection format136that includes at least the process ID expected by the context collection interface138. Accordingly, if the compute engine102receives first event data122about a first network event that is formatted based on a first context collection format136that includes a process ID, and also receives second event data122about a second network event that is formatted based on a second context collection format136that includes a process ID as well as execution time data, the compute engine102can nevertheless process both the first event data122and the second event data122because both include at least the process ID specified by the context collection interface138. As such, the compute engine102can use the same configuration132to process event data122in varying forms that include at least common information expected by a context collection interface138, without needing new or updated configurations132for every possible data type or format for event data122. In some examples, an ontological definition134can define authorization levels for individual fields or other data elements within event data122. For example, an ontological definition134of a context collection format136can define authorization levels on a field-by-field or element-by-element basis. As will be described further below, in some examples different users or elements of the distributed security system100may be able to access or retrieve information from different sets of fields within the same event data122, for example as partial event data122, based on whether the user or element has an authorization level corresponding to the authorization levels of individual fields of the event data122. The ontological definitions134can be used, either directly or indirectly, consistently by multiple elements throughout the distributed security system100. 
For example, an ontological definition134can be used by any runtime element of the distributed security system100, and the ontological definition134may be agnostic as to whether any particular runtime element of the distributed security system100is running according to a C++ runtime, a Java runtime, or any other runtime. In some examples, new and/or edited data types defined by ontological definitions134at the ontological service110can be used by multiple elements of the distributed security system100without manually recoding those elements individually to use the new and/or edited data types or adjusting the ontological definitions134to work with different types of runtimes. As an example, when a new ontological definition134for a new context collection format136is defined at the ontology service110, a compiler114or other element can automatically generate new configurations132for compute engines102, event detectors124, or other elements that can generate new or refined event data122, such that the new or refined event data122is formatted to include data elements based on the new context collection format136. For instance, as will be discussed below, a compute engine102and/or other elements of the distributed security system100can process incoming event data122to generate new event data122, for example by refining and/or combining received event data122using refinement operations and/or composition operations. Accordingly, an ontological definition134can define a context collection format136indicating which types of data elements should be copied from received event data122and be included in new refined event data122according to a refinement operation, or be taken from multiple pieces of received event data122and used to generate new combined event data122according to a composition operation. In other examples, when a new ontological definition134for a new context collection format136is defined at the ontology service110, new interface fulfillment maps140that correspond to the new context collection format136and one or more context collection interfaces138can be generated and provided to elements of the distributed security system100. As another example, when a new ontological definition134for a new context collection interface138is defined at the ontology service110, the compiler114can automatically generate configurations132for local and cloud instances of the compute engine102. The configurations132can indicate expected data elements according to the new context collection interface138, such that the compute engine102can process any type of event data122that is based on any context collection format136that includes at least those expected data elements when a corresponding interface fulfillment map140exists, even though no new source code has been written for the compute engine102that directly indicates how to process each possible type or format of event data122that may include those expected data types. Similarly, the bounding service118can generate configurations132for bounding managers128based at least in part on the ontological definition134of a new context collection interface138, such that the bounding manager128can also process event data122that matches the new context collection interface138when a corresponding interface fulfillment map140exists. 
Accordingly, a new context collection interface138can be used by both the compute engine102and the bounding manager128based on a corresponding interface fulfillment map140, without directly recoding either the compute engine102or the bounding manager128, and regardless of whether instances of the compute engine102and/or the bounding manager128execute using different runtimes. In some examples, a user interface associated with the ontology service110can allow users to add and/or modify ontological definitions134. In some examples, elements of the distributed security system100may, alternately or additionally, access the ontology service110to add and/or modify ontological definitions134used by those elements, such that other elements of the distributed security system100can in turn be configured to operate according to the ontological definitions134stored at the ontology service110. For example, as will be described in further detail below, a compiler114can generate configurations132for instances of the compute engine102based on text descriptions of types of events and/or patterns of events that are to be detected and/or processed using the distributed security system100. If the compiler114determines that such a configuration132would involve the compute engine102generating new types of event data122that may include new data elements or a different arrangement of data elements, for example using refinement operations or composition operations as discussed below with respect toFIGS.2and3, the compute engine102can add or modify ontological definitions134of corresponding context collection formats136at the ontological service110. Other elements of the distributed security system100can in turn obtain the new or modified ontological definitions134and/or interface fulfillment maps140from the ontological service110to understand how to interpret those new types of event data122. In some examples, one or more elements of the distributed security system100can store local copies or archives of ontological definitions134and/or interface fulfillment maps140previously received from the ontology service110. However, if an element of the distributed security system100receives data in an unrecognized format, the element can obtain a corresponding ontological definition134or interface fulfillment map140from the ontology service110such that the element can understand and/or interpret the data. The ontology service110can also store archives of old ontological definitions134and/or interface fulfillment maps140, such that elements of the distributed security system100can obtain copies of older ontological definitions134or interface fulfillment maps140if needed. For instance, if for some reason a particular security agent108running on a client device104has not been updated in a year and is using an out-of-date configuration132based on old ontological definitions134, that security agent108may be reporting event data122to the security network106based on an outdated context collection format136that more recently-updated cloud elements of the distributed security system100do not directly recognize. 
However, in this situation, cloud elements of the distributed security system100can retrieve old ontological definitions134from the ontology service110and thus be able to interpret event data122formatted according to an older context collection format136. The pattern repository112can store behavior patterns142that define patterns of one or more events that can be detected and/or processed using the distributed security system100. A behavior pattern142can identify a type of event, and/or a series of events of one or more types, that represent a behavior of interest. For instance, a behavior pattern142can identify a series of events that may be associated with malicious activity on a client device104, such as when malware is executing on the client device104, when the client device104is under attack by an adversary who is attempting to access or modify data on the client device104without authorization, or when the client device104is subject to any other security threat. In some examples, a behavior pattern142may identify a pattern of events that may occur on more than one client device104. For example, a malicious actor may attempt to avoid detection during a digital security breach by causing different client devices104to perform different events that may each be innocuous on their own, but that can cause malicious results in combination. Accordingly, a behavior pattern142can represent a series of events associated with behavior of interest that may occur on more than one client device104during the behavior of interest. In some examples, cloud instances of the compute engine102may be configured to identify when event data122from multiple client devices104collectively meets a behavior pattern142, even if events occurring locally on any of those client devices104individually would not meet the behavior pattern142. In some examples, a "rally point" or other behavior identifier may be used to link event data122associated with multiple events that may occur on one or more client devices104as part of a larger behavior pattern142. For example, as will be described below, a compute engine102can create a rally point306when first event data122associated with a behavior pattern142is received, to be used when second event data122that is also associated with the behavior pattern142is received at a later point in time. Rally points are discussed in more detail below with respect toFIG.3in association with composition operations. The compiler114can generate configurations132for cloud and/or local instances of the compute engine102. In some examples, the compiler114can generate configurations132based at least in part on ontological definitions134from the ontology service110and/or behavior patterns142from the pattern repository112. For example, a behavior pattern142may indicate logic for when event data122about a pattern of events can be created and/or processed. In some examples, the compiler114can generate configurations132for the compute engine102using a fundamental model that includes refinements and/or compositions of behavioral expressions, as will be discussed further below. Although a configuration132for the compute engine102can include binary representations of instructions, those instructions can be generated by the compiler114such that the instructions cause the compute engine102to process and/or format event data122based on corresponding context collection formats136and/or context collection interfaces138defined by ontological definitions134. 
When generating configurations132, the compiler114can also perform type-checking and safety checks on instructions expressed in the configurations132, such that the instructions are safe to be executed by other runtime components of the distributed security system100according to the configurations132. The storage engine116can process and/or manage event data122that is sent to the security network106by client devices104. In some examples, the storage engine116can receive event data122from security agents108provided by an operator of a security service that also runs the security network106. However, in other examples, the storage engine116can also receive and process event data122from any other source, including security agents108associated with other vendors or streams of event data122from other providers. As will be explained in more detail below, the storage engine116can sort incoming event data122, route event data122to corresponding instances of the compute engine102, store event data122in short-term and/or long-term storage, output event data122to other elements of the distributed security system100, and/or perform other types of storage operations. The storage engine116, and operations of the storage engine116, are discussed further below with respect toFIGS.8-13. The bounding service118can generate configurations132for bounding managers128of local security agents108. For example, the bounding service118can generate new or modified bounding rules that can alter how much, and/or what types of, event data122a bounding manager128permits a security agent108to send to the security network106. The bounding service118can provide the bounding rules to bounding managers128in channel files or other types of configurations132. In some examples, a user interface associated with the bounding service118can allow users to add and/or modify bounding rules for bounding managers128. In some examples, bounding rules can be expressed through one or more selectors602, as discussed further below with respect toFIG.6. The experimentation engine120can create configurations132for elements of the distributed security system100that can at least temporarily change how those elements function for experimentation and/or test purposes. For example, the experimentation engine120can produce a configuration132for a bounding manager128that can cause the bounding manager128to count occurrences of a certain type of event that is expected to be relevant to an experiment, or to cause a security agent108to send more event data122about that event type to the security network106than it otherwise would. This can allow the security network106to obtain different or more relevant event data122from one or more client devices104that can be used to test hypotheses, investigate suspected security threats, test how much event data122would be reported if an experimental configuration132was applied more broadly, and/or for any other reason. The experimentation engine120, and operations of the experimentation engine120, are discussed further below with respect toFIG.14. Compute Engine An instance of the compute engine102, in the security network106or in a security agent108, can perform comparisons, such as string match comparisons, value comparisons, hash comparisons, and/or other types of comparisons on event data122for one or more events, and produce new event data122based on results of the comparisons. 
For example, an instance of the compute engine102can process event data122in an event stream using refinements and/or compositions of a fundamental model according to instructions provided in a configuration132. Refinement operations202and composition operations302that instances of the compute engine102can use are discussed below with respect toFIGS.2-4. FIG.2depicts an example of a refinement operation202that can be performed by an instance of the compute engine102. A refinement operation202can have filter criteria that the compute engine102can use to identify event data122that the refinement operation202applies to. For example, the filter criteria can define target attributes, values, and/or data elements that are to be present in event data122for the refinement operation202to be applicable to that event data122. In some examples, filter criteria for a refinement operation202can indicate conditions associated with one or more fields of event data122, such as conditions that are satisfied if a field holds an odd numerical value, if a field holds a value in a certain range of values, or if a field holds a text string matching a certain regular expression. When the compute engine102performs comparisons indicating that event data122matches the filter criteria for a particular refinement operation202, the refinement operation202can create new refined event data204that includes at least a subset of data elements from the original event data122. For example, if the compute engine102is processing event data122as shown inFIG.2, and the event data122includes data elements that match criteria for a particular refinement operation202, the refinement operation202can create refined event data204that includes at least a subset of data elements selected from event data122. In some examples, the data elements in the refined event data204can be selected from the original event data122based on a context collection format136. A refinement operation202can accordingly result in a reduction or a down-selection of event data122in an incoming event stream to include refined event data204containing a subset of data elements from the event data122. As a non-limiting example, event data122in an event stream may indicate that a process was initiated on a client device104. A refinement operation202may, in this example, include filter criteria for a string comparison, hash comparison, or other type of comparison that can indicate creations of web browser processes. Accordingly, the refinement operation202can apply if such a comparison indicates that the created process was a web browser process. The compute engine102can accordingly extract data elements from the event data122indicating that the initiated process is a web browser, and include at least those data elements in newly generated refined event data204. In some examples, new refined event data204can be added to an event stream as event data122, such as the same and/or a different event stream that contained the original event data122. Accordingly, other refinement operations202and/or composition operations302can operate on the original event data122and/or the new refined event data204from the event stream. FIG.3depicts an example of a composition operation302that can be performed by an instance of the compute engine102. A composition operation302can have criteria that the compute engine102can use to identify event data122that the composition operation302applies to. 
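Before turning to composition operations in more detail, the refinement behavior described above can be sketched as follows in hypothetical Python; the filter criteria, field names, and labels are illustrative assumptions. The refinement operation applies when event data matches its filter criteria and produces refined event data containing a selected subset of data elements, which can then be placed back into an event stream.

```python
# Hypothetical sketch of a refinement operation: filter criteria plus a list
# of data elements to carry into the refined event data.
WEB_BROWSER_REFINEMENT = {
    "filter": lambda e: e.get("event_type") == "process_start"
                        and e.get("image_name", "").endswith("browser.exe"),
    "output_fields": ["event_type", "process_id", "image_name"],
    "output_label": "web_browser_process_start",
}


def apply_refinement(event_data, refinement):
    """Return refined event data if the filter criteria match, else None."""
    if not refinement["filter"](event_data):
        return None
    refined = {field: event_data[field]
               for field in refinement["output_fields"] if field in event_data}
    refined["refined_as"] = refinement["output_label"]
    return refined


event_stream = [
    {"event_type": "process_start", "process_id": 10, "image_name": "browser.exe",
     "command_line": "browser.exe --new-window"},
    {"event_type": "process_start", "process_id": 11, "image_name": "notepad.exe"},
]

refined_events = []
for event in event_stream:
    refined = apply_refinement(event, WEB_BROWSER_REFINEMENT)
    if refined is not None:
        refined_events.append(refined)

# Refined event data can be added back to an event stream so that other
# refinement and/or composition operations can process it.
event_stream.extend(refined_events)
print(refined_events)
```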
The criteria for a composition operation302can identify at least one common attribute that, if shared by two pieces of event data122, indicates that the composition operation302applies to those two pieces of event data122. For example, the criteria for a composition operation302can indicate that the composition operation302applies to two pieces of event data122when the two pieces of event data122are associated with child processes that have the same parent process. The compute engine102can accordingly use comparison operations to determine when two pieces of event data122from one or more event streams meet criteria for a composition operation302. When two pieces of event data122meet the criteria for a composition operation302, the composition operation302can generate new composition event data304that contains data elements extracted from both pieces of event data122. In some examples, the data elements to be extracted from two pieces of event data122and used to create the new composition event data304can be based on a context collection format136. As an example, when first event data122A and second event data122B shown inFIG.3meet criteria of the composition operation302, the composition event data304can be generated based on a context collection format136to include data elements from the first event data122A and from the second event data122B. In some examples, the context collection format136for the composition event data304can include a first branch of data elements extracted from the first event data122A, and include a second branch of data elements extracted from the second event data122B. Accordingly, while the first event data122A and the second event data122B may be formatted according to a first context collection format136, or according to different context collection formats136, the composition event data304can be generated based on another context collection format136that is different from the context collection formats136of the first event data122A and the second event data122B, but identifies at least a subset of data elements from each of the first event data122A and the second event data122B. In some examples, new composition event data304created by a composition operation302can be added to an event stream as event data122, such as the same and/or a different event stream that contained original event data122used by the composition operation302. Accordingly, other refinement operations202and/or composition operations302can operate on the original event data122and/or the new composition event data304from the event stream. A composition operation302can be associated with an expected temporally ordered arrival of two pieces of event data122. For example, the composition operation302shown inFIG.3can apply when first event data122A arrives at a first point in time and second event data122B arrives at a later second point in time. Because the first event data122A may arrive before the second event data122B, a rally point306can be created and stored when the first event data122A arrives. The rally point306can then be used if and when second event data122B also associated with the rally point306arrives at a later point in time. For example, a composition operation302can be defined to create new composition event data304from a child process and its parent process, if the parent process executed a command line. In this example, a rally point306associated with a first process can be created and stored when first event data122A indicates that the first process runs a command line. 
At a later point, new event data122may indicate that a second process, with an unrelated parent process different from the first process, is executing. In this situation, the compute engine102can determine that a stored rally point306associated with the composition does not exist for the unrelated parent process, and not generate new composition event data304via the composition operation302. However, if further event data122indicates that a third process, a child process of the first process, has launched, the compute engine102would find the stored rally point306associated with the first process and generate the new composition event data304via the composition operation302using the rally point306and the new event data122about the third process. In particular, a rally point306can store data extracted and/or derived from first event data122. The rally point306may include pairs and/or tuples of information about the first event data122and/or associated processes. For example, when the first event data122A is associated with a child process spawned by a parent process, the data stored in association with a rally point306can be based on a context collection format136and include data about the child process as well as data about the parent process. In some examples, the data stored in association with a rally point306may include at least a subset of the data from the first event data122A. A rally point306can be at least temporarily stored in memory accessible to the instance of the compute engine102, for example in local memory on a client device104or in cloud storage in the security network106. The rally point306can be indexed in the storage based on one or more composition operations302that can use the rally point306and/or based on identities of one or more types of composition event data304that can be created in part based on the rally point306. When second event data122B is received that is associated with the composition operation302and the rally point306, the compute engine102can create new composition event data304based on A) data from the first event data122that has been stored in the rally point306and B) data from the second event data122B. In some examples, the rally point306, created upon the earlier arrival of the first event data122A, can be satisfied due to the later arrival of the second event data122, and the compute engine102can delete the rally point306or mark the rally point306for later deletion to clear local or cloud storage space. In some examples, a rally point306that has been created and stored based on one composition operation302may also be used by other composition operations302. For example, as shown inFIG.3, a rally point306may be created and stored when first event data122A is received with respect to a first composition operation302that expects the first event data122A followed by second event data122B. However, a second composition operation302may expect the same first event data122A to be followed by another type of event data122that is different from the second event data122B. In this situation, a rally point306that is created to include data about the first event data122A, such as data about a child process associated with the first event data122A and a parent process of that child process, can also be relevant to the second composition operation302. Accordingly, the same data stored for a rally point306can be used for multiple composition operations302, thereby increasing efficiency and reducing duplication of data stored in local or cloud storage space. 
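The core create-or-satisfy behavior of a rally point306used by a composition operation302can be sketched as follows in hypothetical Python; the event types and field names are illustrative assumptions. A fuller sketch would also track the reference counts, lifetime values, and per-rally-point queues described in the following paragraphs.

```python
# Hypothetical sketch of a composition operation that combines event data
# about a parent process running a command line with later event data about
# a child process of that parent, using stored rally points.
rally_points = {}   # keyed by parent process ID


def on_event(event_data):
    if event_data["event_type"] == "command_line_execution":
        # First event data arrives: create and store a rally point holding a
        # subset of the event data for later use.
        rally_points[event_data["process_id"]] = {
            "parent_process_id": event_data["process_id"],
            "command_line": event_data["command_line"],
        }
        return None

    if event_data["event_type"] == "process_spawn":
        rally_point = rally_points.get(event_data["parent_process_id"])
        if rally_point is None:
            # No stored rally point for this parent: do not compose.
            return None
        # Second event data arrives: satisfy the rally point and build
        # composition event data from both pieces of event data.
        # A fuller sketch could now decrement a reference count or delete
        # the rally point (or mark it for later deletion).
        return {
            "event_type": "command_line_parent_spawned_child",
            "parent": rally_point,
            "child": {"process_id": event_data["process_id"]},
        }
    return None


on_event({"event_type": "command_line_execution", "process_id": 1,
          "command_line": "cmd.exe /c whoami"})
print(on_event({"event_type": "process_spawn", "process_id": 7,
                "parent_process_id": 5}))   # unrelated parent: None
print(on_event({"event_type": "process_spawn", "process_id": 2,
                "parent_process_id": 1}))   # composition event data
```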
In some examples, the compute engine102can track reference counts of rally points306based on how many composition operations302are waiting to use those rally points306. For instance, in the example discussed above, a rally point306that is generated when first event data122A arrives may have a reference count of two when the first composition operation302is waiting for the second event data122B to arrive and the second composition operation302is waiting for another type of event data122to arrive. In this example, if the second event data122B arrives and the first composition operation302uses data stored in the rally point306to help create new composition event data304, the reference count of the rally point306can be decremented from two to one. If the other type of event data122expected by the second composition operation302arrives later, the second composition operation302can also use the data stored in the rally point306to help create composition event data304, and the reference count of the rally point306can be decremented to zero. When the reference count reaches zero, the compute engine102can delete the rally point306or mark the rally point306for later deletion to clear local or cloud storage space. In some examples, a rally point306can be created with a lifetime value. In some cases, first event data122A expected by a composition operation302may arrive such that a rally point306is created. However, second event data122B expected by the composition operation302may never arrive, or may not arrive within a timeframe that is relevant to the composition operation302. Accordingly, if a rally point306is stored for longer than its lifetime value, the compute engine102can delete the rally point306or mark the rally point306for later deletion to clear local or cloud storage space. Additionally, in some examples, a rally point306may be stored while a certain process is running, and be deleted when that process terminates. For example, a rally point306may be created and stored when a first process executes a command line, but the rally point306may be deleted when the first process terminates. However, in other examples, a rally point306associated with a process may continue to be stored after the associated process terminates, for example based on reference counts, a lifetime value, or other conditions as described above. In some situations, a composition operation302that expects first event data122A followed by second event data122B may receive two or more instances of the first event data122A before receiving any instances of the second event data122B. Accordingly, in some examples, a rally point306can have a queue of event data122that includes data taken from one or more instances of the first event data122A. When an instance of the second event data122B arrives, the compute engine102can remove data from the queue of the rally point306about one instance of the first event data122A and use that data to create composition event data304along with data taken from the instance of the second event data122B. Data can be added and removed from the queue of a rally point306as instances of the first event data122A and/or second event data122B arrive. In some examples, when the queue of a rally point306is empty, the compute engine102can delete the rally point306or mark the rally point306for later deletion to clear local or cloud storage space. FIG.4depicts a flowchart of example operations that can be performed by an instance of the compute engine102in the distributed security system100. 
At block402, the compute engine102can process an event stream of event data122. The event data122may have originated from an event detector124of a security agent108that initially detected or observed the occurrence of an event on a client device104, and/or may be event data122that has been produced using refinement operations202and/or composition operations302by the compute engine102or a different instance of the compute engine102. In a local instance of the compute engine102, in some examples the event stream may be received from a bus126or local memory on a client device104. In a cloud instance of the compute engine102, in some examples the event stream may be received via the storage engine116. At block404, the compute engine102can determine whether a refinement operation202applies to event data122in the event stream. As discussed above, the event data122may be formatted according to a context collection format136, and accordingly contain data elements or other information according to an ontological definition134of the context collection format136. A refinement operation202may be associated with filter criteria that indicates whether information in the event data122is associated with the refinement operation202. If information in the event data122meets the filter criteria, at block406the compute engine102can generate refined event data204that includes a filtered subset of the data elements from the event data122. The compute engine102can add the refined event data204to the event stream and return to block402so that the refined event data204can potentially be processed by other refinement operations202and/or composition operations302. At block408, the compute engine102can determine if a composition operation302applies to event data122in the event stream. As discussed above with respect toFIG.3, the compute engine102may have criteria indicating when a composition operation302applies to event data122. For example, the criteria may indicate that the composition operation302applies when event data122associated with a child process of a certain parent process is received, and/or that the composition operation302expects first event data122of a child process of the parent process to be received followed by second event data122of a child process of the parent process. If a composition operation302is found to apply to event data122at block408, the compute engine102can move to block410. At block410, the compute engine102can determine if a rally point306has been generated in association with the event data122. If no rally point306has yet been generated in association with the event data122, for example if the event data122is the first event data122A as shown inFIG.3, the compute engine102can create a rally point306at block412to store at least some portion of the event data122, and the compute engine102can return to processing the event stream at block402. However, if at block410the compute engine102determines that a rally point306associated with the event data122has already been created and stored, for example if the event data122is the second event data122B shown inFIG.3and a rally point306was previously generated based on earlier receipt of the first event data122A shown inFIG.3, the rally point306can be satisfied at block414. 
The compute engine102can satisfy the rally point at block414by extracting data from the rally point306about other previously received event data122, and in some examples by decrementing a reference count, removing data from a queue, and/or deleting the rally point306or marking the rally point306for later deletion. At block416, the compute engine102can use the data extracted from the rally point306that had been taken from earlier event data122, along with data from the newly received event data122, to generate new composition event data304. The compute engine102can add the composition event data304to the event stream and return to block402so that the composition event data304can potentially be processed by refinement operations202and/or other composition operations302. At block418, the compute engine102can generate a result from event data122in the event stream. For example, if the event stream includes, before or after refinement operations202and/or composition operations302, event data122indicating that one or more events occurred that match a behavior pattern142, the compute engine102can generate and output a result indicating that there is a match with the behavior pattern142. In some examples, the result can itself be new event data122specifying that a behavior pattern142has been matched. For example, if event data122in an event stream originally indicates that two processes were initiated, refinement operations202may have generated refined event data204indicating that those processes include a web browser parent process that spawned a notepad child process. The refined event data204may be reprocessed as part of the event stream by a composition operation302that looks for event data122associated with child processes spawned by a web browser parent process. In this example, the composition operation302can generate composition event data304that directly indicates that event data122associated with one or more child processes spawned by the same parent web browser process has been found in the event stream. That new composition event data304generated by the composition operation may be a result indicating that there has been a match with a behavior pattern142associated with a web browser parent process spawning a child notepad process. In some examples, when a result indicates a match with a behavior pattern142, the compute engine102, or another component of the distributed security system100, can take action to nullify a security threat associated with the behavior pattern142. For instance, a local security agent108can block events associated with malware or cause the malware to be terminated. However, in other examples, when a result indicates a match with a behavior pattern142, the compute engine102or another component of the distributed security system100can alert users, send notifications, and/or take other actions without directly attempting to nullify a security threat. In some examples, the distributed security system100can allow users to define how the distributed security system100responds when a result indicates a match with a behavior pattern142. In situations in which event data122has not matched a behavior pattern142, the result generated at block418can be an output of the processed event stream to another element of the distributed security system100, such as to the security network106and/or to another instance of the compute engine102. 
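The overall flow ofFIG.4can be summarized in a compact loop such as the following hypothetical Python sketch, in which the refinements, compositions, and behavior pattern check are stand-in callables rather than names taken from this description.

```python
def process_event_stream(event_stream, refinements, compositions,
                         matches_behavior_pattern):
    """Rough sketch of the FIG. 4 loop: refine, compose, and emit results."""
    results = []
    queue = list(event_stream)
    while queue:
        event_data = queue.pop(0)
        # Blocks 404/406: apply any refinement operations that match.
        for refinement in refinements:
            refined = refinement(event_data)
            if refined is not None:
                queue.append(refined)   # reprocess refined event data
        # Blocks 408-416: apply composition operations, which internally
        # create or satisfy rally points and may emit composition event data.
        for composition in compositions:
            composed = composition(event_data)
            if composed is not None:
                queue.append(composed)  # reprocess composition event data
        # Block 418: generate a result when event data matches a behavior pattern.
        if matches_behavior_pattern(event_data):
            results.append({"behavior_match": True, "event_data": event_data})
    return results


results = process_event_stream(
    [{"event_type": "process_start", "image_name": "browser.exe"}],
    refinements=[lambda e: None],       # stand-in: no refinement applies
    compositions=[lambda e: None],      # stand-in: no composition applies
    matches_behavior_pattern=lambda e: e.get("image_name") == "browser.exe",
)
print(results)
```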
As shown inFIG.4, a compute engine102can process event data122in an event stream using one or more refinement operations202and/or one or more composition operations302in any order and/or in parallel. Accordingly, the order of the refinement operation202and the composition operation302depicted inFIG.4is not intended to be limiting. For instance, as discussed above, new event data122produced by refinement operations202and/or composition operations302can be placed into an event stream to be processed by refinement operations202and/or composition operations302at the same instance of the compute engine102, and/or be placed into an event stream for another instance of the compute engine102for additional and/or parallel processing. FIG.5depicts an example of elements of a compiler114processing different types of data to generate a configuration132for instances of the compute engine102. As shown inFIG.5, the compiler114can receive at least one text source502that includes a description of an event or pattern of events to be detected by the compute engine102. The compiler114can identify a behavior pattern142, or a combination of behavior patterns142, from the pattern repository112, and use those one or more behavior patterns142to build instructions for the compute engine102in a configuration132that cause the compute engine102to look for, refine, and/or combine event data122to determine whether event data122matches target behavior of interest. For example, the compiler114can generate instructions for the compute engine102that cause the compute engine102to use refinement operations202and/or composition operations302to make corresponding comparisons on event data122. The compiler114can generate the instructions in the configuration132such that the compute engine102processes and/or generates event data122according to ontological definitions134. In some examples, the compiler114can accordingly take a comprehensive text description of a behavior of interest and decompose that comprehensive description into smaller refinements and/or compositions that together make up the overall behavior of interest. The compiler114can generate instructions for these smaller refinements and compositions that can cause a compute engine102to perform matching operations to determine when such smaller refinements and compositions apply within a stream of event data122. Based on such matches, the instructions can also cause the compute engine102to use refinement operations202and/or composition operations302to iteratively build event data122that ultimately matches the full behavior of interest when that behavior of interest has occurred. Accordingly, a user can provide a text description of a behavior of interest, and the compiler114can automatically generate a corresponding executable configuration132for instances of the compute engine102, without the user writing new source code for the compute engine102. A front-end parser504of the compiler114can transform the text source502into language expressions of an internal language model506. A language transformer508of the compiler114can then use a series of steps to transform the language expressions of the internal language model506into a fundamental model510. The fundamental model510can express operations, such as refinement operations202and/or composition operations302, that can be executed by the compute engine102as described above with respect toFIGS.2and3.
For example, the language transformer508can resolve behavior references in language expressions of the language model506to identify and/or index behaviors described by the text source502. Next, the language transformer508can eliminate values and/or computations in behavioral expressions that rely on optionality, by creating distinct and separate variants of the behavioral expressions that can be followed depending on whether a particular value is present at runtime. The language transformer508can also eliminate conditional expressions in the behavioral expressions by transforming the conditional expressions into multiple distinct behavioral expressions. Additionally, the language transformer508can eliminate Boolean expressions in logical expressions within behavioral expressions, by transforming them into multiple alternative behavioral expressions for the same fundamental behavior. Finally, the language transformer508can perform refinement extraction and composition extraction to iteratively and/or successively extract fundamental refinements and/or fundamental compositions from the behavioral expressions until none are left. The extracted fundamental refinements and fundamental compositions can define a fundamental model510for the compute engine102, and can correspond to the refinement operations202and/or composition operations302discussed above with respect toFIGS.2and3. After the language transformer508has generated a fundamental model510containing fundamental refinements and/or fundamental compositions, a dispatch builder512of the compiler114can generate one or more dispatch operations514for the compute engine102based on the fundamental model510. Overall, the dispatch builder512can transform declarative definitions of behaviors in the fundamental model510into a step-by-step execution dispatch model expressed by dispatch operations514. For example, the dispatch builder512can identify and extract public behaviors from the fundamental model510that have meaning outside a runtime model. The dispatch builder512can also transform refinements from the fundamental model510by extracting logical conditions from behavior descriptions of the fundamental model510and converting them into logical pre-conditions of execution steps that build behaviors through refinement. Similarly, the dispatch builder512can transform compositions from the fundamental model510by extracting and transforming descriptive logical conditions into pre-conditions for execution. The dispatch builder512may also transform identified compositions into a form for the storage engine116in association with rally points306. Once the dispatch builder512has extracted and/or transformed public behaviors, refinements, and/or compositions, the dispatch builder512can combine the dispatches by merging corresponding execution instructions into a set of dispatch operations514. In some examples, the dispatch builder512can express the combined dispatches using a dispatch tree format that groups different operations by class for execution. After the dispatch builder512has generated dispatch operations514from the fundamental model510, for example as expressed in a dispatch tree, a back-end generator516of the compiler114can transform the dispatch operations514into an execution structure518using a pre-binary format, such as a JavaScript Object Notation (JSON) representation. The pre-binary format can be a flat, linear representation of an execution structure518. 
In some examples, a three-address code form can be used to flatten the execution structure518, such that a hierarchical expression can be converted into an expression for at least temporary storage. For example, the back-end generator516can flatten and/or rewrite a dispatch tree produced by the dispatch builder512to have a single level with inter-tree references. The back-end generator516may also transform random-access style references in such inter-tree references to a linearized representation suitable for binary formats. The back-end generator516can build context collection formats136by transforming references to context collection formats136for new behavior production into indexed references. The back-end generator516can also construct a three-address form for the execution structure518by decomposing and transforming multi-step expressions into instructions that use temporary registers. The back-end generator516can additionally construct the execution structure518in a pre-binary format, such as a JSON format, by transforming each type of instruction to a representation in the pre-binary format. After the back-end generator516has generated an execution structure518using the pre-binary format, such as a JSON format, a serializer520of the compiler114can generate a configuration132for the compute engine102by converting the execution structure518from the pre-binary format into a binary format. The compiler114can output the generated configuration132to instances of the compute engine102. A compute engine102can then follow instructions in the configuration132to execute corresponding operations, such as refinement operations202and/or composition operations302, as described above with respect toFIGS.2-4. The generated configuration132may accordingly be an executable configuration132that any instance of the compute engine102can use to execute instructions defined in the configuration132, even though the compute engine102itself has already been deployed and/or is unchanged apart from executing the new executable configuration132. As an example, in the process ofFIG.5, a user may provide a text description of a behavior of interest via a user interface associated with the pattern repository112or other element of the distributed security system100. The description of the behavior of interest may indicate that the user wants the distributed security system100to look for network connections to a target set of IP addresses. In these examples, the compiler114can generate instructions for refinement operations202and/or composition operations302that would cause the compute engine102to review event data122for all network connections, but generate new event data122, such as refined event data204and/or composition event data304, when the event data122is specifically for network connections to one of the target set of IP addresses. That new event data122indicating that there has been a match with the behavior of interest can be output by the compute engine102as a result, as discussed above with respect to block418ofFIG.4.
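The compiler pipeline ofFIG.5described above can be summarized, purely as a hypothetical sketch, as a chain of stages; the Python function names below and their placeholder return values are illustrative assumptions, with JSON bytes standing in for the binary configuration132format.

import json

def parse_text_source(text_source):
    # Front-end parser: text description of a behavior -> internal language model (placeholder).
    return {"expressions": [line for line in text_source.splitlines() if line.strip()]}

def to_fundamental_model(language_model):
    # Language transformer: resolve references, eliminate optionality, conditionals, and Booleans,
    # and extract fundamental refinements and compositions (placeholders here).
    return {"refinements": [], "compositions": []}

def build_dispatch_operations(fundamental_model):
    # Dispatch builder: declarative behavior definitions -> step-by-step dispatch operations.
    return {"dispatch_tree": []}

def to_execution_structure(dispatch_operations):
    # Back-end generator: flatten the dispatch tree into a linear, pre-binary (JSON-like) form.
    return {"instructions": []}

def serialize_configuration(execution_structure):
    # Serializer: pre-binary form -> binary configuration (JSON bytes stand in for the binary format).
    return json.dumps(execution_structure).encode("utf-8")

def compile_configuration(text_source):
    return serialize_configuration(
        to_execution_structure(
            build_dispatch_operations(
                to_fundamental_model(
                    parse_text_source(text_source)))))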
Additionally, when an initial text description of a behavior of interest involves a set of events that may occur across a set of client devices104, the compiler114can generate instructions for local instances of the compute engine102to perform refinement operations202and/or composition operations302on certain types of event data122locally, and instructions for cloud instances of the compute engine102to perform refinement operations202and composition operations302on event data122reported to the security network106from multiple client devices104to look for a broader pattern of events across the multiple client devices104. Accordingly, although the compiler114can generate configurations132that can be executed by both local and cloud instances of the compute engine102, which specific instructions from a configuration132that a particular instance of the compute engine102executes may depend on where that instance is located and/or what event data122it receives. Bounding Manager FIG.6depicts an example data flow in a bounding manager128of a security agent108. The bounding manager128can be a gatekeeper within a local security agent108that controls how much and/or what types of event data122the security agent108sends to the security network106. Although event detectors124, a compute engine102, and/or other elements of the security agent108add event data122to a bus126or other memory location such that a communication component130can send that event data122to the security network106, a bounding manager128may limit the amount and/or types of event data122that is ultimately sent to the security network106. For example, a bounding manager128can intercept and/or operate on event data122on a bus126and make a determination as to whether the communication component130should, or should not, actually send the event data122to the security network106. For instance, when a security agent108is processing networking events associated with one or more processes running on a client device104, a bounding manager128in the security agent108may limit event data122that is sent to the security network106to only include information about unique four-tuples in network connection events, data about no more than a threshold number of networking events per process, data about no more than a threshold number of networking events per non-browser process, data about no more than a threshold number of networking events per second, or data limited by any other type of limitation. As another example, if a security agent108detects three hundred networking events per minute that occur on a client device104, but the bounding manager128is configured to allow no more than one hundred networking events per minute to be sent to the security network106, the bounding manager128may accordingly limit the security agent108to sending event data122about a sample of one hundred networking events drawn from the full set of three hundred networking events, and thereby avoid submitting event data122about the full set of three hundred networking events to the security network106. This can reduce how much event data122cloud elements of the distributed security system100store and/or process, while still providing event data122to the cloud elements of the distributed security system100that may be relevant to, and/or representative of, activity of interest that is occurring on the client device104.
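As a hedged illustration of the per-minute cap in the example above, the following minimal Python sketch shows one way such a rate limit might be enforced; the class name RateCapSketch and its fields are hypothetical and do not represent the actual bounding manager128implementation.

import time

class RateCapSketch:
    def __init__(self, limit_per_minute=100):
        self.limit = limit_per_minute
        self.window_start = time.monotonic()
        self.sent_in_window = 0

    def should_send(self, event):
        now = time.monotonic()
        if now - self.window_start >= 60:
            self.window_start = now               # start a new one-minute window
            self.sent_in_window = 0
        if self.sent_in_window < self.limit:
            self.sent_in_window += 1
            return True                           # forward this event toward the security network
        return False                              # bound (do not send) this event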
In some examples, event data122intercepted and operated on by the bounding manager128can be original event data122about events observed or detected on the client device104by one or more event detectors124of the security agent108. In other examples, event data122intercepted and operated on by the bounding manager128can be event data122produced by an instance of the compute engine102, such as event data122produced by refinement operations202and/or composition operations302. In some examples, the bounding manager128can be an enhancer located on a bus126that can intercept or operate on event data122from the bus126before the event data122reaches other elements of the security agent108that may operate on the event data122. A bounding manager128can operate according to bounding rules provided by the bounding service118in one or more configurations132. Bounding rules can be defined through one or more selectors602that can be implemented by a bounding manager128as will be discussed further below, such that a bounding manager128can apply bounding rules by processing event data122from an event stream using one or more associated selectors602. As discussed above, a bounding manager128can be provided with a configuration132generated based on an ontological definition134of a context collection interface138, such that the bounding manager128can process event data122formatted using any context collection format136that includes at least the data elements of the context collection interface138, if an interface fulfillment map140corresponds to the context collection format136and the context collection interface138. In some examples, configurations132for a bounding manager128can be sent from the security network106as one or more channel files. In some examples, the distributed security system100can use different categories of channel files, including global channel files, customer channel files, customer group channel files, and/or agent-specific channel files. Global channel files can contain global bounding rules that are to be applied by bounding managers128in all security agents108on all client devices104. Customer channel files can contain customer-specific bounding rules that are to be applied by bounding managers128in security agents108on client devices104associated with a particular customer. For example, a particular customer may want more information about a certain type of event or pattern of events that the customer believes may be occurring on the customer's client devices104. Corresponding customer-specific bounding rules can thus be generated that may cause bounding managers128to allow more event data122about that type of event or pattern of events to be sent to cloud elements of the distributed security system100. The customer-specific bounding rules can be pushed, via customer channel files, to security agents108executing on the customer's client devices104. Customer group channel files can be similar channel files containing bounding rules that are specific to a particular group or type of customers. Agent-specific channel files can contain bounding rules targeted to specific individual security agents108running on specific individual client devices104. For example, if it is suspected that a particular client device104is being attacked by malware or is the focus of another type of malicious activity, agent-specific channel files can be generated via the bounding service118and be sent to the security agent108running on that particular client device104. 
In this example, the agent-specific channel files may provide a bounding manager128with new or adjusted bounding rules that may result in more, or different, event data122being sent to the security network106that may be expected to be relevant to the suspected malicious activity. In some examples, an agent-specific channel file can include an AID or other unique identifier of a specific security agent108, such that the agent-specific channel file can be directed to that specific security agent108. Accordingly, a bounding service118can use different types of channel files to provide bounding managers128of different security agents108with different sets of bounding rules. For example, a bounding service118may provide all security agents108with general bounding rules via global channel files, but may also use customer, customer group, and/or agent-specific channel files to provide additional targeted bounding rules to subsets of security agents108and/or individual security agents108. In such cases, a bounding manager128may operate according to both general bounding rules as well as targeted bounding rules. In some examples, a bounding manager128can restart, or start a new instance of the bounding manager128, that operates according to a new combination of bounding rules when one or more new channel files arrive. In some examples, a bounding service118or other cloud element of the distributed security system100can also, or alternately, send specialized event data122to a client device104as a configuration132for a bounding manager128. In these examples, the specialized event data122can include data about new bounding rules or modifications to bounding rules. A bounding manager128can intercept or receive the specialized event data122as if it were any other event data122, but find the data about new or modified bounding rules and directly implement those new or modified bounding rules. For example, configurations132for a bounding manager128provided through one or more channel files may take seconds or minutes for the bounding manager128to begin implementing, for instance if the bounding manager128needs to receive and evaluate new channel files, determine how new channel files interact with previous channel files, and/or restart the bounding manager128or start a new instance of the bounding manager128in accordance with a changed set of channel files, or if the bounding service118itself takes time to build and deploy channel files. In contrast, the bounding manager128may be configured to almost immediately implement new or modified bounding rules defined via specialized event data122. As an example, a bounding service118can provide specialized event data122to a local security agent108that causes that security agent's bounding manager128to directly turn off or turn on application of a particular bounding rule or corresponding selector602, and/or directly adjust one or more parameters of one or more selectors602. As noted above, bounding rules can be defined through one or more selectors602that a bounding manager128can apply by processing event data122from an event stream using one or more selectors602associated with the event data122. Each selector602can be associated with reporting criteria604, markup606, and/or a priority value608. Each selector602can be an algorithm that can generate an independent reporting recommendation610about whether a piece of event data122should be sent to the security network106.
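Before turning to how individual selectors602evaluate event data122, the following minimal Python sketch illustrates one way the channel file categories described above (global, customer, customer group, and agent-specific) might be layered into an effective rule set for a single bounding manager128; the function name and rule structure are hypothetical assumptions rather than the actual channel file format.

def effective_bounding_rules(global_rules, customer_rules=None, group_rules=None, agent_rules=None):
    merged = dict(global_rules)                   # rule_id -> rule definition
    for layer in (customer_rules, group_rules, agent_rules):
        if layer:
            merged.update(layer)                  # targeted rules add to or override general ones
    return merged

# A device suspected of being attacked gets a more permissive cap via an agent-specific rule.
rules = effective_bounding_rules(
    global_rules={"net_events": {"max_per_minute": 100}},
    agent_rules={"net_events": {"max_per_minute": 500}},
)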
In some examples, different selectors602can operate on the same piece of event data122and provide conflicting reporting recommendations610about that piece of event data122. However, the bounding manager128can include a priority comparer612that can evaluate priority values608associated with the different selectors602and/or their reporting recommendations610to make a final decision about whether or not to send the piece of event data122to the security network106. The bounding manager128can also include a counting engine614that can track statistical data616about event data122. Individual selectors602may operate on event data122, or groups of event data122based on attributes in the event data122. For example, a selector602can be configured to operate on individual event data122or a group of event data122when the event data122includes a certain process ID, is associated with a certain behavior pattern142, includes a certain keyword or other target value, matches a certain event type, and/or matches any other attribute associated with the selector602. As an example, a selector602can be configured to operate on event data122when the event data122is for a DNS request about a specific domain name. However, a piece of event data122may include attributes that match multiple selectors602, such that more than one selector602can operate on that piece of event data122. For example, event data122for a DNS request to a certain domain name may be operated on by a first selector602associated with all networking events, a second selector602associated more specifically with DNS requests, and a third selector602specifically associated with that domain name. A reporting recommendation610generated by a selector602can be based on reporting criteria604associated with that selector602. A selector's reporting recommendation610can be a positive, a negative, or a neutral recommendation. In some examples, reporting criteria604for a selector602can include upper and/or lower bounds of reporting rates or overall counts regarding how much of a certain type of event data122should be sent to the security network106. For example, reporting criteria604can indicate that event data122about a certain type of event should be sent to the security network106at least fifty times an hour, but no more than three hundred times an hour. As another example, reporting criteria604can indicate that a sample of five hundred instances of a certain type of event data122should be sent to the security network106, after which no more instances of that type of event data122need be sent to the security network106. Accordingly, the counting engine614can track statistical data616associated with one or more individual selectors602about how much corresponding event data122has been sent to the security network106, such that a selector602can use the statistics to determine if new event data122meets reporting criteria604when making a reporting recommendation610. A positive reporting recommendation610can indicate that a selector602recommends that a piece of event data122should be sent to the security network106. 
For example, if reporting criteria604for a selector602indicates that at least fifty pieces of a certain type of event data122should be sent to the security network106over a certain period of time, and statistical data616tracked by the counting engine614indicates that only thirty pieces of that type of event data122have been sent to the security network106over that period of time, the selector602can make a positive reporting recommendation610recommending that a new piece of event data122of that type be sent to the security network106. A negative reporting recommendation610can indicate that a selector602has determined that a piece of event data122should be bounded, and accordingly should not be sent to the security network106. For example, if reporting criteria604for a selector602indicates that five hundred instances of a certain type of event data122should be sent to the security network106overall, and statistical data616tracked by the counting engine614indicates that five hundred instances of that type of event data122have already been sent to the security network106, the selector602can make a negative reporting recommendation610recommending that a new piece of event data122of that type not be sent to the security network106. A neutral reporting recommendation610can indicate that a selector602has no preference about whether or not to send a piece of event data122to the security network106. For example, if reporting criteria604for a selector602indicates that between fifty and one hundred pieces of a certain type of event data122should be sent to the security network106over a certain period of time, and statistical data616tracked by the counting engine614indicates that sixty pieces of that type of event data122have already been sent to the security network106over that period of time, the selector602can make a neutral reporting recommendation610because the statistical data616shows that an amount of matching event data122between the lower and upper bounds of the selector's reporting criteria604has already been sent to the security network106during the period of time. In some examples, a selector602may also make a neutral reporting recommendation610if the selector602does not apply to the type of a certain piece of event data122. If a selector602generates a positive reporting recommendation610for a piece of event data122, the selector602can also add markup606associated with the selector602to the event data122. The markup606can be a reason code, alphanumeric value, text, or other type of data that indicates why the selector602recommended that the event data122be sent to the security network106. Each selector602that generates a positive reporting recommendation610for a piece of event data122can add its own unique markup606to the event data122. Accordingly, if more than one selector602recommends sending a piece of event data122to the security network106, the piece of event data122can be given markup606indicating more than one reason why the piece of event data122is being recommended to be sent to the security network106. In some examples, markup606from different selectors602can be aggregated into a bitmask or other format that is sent to the security network106as part of, or in addition to, the event data122. Each selector602can also provide a priority value608along with its reporting recommendation610, whether the reporting recommendation610is positive, negative, or neutral. In some examples, the priority value608associated with a selector602can be a static predefined value.
For instance, a selector602may be configured to always make a reporting recommendation610with a specific priority value608. In other examples, the priority value608associated with a selector602can be dynamically determined by the selector602based on an analysis of event data122and/or statistical data616. For example, if a selector's reporting criteria604has a lower bound indicating that at least one hundred pieces of a type of event data122should be sent to the security network106per hour, but statistical data616indicates that only ten pieces of that type of event data122have been sent to the security network106during the current hour, the selector602can produce a positive reporting recommendation610with a high priority value608in an attempt to increase the chances that the event data122is ultimately sent to the security network106and the lower bound of the selector's reporting criteria604will be met. In contrast, if the statistical data616instead indicates that seventy-five pieces of that type of event data122have been sent to the security network106during the current hour, and thus that the lower bound of the selector's reporting criteria604is closer to being met, the selector602can produce a positive reporting recommendation610with a lower priority value608. As mentioned above, a priority comparer612can compare priority values608of selectors602or their reporting recommendations610to make an ultimate determination as to whether or not the bounding manager128should send a piece of event data122to the security network106. For example, if a first selector602with a priority value608of “1000” makes a negative reporting recommendation610because a maximum amount of event data122about networking events has already been sent to the security network106in the past day, but a second selector602with a priority value608of “600” makes a positive reporting recommendation610because that selector602recommends sending additional event data122specifically about IP connections, the priority comparer612can determine that the negative reporting recommendation610from the higher-priority first selector602should be followed. Accordingly, in this example, the security agent108would not send event data122to the security network106despite the positive reporting recommendation610from the lower-priority second selector602. In some examples, the priority comparer612can be configured to disregard neutral reporting recommendations610from selectors602regardless of their priority values608. In some examples, the priority comparer612can add a bounding decision value to a bounding state field in event data122. The bounding decision value can be a value, such as binary yes or no value, that expresses the ultimate decision from the priority comparer612as to whether the security agent108should or should not send the event data122to the security network106. The priority comparer612can then return the event data122to a bus126in the security agent108, or modify the event data122in the bus126, such that the event data122can be received by a communication component130of the security agent108. The communication component130can use a Boolean expression or other operation to check if the bounding state field in the event data122indicates that the event data122should or should not be sent to the security network106, and can accordingly follow the bounding decision value in that field to either send or not send the event data122to the security network106. 
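The following minimal Python sketch ties the selector behavior described above together for illustration only: a hypothetical selector produces a positive, negative, or neutral recommendation with markup and a priority value based on lower and upper reporting bounds and tracked counts. The class and field names are assumptions, not the actual selector602implementation.

POSITIVE, NEGATIVE, NEUTRAL = "positive", "negative", "neutral"

class SelectorSketch:
    """Hypothetical selector: recommends sending or bounding a piece of event data."""
    def __init__(self, name, matches, lower=None, upper=None, priority=100, markup=None):
        self.name = name
        self.matches = matches        # callable(event_dict) -> bool: does this selector apply?
        self.lower = lower            # reporting criteria: minimum amount to send
        self.upper = upper            # reporting criteria: maximum amount to send
        self.priority = priority      # priority value attached to every recommendation
        self.markup = markup          # reason code added to event data on a positive recommendation

    def recommend(self, event, sent_count):
        # sent_count: statistical data tracked elsewhere for this type of event data.
        if not self.matches(event):
            return NEUTRAL, self.priority
        if self.upper is not None and sent_count >= self.upper:
            return NEGATIVE, self.priority          # cap reached: recommend bounding
        if self.lower is not None and sent_count < self.lower:
            if self.markup is not None:
                event.setdefault("markup", []).append(self.markup)
            return POSITIVE, self.priority          # below the lower bound: recommend sending
        return NEUTRAL, self.priority               # within bounds: no preference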
In other examples, the priority comparer612may discard event data122from the bus126that the priority comparer612decides should not be sent to the security network106, such that the communication component130only receives event data122that the priority comparer612has determined should be sent to the security network106. As discussed above, one or more selectors602that made positive reporting recommendations610can have added markup606to the event data122indicating reasons why those selectors602recommended sending the event data122to the security network106. Accordingly, cloud elements of the distributed security system100can review that markup606to determine one or more reasons why the event data122was sent to the security network106, and, in some examples, can store and/or route the event data122within the security network106based on the reasons identified in the markup606. In some examples, if a selector602makes a reporting recommendation610that is overruled by another reporting recommendation610from a higher-priority selector602, the bounding manager128can update data associated with the selector602to indicate why the selector's reporting recommendation610was overruled. For example, a table for a particular selector602may indicate that the particular selector602processed event data122for five hundred events and recommended that three hundred be bounded, but that ultimately event data122for four hundred events was sent to the security network106due to higher-priority selectors602. Accordingly, such data can indicate a full picture of why certain event data122was or was not sent to the security network106because of, or despite, a particular selector's reporting recommendation610. In some examples, the bounding manager128can provide this type of data to the security network106as diagnostic data, as event data122, or as another type of data. While the bounding manager128can cause less than a full set of event data122to be sent to the security network106based on reporting recommendations610as described above, in some situations the bounding manager128can also send statistical data616about a set of event data122to the security network106instead of event data122directly. This can also decrease the amount of data reported to the security network106. For example, the counting engine614can be configured to count instances of certain types of event data122that pass through the bounding manager128. The counting engine614can generate statistical data616that reflects such a count, and emit that statistical data616as event data122, or another type of data or report, that the security agent108can send to the security network106. Accordingly, the security network106can receive a count of the occurrences of a type of event as a summary, without receiving different individual pieces of event data122about individual occurrences of that type of event. As an example, if cloud elements of the distributed security system100are configured to determine how many, and/or how often, files are accessed on one or more client devices104, the cloud elements may not need detailed event data122about every individual file access event that occurs on the client devices104. As another example, registry events may occur thousands of times per minute, or more, on a client device104.
While it may be inefficient or costly to send event data122about each individual registry event to the security network106, it may be sufficient to simply send the security network106a count of how many such registry events occurred over a certain period of time. Accordingly, a configuration132may instruct the counting engine614to, based on event data122, generate statistical data616including a count of the number of certain types of event occurrences on a client device104over a period of time. The security agent108can then send the statistical data616reflecting the overall count of such event occurrences to the security network106as event data122, or another type of report, instead of sending event data122about each individual event occurrence to the security network106. In some examples, statistical data616can trigger whether event data122about individual event occurrences or an overall count of those event occurrences is sent to the security network106. For example, the counting engine614can determine if a count of certain event occurrences reaches a threshold over a period of time. If the count reaches the threshold, the counting engine614can cause the security agent108to send the count instead of event data122about individual event occurrences. However, if the count does not reach the threshold, the counting engine614can cause the security agent108to send the event data122about individual event occurrences. In still other examples, the counting engine614can be configured to always cause a count of certain event occurrences to be sent to the security network106, but be configured to wait to send such a count until the count reaches a certain threshold value, on a regular basis, or on demand by the storage engine116or other element of the distributed security system100. In some examples, if a new channel file or other type of configuration132arrives while a counting engine614has already generated counts or other statistical data616, the bounding manager128can initiate a second instance of the counting engine614that operates according to the new configuration132and perform a state transfer from the old instance of the counting engine614to the new instance of the counting engine614. For example, a new agent-specific channel file may arrive that, in combination with previously received global and/or customer channel files, would change how the counting engine614counts events or generates other statistical data616. Rather than terminating the existing instance of the counting engine614that was generating statistical data616based on an old set of configurations132and losing already-generated statistical data616from that instance of the counting engine614, the bounding manager128may initiate a second instance of the counting engine614that generates statistical data616based on the new combination of configurations132. In some examples, a state transfer can then allow the new instance of the counting engine614to take over and build on previously generated statistical data616from the older instance of the counting engine614. In other examples, the new instance of the counting engine614may run in parallel with the older instance of the counting engine614for at least a warm-up period to learn the state of the previously generated statistical data616. 
For example, due to modified and/or new data types in a new configuration132, previous statistical data616generated by the old instance of the counting engine614may not be directly transferrable to the new instance of the counting engine614that operates based on the new configuration132. However, during a warm-up period, the new instance of the counting engine614can discover or learn information that is transferrable from the older statistical data616. In some examples, configurations132may be provided that define new selectors602, modify existing selectors602, and/or enable or disable specific selectors602. In some examples, a configuration132can enable or disable certain selectors602immediately or for a certain period of time. For example, if the storage engine116or other cloud elements of the distributed security system100are becoming overloaded due to security agents108sending too much event data122to the security network106, the bounding service118can push a configuration132to a security agent108that immediately causes selectors602to provide negative reporting recommendations610or reporting recommendations610with different priority values608, such that the security agent108reduces or even stops sending event data122for a set period of time or until a different configuration132is received. For instance, a configuration132may be used to immediately cause a certain selector602that applies to all types of event data122to provide a negative reporting recommendation610with a highest-possible priority value608for all event data122, such that the priority comparer612will follow that negative reporting recommendation610and block all event data122from being sent to the security network106for a period of time. As another example, a configuration132can be provided that causes the bounding manager128to immediately cause event data122to be sent to the security network106when a particular selector's reporting criteria604is met, without going through the process of the priority comparer612comparing priority values608of different reporting recommendations610about that event data122. In some examples, the bounding service118can provide a user interface that allows users to define new selectors602and/or modify reporting criteria604, markup606, priority values608, and/or other attributes of selectors602for a new configuration132for a bounding manager128. In some examples, the bounding service118can provide templates that allow users to adjust certain values associated with selectors602for bounding managers128of one or more security agents108, and the bounding service118can then automatically create one or more corresponding configurations132for those security agents108, such as global channel files, customer channel files, or agent-specific channel files. Configurations132that change, enable, or disable selectors602can also be used by the experimentation engine120to adjust reporting levels of certain types of event data122permanently or during a test period. For example, if a certain type of event data122is expected to be relevant to an experiment, the experimentation engine120can cause a configuration132for bounding managers128to be pushed to one or more security agents108that provide new or modified selectors602that at least temporarily increase the amount of that targeted type of event data122that gets sent to the security network106.
In some cases, the configuration132can be provided to security agents108of one or more client devices104that are part of an experiment, such as individual client devices104, a random sample of client devices104, or a specific group of client devices104. After a certain period of time, or after enough of the target type of event data122has been collected for the experiment, previous configurations132can be restored to return the security agents108to reporting event data122at previous reporting rates. Additionally, as discussed above, individual selectors602that make positive reporting recommendations610can add corresponding markup606to event data122to indicate reasons why the event data122was recommended to be sent to the security network106. When one or more selectors602are associated with an experiment run via the experimentation engine120, those selectors602can provide markup606indicating that event data122was recommended to be sent to the security network106because it is associated with the experiment. Accordingly, when the event data122arrives at the storage engine116, the event data122can include markup606from one or more selectors602, potentially including selectors602associated with an experiment in addition to selectors602that are not directly associated with the experiment. The storage engine116may use markup606from the experiment selectors602to store or route the event data122to cloud elements associated with the experiment, as well as store or route the same event data122to other elements that are not associated with the experiment based on other non-experiment markup606. FIG.7depicts a flowchart of an example process by which a priority comparer612of a bounding manager128can determine whether or not a security agent108should send event data122to the security network106. At block702, the priority comparer612can receive a set of reporting recommendations610produced by different selectors602of the bounding manager128for a piece of event data122. Each reporting recommendation610, or the selector602that produced the reporting recommendation610, can be associated with a priority value608. At block704, the priority comparer612can identify a non-neutral reporting recommendation610that is associated with the highest priority value608among the set of reporting recommendations610. Because reporting criteria604of selectors602that made neutral reporting recommendations610can be satisfied regardless of whether the event data122is ultimately sent to the security network106, the priority comparer612may disregard neutral reporting recommendations610at block704regardless of their priority values608, and only consider priority values608of positive reporting recommendations610and negative reporting recommendations610. At block706, the priority comparer612can determine whether the highest-priority reporting recommendation610is positive. If the highest-priority reporting recommendation610is positive, at block708the priority comparer612can cause the event data122to be sent to the security network106. For example, based on the decision by the priority comparer612, the bounding manager128can release the event data122to a bus126of the security agent108, which in turn can cause the security agent108to send the event data122to the security network106. Here, even if one or more negative reporting recommendations610were also made by selectors602, a positive reporting recommendation610can overrule those negative reporting recommendations610when it has the highest priority value608.
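As an illustrative sketch of the comparison performed at blocks704-710, the following Python function ignores neutral recommendations and follows the highest-priority positive or negative recommendation; the default behavior when every recommendation is neutral is an assumption, since it is not specified above.

def should_send(recommendations):
    """Decide whether to send event data, given (decision, priority) pairs from selectors."""
    decisive = [(decision, priority) for decision, priority in recommendations
                if decision in ("positive", "negative")]
    if not decisive:
        return False    # all recommendations were neutral; defaulting to not sending is an assumption
    decision, _ = max(decisive, key=lambda pair: pair[1])   # highest priority value wins
    return decision == "positive"

# A priority-1000 negative recommendation overrules a priority-600 positive one.
assert should_send([("negative", 1000), ("positive", 600)]) is False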
The event data122that is sent to the security network106at block708can include markup606associated with at least one selector602indicating why that selector602made a positive reporting recommendation610. If more than one selector602made a positive reporting recommendation610, the event data122that is sent to the security network106can include markup606from a set of selectors602that made positive reporting recommendations610. Accordingly, even though only one reporting recommendation610has the highest priority value608, the event data122ultimately sent to the security network106can include markup606from one or more selectors602. In some examples, if one or more selectors602that made positive reporting recommendations610have not already added corresponding markup606to the event data122, the bounding manager128can add markup606associated with those selectors602before the event data122is sent to the security network106at block708. When event data122is sent to the security network106at block708, the counting engine614can also update statistical data616about that type of event data122to indicate how much of, and/or how often, that type of event data122has been sent to the security network106. This updated statistical data616can in turn be used by selectors602to make reporting recommendations610on subsequent event data122. If the priority comparer612instead determines at block706that the highest-priority reporting recommendation610is negative, at block710the priority comparer612can cause the bounding manager128to discard the event data122or otherwise prevent the event data122from being sent by the security agent108to the security network106, for example by adding a bounding value to a bounding decision field that causes other elements of the security agent108to not send the event data122to the security network106. In this situation, even if one or more lower-priority selectors602made positive reporting recommendations610and/or added markup606to the event data122about why the event data122should be sent, the higher priority value608of the negative reporting recommendation610can be determinative such that the security agent108does not send the event data122to the security network106. Storage Engine FIG.8depicts an example of data flow in a storage engine116of the security network106. An input event stream802of event data122sent to the security network106by one or more local security agents108can be received by a storage engine116in the security network106, as shown inFIG.1. In some examples, security agents108can send event data122to the security network106over a temporary or persistent connection, and a termination service or process of the distributed security system100can provide event data122received from multiple security agents108to the storage engine116as an input event stream802. The event data122in the input event stream802may be in a random or pseudo-random order when it is received by the storage engine116. For example, event data122for different events may arrive at the storage engine116in the input event stream802in any order without regard for when the events occurred on client devices104. As another example, event data122from security agents108on different client devices104may be mixed together within the input event stream802when they are received at the storage engine116, without being ordered by identifiers of the security agents108. However, the storage engine116can perform various operations to sort, route, and/or store the event data122within the security network106.
The storage engine116can be partitioned into a set of shards804. Each shard804can be a virtual instance that includes its own resequencer806, topic808, and/or storage processor810. Each shard804can also be associated with a distinct cloud instance of the compute engine102. For example, if the storage engine116includes ten thousand shards804, there can be ten thousand resequencers806, ten thousand topics808, ten thousand storage processors810, and ten thousand cloud instances of compute engines102. Each shard804can have a unique identifier, and a particular shard804can be associated with one or more specific security agents108. In some examples, a particular instance of the compute engine102can be associated with a specific shard804, such that it is configured to process event data122from specific security agents108associated with that shard804. However, in some examples, cloud instances of the compute engine102can also be provided that are specifically associated with certain rally points306associated with corresponding composition operations302, such that the cloud instances of the compute engine102can execute composition operations302that may expect or process different pieces of event data122generated across one or more client devices104using such rally points306. Resequencers806of one or more shards804can operate in the storage engine116to sort and/or route event data122from the input event stream802into distinct topics808associated with the different shards804. The topics808can be queues or sub-streams of event data122that are associated with corresponding shards804, such that event data122in a topic808for a shard804can be processed by a storage processor810for that shard804. In some examples, event data122from the input event stream802can be received by one resequencer806in a cluster of resequencers806that are associated with different shards804. That receiving resequencer806can determine, based on an AID or other identifier of the security agent108that sent the event data122, whether that resequencer806is part of the shard804that is specifically associated with that security agent108. If the receiving resequencer806is part of the shard804associated with the sending security agent108, the resequencer806can route the event data122to the topic808for that shard804. If the resequencer806that initially receives event data122determines that it is not part of the shard804associated with the sending security agent108, the resequencer806can forward the event data122to a different resequencer806that is part of the shard804associated with the sending security agent108. In some examples, a resequencer806can send event data122to another resequencer806via a remote procedure call (RPC) connection or channel. A resequencer806can determine whether event data122is associated with the shard804of the resequencer806, or is associated with a different shard804, based on an identifier, such as an AID, of the security agent108that sent the event data122. For example, the resequencer806can perform a modulo operation to divide an AID value in event data122by the number of shards804in the storage engine116, find the remainder of the division, and find a shard804with an identifier that matches the remainder.
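The modulo-based routing just described can be illustrated with the following minimal Python sketch; the class name, the use of an integer AID (a string AID could be hashed first), the batch-size threshold, and the peer-forwarding mechanism are all hypothetical assumptions rather than the actual resequencer806implementation.

NUM_SHARDS = 10_000   # example shard count used in the description above

class ResequencerSketch:
    def __init__(self, shard_id, peers, batch_size=1000):
        self.shard_id = shard_id      # identifier of the shard this resequencer belongs to
        self.peers = peers            # shard_id -> peer resequencer (e.g., an RPC stub); hypothetical
        self.batch_size = batch_size
        self.buffer = []

    def receive(self, event):
        aid = event["aid"]
        if not isinstance(aid, int):
            aid = hash(aid)                           # assumption: hash a non-numeric AID first
        target_shard = aid % NUM_SHARDS               # e.g., remainder 60 -> shard "60"
        if target_shard != self.shard_id:
            self.peers[target_shard].receive(event)   # forward to the owning shard's resequencer
            return
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:       # threshold size reached
            batch = sorted(self.buffer, key=lambda e: e["timestamp"])  # reorder by time
            self.buffer.clear()
            self.publish_to_topic(batch)              # ordered batch into this shard's topic

    def publish_to_topic(self, batch):
        ...                                           # placeholder: hand the batch to the storage processor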
As an example, when there are ten thousand shards804in the storage engine116and a remainder of a modulo operation on a security agent's AID is “60,” the resequencer806can determine that the security agent108is associated with a shard804having an identifier of “60.” If that resequencer806is part of shard “60,” the resequencer806can route the event data122to a topic808associated with shard “60.” However, if the resequencer806is not part of shard “60,” the resequencer806can use an RPC connection or other type of connection to forward the event data122to another resequencer806that is associated with shard “60.” In some examples, if a first resequencer806attempts to forward event data122from a security agent108to a second resequencer806that is part of a different shard804associated with that security agent108, the second resequencer806may be offline or be experiencing errors. In this situation, the storage engine116can reassign the security agent108to the shard804associated with the first resequencer806, or to another backup shard804. Accordingly, the event data122can be processed by elements of a backup shard804without waiting for the second resequencer806to recover and process the event data122. In some examples, a resequencer806may also order event data122by time or any other attribute before outputting a batch of such ordered event data122in a topic808to a corresponding storage processor810. For example, when a resequencer806determines that it is the correct resequencer806for event data122, the resequencer806can temporarily place that event data122in a buffer of the resequencer806. Once the size of data held in the buffer reaches a threshold size, and/or event data122has been held in the buffer for a threshold period of time, the resequencer806can re-order the event data122held in the buffer by time or any other attribute, and output a batch of ordered event data122from the buffer to a topic808. After event data122from the input event stream802has been sorted and partitioned by resequencers806into topics808of different shards804, storage processors810of those different shards804can further operate on the event data122. Example operations of a storage processor810are described below with respect toFIG.10. In some examples, a single processing node812, such as a server or other computing element in the security network106, can execute distinct processes or virtual instances of storage processors810for multiple shards804. After a storage processor810for a shard804has operated on event data122, the storage processor810can output event data122to a corresponding cloud instance of the compute engine102associated with the shard804. In some examples, each storage processor810executing on a processing node812can initiate, or be associated, with a corresponding unique instance of the compute engine102that executes on the same processing node812or a different processing node812in the security network106. As described further below, in some examples the storage processor810can also output event data122to short-term and/or long-term storage814, and/or to an emissions generator816that prepares an output event stream818to which other cloud elements of the distributed security system100can subscribe. FIG.9depicts an example of a storage processor810sending event data122to a corresponding compute engine102. As described above, the compute engine102can process incoming event data122based on refinement operations202, composition operations302, and/or other operations. 
However, in some examples, the compute engine102may not initially be able to perform one or more of these operations on certain event data122. For example, if a particular operation of the compute engine102compares attributes in event data122about different processes to identify which parent process spawned a child process, the compute engine102may not be able to perform that particular operation if the compute engine102has received event data122about the child process but has not yet received event data122about the parent process. In these types of situations, in which the compute engine102receives first event data122but expects related second event data122to arrive later that may be relevant to an operation, the compute engine102can issue a claim check902to the storage processor810. The claim check902can indicate that the compute engine102is expecting second event data122to arrive that may be related to first event data122that has already arrived, and that the storage processor810should resend the first event data122to the compute engine102along with the second event data122if and when the second event data122arrives. In some examples, the claim check902can identify the first and/or second event data122using a key, identifier, string value, and/or any other type of attribute. Accordingly, once a compute engine102has sent a claim check902for second event data122that may be related to first event data122, the compute engine102may be configured to disregard the first event data122if and until the related second event data122arrives or a threshold period of time passes. For example, if the storage processor810determines that second event data122corresponding to a claim check902has arrived, the storage processor810can send that second event data122to the compute engine102along with another copy of the first event data122such that the compute engine102can process the first event data122and the second event data122together. As another example, the storage processor810may wait for the expected second event data122for a threshold period of time, but then resend the first event data122to the compute engine102if the threshold period of time passes without the expected second event data122arriving. Accordingly, in this situation the compute engine102can move forward with processing the first event data122without the second event data122. In some examples, a claim check902can depend on, or be related to, one or more other claim checks902. For example, when event data122about a child process arrives, a compute engine102may issue a claim check902for event data122about a parent process. However, the compute engine102may additionally issue a separate claim check902for event data122about a grandparent process, a parent process of the parent process. Accordingly, in this example, a storage processor810can wait to provide the compute engine102with event data122about the child process, the parent process, and the grandparent process until that event data122has arrived and both related claim checks902have been satisfied. Similarly, if multiple claim checks902have been issued that are waiting for the same expected event data122, a storage processor810can respond to those multiple claim checks902at the same time if and when the expected event data122arrives.
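As a simplified illustration of the claim-check bookkeeping described above, the following Python sketch holds first event data until related second event data arrives and then returns both together; the class and key names are hypothetical, and timeout handling is omitted.

from collections import defaultdict

class ClaimCheckLedgerSketch:
    def __init__(self):
        # key identifying expected event data -> pieces of event data held for that claim check
        self.pending = defaultdict(list)

    def register(self, expected_key, held_event):
        # The compute engine issued a claim check: hold the already-arrived event data.
        self.pending[expected_key].append(held_event)

    def on_event(self, event):
        # Return everything that should be sent to the compute engine for this arrival.
        key = event.get("claim_key")
        if key is not None and key in self.pending:
            held = self.pending.pop(key)
            return held + [event]     # resend the held event data together with the new arrival
        return [event]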
In some examples, a storage processor810can generate a dependency graph of pending claim checks902that depend on each other, such that the storage processor810can perform a breadth-first search or other traversal of the dependency graph when event data122arrives to find claim checks902pending against related event data122. In some examples, claim checks902can be processed by the storage engine116and/or the compute engine102at runtime, for example when claim checks902are issued, to determine dependencies between claim checks902, and to determine when claim checks902are satisfied. In contrast, in some examples, the rally points306discussed above with respect to composition operations302executed by compute engines102can be evaluated and determined at compile time, such as to generate configurations132for compute engines102that define storage requirements for rally points306and indicate triggers and other instructions for when and how to create rally points306. FIG.10depicts a flowchart of example operations that can be performed by a storage processor810in a storage engine116. At block1002, the storage processor810can receive event data122in a topic808from a resequencer806. At block1004, the storage processor810can perform de-duplication on the event data122from the topic808. For example, if the topic808contains duplicate copies of certain event data122, and/or the storage processor810already operated on another copy of that certain event data122in the past, the duplicate copy can be discarded from the storage engine116and not be processed further by the distributed security system100. Here, because event data122is sorted and routed into topics808and corresponding storage processors810based on an identifier of the security agent108that sent the event data122, copies of the same event data122can be routed to the same storage processor810. Accordingly, there can be a confidence level that different storage processors810are not operating on separate copies of the same event data122, and that the particular storage processor810associated with event data122from a particular security agent108can safely discard extra copies of duplicated event data122from that particular security agent108. At block1006, the storage processor810can perform batching and/or sorting operations on event data122from a topic808. For example, even if a resequencer806for a shard804released batches of event data122into a topic808, and each individual batch from the resequencer806was sorted by time, a first batch may contain event data122about an event that occurred on a client device104after an event described by event data122in a second batch. Accordingly, the storage processor810can reorder the event data122from the topic808if it is not fully in a desired order. The storage processor810can also sort and/or batch event data122from a topic808based on event type, behavior type, and/or any other attribute. At block1008, the storage processor810can detect if any event data122received via the topic808matches a claim check902previously issued by the compute engine102. As discussed above, the compute engine102can issue claim checks902for event data122expected to arrive at later points in time. Accordingly, at block1008, the storage processor810can determine if matches are found for any pending claim checks902.
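A dependency graph of related claim checks902, traversed breadth-first as described above, might be represented as in the following sketch; the class name and method names are hypothetical, and the graph simply records which claim checks depend on which others.

    from collections import deque

    class ClaimCheckGraph:
        """Sketch of tracking dependent claim checks, e.g., a claim check for a
        parent process and a further claim check for the grandparent process."""

        def __init__(self):
            self.depends_on = {}    # claim key -> set of claim keys it depends on
            self.satisfied = set()  # claim keys whose expected event data has arrived

        def add(self, key: str, depends_on=()):
            self.depends_on[key] = set(depends_on)

        def mark_arrived(self, key: str):
            self.satisfied.add(key)

        def ready(self, key: str) -> bool:
            # Breadth-first traversal over related claim checks: the held event
            # data is released only when every claim check in the chain is satisfied.
            queue, seen = deque([key]), set()
            while queue:
                current = queue.popleft()
                if current in seen:
                    continue
                seen.add(current)
                if current not in self.satisfied:
                    return False
                queue.extend(self.depends_on.get(current, ()))
            return True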
If newly received event data122matches an existing claim check902, the storage processor810can retrieve any other event data122that corresponds to the claim check902and prepare to send both the newly received event data122and the other corresponding event data122to the compute engine102at block1010. For example, if a compute engine102, after receiving first event data122, issued a claim check902for second event data122related to the first event data122, and the storage processor810determines at block1008that the second event data122has arrived, the storage processor810can retrieve the first event data122from storage814or other memory and prepare to send both the first event data122and the second event data122to the compute engine102at block1010. As discussed above, in some examples the storage processor810can build a dependency graph or other representation of multiple related claim checks902. Accordingly, at block1008the storage processor810can use a dependency graph or other representation of related claim checks902to determine if related claim checks902have been satisfied. If event data122has arrived that satisfy dependent or related claim checks902, the storage processor810can prepare to send the corresponding related event data122to the compute engine102at block1010. At block1010, the storage processor810can send event data122to the compute engine102. As noted above, the event data122sent at block1010can include both new event data122from a topic as well as any older event data122that is to be resent to the compute engine102based on one or more claim checks902. In some examples, the storage processor810can use an RPC connection or channel to send a batch or stream of event data122to the compute engine102. At block1012, the storage processor810can receive and/or register new claim checks902from the compute engine102. The storage processor810can then return to block1002to receive new event data122from the topic808. The order of the operations shown inFIG.10is not intended to be limiting, as some of the operations may occur in parallel and/or different orders. For example, a storage processor810can receive and/or register new claim checks902from the compute engine102before, after, or while de-duplicating, sorting, and/or batching event data122. FIG.11depicts an example of event data122associated with a storage engine116. As discussed above with respect toFIG.8, event data122that has passed through storage processors810can be stored in short-term and/or long-term storage814. In some examples, cloud instances of the compute engine102that operate on event data122and/or produce new event data122using refinement operations202, composition operations302, and/or other operations can also output processed event data122to be stored in the storage814, either directly or through the storage processors810. The storage814can include one or more memory devices, and the event data122can be stored in a database or other structure in the storage814. Each piece of event data122can be stored in the storage814so that it is available to be retrieved and used by elements of the distributed security system100. For example, when a storage processor810receives a claim check902from a compute engine102for a second piece of event data122that is expected to arrive in relation to a first piece of event data122that has already arrived, the storage processor810may store the first piece of event data122in storage814at least temporarily. 
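Taken together, the flowchart blocks ofFIG.10can be approximated by a simple processing loop. The sketch below is illustrative only; claim_registry.match, claim_registry.register, compute_engine.send, and compute_engine.new_claim_checks are hypothetical interfaces standing in for the interactions described above.

    def storage_processor_loop(topic, compute_engine, claim_registry, seen_ids):
        # Illustrative per-shard loop following the blocks of FIG. 10.
        for batch in topic:                                   # block 1002: receive from topic
            # Block 1004: de-duplicate event data this storage processor has already handled.
            fresh = [e for e in batch if e["event_id"] not in seen_ids]
            seen_ids.update(e["event_id"] for e in fresh)

            # Block 1006: sort the batch, here simply by event timestamp.
            fresh.sort(key=lambda e: e["timestamp"])

            outgoing = []
            for event in fresh:                               # block 1008: check pending claim checks
                # Resend any earlier event data held for a matching claim check.
                outgoing.extend(claim_registry.match(event))
                outgoing.append(event)

            compute_engine.send(outgoing)                     # block 1010: send to the compute engine

            # Block 1012: register any new claim checks issued in response.
            for claim in compute_engine.new_claim_checks():
                claim_registry.register(claim)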
When the second piece of event data122arrives and the claim check902is satisfied, or a threshold time period associated with the claim check902expires, the storage processor810can retrieve the first piece of event data122from the storage814and resend it to the compute engine102. As another example, compute engines102and/or other elements of the distributed security system100can query the storage814to retrieve stored event data122. For instance, although a certain cloud instance of the compute engine102may be associated with one or more specific security agents108, that cloud instance of the compute engine102may query the storage814to retrieve event data122that originated from other security agents108on client devices104that are not associated with that cloud instance of the compute engine102. Accordingly, a cloud instance of the compute engine102may be able to access event data122from multiple security agents108via the storage814, for instance to detect when events occurring collectively on multiple client devices104match a behavior pattern142. In other examples, elements of the distributed security system100can submit queries to the storage engine116to obtain event data122based on search terms or any other criteria. In some examples, the storage engine116can expose an application programming interface (API) through which elements of the distributed security system100can submit queries to retrieve event data122stored in the storage814. In some examples, rally point identifiers1102can be stored in the storage814in conjunction with pieces of event data122. As noted above, in some examples certain cloud instances of the compute engine102can be associated with certain rally points306, such that the cloud instances of the compute engine102can execute composition operations302associated with those rally points306based on event data122received from one or more client devices104. Event data122can be stored in the storage814in association with the rally point identifiers1102that correspond with different rally points306handled by different cloud instances of the compute engine102. Accordingly, based on rally point identifiers1102, stored event data122associated with rally points306can be forwarded to corresponding cloud instances of the compute engine102or other elements associated with those rally points306. In this way, a cloud instance of the compute engine102that executes a composition operation302associated with a particular rally point306can receive event data122from the storage engine116that may lead to the creation or satisfaction of that rally point306as discussed above with respect toFIG.3. In some examples, the storage engine116can respond to a query from another element of the distributed security system100by providing filtered event data122that includes less than the full set of fields stored for a piece of event data122. As discussed above, event data122can be formatted according to a context collection format136defined by an ontological definition134, and in some examples the ontological definition134can assign authorization level values to each field of a data type on a field-by-field basis. For instance, some fields can be associated with a high authorization level, while other fields may be associated with one or more lower authorization levels.
An element of the distributed security system100, or a user of such an element, that has the high authorization level may accordingly receive all fields of the event data122from the storage engine116, while another element or user with a lower authorization level may instead only receive a subset of the fields of the event data122that corresponds to that element or user's lower authorization level. The storage814can also maintain reference counts1104for each piece of event data122. A reference count1104for a piece of event data122can be a count of how many other pieces of event data122are related to and/or are dependent on that piece of event data122. Processes that occur on client devices104may spawn, or be spawned from, other processes on client devices104. Although a particular process may terminate on a client device104at a point in time, event data122about that particular process may remain relevant to evaluating event data122about parent or child processes of that particular process that may still be executing on the client device104. Accordingly, a reference count1104can be used to count how many other pieces of event data122are related to or dependent on a certain piece of event data122. The storage engine116can be configured to keep event data122that has a reference count1104above zero, while occasionally or periodically deleting event data122that has a reference count1104of zero. As an example, event data122about a browser process may arrive at the storage engine116. At this point, no other process is related to the browser process, so the event data122can be given a reference count1104of zero. However, if additional event data122arrives at the storage engine116indicating that the browser process spawned a notepad process as a child process, the reference count1104of the browser event data122can be incremented to one. If further event data122indicates that the browser process also spawned a command shell prompt as a child process, the reference count1104of the browser event data122can be incremented to two. If event data122then indicates that the notepad process has terminated, the reference count1104of the browser event data122can be decremented down to one. At this point, although the browser event data122is older than the notepad event data122, and/or the browser process may have also terminated, event data122about the browser process can be kept in the storage814because it is still relevant to understanding how the command shell prompt child process was initiated. When event data122indicates that the child command shell prompt has terminated, the reference count1104of the browser event data122can be decremented to zero. At this point, the storage engine116can safely delete the browser event data122because no other event data122is dependent on the browser event data122. In some examples, the storage engine116may also be able to update reference counts1104for event data122by sending heartbeat messages to client devices104. For example, if a particular instance of event data122has been stored in the storage814for at least a threshold period of time, the storage engine116may send a heartbeat message to a corresponding client device104to check if the event data122is still relevant. The storage engine116can update the event data's reference count1104based on a heartbeat response from the client device104. 
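The reference-counting example above (a browser process spawning a notepad process and a command shell process) can be traced with a short sketch; the EventStore class and its method names are hypothetical and exist only to make the increment and decrement steps concrete.

    class EventStore:
        """Sketch of reference counting for stored event data."""

        def __init__(self):
            self.events = {}     # event key -> event data
            self.refcounts = {}  # event key -> number of dependent events

        def add(self, key: str, event: dict):
            self.events[key] = event
            self.refcounts.setdefault(key, 0)

        def child_spawned(self, parent_key: str):
            # Another piece of event data now depends on the parent's event data.
            self.refcounts[parent_key] += 1

        def child_terminated(self, parent_key: str):
            self.refcounts[parent_key] -= 1

    store = EventStore()
    store.add("browser", {"type": "process_start"})
    store.child_spawned("browser")     # notepad spawned       -> reference count 1
    store.child_spawned("browser")     # command shell spawned -> reference count 2
    store.child_terminated("browser")  # notepad exits         -> reference count 1
    store.child_terminated("browser")  # shell exits           -> reference count 0; eligible for clean-up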
For example, if event data122about a parent process has been stored in the storage814for a period of time, and that period of time is longer than a duration after which the parent process and/or its child processes may be expected to have terminated, the storage engine116may send a heartbeat message to a security agent108on a corresponding client device104asking if the parent process and/or its child process are still executing on that client device104. The storage engine116may update the reference count1104associated with the event data122based on a heartbeat response from the client device104, or lack of a heartbeat response, for example by changing the reference count1104to zero if a heartbeat response indicates that the parent process and its child process are no longer executing. FIG.12depicts a flowchart of an example process for cleaning up storage814of a storage engine116based on reference counts1104of event data122. As discussed above, as event data122received by the storage engine116indicates changing relationships or dependencies between different pieces of event data122, reference counts1104of the event data122can be incremented or decremented. Periodically or occasionally the storage engine116can perform a clean-up process to delete event data122that is not related to any other event data122, and thus may be more likely to be noise and/or not relevant to security threats associated with broader behavior patterns142. At block1202, the storage engine116can determine a reference count1104of a piece of event data122stored in the storage814. At block1204, the storage engine116can determine if the reference count1104is zero. If the storage engine116determines at block1204that a reference count1104for event data122is zero, in some examples the storage engine116can delete that event data122from the storage814at block1206. In some examples, the storage engine116can be configured to not delete event data122at block1206unless the event data122has been stored in the storage814for more than a threshold period of time. For example, if event data122about a process was recently added to the storage814, its reference count1104may increase above zero if that process spawns child processes, and as such it may be premature to delete the event data122. Accordingly, the storage engine116can determine if the event data122is older than a threshold age value before deleting it at block1206when its reference count1104is zero. However, in these examples, if event data122is older than the threshold age value and has a reference count1104of zero, the storage engine116can delete the event data122at block1206. If the storage engine116determines at block1204that a reference count1104for event data122is above zero, the storage engine116can maintain the event data122in the storage814at block1208. At block1210, the storage engine116can move to the next event data122in the storage814and return to block1202to determine a reference count1104of that next event data122and delete or maintain the next event data122during a next pass through the flowchart ofFIG.12. FIG.13depicts a flowchart of an example process for an emissions generator816of the storage engine116to generate an output event stream818for one or more consumers. In some examples, event data122processed by one or more shards804or corresponding compute engines102can be passed to the emissions generator816in addition to, or instead of, being stored in the storage814.
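The clean-up pass ofFIG.12might then look like the following sketch, which deletes only event data whose reference count is zero and which has been stored for longer than a threshold age; the one-hour threshold and the dictionary-based store are assumptions made for the sketch.

    import time

    def clean_up(events, refcounts, stored_at, min_age_s=3600.0):
        # events: key -> event data; refcounts: key -> count; stored_at: key -> storage time.
        now = time.time()
        for key in list(events):                        # walk the stored event data (blocks 1202/1210)
            if refcounts.get(key, 0) > 0:               # block 1204: still referenced by other event data
                continue                                # block 1208: keep it in storage
            if now - stored_at.get(key, now) > min_age_s:
                del events[key]                         # block 1206: old enough and unreferenced, delete
                refcounts.pop(key, None)
                stored_at.pop(key, None)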
For example, the emissions generator816can receive copies of event data122being output by storage processors810to compute engines102and/or the storage814, as well as new or processed event data122being output by compute engines102back to storage processors810and/or to the storage814. The emissions generator816can be configured to use received event data122to produce and emit output event streams818for consumers. Each output event stream818can contain event data122that matches corresponding criteria, for example based on one or more shared attributes. A consumer, such as the experimentation engine120or another element of the security network106, can subscribe to an output event stream818such that the element receives a live stream of incoming event data122that matches certain criteria. Accordingly, although an element of the security network106can query the storage engine116on demand to obtain stored event data122that matches the query, the element can also subscribe to an output event stream818produced by an emissions generator816to receive event data122that matches certain criteria in almost real time as that event data122is processed through the storage engine116and/or by compute engines102. For example, if a user of the experimentation engine120wants to receive event data122about a certain type of networking event that occurs across a set of client devices104as those events occur, the emissions generator816can generate and provide an output event stream818that includes just event data122for occurrences of that type of networking event that are received by the storage engine116. As an example, an emissions generator816can be configured to produce a customized output event stream818based on criteria indicating that a consumer wants a stream of event data122related to a process with a particular process ID that includes information about that process's parent and grandparent processes, the first five DNS queries the process made, and the first five IP connections the process made. Accordingly, the consumer can subscribe to that output event stream818to obtain matching event data122in almost real time as the event data122arrives at the storage engine116, rather than using API queries to retrieve that event data122from the storage814at later points in time. At block1302, the emissions generator816can receive criteria for an output event stream818. In some examples, the criteria can be default criteria, such that the emissions generator816is configured to produce multiple default output event streams818using corresponding default criteria. However, the emissions generator816can also, or alternately, be configured to produce customized output event streams818using criteria defined by consumers, and as such the criteria received at block1302can be criteria for a customized output event stream818. At block1304, the emissions generator816can receive event data122that has been processed by elements of one or more shards804and/or corresponding compute engines102. In some examples, the emissions generator816can copy and/or evaluate such event data122as the event data122is being passed to the storage814, and/or to or from instances of the compute engine102. At block1306, the emissions generator816can identify event data122that matches criteria for an output event stream818. In some examples, the emissions generator816can produce multiple output event streams818for different consumers, and the emissions generator816can accordingly determine if event data122matches criteria for different output event streams818.
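The criteria matching performed by the emissions generator816can be illustrated as follows; the EmissionsGenerator class, its callback-based subscription model, and the DNS example criteria are assumptions made for the sketch.

    class EmissionsGenerator:
        """Sketch of routing processed event data into output event streams."""

        def __init__(self):
            # stream name -> (criteria predicate, list of subscriber callbacks)
            self.streams = {}

        def define_stream(self, name, criteria):
            self.streams[name] = (criteria, [])

        def subscribe(self, name, callback):
            self.streams[name][1].append(callback)

        def on_event(self, event: dict):
            # Add the event to every output event stream whose criteria match;
            # if nothing matches, the event is simply not emitted here.
            for criteria, subscribers in self.streams.values():
                if criteria(event):
                    for callback in subscribers:
                        callback(event)

    gen = EmissionsGenerator()
    gen.define_stream("dns_events", lambda e: e.get("type") == "dns_query")
    gen.subscribe("dns_events", print)   # e.g., a subscribing consumer such as an experimentation tool
    gen.on_event({"type": "dns_query", "domain": "example.com"})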
At block1308, the emissions generator816can add the matching event data122to a corresponding output event stream818. The output event stream818can be emitted by the storage engine116or otherwise be made available to other elements of the distributed security system100, including consumers who have subscribed to the output event stream818. The emissions generator816can return to loop through block1304to block1308to add subsequent event data122that matches criteria to one or more corresponding output event streams818. If event data122matches criteria for more than one output event stream818at block1306, the emissions generator816can add the matching event data122to multiple corresponding output event streams818. If event data122does not match any criteria for any output event stream818, the emissions generator816can disregard the event data122such that it is not added to any output event streams818. Experimentation Engine FIG.14depicts an example of an experimentation engine120. As discussed above, the experimentation engine120can be used to produce configurations132that may at least temporarily change how other elements of the distributed security system100operate for testing and/or experimentation purposes. The experimentation engine120can include an experimentation user interface1402for users, such as data analysts or other users. In some examples, the experimentation user interface1402can provide text fields, menus, selectable options, and/or other user interface elements that allow users to define experiments, such as by defining what types of event data122are relevant to an experiment and/or over what periods of time such event data122should be collected. The experimentation user interface1402may also include user interface elements that allow users to view event data122, summaries or statistics of event data122, and/or other information related to a pending or completed experiment. The experimentation engine120can include an experimentation processor1404. In some examples, the experimentation processor1404can translate user input about an experiment provided through the experimentation user interface1402into new configurations132for a bounding manager128or other element of the distributed security system100. The experimentation processor1404, and/or experimentation engine120overall, may generate configurations for bounding managers128directly and/or instruct a bounding service118to generate and/or send such configurations132for bounding managers128. In other examples, the experimentation processor1404can translate, or provide, information from user input to the ontology service110and/or pattern repository112, such that a compiler114can generate new executable configurations132for instances of the compute engine102that include new instructions relevant to an experiment. Additionally, the experimentation processor1404, and/or experimentation engine120overall, may request and/or receive incoming event data122that may be relevant to an experiment being run via the experimentation engine120. In some examples, the experimentation engine120may submit a query for relevant event data122to storage814of the storage engine116. In other examples, the experimentation engine120may subscribe to a customized output event stream818produced by an emissions generator816of the storage engine116, for instance using criteria provided by the experimentation engine120. 
In some examples, the experimentation processor1404can process the incoming event data122to generate summaries of the event data122relevant to an experiment, perform statistical analysis of such relevant event data122, or perform any other processing of event data122as part of an experiment. As discussed above with respect toFIG.6, the experimentation engine120can cause configurations132to be provided to bounding managers128that may provide new or adjusted selectors602for bounding rules. Such configurations132can at least temporarily adjust how selectors602of bounding managers128operate during an experiment, such that the selectors602cause the bounding managers128to permit different amounts and/or types of event data122that may be more relevant to the experiment to be sent to the security network106. For example, the experimentation engine120can cause configurations132to be generated for one or more bounding managers128that include new selectors602for an experiment that can be implemented alongside existing selectors602, and/or that change reporting criteria604, markup606, priority values608, or other attributes of existing selectors602for an experiment. When a bounding manager128determines that one of these new or adjusted selectors602applies to event data122, the selector602associated with the experiment can make a reporting recommendation610and add experiment markup606to the event data122indicating that the event data122is relevant to the experiment. Other selectors602may or may not also make reporting recommendations610and/or add their own markup606. However, if a priority comparer612ultimately determines that the event data122is to be sent to the security network106, the security agent108can send the experiment-relevant event data122, including the experiment markup606added by the experiment's selector602, to the security network106. The storage engine116can accordingly use that experiment markup606to provide the experiment-relevant event data122to the experimentation engine120, for example in response to a query for event data122with that experiment markup606, or as part of an output event stream818produced by the emissions generator816that includes all event data122with the experiment markup606. The storage engine116can also use any non-experiment markup606provided by non-experiment selectors602to also route or store copies of the event data122to other elements of the distributed security system100. In some examples, the experimentation engine120may use templates or other restriction regarding experimental selectors602that can be provided in configurations132for bounding managers128. For example, a template may cause an experimental configuration132for a bounding manager128to include a selector602defined by a user with a high priority value608for a certain type of event data122, but cause that selector602to have reporting criteria604with a default upper bound that is not user-configurable. As an example, a user may attempt to generate a selector602for an experiment that would increase the likelihood of event data122being reported about command line events that include a certain text string. 
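A selector602of that kind, with a template-imposed upper bound on its reporting criteria604, might be sketched as follows; the command line substring, the ten-per-minute cap, and the class and attribute names are hypothetical choices made only for the sketch.

    import time

    class ExperimentSelector:
        def __init__(self, text_string, priority, markup, max_per_minute=10):
            self.text_string = text_string          # command line substring of interest
            self.priority = priority                # priority value used by the priority comparer
            self.markup = markup                    # experiment markup added to matching events
            self.max_per_minute = max_per_minute    # template-imposed upper bound on reporting
            self.window_start = time.time()
            self.sent_in_window = 0

        def recommend(self, event: dict):
            # Only command line events containing the text string are candidates.
            if self.text_string not in event.get("command_line", ""):
                return None
            now = time.time()
            if now - self.window_start >= 60.0:
                self.window_start, self.sent_in_window = now, 0
            if self.sent_in_window >= self.max_per_minute:
                return None                          # cap reached; no recommendation this minute
            self.sent_in_window += 1
            event.setdefault("markup", []).append(self.markup)
            return ("send", self.priority)           # reporting recommendation with priority value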
However, if that text string is far more common in command line events than the user expected, for example occurring millions of times per hour across a sample of fifty client devices104associated with the experiment, the template may cause the selector602to have an upper bound in its reporting criteria604that specifies that event data122about no more than ten such events should be sent to the security network106in a minute. As another example, a template or other restriction may limit how high of a priority value608a user can give an experimental selector602. For example, global bounding rules may include selectors602limiting the amount of a certain type of event data122that can be reported to the security network106by any security agent108. A template at the experimentation engine120may restrict experimental selectors602to having priority values608that are always less than the priority values608of such global selectors602, so that experimental selectors602produced via the experimentation engine120do not cause priority comparers612to overrule global selectors602and cause more of a certain type of event data122to be reported to the security network106than the security network106can handle. The experimentation engine120may allow users to indicate specific client devices104, types of client devices104, and/or a number of client devices104that should be part of an experiment. For example, a user can use the experimentation user interface1402to specify that one or more specific client devices104, for instance as identified by a customer number or individual AIDs, are part of an experiment and should receive new configurations132for the experiment. As another example, a user may specify that an experiment should be performed on a random sample of client devices104, such as a set of randomly-selected client devices104of a certain size. As yet another example, a user may specify that an experiment should be performed on a sample of client devices104that have a certain operating system or other attribute. In these examples, the experimentation engine120can cause new configurations132for bounding managers128, compute engines102, and/or other elements of the security agents108on one or more client devices104associated with the experiment to be generated and provided to the client devices104. In some examples, the experimentation engine120can provide targeted bounding rules associated with an experiment to specific security agents108on specific client devices104that are part of an experiment using agent-specific channel files, or by sending specialized event data122to those client devices104that can be processed by their bounding managers128to almost immediately change or adjust selectors602for bounding rules. In other examples, the experimentation engine120may allow users to indicate how much of a sample of event data122they want to receive as part of an experiment or test, or a rate of incoming event data122that should be part of the sample, and the experimentation engine120can cause configurations132to be provided to one or more client devices104in an attempt to obtain that sample of event data122. The experimentation engine120can then monitor incoming event data122associated with the experiment, and determine if the amount or rate of incoming event data122is aligned with the expected sample size or is too large or too small. 
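One way such a comparison could drive configuration changes is sketched below; the doubling and halving thresholds, the returned fields, and the function name are assumptions made for the sketch rather than prescribed behavior.

    def adjust_experiment(received_per_hour, target_per_hour, current_priority, enrolled_devices):
        # Compare the observed rate of experiment event data against the expected sample rate.
        if received_per_hour > 2 * target_per_hour:
            # Too much data: lower the selector priority (or end collection entirely).
            return {"priority": max(current_priority - 1, 0),
                    "enrolled_devices": enrolled_devices}
        if received_per_hour < 0.5 * target_per_hour:
            # Too little data: raise priority and/or enroll more client devices.
            return {"priority": current_priority + 1,
                    "enrolled_devices": enrolled_devices * 2}
        return {"priority": current_priority, "enrolled_devices": enrolled_devices}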
If the experimentation engine120is receiving too much relevant event data122, the experimentation engine120can automatically cause new configurations132to be pushed out that end the collection of that type of event data122for experimental purposes entirely, or that reduce the amount or rate of that type of event data122being sent to the security network106. If the experimentation engine120is instead receiving too little relevant event data122, the experimentation engine120can automatically cause new configurations132to be pushed out that increase the amount or rate of that type of event data122being sent to the security network106, for example by adding client devices104to a set of client devices104that have been configured to report that type of event data122or by increasing the priority values608of associated selectors602on an existing set of client devices104such that they are more likely to report that type of event data122. As an example, an analyst may want to look for ten thousand instances of an event that occur across a set of a million client devices104. That type of event may never occur, or may infrequently occur, on any individual client device104, such that any individual security agent108would not know when enough event data122has been collected for the experiment. The experimentation engine120can cause configurations132for bounding managers128to be sent to a set of a million client devices104that provide a high priority value608for a selector602associated with the target type of event, to thereby increase the chances that corresponding event data122will be sent to the security network106. Once the experimentation engine120has received event data122for ten thousand instances of that type of event, the experimentation engine120can cause new configurations132to be sent to the million client devices104that shut down the experiment so that the bounding managers128no longer prioritize sending that type of event data122. As another example, the experimentation engine120can specify that configurations132associated with an experiment are to be used for a certain period of time by bounding managers128, compute engines102, or other elements of the distributed security system100. The elements can accordingly operate at least in part according to the experimental configurations132during that period of time, and then return to operating according to previous configurations132. Accordingly, event data122relevant to an experiment can be received just from a set of client devices104during an experiment, rather than from a broader base of client devices104. Similarly, the experimentation engine120may allow analysts to test out new configurations132on a small number of client devices104, review event data122being returned as part of the test, and determine based on the returned event data122whether to alter the configurations132or provide the configurations132to any or all other security agents108as a non-experimental configuration132. As yet another example, an analyst may use the experimentation engine120to provide new ontological definitions134and/or behavior patterns142, which a compiler114can use to generate new executable configurations132for cloud and/or local instances of the compute engine102. The analyst may suspect that a certain behavior of interest is occurring on client devices104, but be unsure of how prevalent that behavior of interest actually is.
Accordingly, the analyst can use the experimentation engine120to cause a new configuration132for the compute engine102to be provided to at least a small experimental set of client devices104and/or cloud instances of the compute engine102, and the experimentation engine120can track how many times the new configuration132causes the compute engines102to detect that behavior of interest. For example, the new configuration132may change filter criteria associated with one or more refinement operations202or context collection formats136used by such refinement operations202to generate refined event data204, and/or similarly change aspects of composition operations302to adjust when or how rally points306are created and/or when or how composition event data304is created. A new configuration132may also be used to adjust which nodes or cloud instances of the compute engine102are configured to process event data122in association with different rally points306. If event data122coming back to the experimentation engine120as part of the experiment shows that the behavior of interest is occurring in the wild less frequently than the analyst expected, the analyst can adjust the ontological definitions134and/or behavior patterns142in an attempt to better describe the behavior of interest or the type of event data122that is collected and processed, such that a second configuration132corresponding to the new ontological definitions134and/or behavior patterns142are provided to the experimental set or a second experimental set. If the second configuration132results in the behavior of interest being detected more often, the analyst may instruct the distributed security system100to provide that second configuration132to any or all compute engines102rather than just the one or more experimental sets. Example System Architecture FIG.15depicts an example system architecture for a client device104. A client device104can be one or more computing devices, such as a work station, a personal computer (PC), a laptop computer, a tablet computer, a personal digital assistant (PDA), a cellular phone, a media center, an embedded system, a server or server farm, multiple distributed server farms, a mainframe, or any other type of computing device. As shown inFIG.15, a client device104can include processor(s)1502, memory1504, communication interface(s)1506, output devices1508, input devices1510, and/or a drive unit1512including a machine readable medium1514. In various examples, the processor(s)1502can be a central processing unit (CPU), a graphics processing unit (GPU), or both CPU and GPU, or any other type of processing unit. Each of the one or more processor(s)1502may have numerous arithmetic logic units (ALUs) that perform arithmetic and logical operations, as well as one or more control units (CUs) that extract instructions and stored content from processor cache memory, and then executes these instructions by calling on the ALUs, as necessary, during program execution. The processor(s)1502may also be responsible for executing drivers and other computer-executable instructions for applications, routines, or processes stored in the memory1504, which can be associated with common types of volatile (RAM) and/or nonvolatile (ROM) memory. In various examples, the memory1504can include system memory, which may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination of the two. 
Memory1504can further include non-transitory computer-readable media, such as volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. System memory, removable storage, and non-removable storage are all examples of non-transitory computer-readable media. Examples of non-transitory computer-readable media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium which can be used to store the desired information and which can be accessed by the client device104. Any such non-transitory computer-readable media may be part of the client device104. The memory1504can store data, including computer-executable instructions, for a security agent108as described herein. The memory1504can further store event data122, configurations132, and/or other data being processed and/or used by one or more components of the security agent108, including event detectors124, a compute engine102, and a communication component130. The memory1504can also store any other modules and data1516that can be utilized by the client device104to perform or enable performing any action taken by the client device104. For example, the modules and data can include a platform, operating system, and/or applications, as well as data utilized by the platform, operating system, and/or applications. The communication interfaces1506can link the client device104to other elements through wired or wireless connections. For example, communication interfaces1506can be wired networking interfaces, such as Ethernet interfaces or other wired data connections, or wireless data interfaces that include transceivers, modems, interfaces, antennas, and/or other components, such as a Wi-Fi interface. The communication interfaces1506can include one or more modems, receivers, transmitters, antennas, interfaces, error correction units, symbol coders and decoders, processors, chips, application specific integrated circuits (ASICs), programmable circuits (e.g., field programmable gate arrays), software components, firmware components, and/or other components that enable the client device104to send and/or receive data, for example to exchange event data122, configurations132, and/or any other data with the security network106. The output devices1508can include one or more types of output devices, such as speakers or a display, such as a liquid crystal display. Output devices1508can also include ports for one or more peripheral devices, such as headphones, peripheral speakers, and/or a peripheral display. In some examples, a display can be a touch-sensitive display screen, which can also act as an input device1510. The input devices1510can include one or more types of input devices, such as a microphone, a keyboard or keypad, and/or a touch-sensitive display, such as the touch-sensitive display screen described above. The drive unit1512and machine readable medium1514can store one or more sets of computer-executable instructions, such as software or firmware, that embody any one or more of the methodologies or functions described herein.
The computer-executable instructions can also reside, completely or at least partially, within the processor(s)1502, memory1504, and/or communication interface(s)1506during execution thereof by the client device104. The processor(s)1502and the memory1504can also constitute machine readable media1514. FIG.16depicts an example system architecture for one or more cloud computing elements1600of the security network106. Elements of the security network106described above can be distributed among, and be implemented by, one or more cloud computing elements1600such as servers, server farms, distributed server farms, hardware computing elements, virtualized computing elements, and/or other network computing elements. A cloud computing element1600can have a system memory1602that stores data associated with one or more cloud elements of the security network106, including one or more instances of the compute engine102, the ontology service110, the pattern repository112, the compiler114, the storage engine116, the bounding service118, and the experimentation engine120. Although in some examples a particular cloud computing element1600may store data for a single cloud element, or even portions of a cloud element, of the security network106, in other examples a particular cloud computing element1600may store data for multiple cloud elements of the security network106, or separate virtualized instances of one or more cloud elements. For example, as discussed above, the storage engine116can be divided into multiple virtual shards804, and a single cloud computing element1600may execute multiple distinct instances of components of more than one shard804. The system memory1602can also store other modules and data1604, which can be utilized by the cloud computing element1600to perform or enable performing any action taken by the cloud computing element1600. The other modules and data1604can include a platform, operating system, or applications, and/or data utilized by the platform, operating system, or applications. In various examples, system memory1602can be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two. Example system memory1602can include one or more of RAM, ROM, EEPROM, a Flash Memory, a hard drive, a memory card, an optical storage, a magnetic cassette, a magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium. The one or more cloud computing elements1600can also include processor(s)1606, removable storage1608, non-removable storage1610, input device(s)1612, output device(s)1614, and/or communication connections1616for communicating with other network elements1618, such as client devices104and other cloud computing elements1600. In some embodiments, the processor(s)1606can be a central processing unit (CPU), a graphics processing unit (GPU), both CPU and GPU, or other processing unit or component known in the art. The one or more cloud computing elements1600can also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated inFIG.16by removable storage1608and non-removable storage1610. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
System memory1602, removable storage1608and non-removable storage1610are all examples of computer-readable storage media. Computer-readable storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the one or more cloud computing elements1600. Any such computer-readable storage media can be part of the one or more cloud computing elements1600. In various examples, any or all of system memory1602, removable storage1608, and non-removable storage1610, store computer-executable instructions which, when executed, implement some or all of the herein-described operations of the security network106and its cloud computing elements1600. In some examples, the one or more cloud computing elements1600can also have input device(s)1612, such as a keyboard, a mouse, a touch-sensitive display, voice input device, etc., and/or output device(s)1614such as a display, speakers, a printer, etc. These devices are well known in the art and need not be discussed at length here. The one or more cloud computing elements1600can also contain communication connections1616that allow the one or more cloud computing elements1600to communicate with other network elements1618. For example, the communication connections1616can allow the security network106to send new configurations132to security agents108on client devices104, and/or receive event data122from such security agents108on client devices104. CONCLUSION Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example embodiments. | 165,612 |
11861020 | DETAILED DESCRIPTION Some computer systems may include non-persistent memory (e.g., volatile memory such as dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), and so forth) and persistent memory (e.g., non-volatile memory such as storage class memory (SCM), direct access storage (DAS) memory, non-volatile dual in-line memory modules (NVDIMM), and/or other forms of flash or solid-state storage). In some examples, some or all of the persistent memory may be used as persistent storage, which operates in a similar manner to disk-based storage (i.e., data remains stored even when the system is powered down). Further, some or all of the persistent memory may be used as an extension of the non-persistent memory. In some examples, the persistent memory may be accessed by a direct access mechanism that allows loads and stores that are similar to those used for regular system memory. Furthermore, some computer systems may implement encryption in storage and memory to protect the data content from unauthorized access. For example, some systems may use software-based encryption to protect the included storage and/or memory. However, such approaches may not be compatible with a direct access mechanism that uses regular loads and stores, may be complex, and/or may impact the performance of the system. Further, such approaches may store the encryption keys in readable format, and therefore may risk exposing the keys to unauthorized access (e.g., by accessing and reading the storage device that includes the keys). In embodiments described herein, a processor may include memory protection logic to provide encryption across non-persistent memory and persistent memory. The memory protection logic may generate a non-persistent key and a persistent key. The non-persistent key may be used for memory portions that operate as non-persistent memory, such as volatile memory (e.g., DRAM) and portions of persistent memory that are used as extensions of the non-persistent memory. Further, the persistent key may be used for portions of persistent memory that are used as persistent storage (e.g., disk-based storage). In some embodiments, in response to a first initialization of a computer system (e.g., in a first boot after manufacture), the memory protection logic may generate an ephemeral component, and may then generate a persistent key using the ephemeral component. In some embodiments, the persistent key is not stored in the system, but instead only the ephemeral component is stored. Later, during subsequent boots, the memory protection logic may access the ephemeral component from storage, and may regenerate the persistent key using the ephemeral component. Accordingly, the persistent key may be available to access data in the persistent memory over time, but is not stored in a manner that could suffer from unauthorized access. In this manner, embodiments described herein may provide improved protection for persistent memory. Various details of some embodiments are described further below with reference toFIGS.26A-31. Further, exemplary systems and architectures are described below with reference toFIGS.1-25. Exemplary Systems and Architectures Although the following embodiments are described with reference to particular implementations, embodiments are not limited in this regard. In particular, it is contemplated that similar techniques and teachings of embodiments described herein may be applied to other types of circuits, semiconductor devices, processors, systems, etc. 
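For purposes of illustration, the boot-time handling of the ephemeral component and the persistent key might resemble the following sketch. The derivation shown here, an HMAC of the stored ephemeral component under a device-bound secret, is an assumption made for the sketch, as are the function names and the 32-byte sizes; the point is only that the ephemeral component is stored while the persistent key itself is regenerated on each boot rather than stored.

    import hashlib
    import hmac
    import os

    def read_device_secret() -> bytes:
        # Placeholder for a processor-internal secret (e.g., a fused root key);
        # in a real implementation this value would not be readable by software.
        return b"\x00" * 32

    def derive_persistent_key(component: bytes) -> bytes:
        # Hypothetical derivation: combine the stored ephemeral component with the
        # device-bound secret so the stored component alone cannot reveal the key.
        return hmac.new(read_device_secret(), component, hashlib.sha256).digest()

    def first_boot_setup(store_component) -> bytes:
        # On first initialization, generate a random ephemeral component and store
        # only that component; the persistent key itself is never written out.
        component = os.urandom(32)
        store_component(component)
        return derive_persistent_key(component)

    def subsequent_boot(load_component) -> bytes:
        # On later boots, reload the ephemeral component and regenerate the same
        # persistent key, so data in persistent storage remains accessible.
        return derive_persistent_key(load_component())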
For example, the disclosed embodiments may be implemented in any type of computer system, including server computers (e.g., tower, rack, blade, micro-server and so forth), communications systems, storage systems, desktop computers of any configuration, laptop, notebook, and tablet computers (including 2:1 tablets, phablets and so forth). In addition, disclosed embodiments can also be used in other devices, such as handheld devices, systems on chip (SoCs), and embedded applications. Some examples of handheld devices include cellular phones such as smartphones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications may typically include a microcontroller, a digital signal processor (DSP), network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, wearable devices, or any other system that can perform the functions and operations taught below. Further, embodiments may be implemented in mobile terminals having standard voice functionality such as mobile phones, smartphones and phablets, and/or in non-mobile terminals without a standard wireless voice function communication capability, such as many wearables, tablets, notebooks, desktops, micro-servers, servers and so forth. Referring now toFIG.1, shown is a block diagram of a portion of a system in accordance with an embodiment of the present invention. As shown inFIG.1, system100may include various components, including a processor110which as shown is a multicore processor. Processor110may be coupled to a power supply150via an external voltage regulator160, which may perform a first voltage conversion to provide a primary regulated voltage Vreg to processor110. As seen, processor110may be a single die processor including multiple cores120a-120n. In addition, each core may be associated with an integrated voltage regulator (IVR)125a-125nwhich receives the primary regulated voltage and generates an operating voltage to be provided to one or more agents of the processor associated with the IVR. Accordingly, an IVR implementation may be provided to allow for fine-grained control of voltage and thus power and performance of each individual core. As such, each core can operate at an independent voltage and frequency, enabling great flexibility and affording wide opportunities for balancing power consumption with performance. In some embodiments, the use of multiple IVRs enables the grouping of components into separate power planes, such that power is regulated and supplied by the IVR to only those components in the group. During power management, a given power plane of one IVR may be powered down or off when the processor is placed into a certain low power state, while another power plane of another IVR remains active, or fully powered. Similarly, cores120may include or be associated with independent clock generation circuitry such as one or more phase lock loops (PLLs) to control operating frequency of each core120independently. Still referring toFIG.1, additional components may be present within the processor including an input/output interface (IF)132, another interface134, and an integrated memory controller (IMC)136. As seen, each of these components may be powered by another integrated voltage regulator125x. 
In one embodiment, interface132may enable operation for an Intel® Quick Path Interconnect (QPI) interconnect, which provides for point-to-point (PtP) links in a cache coherent protocol that includes multiple layers including a physical layer, a link layer and a protocol layer. In turn, interface134may communicate via a Peripheral Component Interconnect Express (PCIe™) protocol. Also shown is a power control unit (PCU)138, which may include circuitry including hardware, software and/or firmware to perform power management operations with regard to processor110. As seen, PCU138provides control information to external voltage regulator160via a digital interface162to cause the voltage regulator to generate the appropriate regulated voltage. PCU138also provides control information to IVRs125via another digital interface163to control the operating voltage generated (or to cause a corresponding IVR to be disabled in a low power mode). In various embodiments, PCU138may include a variety of power management logic units to perform hardware-based power management. Such power management may be wholly processor controlled (e.g., by various processor hardware, and which may be triggered by workload and/or power, thermal or other processor constraints) and/or the power management may be performed responsive to external sources (such as a platform or power management source or system software). InFIG.1, PCU138is illustrated as being present as a separate logic of the processor. In other cases, PCU138may execute on a given one or more of cores120. In some cases, PCU138may be implemented as a microcontroller (dedicated or general-purpose) or other control logic configured to execute its own dedicated power management code, sometimes referred to as P-code. In yet other embodiments, power management operations to be performed by PCU138may be implemented externally to a processor, such as by way of a separate power management integrated circuit (PMIC) or another component external to the processor. In yet other embodiments, power management operations to be performed by PCU138may be implemented within BIOS or other system software. Embodiments may be particularly suitable for a multicore processor in which each of multiple cores can operate at an independent voltage and frequency point. As used herein the term “domain” is used to mean a collection of hardware and/or logic that operates at the same voltage and frequency point. In addition, a multicore processor can further include other non-core processing engines such as fixed function units, graphics engines, and so forth. Such processor can include independent domains other than the cores, such as one or more domains associated with a graphics engine (referred to herein as a graphics domain) and one or more domains associated with non-core circuitry, referred to herein as a system agent. Although many implementations of a multi-domain processor can be formed on a single semiconductor die, other implementations can be realized by a multi-chip package in which different domains can be present on different semiconductor die of a single package. While not shown for ease of illustration, understand that additional components may be present within processor110such as non-core logic, and other components such as internal memories, e.g., one or more levels of a cache memory hierarchy and so forth. Furthermore, while shown in the implementation ofFIG.1with an integrated voltage regulator, embodiments are not so limited. 
For example, other regulated voltages may be provided to on-chip resources from external voltage regulator160or one or more additional external sources of regulated voltages. Note that the power management techniques described herein may be independent of and complementary to an operating system (OS)-based power management (OSPM) mechanism. According to one example OSPM technique, a processor can operate at various performance states or levels, so-called P-states, namely from P0 to PN. In general, the P1 performance state may correspond to the highest guaranteed performance state that can be requested by an OS. In addition to this P1 state, the OS can further request a higher performance state, namely a P0 state. This P0 state may thus be an opportunistic, overclocking, or turbo mode state in which, when power and/or thermal budget is available, processor hardware can configure the processor or at least portions thereof to operate at a higher than guaranteed frequency. In many implementations, a processor can include multiple so-called bin frequencies above the P1 guaranteed maximum frequency, exceeding to a maximum peak frequency of the particular processor, as fused or otherwise written into the processor during manufacture. In addition, according to one OSPM mechanism, a processor can operate at various power states or levels. With regard to power states, an OSPM mechanism may specify different power consumption states, generally referred to as C-states, C0, C1 to Cn states. When a core is active, it runs at a C0 state, and when the core is idle it may be placed in a core low power state, also called a core non-zero C-state (e.g., C1-C6 states), with each C-state being at a lower power consumption level (such that C6 is a deeper low power state than C1, and so forth). Understand that many different types of power management techniques may be used individually or in combination in different embodiments. As representative examples, a power controller may control the processor to be power managed by some form of dynamic voltage frequency scaling (DVFS) in which an operating voltage and/or operating frequency of one or more cores or other processor logic may be dynamically controlled to reduce power consumption in certain situations. In an example, DVFS may be performed using Enhanced Intel SpeedStep™ technology available from Intel Corporation, Santa Clara, CA, to provide optimal performance at a lowest power consumption level. In another example, DVFS may be performed using Intel TurboBoost™ technology to enable one or more cores or other compute engines to operate at a higher than guaranteed operating frequency based on conditions (e.g., workload and availability). Another power management technique that may be used in certain examples is dynamic swapping of workloads between different compute engines. For example, the processor may include asymmetric cores or other processing engines that operate at different power consumption levels, such that in a power constrained situation, one or more workloads can be dynamically switched to execute on a lower power core or other compute engine. Another exemplary power management technique is hardware duty cycling (HDC), which may cause cores and/or other compute engines to be periodically enabled and disabled according to a duty cycle, such that one or more cores may be made inactive during an inactive period of the duty cycle and made active during an active period of the duty cycle. 
Power management techniques also may be used when constraints exist in an operating environment. For example, when a power and/or thermal constraint is encountered, power may be reduced by reducing operating frequency and/or voltage. Other power management techniques include throttling instruction execution rate or limiting scheduling of instructions. Still further, it is possible for instructions of a given instruction set architecture to include express or implicit direction as to power management operations. Although described with these particular examples, understand that many other power management techniques may be used in particular embodiments. Embodiments can be implemented in processors for various markets including server processors, desktop processors, mobile processors and so forth. Referring now toFIG.2, shown is a block diagram of a processor in accordance with an embodiment of the present invention. As shown inFIG.2, processor200may be a multicore processor including a plurality of cores210a-210n. In one embodiment, each such core may be of an independent power domain and can be configured to enter and exit active states and/or maximum performance states based on workload. One or more cores210may be heterogeneous to the other cores, e.g., having different micro-architectures, instruction set architectures, pipeline depths, power and performance capabilities. The various cores may be coupled via an interconnect215to a system agent220that includes various components. As seen, the system agent220may include a shared cache230which may be a last level cache. In addition, the system agent may include an integrated memory controller240to communicate with a system memory (not shown inFIG.2), e.g., via a memory bus. The system agent220also includes various interfaces250and a power control unit255, which may include logic to perform the power management techniques described herein. In addition, by interfaces250a-250n, connection can be made to various off-chip components such as peripheral devices, mass storage and so forth. While shown with this particular implementation in the embodiment ofFIG.2, the scope of the present invention is not limited in this regard. Referring now toFIG.3, shown is a block diagram of a multi-domain processor in accordance with another embodiment of the present invention. As shown in the embodiment ofFIG.3, processor300includes multiple domains. Specifically, a core domain310can include a plurality of cores310a-310n, a graphics domain320can include one or more graphics engines, and a system agent domain350may further be present. In some embodiments, system agent domain350may execute at a frequency independent of that of the core domain and may remain powered on at all times to handle power control events and power management such that domains310and320can be controlled to dynamically enter into and exit high power and low power states. Each of domains310and320may operate at a different voltage and/or power. Note that while only shown with three domains, understand the scope of the present invention is not limited in this regard and additional domains can be present in other embodiments. For example, multiple core domains may be present, each including at least one core. In general, each of the cores310a-310nmay further include low level caches in addition to various execution units and additional processing elements.
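As a further illustration of the constraint handling described at the start of the preceding passage, the following C sketch steps a requested frequency down while a hypothetical power or thermal limit is exceeded. The limits, bin size, and crude power model are assumptions made only for this example.

    /*
     * Illustrative sketch only: lower the granted frequency while a
     * hypothetical power or thermal limit is exceeded.
     */
    #include <stdio.h>

    struct limits { unsigned max_power_mw, max_temp_c; };

    static unsigned apply_constraints(unsigned requested_mhz,
                                      unsigned measured_power_mw,
                                      unsigned measured_temp_c,
                                      const struct limits *lim)
    {
        unsigned granted = requested_mhz;
        /* Step the frequency down by a fixed bin while any constraint is hit. */
        while ((measured_power_mw > lim->max_power_mw ||
                measured_temp_c  > lim->max_temp_c) && granted > 800) {
            granted -= 100;                 /* hypothetical 100 MHz bin       */
            measured_power_mw -= 150;       /* crude model: power ~ frequency */
            measured_temp_c   -= 1;
        }
        return granted;
    }

    int main(void)
    {
        struct limits lim = { 15000, 95 };
        printf("granted: %u MHz\n", apply_constraints(3000, 17000, 98, &lim));
        return 0;
    }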
In turn, the various cores may be coupled to each other and to a shared cache memory formed of a plurality of units of a last level cache (LLC)340a-340n. In various embodiments, LLC340may be shared amongst the cores and the graphics engine, as well as various media processing circuitry. As seen, a ring interconnect330thus couples the cores together, and provides interconnection between the cores, graphics domain320and system agent domain350. In one embodiment, interconnect330can be part of the core domain. However, in other embodiments the ring interconnect can be of its own domain. As further seen, system agent domain350may include display controller352which may provide control of and an interface to an associated display. As further seen, system agent domain350may include a power control unit355which can include logic to perform the power management techniques described herein. As further seen inFIG.3, processor300can further include an integrated memory controller (IMC)370that can provide for an interface to a system memory, such as a dynamic random access memory (DRAM). Multiple interfaces380a-380nmay be present to enable interconnection between the processor and other circuitry. For example, in one embodiment at least one direct media interface (DMI) interface may be provided as well as one or more PCIe™ interfaces. Still further, to provide for communications between other agents such as additional processors or other circuitry, one or more QPI interfaces may also be provided. Although shown at this high level in the embodiment ofFIG.3, understand the scope of the present invention is not limited in this regard. Referring toFIG.4, an embodiment of a processor including multiple cores is illustrated. Processor400includes any processor or processing device, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a handheld processor, an application processor, a co-processor, a system on a chip (SoC), or other device to execute code. Processor400, in one embodiment, includes at least two cores-cores401and402, which may include asymmetric cores or symmetric cores (the illustrated embodiment). However, processor400may include any number of processing elements that may be symmetric or asymmetric. In one embodiment, a processing element refers to hardware or logic to support a software thread. Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element, which is capable of holding a state for a processor, such as an execution state or architectural state. In other words, a processing element, in one embodiment, refers to any hardware capable of being independently associated with code, such as a software thread, operating system, application, or other code. A physical processor typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads. A core often refers to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. 
In contrast to cores, a hardware thread typically refers to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and core overlaps. Yet often, a core and a hardware thread are viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor. Physical processor400, as illustrated inFIG.4, includes two cores, cores401and402. Here, cores401and402are considered symmetric cores, i.e., cores with the same configurations, functional units, and/or logic. In another embodiment, core401includes an out-of-order processor core, while core402includes an in-order processor core. However, cores401and402may be individually selected from any type of core, such as a native core, a software managed core, a core adapted to execute a native instruction set architecture (ISA), a core adapted to execute a translated ISA, a co-designed core, or other known core. Yet to further the discussion, the functional units illustrated in core401are described in further detail below, as the units in core402operate in a similar manner. As depicted, core401includes two architectural state registers401aand401b, which may be associated with two hardware threads (also referred to as hardware thread slots). Therefore, software entities, such as an operating system, in one embodiment potentially view processor400as four separate processors, i.e., four logical processors or processing elements capable of executing four software threads concurrently. As alluded to above, a first thread is associated with architecture state registers401a, a second thread is associated with architecture state registers401b, a third thread may be associated with architecture state registers402a, and a fourth thread may be associated with architecture state registers402b. Here, the architecture state registers (401a,401b,402a, and402b) may be associated with processing elements, thread slots, or thread units, as described above. As illustrated, architecture state registers401aare replicated in architecture state registers401b, so individual architecture states/contexts are capable of being stored for logical processor401aand logical processor401b. In core401, other smaller resources, such as instruction pointers and renaming logic in allocator and renamer block430may also be replicated for threads401aand401b. Some resources, such as re-order buffers in reorder/retirement unit435, branch target buffer and instruction translation lookaside buffer (BTB and I-TLB)420, load/store buffers, and queues may be shared through partitioning. Other resources, such as general purpose internal registers, page-table base register(s), low-level data-cache and data-TLB450, execution unit(s)440, and portions of reorder/retirement unit435are potentially fully shared. Processor400often includes other resources, which may be fully shared, shared through partitioning, or dedicated by/to processing elements. InFIG.4, an embodiment of a purely exemplary processor with illustrative logical units/resources of a processor is illustrated. 
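One way to picture the sharing model described above is as a data structure in which architectural state is replicated per hardware thread while execution resources belong to the core. The following C sketch is purely illustrative; its field names, sizes, and the two-core, two-thread layout are hypothetical assumptions chosen to echo the example above.

    /*
     * Illustrative sketch only: per-thread architectural state versus
     * core-level shared resources.  Field names and sizes are hypothetical.
     */
    #include <stdio.h>
    #include <stdint.h>

    struct arch_state {                      /* replicated per hardware thread  */
        uint64_t gpr[16];
        uint64_t instruction_pointer;
        uint64_t flags;
    };

    struct core {                            /* shared by the threads of a core */
        struct arch_state thread_state[2];   /* e.g., slots 401a and 401b       */
        int reorder_buffer_entries;          /* partitioned between threads     */
        int execution_ports;                 /* fully shared                    */
    };

    struct processor {
        struct core cores[2];                /* e.g., cores 401 and 402         */
    };

    int main(void)
    {
        struct processor p;
        int n_cores   = sizeof p.cores / sizeof p.cores[0];
        int n_threads = sizeof p.cores[0].thread_state / sizeof p.cores[0].thread_state[0];
        /* An OS would view cores x threads-per-core logical processors. */
        printf("OS-visible logical processors: %d\n", n_cores * n_threads);
        return 0;
    }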
Note that a processor may include, or omit, any of these functional units, as well as include any other known functional units, logic, or firmware not depicted. As illustrated, core401includes a simplified, representative out-of-order (OOO) processor core. But an in-order processor may be utilized in different embodiments. Core401further includes decode module425coupled to a fetch unit to decode fetched elements. Fetch logic, in one embodiment, includes individual sequencers associated with thread slots401a,401b, respectively. Usually core401is associated with a first ISA, which defines/specifies instructions executable on processor400. Often machine code instructions that are part of the first ISA include a portion of the instruction (referred to as an opcode), which references/specifies an instruction or operation to be performed. Decode module425includes circuitry that recognizes these instructions from their opcodes and passes the decoded instructions on in the pipeline for processing as defined by the first ISA. For example, decoder module425, in one embodiment, includes logic designed or adapted to recognize specific instructions, such as transactional instructions. As a result of the recognition by the decoder module425, the architecture or core401takes specific, predefined actions to perform tasks associated with the appropriate instruction. It is important to note that any of the tasks, blocks, operations, and methods described herein may be performed in response to a single or multiple instructions, some of which may be new or old instructions. In one example, allocator and renamer block430includes an allocator to reserve resources, such as register files to store instruction processing results. However, threads401aand401bare potentially capable of out-of-order execution, where allocator and renamer block430also reserves other resources, such as reorder buffers to track instruction results. The renamer block430may also include a register renamer to rename program/instruction reference registers to other registers internal to processor400. Reorder/retirement unit435includes components, such as the reorder buffers mentioned above, load buffers, and store buffers, to support out-of-order execution and later in-order retirement of instructions executed out-of-order. Scheduler and execution unit(s) block440, in one embodiment, includes a scheduler unit to schedule instructions/operations on execution units. For example, a floating point instruction is scheduled on a port of an execution unit that has an available floating point execution unit. Register files associated with the execution units are also included to store instruction processing results. Exemplary execution units include a floating point execution unit, an integer execution unit, a jump execution unit, a load execution unit, a store execution unit, and other known execution units. Lower level data cache and data translation lookaside buffer (D-TLB)450are coupled to execution unit(s)440. The data cache is to store recently used/operated on elements, such as data operands, which are potentially held in memory coherency states. The D-TLB is to store recent virtual/linear to physical address translations. As a specific example, a processor may include a page table structure to break physical memory into a plurality of virtual pages. Here, cores401and402share access to higher-level or further-out cache410, which is to cache recently fetched elements.
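To illustrate the decode step described above, the following C sketch maps opcode bytes of a made-up ISA to micro-operations. The opcode values and operation names are invented for illustration and do not correspond to any real instruction set.

    /*
     * Illustrative sketch only: a toy decoder mapping hypothetical opcodes
     * to micro-operations, echoing the decode step described above.
     */
    #include <stdio.h>
    #include <stdint.h>

    enum uop { UOP_ADD, UOP_LOAD, UOP_STORE, UOP_BRANCH, UOP_INVALID };

    static enum uop decode(uint8_t opcode)
    {
        switch (opcode) {
        case 0x01: return UOP_ADD;
        case 0x10: return UOP_LOAD;
        case 0x11: return UOP_STORE;
        case 0x20: return UOP_BRANCH;
        default:   return UOP_INVALID;   /* real hardware would fault here */
        }
    }

    int main(void)
    {
        uint8_t stream[] = { 0x10, 0x01, 0x11, 0x20 };
        for (unsigned i = 0; i < sizeof stream; i++)
            printf("opcode 0x%02x -> uop %d\n", stream[i], decode(stream[i]));
        return 0;
    }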
Note that higher-level or further-out refers to cache levels increasing or getting further away from the execution unit(s). In one embodiment, higher-level cache410is a last-level data cache, i.e., the last cache in the memory hierarchy on processor400, such as a second or third level data cache. However, higher level cache410is not so limited, as it may be associated with or include an instruction cache. A trace cache, a type of instruction cache, instead may be coupled after decoder module425to store recently decoded traces. In the depicted configuration, processor400also includes bus interface405and a power control unit460, which may perform power management in accordance with an embodiment of the present invention. In this scenario, bus interface405is to communicate with devices external to processor400, such as system memory and other components. A memory controller470may interface with other devices such as one or many memories. In an example, bus interface405includes a ring interconnect with a memory controller for interfacing with a memory and a graphics controller for interfacing with a graphics processor. In an SoC environment, even more devices, such as a network interface, coprocessors, memory, graphics processor, and any other known computer devices/interface may be integrated on a single die or integrated circuit to provide small form factor with high functionality and low power consumption. Referring now toFIG.5, shown is a block diagram of a micro-architecture of a processor core in accordance with one embodiment of the present invention. As shown inFIG.5, processor core500may be a multi-stage pipelined out-of-order processor. Core500may operate at various voltages based on a received operating voltage, which may be received from an integrated voltage regulator or external voltage regulator. As seen inFIG.5, core500includes front end units510, which may be used to fetch instructions to be executed and prepare them for use later in the processor pipeline. For example, front end units510may include a fetch unit501, an instruction cache503, and an instruction decoder505. In some implementations, front end units510may further include a trace cache, along with microcode storage as well as a micro-operation storage. Fetch unit501may fetch macro-instructions, e.g., from memory or instruction cache503, and feed them to instruction decoder505to decode them into primitives, i.e., micro-operations for execution by the processor. Coupled between front end units510and execution units520is an out-of-order (OOO) engine515that may be used to receive the micro-instructions and prepare them for execution. More specifically, OOO engine515may include various buffers to re-order micro-instruction flow and allocate various resources needed for execution, as well as to provide renaming of logical registers onto storage locations within various register files such as register file530and extended register file535. Register file530may include separate register files for integer and floating point operations. For purposes of configuration, control, and additional operations, a set of machine specific registers (MSRs)538may also be present and accessible to various logic within core500(and external to the core). Various resources may be present in execution units520, including, for example, various integer, floating point, and single instruction multiple data (SIMD) logic units, among other specialized hardware.
For example, such execution units may include one or more arithmetic logic units (ALUs)522and one or more vector execution units524, among other such execution units. Results from the execution units may be provided to retirement logic, namely a reorder buffer (ROB)540. More specifically, ROB540may include various arrays and logic to receive information associated with instructions that are executed. This information is then examined by ROB540to determine whether the instructions can be validly retired and result data committed to the architectural state of the processor, or whether one or more exceptions occurred that prevent a proper retirement of the instructions. Of course, ROB540may handle other operations associated with retirement. As shown inFIG.5, ROB540is coupled to a cache550which, in one embodiment, may be a low level cache (e.g., an L1 cache) although the scope of the present invention is not limited in this regard. Also, execution units520can be directly coupled to cache550. From cache550, data communication may occur with higher level caches, system memory and so forth. While shown at this high level in the embodiment ofFIG.5, understand the scope of the present invention is not limited in this regard. For example, while the implementation ofFIG.5is with regard to an out-of-order machine such as of an Intel® x86 instruction set architecture (ISA), the scope of the present invention is not limited in this regard. That is, other embodiments may be implemented in an in-order processor, a reduced instruction set computing (RISC) processor such as an ARM-based processor, or a processor of another type of ISA that can emulate instructions and operations of a different ISA via an emulation engine and associated logic circuitry. Referring now toFIG.6, shown is a block diagram of a micro-architecture of a processor core in accordance with another embodiment. In the embodiment ofFIG.6, core600may be a low power core of a different micro-architecture, such as an Intel® Atom™-based processor having a relatively limited pipeline depth designed to reduce power consumption. As seen, core600includes an instruction cache610coupled to provide instructions to an instruction decoder615. A branch predictor605may be coupled to instruction cache610. Note that instruction cache610may further be coupled to another level of a cache memory, such as an L2 cache (not shown for ease of illustration inFIG.6). In turn, instruction decoder615provides decoded instructions to an issue queue (IQ)620for storage and delivery to a given execution pipeline. A microcode ROM618is coupled to instruction decoder615. A floating point pipeline630includes a floating point (FP) register file632which may include a plurality of architectural registers of a given bit width such as 128, 256 or 512 bits. Pipeline630includes a floating point scheduler634to schedule instructions for execution on one of multiple execution units of the pipeline. In the embodiment shown, such execution units include an arithmetic logic unit (ALU)635, a shuffle unit636, and a floating point (FP) adder638. In turn, results generated in these execution units may be provided back to buffers and/or registers of register file632. Of course, understand that while shown with these few example execution units, additional or different floating point execution units may be present in another embodiment. An integer pipeline640also may be provided.
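The retirement behavior described above can be sketched as an in-order loop over a toy reorder buffer, as in the following illustrative C example. The structure, field names, and register file are hypothetical and are not drawn from any embodiment described herein.

    /*
     * Illustrative sketch only: retire the oldest completed instructions in
     * order, committing results unless an entry faulted.
     */
    #include <stdio.h>
    #include <stdbool.h>

    struct rob_entry { bool completed; bool faulted; int dest_reg; long result; };

    static long arch_regs[8];

    static int retire(struct rob_entry *rob, int head, int count)
    {
        int retired = 0;
        while (retired < count && rob[head].completed) {
            if (rob[head].faulted) {
                printf("exception at ROB slot %d: flush younger work\n", head);
                break;                      /* no commit past a faulting entry */
            }
            arch_regs[rob[head].dest_reg] = rob[head].result;   /* commit state */
            head = (head + 1) % count;
            retired++;
        }
        return retired;
    }

    int main(void)
    {
        struct rob_entry rob[4] = {
            { true, false, 1, 42 }, { true, false, 2, 7 },
            { false, false, 3, 0 }, { true, false, 4, 9 },
        };
        int n = retire(rob, 0, 4);    /* third entry not yet completed */
        printf("retired %d entries; r1=%ld r2=%ld\n", n, arch_regs[1], arch_regs[2]);
        return 0;
    }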
In the embodiment shown, pipeline640includes an integer (INT) register file642which may include a plurality of architectural registers of a given bit width such as 128 or 256 bits. Pipeline640includes an integer execution (IE) scheduler644to schedule instructions for execution on one of multiple execution units of the pipeline. In the embodiment shown, such execution units include an ALU645, a shifter unit646, and a jump execution unit (JEU)648. In turn, results generated in these execution units may be provided back to buffers and/or registers of register file642. Of course, understand while shown with these few example execution units, additional or different integer execution units may be present in another embodiment. A memory execution (ME) scheduler650may schedule memory operations for execution in an address generation unit (AGU)652, which is also coupled to a TLB654. As seen, these structures may couple to a data cache660, which may be an L0 and/or L1 data cache that in turn couples to additional levels of a cache memory hierarchy, including an L2 cache memory. To provide support for out-of-order execution, an allocator/renamer670may be provided, in addition to a reorder buffer680, which is configured to reorder instructions executed out of order for retirement in order. Although shown with this particular pipeline architecture in the illustration ofFIG.6, understand that many variations and alternatives are possible. Note that in a processor having asymmetric cores, such as in accordance with the micro-architectures ofFIGS.5and6, workloads may be dynamically swapped between the cores for power management reasons, as these cores, although having different pipeline designs and depths, may be of the same or related ISA. Such dynamic core swapping may be performed in a manner transparent to a user application (and possibly kernel also). Referring toFIG.7, shown is a block diagram of a micro-architecture of a processor core in accordance with yet another embodiment. As illustrated inFIG.7, a core700may include a multi-staged in-order pipeline to execute at very low power consumption levels. As one such example, core700may have a micro-architecture in accordance with an ARM Cortex A53 design available from ARM Holdings, LTD., Sunnyvale, CA. In an implementation, an 8-stage pipeline may be provided that is configured to execute both 32-bit and 64-bit code. Core700includes a fetch unit710that is configured to fetch instructions and provide them to a decode unit715, which may decode the instructions, e.g., macro-instructions of a given ISA such as an ARMv8 ISA. Note further that a queue730may couple to decode unit715to store decoded instructions. Decoded instructions are provided to an issue logic725, where the decoded instructions may be issued to a given one of multiple execution units. With further reference toFIG.7, issue logic725may issue instructions to one of multiple execution units. In the embodiment shown, these execution units include an integer unit735, a multiply unit740, a floating point/vector unit750, a dual issue unit760, and a load/store unit770. The results of these different execution units may be provided to a writeback (WB) unit780. Understand that while a single writeback unit is shown for ease of illustration, in some implementations separate writeback units may be associated with each of the execution units.
Furthermore, understand that while each of the units and logic shown inFIG.7is represented at a high level, a particular implementation may include more or different structures. A processor designed using one or more cores having a pipeline as inFIG.7may be implemented in many different end products, extending from mobile devices to server systems. Referring toFIG.8, shown is a block diagram of a micro-architecture of a processor core in accordance with a still further embodiment. As illustrated inFIG.8, a core800may include a multi-stage multi-issue out-of-order pipeline to execute at very high performance levels (which may occur at higher power consumption levels than core700ofFIG.7). As one such example, processor800may have a microarchitecture in accordance with an ARM Cortex A57 design. In an implementation, a 15 (or greater)-stage pipeline may be provided that is configured to execute both 32-bit and 64-bit code. In addition, the pipeline may provide for 3 (or greater)-wide and 3 (or greater)-issue operation. Core800includes a fetch unit810that is configured to fetch instructions and provide them to a decoder/renamer/dispatcher unit815coupled to a cache820. Unit815may decode the instructions, e.g., macro-instructions of an ARMv8 instruction set architecture, rename register references within the instructions, and dispatch the instructions (eventually) to a selected execution unit. Decoded instructions may be stored in a queue825. Note that while a single queue structure is shown for ease of illustration inFIG.8, understand that separate queues may be provided for each of the multiple different types of execution units. Also shown inFIG.8is an issue logic830from which decoded instructions stored in queue825may be issued to a selected execution unit. Issue logic830also may be implemented in a particular embodiment with a separate issue logic for each of the multiple different types of execution units to which issue logic830couples. Decoded instructions may be issued to a given one of multiple execution units. In the embodiment shown, these execution units include one or more integer units835, a multiply unit840, a floating point/vector unit850, a branch unit860, and a load/store unit870. In an embodiment, floating point/vector unit850may be configured to handle SIMD or vector data of 128 or 256 bits. Still further, floating point/vector execution unit850may perform IEEE-754 double precision floating-point operations. The results of these different execution units may be provided to a writeback unit880. Note that in some implementations separate writeback units may be associated with each of the execution units. Furthermore, understand that while each of the units and logic shown inFIG.8is represented at a high level, a particular implementation may include more or different structures. Note that in a processor having asymmetric cores, such as in accordance with the micro-architectures ofFIGS.7and8, workloads may be dynamically swapped for power management reasons, as these cores, although having different pipeline designs and depths, may be of the same or related ISA. Such dynamic core swapping may be performed in a manner transparent to a user application (and possibly kernel also). A processor designed using one or more cores having pipelines as in any one or more ofFIGS.5-8may be implemented in many different end products, extending from mobile devices to server systems. 
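As a rough illustration of the dynamic core swapping mentioned above, the following C sketch shows a scheduler-style placement decision between a low power core and a high performance core of the same ISA. The thresholds and names are assumptions made for illustration only and do not describe any particular scheduler.

    /*
     * Illustrative sketch only: place a workload on a low power core or a
     * high performance core of the same ISA, as in the dynamic swapping
     * described above.  Thresholds are hypothetical.
     */
    #include <stdio.h>

    enum core_kind { LITTLE_CORE, BIG_CORE };

    static enum core_kind place(unsigned recent_load_pct, int power_constrained)
    {
        if (power_constrained)    return LITTLE_CORE;
        if (recent_load_pct > 70) return BIG_CORE;
        return LITTLE_CORE;
    }

    int main(void)
    {
        printf("%s\n", place(85, 0) == BIG_CORE ? "run on big core" : "run on little core");
        printf("%s\n", place(85, 1) == BIG_CORE ? "run on big core" : "run on little core");
        return 0;
    }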
Referring now toFIG.9, shown is a block diagram of a processor in accordance with another embodiment of the present invention. In the embodiment ofFIG.9, processor900may be a SoC including multiple domains, each of which may be controlled to operate at an independent operating voltage and operating frequency. As a specific illustrative example, processor900may be an Intel® Architecture Core™-based processor such as an i3, i5, i7 or another such processor available from Intel Corporation. However, other low power processors such as available from Advanced Micro Devices, Inc. (AMD) of Sunnyvale, CA, an ARM-based design from ARM Holdings, Ltd. or licensee thereof or a MIPS-based design from MIPS Technologies, Inc. of Sunnyvale, CA, or their licensees or adopters may instead be present in other embodiments such as an Apple A7 processor, a Qualcomm Snapdragon processor, or Texas Instruments OMAP processor. Such SoC may be used in a low power system such as a smartphone, tablet computer, phablet computer, Ultrabook™ computer or other portable computing device, which may incorporate a heterogeneous system architecture having a heterogeneous system architecture-based processor design. In the high level view shown inFIG.9, processor900includes a plurality of core units910a-910n. Each core unit may include one or more processor cores, one or more cache memories and other circuitry. Each core unit910may support one or more instruction sets (e.g., an x86 instruction set (with some extensions that have been added with newer versions); a MIPS instruction set; an ARM instruction set (with optional additional extensions such as NEON)) or other instruction set or combinations thereof. Note that some of the core units may be heterogeneous resources (e.g., of a different design). In addition, each such core may be coupled to a cache memory (not shown) which in an embodiment may be a shared level two (L2) cache memory. A non-volatile storage930may be used to store various program and other data. For example, this storage may be used to store at least portions of microcode, boot information such as a BIOS, other system software or so forth. Each core unit910may also include an interface such as a bus interface unit to enable interconnection to additional circuitry of the processor. In an embodiment, each core unit910couples to a coherent fabric that may act as a primary cache coherent on-die interconnect that in turn couples to a memory controller935. In turn, memory controller935controls communications with a memory such as a DRAM (not shown for ease of illustration inFIG.9). In addition to core units, additional processing engines are present within the processor, including at least one graphics unit920which may include one or more graphics processing units (GPUs) to perform graphics processing as well as to possibly execute general purpose operations on the graphics processor (so-called GPGPU operation). In addition, at least one image signal processor925may be present. Signal processor925may be configured to process incoming image data received from one or more capture devices, either internal to the SoC or off-chip. Other accelerators also may be present. In the illustration ofFIG.9, a video coder950may perform coding operations including encoding and decoding for video information, e.g., providing hardware acceleration support for high definition video content. 
A display controller955further may be provided to accelerate display operations including providing support for internal and external displays of a system. In addition, a security processor945may be present to perform security operations such as secure boot operations, various cryptography operations and so forth. Each of the units may have its power consumption controlled via a power manager940, which may include control logic to perform the various power management techniques described herein. In some embodiments, processor900may further include a non-coherent fabric coupled to the coherent fabric to which various peripheral devices may couple. One or more interfaces960a-960denable communication with one or more off-chip devices. Such communications may be via a variety of communication protocols such as PCIe™, GPIO, USB, I2C, UART, MIPI, SDIO, DDR, SPI, HDMI, among other types of communication protocols. Although shown at this high level in the embodiment ofFIG.9, understand the scope of the present invention is not limited in this regard. Referring now toFIG.10, shown is a block diagram of a representative SoC. In the embodiment shown, SoC1000may be a multi-core SoC configured for low power operation to be optimized for incorporation into a smartphone or other low power device such as a tablet computer or other portable computing device. As an example, SoC1000may be implemented using asymmetric or different types of cores, such as combinations of higher power and/or low power cores, e.g., out-of-order cores and in-order cores. In different embodiments, these cores may be based on an Intel® Architecture™ core design or an ARM architecture design. In yet other embodiments, a mix of Intel and ARM cores may be implemented in a given SoC. As seen inFIG.10, SoC1000includes a first core domain1010having a plurality of first cores1012a-1012d. In an example, these cores may be low power cores such as in-order cores. In one embodiment, these first cores may be implemented as ARM Cortex A53 cores. In turn, these cores couple to a cache memory1015of core domain1010. In addition, SoC1000includes a second core domain1020. In the illustration ofFIG.10, second core domain1020has a plurality of second cores1022a-1022d. In an example, these cores may be higher power-consuming cores than first cores1012. In an embodiment, the second cores may be out-of-order cores, which may be implemented as ARM Cortex A57 cores. In turn, these cores couple to a cache memory1025of core domain1020. Note that while the example shown inFIG.10includes 4 cores in each domain, understand that more or fewer cores may be present in a given domain in other examples. With further reference toFIG.10, a graphics domain1030also is provided, which may include one or more graphics processing units (GPUs) configured to independently execute graphics workloads, e.g., provided by one or more cores of core domains1010and1020. As an example, GPU domain1030may be used to provide display support for a variety of screen sizes, in addition to providing graphics and display rendering operations. As seen, the various domains couple to a coherent interconnect1040, which in an embodiment may be a cache coherent interconnect fabric that in turn couples to an integrated memory controller1050. Coherent interconnect1040may include a shared cache memory, such as an L3 cache, in some examples. 
In an embodiment, memory controller1050may be a direct memory controller to provide for multiple channels of communication with an off-chip memory, such as multiple channels of a DRAM (not shown for ease of illustration inFIG.10). In different examples, the number of the core domains may vary. For example, for a low power SoC suitable for incorporation into a mobile computing device, a limited number of core domains such as shown inFIG.10may be present. Still further, in such low power SoCs, core domain1020including higher power cores may have a smaller number of such cores. For example, in one implementation two cores1022may be provided to enable operation at reduced power consumption levels. In addition, the different core domains may also be coupled to an interrupt controller to enable dynamic swapping of workloads between the different domains. In yet other embodiments, a greater number of core domains, as well as additional optional IP logic may be present, in that an SoC can be scaled to higher performance (and power) levels for incorporation into other computing devices, such as desktops, servers, high performance computing systems, base stations and so forth. As one such example, 4 core domains each having a given number of out-of-order cores may be provided. Still further, in addition to optional GPU support (which as an example may take the form of a GPGPU), one or more accelerators to provide optimized hardware support for particular functions (e.g. web serving, network processing, switching or so forth) also may be provided. In addition, an input/output interface may be present to couple such accelerators to off-chip components. Referring now toFIG.11, shown is a block diagram of another example SoC. In the embodiment ofFIG.11, SoC1100may include various circuitry to enable high performance for multimedia applications, communications and other functions. As such, SoC1100is suitable for incorporation into a wide variety of portable and other devices, such as smartphones, tablet computers, smart TVs and so forth. In the example shown, SoC1100includes a central processor unit (CPU) domain1110. In an embodiment, a plurality of individual processor cores may be present in CPU domain1110. As one example, CPU domain1110may be a quad core processor having 4 multithreaded cores. Such processors may be homogeneous or heterogeneous processors, e.g., a mix of low power and high power processor cores. In turn, a GPU domain1120is provided to perform advanced graphics processing in one or more GPUs to handle graphics and compute APIs. A DSP unit1130may provide one or more low power DSPs for handling low-power multimedia applications such as music playback, audio/video and so forth, in addition to advanced calculations that may occur during execution of multimedia instructions. In turn, a communication unit1140may include various components to provide connectivity via various wireless protocols, such as cellular communications (including 3G/4G LTE), wireless local area protocols such as Bluetooth™, IEEE 802.11, and so forth. Still further, a multimedia processor1150may be used to perform capture and playback of high definition video and audio content, including processing of user gestures. A sensor unit1160may include a plurality of sensors and/or a sensor controller to interface to various off-chip sensors present in a given platform. An image signal processor (ISP)1170may perform image processing with regard to captured content from one or more cameras of a platform, including still and video cameras.
A display processor1180may provide support for connection to a high definition display of a given pixel density, including the ability to wirelessly communicate content for playback on such display. Still further, a location unit1190may include a Global Positioning System (GPS) receiver with support for multiple GPS constellations to provide applications with highly accurate positioning information obtained using such a GPS receiver. Understand that while shown with this particular set of components in the example ofFIG.11, many variations and alternatives are possible. Referring now toFIG.12, shown is a block diagram of an example system with which embodiments can be used. As seen, system1200may be a smartphone or other wireless communicator. A baseband processor1205is configured to perform various signal processing with regard to communication signals to be transmitted from or received by the system. In turn, baseband processor1205is coupled to an application processor1210, which may be a main CPU of the system to execute an OS and other system software, in addition to user applications such as many well-known social media and multimedia apps. Application processor1210may further be configured to perform a variety of other computing operations for the device. In turn, application processor1210can couple to a user interface/display1220, e.g., a touch screen display. In addition, application processor1210may couple to a memory system including a non-volatile memory, namely a flash memory1230and a system memory, namely a dynamic random access memory (DRAM)1235. As further seen, application processor1210further couples to a capture device1241such as one or more image capture devices that can record video and/or still images. Still referring toFIG.12, a universal integrated circuit card (UICC)1246comprising a subscriber identity module and possibly a secure storage and cryptoprocessor is also coupled to application processor1210. System1200may further include a security processor1250that may couple to application processor1210. A plurality of sensors1225may couple to application processor1210to enable input of a variety of sensed information such as accelerometer and other environmental information. An audio output device1295may provide an interface to output sound, e.g., in the form of voice communications, played or streaming audio data and so forth. As further illustrated, a near field communication (NFC) contactless interface1260is provided that communicates in an NFC near field via an NFC antenna1265. While separate antennae are shown inFIG.12, understand that in some implementations one antenna or a different set of antennae may be provided to enable various wireless functionality. A power management integrated circuit (PMIC)1215couples to application processor1210to perform platform level power management. To this end, PMIC1215may issue power management requests to application processor1210to enter certain low power states as desired. Furthermore, based on platform constraints, PMIC1215may also control the power level of other components of system1200. To enable communications to be transmitted and received, various circuitry may be coupled between baseband processor1205and an antenna1290. Specifically, a radio frequency (RF) transceiver1270and a wireless local area network (WLAN) transceiver1275may be present.
In general, RF transceiver1270may be used to receive and transmit wireless data and calls according to a given wireless communication protocol such as 3G or 4G wireless communication protocol such as in accordance with a code division multiple access (CDMA), global system for mobile communication (GSM), long term evolution (LTE) or other protocol. In addition a GPS sensor1280may be present. Other wireless communications such as receipt or transmission of radio signals, e.g., AM/FM and other signals may also be provided. In addition, via WLAN transceiver1275, local wireless communications can also be realized. Referring now toFIG.13, shown is a block diagram of another example system with which embodiments may be used. In the illustration ofFIG.13, system1300may be mobile low-power system such as a tablet computer, 2:1 tablet, phablet or other convertible or standalone tablet system. As illustrated, a SoC1310is present and may be configured to operate as an application processor for the device. A variety of devices may couple to SoC1310. In the illustration shown, a memory subsystem includes a flash memory1340and a DRAM1345coupled to SoC1310. In addition, a touch panel1320is coupled to the SoC1310to provide display capability and user input via touch, including provision of a virtual keyboard on a display of touch panel1320. To provide wired network connectivity, SoC1310couples to an Ethernet interface1330. A peripheral hub1325is coupled to SoC1310to enable interfacing with various peripheral devices, such as may be coupled to system1300by any of various ports or other connectors. In addition to internal power management circuitry and functionality within SoC1310, a PMIC1380is coupled to SoC1310to provide platform-based power management, e.g., based on whether the system is powered by a battery1390or AC power via an AC adapter1395. In addition to this power source-based power management, PMIC1380may further perform platform power management activities based on environmental and usage conditions. Still further, PMIC1380may communicate control and status information to SoC1310to cause various power management actions within SoC1310. Still referring toFIG.13, to provide for wireless capabilities, a WLAN unit1350is coupled to SoC1310and in turn to an antenna1355. In various implementations, WLAN unit1350may provide for communication according to one or more wireless protocols. As further illustrated, a plurality of sensors1360may couple to SoC1310. These sensors may include various accelerometer, environmental and other sensors, including user gesture sensors. Finally, an audio codec1365is coupled to SoC1310to provide an interface to an audio output device1370. Of course understand that while shown with this particular implementation inFIG.13, many variations and alternatives are possible. Referring now toFIG.14, shown is a block diagram of a representative computer system1400such as notebook, Ultrabook™ or other small form factor system. A processor1410, in one embodiment, includes a microprocessor, multi-core processor, multithreaded processor, an ultra low voltage processor, an embedded processor, or other known processing element. In the illustrated implementation, processor1410acts as a main processing unit and central hub for communication with many of the various components of the system1400, and may include power management circuitry as described herein. As one example, processor1410is implemented as a SoC. Processor1410, in one embodiment, communicates with a system memory1415. 
As an illustrative example, the system memory1415is implemented via multiple memory devices or modules to provide for a given amount of system memory. To provide for persistent storage of information such as data, applications, one or more operating systems and so forth, a mass storage1420may also couple to processor1410. In various embodiments, to enable a thinner and lighter system design as well as to improve system responsiveness, this mass storage may be implemented via a SSD or the mass storage may primarily be implemented using a hard disk drive (HDD) with a smaller amount of SSD storage to act as a SSD cache to enable non-volatile storage of context state and other such information during power down events so that a fast power up can occur on re-initiation of system activities. Also shown inFIG.14, a flash device1422may be coupled to processor1410, e.g., via a serial peripheral interface (SPI). This flash device may provide for non-volatile storage of system software, including a basic input/output software (BIOS) as well as other firmware of the system. Various input/output (I/O) devices may be present within system1400. Specifically shown in the embodiment ofFIG.14is a display1424which may be a high definition LCD or LED panel that further provides for a touch screen1425. In one embodiment, display1424may be coupled to processor1410via a display interconnect that can be implemented as a high performance graphics interconnect. Touch screen1425may be coupled to processor1410via another interconnect, which in an embodiment can be an I2C interconnect. As further shown inFIG.14, in addition to touch screen1425, user input by way of touch can also occur via a touch pad1430which may be configured within the chassis and may also be coupled to the same I2C interconnect as touch screen1425. For perceptual computing and other purposes, various sensors may be present within the system and may be coupled to processor1410in different manners. Certain inertial and environmental sensors may couple to processor1410through a sensor hub1440, e.g., via an I2C interconnect. In the embodiment shown inFIG.14, these sensors may include an accelerometer1441, an ambient light sensor (ALS)1442, a compass1443and a gyroscope1444. Other environmental sensors may include one or more thermal sensors1446which in some embodiments couple to processor1410via a system management bus (SMBus) bus. As also seen inFIG.14, various peripheral devices may couple to processor1410via a low pin count (LPC) interconnect. In the embodiment shown, various components can be coupled through an embedded controller1435. Such components can include a keyboard1436(e.g., coupled via a PS2 interface), a fan1437, and a thermal sensor1439. In some embodiments, touch pad1430may also couple to EC1435via a PS2 interface. In addition, a security processor such as a trusted platform module (TPM)1438may also couple to processor1410via this LPC interconnect. System1400can communicate with external devices in a variety of manners, including wirelessly. In the embodiment shown inFIG.14, various wireless modules, each of which can correspond to a radio configured for a particular wireless communication protocol, are present. One manner for wireless communication in a short range such as a near field may be via a NFC unit1445which may communicate, in one embodiment with processor1410via an SMBus. Note that via this NFC unit1445, devices in close proximity to each other can communicate. 
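As one illustration of how software might read a sensor behind an I2C interconnect such as those described above, the following C sketch uses the Linux userspace i2c-dev interface. The bus path, 7-bit device address, and register offset are hypothetical placeholders, and a real platform may instead route such sensors through a sensor hub or an SMBus as noted above.

    /*
     * Illustrative sketch only: read a few bytes from a hypothetical sensor
     * over I2C using the Linux i2c-dev interface.
     */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/i2c-dev.h>

    int main(void)
    {
        int fd = open("/dev/i2c-1", O_RDWR);        /* hypothetical bus      */
        if (fd < 0) { perror("open"); return 1; }

        if (ioctl(fd, I2C_SLAVE, 0x1d) < 0) {       /* hypothetical address  */
            perror("ioctl");
            return 1;
        }

        unsigned char reg = 0x00;                   /* hypothetical register */
        unsigned char data[6];
        if (write(fd, &reg, 1) == 1 &&
            read(fd, data, sizeof data) == (ssize_t)sizeof data)
            printf("first byte: 0x%02x\n", data[0]);

        close(fd);
        return 0;
    }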
As further seen inFIG.14, additional wireless units can include other short range wireless engines including a WLAN unit1450and a Bluetooth™ unit1452. Using WLAN unit1450, Wi-Fi™ communications can be realized, while via Bluetooth™ unit1452, short range Bluetooth™ communications can occur. These units may communicate with processor1410via a given link. In addition, wireless wide area communications, e.g., according to a cellular or other wireless wide area protocol, can occur via a WWAN unit1456which in turn may couple to a subscriber identity module (SIM)1457. In addition, to enable receipt and use of location information, a GPS module1455may also be present. Note that in the embodiment shown inFIG.14, WWAN unit1456and an integrated capture device such as a camera module1454may communicate via a given link. To provide for audio inputs and outputs, an audio processor can be implemented via a digital signal processor (DSP)1460, which may couple to processor1410via a high definition audio (HDA) link. Similarly, DSP1460may communicate with an integrated coder/decoder (CODEC) and amplifier1462that in turn may couple to output speakers1463which may be implemented within the chassis. Similarly, amplifier and CODEC1462can be coupled to receive audio inputs from a microphone1465which in an embodiment can be implemented via dual array microphones (such as a digital microphone array) to provide for high quality audio inputs to enable voice-activated control of various operations within the system. Note also that audio outputs can be provided from amplifier/CODEC1462to a headphone jack1464. Although shown with these particular components in the embodiment ofFIG.14, understand the scope of the present invention is not limited in this regard. Embodiments may be implemented in many different system types. Referring now toFIG.15A, shown is a block diagram of a system in accordance with an embodiment of the present invention. As shown inFIG.15A, multiprocessor system1500is a point-to-point interconnect system, and includes a first processor1570and a second processor1580coupled via a point-to-point interconnect1550. As shown inFIG.15A, each of processors1570and1580may be multicore processors, including first and second processor cores (i.e., processor cores1574aand1574band processor cores1584aand1584b), although potentially many more cores may be present in the processors. Each of the processors can include a PCU or other power management logic to perform processor-based power management as described herein. Still referring toFIG.15A, first processor1570further includes an integrated memory controller (IMC)1572and point-to-point (P-P) interfaces1576and1578. Similarly, second processor1580includes an IMC1582and P-P interfaces1586and1588. As shown inFIG.15, IMCs1572and1582couple the processors to respective memories, namely a memory1532and a memory1534, which may be portions of system memory (e.g., DRAM) locally attached to the respective processors. First processor1570and second processor1580may be coupled to a chipset1590via P-P interconnects1562and1564, respectively. As shown inFIG.15A, chipset1590includes P-P interfaces1594and1598. Furthermore, chipset1590includes an interface1592to couple chipset1590with a high-performance graphics engine1538, by a P-P interconnect1539. In turn, chipset1590may be coupled to a first bus1516via an interface1596. 
As shown inFIG.15A, various input/output (I/O) devices1514may be coupled to first bus1516, along with a bus bridge1518which couples first bus1516to a second bus1520. Various devices may be coupled to second bus1520including, for example, a keyboard/mouse1522, communication devices1526and a data storage unit1528such as a disk drive or other mass storage device which may include code1530, in one embodiment. Further, an audio I/O1524may be coupled to second bus1520. Embodiments can be incorporated into other types of systems including mobile devices such as a smart cellular telephone, tablet computer, netbook, Ultrabook™, or so forth. Referring now toFIG.15B, shown is a block diagram of a second more specific exemplary system1501in accordance with an embodiment of the present invention. Like elements inFIG.15AandFIG.15Bbear like reference numerals, and certain aspects ofFIG.15Ahave been omitted fromFIG.15Bin order to avoid obscuring other aspects ofFIG.15B. FIG.15Billustrates that the processors1570,1580may include integrated memory and I/O control logic (“CL”)1571and1581, respectively. Thus, the control logic1571and1581include integrated memory controller units and include I/O control logic.FIG.15Billustrates that not only are the memories1532,1534coupled to the control logic1571and1581, but also that I/O devices1513are also coupled to the control logic1571and1581. Legacy I/O devices1515are coupled to the chipset1590. One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as “IP cores,” are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein. FIG.16is a block diagram illustrating an IP core development system1600that may be used to manufacture an integrated circuit to perform operations according to an embodiment. The IP core development system1600may be used to generate modular, re-usable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SoC integrated circuit). A design facility1630can generate a software simulation1610of an IP core design in a high-level programming language (e.g., C/C++). The software simulation1610can be used to design, test, and verify the behavior of the IP core. A register transfer level (RTL) design can then be created or synthesized from the simulation model. The RTL design1615is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design1615, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized.
Thus, the particular details of the initial design and simulation may vary. The RTL design1615or equivalent may be further synthesized by the design facility into a hardware model1620, which may be in a hardware description language (HDL), or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a third-party fabrication facility1665using non-volatile memory1640(e.g., hard disk, flash memory, or any non-volatile storage medium). Alternately, the IP core design may be transmitted (e.g., via the Internet) over a wired connection1650or wireless connection1660. The fabrication facility1665may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with the components and/or processes described herein. FIGS.17A-25described below detail exemplary architectures and systems to implement embodiments of the components and/or processes described herein. In some embodiments, one or more hardware components and/or instructions described herein are emulated as detailed below, or are implemented as software modules. Embodiments of the instruction(s) detailed above may be embodied in a “generic vector friendly instruction format” which is detailed below. In other embodiments, such a format is not utilized and another instruction format is used, however, the description below of the writemask registers, various data transformations (swizzle, broadcast, etc.), addressing, etc. is generally applicable to the description of the embodiments of the instruction(s) above. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) above may be executed on such systems, architectures, and pipelines, but are not limited to those detailed. An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., mask). Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because there are fewer fields included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands.
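To make the notion of instruction format fields concrete, the following C sketch extracts the opcode and operand fields of a made-up 32-bit instruction word. The bit layout is invented for illustration and does not correspond to any real ISA, nor to the vector friendly format described below.

    /*
     * Illustrative sketch only: field extraction for a hypothetical 32-bit
     * instruction format (opcode, destination, two sources).
     */
    #include <stdio.h>
    #include <stdint.h>

    #define OPCODE(insn)  (((insn) >> 24) & 0xffu)   /* bits 31:24 */
    #define DST(insn)     (((insn) >> 16) & 0x1fu)   /* bits 20:16 */
    #define SRC1(insn)    (((insn) >>  8) & 0x1fu)   /* bits 12:8  */
    #define SRC2(insn)    ( (insn)        & 0x1fu)   /* bits 4:0   */

    int main(void)
    {
        /* A hypothetical encoding of "ADD r3, r1, r2". */
        uint32_t add = (0x01u << 24) | (3u << 16) | (1u << 8) | 2u;
        printf("opcode=0x%02x dst=r%u src1=r%u src2=r%u\n",
               (unsigned)OPCODE(add), (unsigned)DST(add),
               (unsigned)SRC1(add), (unsigned)SRC2(add));
        return 0;
    }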
A set of SIMD extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developer's Manual, September 2014; and see Intel® Advanced Vector Extensions Programming Reference, October 2014). Exemplary Instruction Formats Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed. Generic Vector Friendly Instruction Format A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format. FIGS.17A-17Bare block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the invention.FIG.17Ais a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the invention; whileFIG.17Bis a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the invention. Specifically, a generic vector friendly instruction format1700is shown for which class A and class B instruction templates are defined, both of which include no memory access1705instruction templates and memory access1720instruction templates. The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set. While embodiments of the invention will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support more, fewer, and/or different vector operand sizes (e.g., 256 byte vector operands) with more, fewer, or different data element widths (e.g., 128 bit (16 byte) data element widths). The class A instruction templates inFIG.17Ainclude: 1) within the no memory access1705instruction templates there is shown a no memory access, full round control type operation1710instruction template and a no memory access, data transform type operation1715instruction template; and 2) within the memory access1720instruction templates there is shown a memory access, temporal1725instruction template and a memory access, non-temporal1730instruction template.
The class B instruction templates inFIG.17Binclude: 1) within the no memory access1705instruction templates there is shown a no memory access, write mask control, partial round control type operation1712instruction template and a no memory access, write mask control, vsize type operation1717instruction template; and 2) within the memory access1720instruction templates there is shown a memory access, write mask control1727instruction template. The generic vector friendly instruction format1700includes the following fields listed below in the order illustrated inFIGS.17A-17B. Format field1740—a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format. Base operation field1742—its content distinguishes different base operations. Register index field1744—its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. This field includes a sufficient number of bits to select N registers from a PxQ (e.g. 32×512, 16×128, 32×1024, 64×1024) register file. While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer sources and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, may support up to two sources and one destination). Modifier field1746—its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access1705instruction templates and memory access1720instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations. Augmentation operation field1750—its content distinguishes which one of a variety of different operations to be performed in addition to the base operation. This field is context specific. In one embodiment of the invention, this field is divided into a class field1768, an alpha field1752, and a beta field1754. The augmentation operation field1750allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions. Scale field1760—its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale*index+base). Displacement Field1762A—its content is used as part of memory address generation (e.g., for address generation that uses 2^scale*index+base+displacement).
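For a concrete reading of the address generation expression above, the following C sketch computes an address of the form base+index*2^scale+displacement. The function effective_address and its parameters are hypothetical helpers used only to illustrate the roles of the scale field and the displacement field; they are not part of the described embodiments.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: mirrors base + index * 2^scale + displacement. */
static uint64_t effective_address(uint64_t base, uint64_t index,
                                  unsigned scale, int64_t displacement) {
    return base + (index << scale) + (uint64_t)displacement;
}

int main(void) {
    /* Example: base=0x1000, index=4, scale=3 (2^3 = 8), displacement=0x20,
     * giving 0x1000 + 4*8 + 0x20 = 0x1040. */
    uint64_t ea = effective_address(0x1000, 4, 3, 0x20);
    printf("effective address = 0x%llx\n", (unsigned long long)ea);
    return 0;
}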
Displacement Factor Field1762B (note that the juxtaposition of displacement field1762A directly over displacement factor field1762B indicates one or the other is used)—its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N)—where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale*index+base+scaled displacement). Redundant low-order bits are ignored and hence, the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field1774(described later herein) and the data manipulation field1754C. The displacement field1762A and the displacement factor field1762B are optional in the sense that they are not used for the no memory access1705instruction templates and/or different embodiments may implement only one or none of the two. Data element width field1764—its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions). This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes. Write mask field1770—its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, the old value of each element of the destination is preserved where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field1770allows for partial vector operations, including loads, stores, arithmetic, logical, etc. While embodiments of the invention are described in which the write mask field's1770content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's1770content indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the write mask field's1770content to directly specify the masking to be performed. Immediate field1772—its content allows for the specification of an immediate.
This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support an immediate and it is not present in instructions that do not use an immediate. Class field1768—its content distinguishes between different classes of instructions. With reference toFIGS.17A-B, the contents of this field select between class A and class B instructions. InFIGS.17A-B, rounded corner squares are used to indicate a specific value is present in a field (e.g., class A1768A and class B1768B for the class field1768respectively inFIGS.17A-B). Instruction Templates of Class A In the case of the non-memory access1705instruction templates of class A, the alpha field1752is interpreted as an RS field1752A, whose content distinguishes which one of the different augmentation operation types are to be performed (e.g., round1752A.1and data transform1752A.2are respectively specified for the no memory access, round type operation1710and the no memory access, data transform type operation1715instruction templates), while the beta field1754distinguishes which of the operations of the specified type is to be performed. In the no memory access1705instruction templates, the scale field1760, the displacement field1762A, and the displacement scale field1762B are not present. No-Memory Access Instruction Templates—Full Round Control Type Operation In the no memory access full round control type operation1710instruction template, the beta field1754is interpreted as a round control field1754A, whose content(s) provide static rounding. While in the described embodiments of the invention the round control field1754A includes a suppress all floating point exceptions (SAE) field1756and a round operation control field1758, alternative embodiments may encode both of these concepts into the same field or only have one or the other of these concepts/fields (e.g., may have only the round operation control field1758). SAE field1756—its content distinguishes whether or not to disable the exception event reporting; when the SAE field's1756content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler. Round operation control field1758—its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field1758allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's1750content overrides that register value. No Memory Access Instruction Templates—Data Transform Type Operation In the no memory access data transform type operation1715instruction template, the beta field1754is interpreted as a data transform field1754B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).
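As one possible software-visible counterpart to the SAE field and the round operation control field described above, the following C sketch uses the AVX-512F intrinsic _mm512_add_round_ps to request a per-instruction rounding mode together with exception suppression. It assumes a compiler and processor with AVX-512F support (e.g., compiled with -mavx512f) and is offered only as an illustration, not as a description of the claimed embodiments.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m512 a = _mm512_set1_ps(1.5f);
    __m512 b = _mm512_set1_ps(2.25f);

    /* Per-instruction rounding override: round toward zero, and suppress
     * all floating-point exceptions (SAE) for this one instruction. */
    __m512 c = _mm512_add_round_ps(a, b,
                                   _MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC);

    float out[16];
    _mm512_storeu_ps(out, c);
    printf("%f\n", out[0]);  /* 3.750000 */
    return 0;
}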
In the case of a memory access1720instruction template of class A, the alpha field1752is interpreted as an eviction hint field1752B, whose content distinguishes which one of the eviction hints is to be used (inFIG.17A, temporal1752B.1and non-temporal1752B.2are respectively specified for the memory access, temporal1725instruction template and the memory access, non-temporal1730instruction template), while the beta field1754is interpreted as a data manipulation field1754C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access1720instruction templates include the scale field1760, and optionally the displacement field1762A or the displacement scale field1762B. Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask. Memory Access Instruction Templates—Temporal Temporal data is data likely to be reused soon enough to benefit from caching. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely. Memory Access Instruction Templates—Non-Temporal Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the 1st-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely. Instruction Templates of Class B In the case of the instruction templates of class B, the alpha field1752is interpreted as a write mask control (Z) field1752C, whose content distinguishes whether the write masking controlled by the write mask field1770should be a merging or a zeroing. In the case of the non-memory access1705instruction templates of class B, part of the beta field1754is interpreted as an RL field1757A, whose content distinguishes which one of the different augmentation operation types are to be performed (e.g., round1757A.1and vector length (VSIZE)1757A.2are respectively specified for the no memory access, write mask control, partial round control type operation1712instruction template and the no memory access, write mask control, VSIZE type operation1717instruction template), while the rest of the beta field1754distinguishes which of the operations of the specified type is to be performed. In the no memory access1705instruction templates, the scale field1760, the displacement field1762A, and the displacement scale field1762B are not present. In the no memory access, write mask control, partial round control type operation1712instruction template, the rest of the beta field1754is interpreted as a round operation field1759A and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler). Round operation control field1759A—just as round operation control field1758, its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest).
Thus, the round operation control field1759A allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's1750content overrides that register value. In the no memory access, write mask control, VSIZE type operation1717instruction template, the rest of the beta field1754is interpreted as a vector length field1759B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 byte). In the case of a memory access1720instruction template of class B, part of the beta field1754is interpreted as a broadcast field1757B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field1754is interpreted as the vector length field1759B. The memory access1720instruction templates include the scale field1760, and optionally the displacement field1762A or the displacement scale field1762B. With regard to the generic vector friendly instruction format1700, a full opcode field1774is shown including the format field1740, the base operation field1742, and the data element width field1764. While one embodiment is shown where the full opcode field1774includes all of these fields, the full opcode field1774includes less than all of these fields in embodiments that do not support all of them. The full opcode field1774provides the operation code (opcode). The augmentation operation field1750, the data element width field1764, and the write mask field1770allow these features to be specified on a per instruction basis in the generic vector friendly instruction format. The combination of write mask field and data element width field creates typed instructions in that they allow the mask to be applied based on different data element widths. The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments of the invention, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes but not all templates and instructions from both classes is within the purview of the invention). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes. For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out of order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments of the invention.
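Selecting among routines at run time based on what the executing processor supports, which the following paragraph elaborates, might be sketched in C as shown below. The sketch assumes GCC or Clang, whose __builtin_cpu_supports built-in queries CPU features at run time; the routine names sum_avx512, sum_avx2, and sum_scalar are hypothetical placeholders rather than anything described in the embodiments.

#include <stdio.h>

/* Hypothetical stub routines compiled for different instruction subsets. */
static void sum_avx512(const float *x, int n) { (void)x; (void)n; /* AVX-512 path */ }
static void sum_avx2(const float *x, int n)   { (void)x; (void)n; /* AVX2 path */ }
static void sum_scalar(const float *x, int n) { (void)x; (void)n; /* portable fallback */ }

typedef void (*sum_fn)(const float *, int);

/* Pick a routine once, based on the features of the running processor. */
static sum_fn select_sum(void) {
    if (__builtin_cpu_supports("avx512f")) return sum_avx512;
    if (__builtin_cpu_supports("avx2"))    return sum_avx2;
    return sum_scalar;
}

int main(void) {
    float data[8] = {0};
    sum_fn sum = select_sum();
    sum(data, 8);
    puts("dispatched");
    return 0;
}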
Programs written in a high level language would be put (e.g., just in time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor which is currently executing the code. Exemplary Specific Vector Friendly Instruction Format FIGS.18A-18Care block diagrams illustrating an exemplary specific vector friendly instruction format according to embodiments of the invention.FIG.18Ashows a specific vector friendly instruction format1800that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format1800may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (e.g., AVX). This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields fromFIGS.17A-17Binto which the fields fromFIGS.18A-18Cmap are illustrated. It should be understood that, although embodiments of the invention are described with reference to the specific vector friendly instruction format1800in the context of the generic vector friendly instruction format1700for illustrative purposes, the invention is not limited to the specific vector friendly instruction format1800except where claimed. For example, the generic vector friendly instruction format1700contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format1800is shown as having fields of specific sizes. By way of specific example, while the data element width field1764is illustrated as a one bit field in the specific vector friendly instruction format1800, the invention is not so limited (that is, the generic vector friendly instruction format1700contemplates other sizes of the data element width field1764). The generic vector friendly instruction format1700includes the following fields listed below in the order illustrated inFIG.18A. EVEX Prefix (Bytes 0-3)1802—is encoded in a four-byte form. Format Field1740(EVEX Byte 0, bits [7:0])—the first byte (EVEX Byte 0) is the format field1740and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment of the invention). The second-fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability. REX field1805(EVEX Byte 1, bits [7-5])—consists of an EVEX.R bit field (EVEX Byte 1, bit [7]—R), an EVEX.X bit field (EVEX byte 1, bit [6]—X), and an EVEX.B bit field (EVEX byte 1, bit [5]—B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1s complement form, i.e., ZMM0 is encoded as 1111B, ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.
REX′ field1810—this is the first part of the REX′ field1810and is the EVEX.R′ bit field (EVEX Byte 1, bit [4]—R′) that is used to encode either the upper 16 or lower 16 of the extended 32 register set. In one embodiment of the invention, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments of the invention do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R′Rrrr is formed by combining EVEX.R′, EVEX.R, and the other RRR from other fields. Opcode map field1815(EVEX byte 1, bits [3:0]—mmmm)—its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A). Data element width field1764(EVEX byte 2, bit [7]—W)—is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements). EVEX.vvvv1820(EVEX Byte 2, bits [6:3]-vvvv)—the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, the field is reserved and should contain 1111b. Thus, EVEX.vvvv field1820encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers. EVEX.U1768Class field (EVEX byte 2, bit [2]-U)—If EVEX.U=0, it indicates class A or EVEX.U0; if EVEX.U=1, it indicates class B or EVEX.U1. Prefix encoding field1825(EVEX byte 2, bits [1:0]-pp)—provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX format of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2 bit SIMD prefix encodings, and thus not require the expansion. Alpha field1752(EVEX byte 3, bit [7]—EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α)—as previously described, this field is context specific. Beta field1754(EVEX byte 3, bits [6:4]-SSS, also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ)—as previously described, this field is context specific.
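The inverted (1s complement) register encoding described for EVEX.vvvv can be illustrated with the following C sketch. The helper names encode_vvvv and decode_vvvv are hypothetical, and the sketch is not a complete EVEX encoder.

#include <stdint.h>
#include <stdio.h>

/* Encode a source register number (0..15) into the 4-bit vvvv field,
 * stored in inverted (1s complement) form: register 0 -> 1111b,
 * register 15 -> 0000b. */
static uint8_t encode_vvvv(unsigned reg) {
    return (uint8_t)(~reg & 0xF);
}

/* Recover the register number from the inverted vvvv field. */
static unsigned decode_vvvv(uint8_t vvvv) {
    return ~vvvv & 0xFu;
}

int main(void) {
    for (unsigned reg = 0; reg < 16; reg++) {
        uint8_t v = encode_vvvv(reg);
        printf("register %2u -> vvvv %x -> %u\n", reg, v, decode_vvvv(v));
    }
    return 0;
}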
REX′ field1810—this is the remainder of the REX′ field and is the EVEX.V′ bit field (EVEX Byte 3, bit [3]—V′) that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V′VVVV is formed by combining EVEX.V′ and EVEX.vvvv. Write mask field1770(EVEX byte 3, bits [2:0]-kkk)—its content specifies the index of a register in the write mask registers as previously described. In one embodiment of the invention, the specific value EVEX.kkk=000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware). Real Opcode Field1830(Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field. MOD R/M Field1840(Byte 5) includes MOD field1842, Reg field1844, and R/M field1846. As previously described, the MOD field's1842content distinguishes between memory access and non-memory access operations. The role of Reg field1844can be summarized to two situations: encoding either the destination register operand or a source register operand, or be treated as an opcode extension and not used to encode any instruction operand. The role of R/M field1846may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand. Scale, Index, Base (SIB) Byte (Byte 6)—As previously described, the scale field's1850content is used for memory address generation. SIB.xxx1854and SIB.bbb1856—the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb. Displacement field1762A (Bytes 7-10)—when MOD field1842contains 10, bytes 7-10 are the displacement field1762A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity. Displacement factor field1762B (Byte 7)—when MOD field1842contains 01, byte 7 is the displacement factor field1762B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between −128 and 127 byte offsets; in terms of 64 byte cache lines, disp8 uses 8 bits that can be set to only four really useful values −128, −64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field1762B is a reinterpretation of disp8; when using displacement factor field1762B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte used for the displacement but with a much greater range). Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence, the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field1762B substitutes the legacy x86 instruction set 8-bit displacement.
Thus, the displacement factor field1762B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules) with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset). Immediate field1772operates as previously described. Full Opcode Field FIG.18Bis a block diagram illustrating the fields of the specific vector friendly instruction format1800that make up the full opcode field1774according to one embodiment of the invention. Specifically, the full opcode field1774includes the format field1740, the base operation field1742, and the data element width (W) field1764. The base operation field1742includes the prefix encoding field1825, the opcode map field1815, and the real opcode field1830. Register Index Field FIG.18Cis a block diagram illustrating the fields of the specific vector friendly instruction format1800that make up the register index field1744according to one embodiment of the invention. Specifically, the register index field1744includes the REX field1805, the REX′ field1810, the MODR/M.reg field1844, the MODR/M.r/m field1846, the VVVV field1820, xxx field1854, and the bbb field1856. Augmentation Operation Field FIG.18Dis a block diagram illustrating the fields of the specific vector friendly instruction format1800that make up the augmentation operation field1750according to one embodiment of the invention. When the class (U) field1768contains 0, it signifies EVEX.U0 (class A1768A); when it contains 1, it signifies EVEX.U1 (class B1768B). When U=0 and the MOD field1842contains 11 (signifying a no memory access operation), the alpha field1752(EVEX byte 3, bit [7]—EH) is interpreted as the rs field1752A. When the rs field1752A contains a 1 (round1752A.1), the beta field1754(EVEX byte 3, bits [6:4]—SSS) is interpreted as the round control field1754A. The round control field1754A includes a one bit SAE field1756and a two bit round operation field1758. When the rs field1752A contains a 0 (data transform1752A.2), the beta field1754(EVEX byte 3, bits [6:4]—SSS) is interpreted as a three bit data transform field1754B. When U=0 and the MOD field1842contains 00, 01, or 10 (signifying a memory access operation), the alpha field1752(EVEX byte 3, bit [7]—EH) is interpreted as the eviction hint (EH) field1752B and the beta field1754(EVEX byte 3, bits [6:4]—SSS) is interpreted as a three bit data manipulation field1754C. When U=1, the alpha field1752(EVEX byte 3, bit [7]—EH) is interpreted as the write mask control (Z) field1752C. When U=1 and the MOD field1842contains 11 (signifying a no memory access operation), part of the beta field1754(EVEX byte 3, bit [4]—S0) is interpreted as the RL field1757A; when it contains a 1 (round1757A.1) the rest of the beta field1754(EVEX byte 3, bit [6-5]—S2-1) is interpreted as the round operation field1759A, while when the RL field1757A contains a 0 (VSIZE1757A.2) the rest of the beta field1754(EVEX byte 3, bit [6-5]—S2-1) is interpreted as the vector length field1759B (EVEX byte 3, bit [6-5]—L1-0).
When U=1 and the MOD field1842contains 00, 01, or 10 (signifying a memory access operation), the beta field1754(EVEX byte 3, bits [6:4]—SSS) is interpreted as the vector length field1759B (EVEX byte 3, bit [6-5]—L1-0) and the broadcast field1757B (EVEX byte 3, bit [4]-B). Exemplary Register Architecture FIG.19is a block diagram of a register architecture1900according to one embodiment of the invention. In the embodiment illustrated, there are 32 vector registers1910that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format1800operates on these overlaid register files as illustrated in the table below.

Adjustable Vector Length | Class | Operations | Registers
Instruction templates that do not include the vector length field 1759B | A (FIG. 17A; U = 0) | 1710, 1715, 1725, 1730 | zmm registers (the vector length is 64 byte)
Instruction templates that do not include the vector length field 1759B | B (FIG. 17B; U = 1) | 1712 | zmm registers (the vector length is 64 byte)
Instruction templates that do include the vector length field 1759B | B (FIG. 17B; U = 1) | 1717, 1727 | zmm, ymm, or xmm registers (the vector length is 64 byte, 32 byte, or 16 byte) depending on the vector length field 1759B

In other words, the vector length field1759B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field1759B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format1800operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed depending on the embodiment. Write mask registers1915—in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers1915are 16 bits in size. As previously described, in one embodiment of the invention, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction. General-purpose registers1925—in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15. Scalar floating point stack register file (x87 stack)1945, on which is aliased the MMX packed integer flat register file1950—in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers. Alternative embodiments of the invention may use wider or narrower registers.
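To make the merging- versus zeroing-writemasking distinction and the use of the write mask (k) registers more concrete, the following C sketch uses the AVX-512F intrinsics _mm512_mask_add_ps and _mm512_maskz_add_ps. It assumes AVX-512F compiler and processor support (e.g., -mavx512f) and is an illustration only, not a description of the claimed register architecture.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m512 old = _mm512_set1_ps(-1.0f);  /* existing destination contents */
    __m512 a   = _mm512_set1_ps(2.0f);
    __m512 b   = _mm512_set1_ps(3.0f);
    __mmask16 k = 0x00FF;                /* write only the low 8 elements */

    /* Merging: elements whose mask bit is 0 keep the value from 'old'. */
    __m512 merged = _mm512_mask_add_ps(old, k, a, b);

    /* Zeroing: elements whose mask bit is 0 are set to 0.0f. */
    __m512 zeroed = _mm512_maskz_add_ps(k, a, b);

    float m[16], z[16];
    _mm512_storeu_ps(m, merged);
    _mm512_storeu_ps(z, zeroed);
    printf("merged[0]=%g merged[15]=%g\n", m[0], m[15]);  /* 5 and -1 */
    printf("zeroed[0]=%g zeroed[15]=%g\n", z[0], z[15]);  /* 5 and 0 */
    return 0;
}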
Additionally, alternative embodiments of the invention may use more, fewer, or different register files and registers. Exemplary Core Architectures, Processors, and Computer Architectures Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures. Exemplary Core Architectures In-Order and Out-of-Order Core Block Diagram FIG.20Ais a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention.FIG.20Bis a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes inFIGS.20A-Billustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described. InFIG.20A, a processor pipeline2000includes a fetch stage2002, a length decode stage2004, a decode stage2006, an allocation stage2008, a renaming stage2010, a scheduling (also known as a dispatch or issue) stage2012, a register read/memory read stage2014, an execute stage2016, a write back/memory write stage2018, an exception handling stage2022, and a commit stage2024. FIG.20Bshows processor core2090including a front end unit2030coupled to an execution engine unit2050, and both are coupled to a memory unit2070. The core2090may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type.
As yet another option, the core2090may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like. The front end unit2030includes a branch prediction unit2032coupled to an instruction cache unit2034, which is coupled to an instruction translation lookaside buffer (TLB)2036, which is coupled to an instruction fetch unit2038, which is coupled to a decode unit2040. The decode unit2040(or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit2040may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core2090includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit2040or otherwise within the front end unit2030). The decode unit2040is coupled to a rename/allocator unit2052in the execution engine unit2050. The execution engine unit2050includes the rename/allocator unit2052coupled to a retirement unit2054and a set of one or more scheduler unit(s)2056. The scheduler unit(s)2056represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s)2056is coupled to the physical register file(s) unit(s)2058. Each of the physical register file(s) units2058represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit2058comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s)2058is overlapped by the retirement unit2054to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit2054and the physical register file(s) unit(s)2058are coupled to the execution cluster(s)2060. The execution cluster(s)2060includes a set of one or more execution units2062and a set of one or more memory access units2064. The execution units2062may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions.
The scheduler unit(s)2056, physical register file(s) unit(s)2058, and execution cluster(s)2060are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster—and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s)2064). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order. The set of memory access units2064is coupled to the memory unit2070, which includes a data TLB unit2072coupled to a data cache unit2074coupled to a level 2 (L2) cache unit2076. In one exemplary embodiment, the memory access units2064may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit2072in the memory unit2070. The instruction cache unit2034is further coupled to a level 2 (L2) cache unit2076in the memory unit2070. The L2 cache unit2076is coupled to one or more other levels of cache and eventually to a main memory. By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline2000as follows: 1) the instruction fetch2038performs the fetch and length decoding stages2002and2004; 2) the decode unit2040performs the decode stage2006; 3) the rename/allocator unit2052performs the allocation stage2008and renaming stage2010; 4) the scheduler unit(s)2056performs the schedule stage2012; 5) the physical register file(s) unit(s)2058and the memory unit2070perform the register read/memory read stage2014; the execution cluster2060performs the execute stage2016; 6) the memory unit2070and the physical register file(s) unit(s)2058perform the write back/memory write stage2018; 7) various units may be involved in the exception handling stage2022; and 8) the retirement unit2054and the physical register file(s) unit(s)2058perform the commit stage2024. The core2090may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core2090includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data. It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).
While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units2034/2074and a shared L2 cache unit2076, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor. Specific Exemplary In-Order Core Architecture FIGS.21A-Billustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application. FIG.21Ais a block diagram of a single processor core, along with its connection to the on-die interconnect network2102and with its local subset of the Level 2 (L2) cache2104, according to embodiments of the invention. In one embodiment, an instruction decoder2100supports the x86 instruction set with a packed data instruction set extension. An L1 cache2106allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit2108and a vector unit2110use separate register sets (respectively, scalar registers2112and vector registers2114) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache2106, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back). The local subset of the L2 cache2104is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache2104. Data read by a processor core is stored in its L2 cache subset2104and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset2104and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring data-path is 1012-bits wide per direction. FIG.21Bis an expanded view of part of the processor core inFIG.21Aaccording to embodiments of the invention.FIG.21Bincludes an L1 data cache2106A part of the L1 cache2104, as well as more detail regarding the vector unit2110and the vector registers2114. Specifically, the vector unit2110is a 16-wide vector processing unit (VPU) (see the 16-wide ALU2128), which executes one or more of integer, single-precision float, and double-precision float instructions.
The VPU supports swizzling the register inputs with swizzle unit2120, numeric conversion with numeric convert units2122A-B, and replication with replication unit2124on the memory input. Write mask registers2126allow predicating resulting vector writes. FIG.22is a block diagram of a processor2200that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes inFIG.22illustrate a processor2200with a single core2202A, a system agent2210, a set of one or more bus controller units2216, while the optional addition of the dashed lined boxes illustrates an alternative processor2200with multiple cores2202A-N, a set of one or more integrated memory controller unit(s)2214in the system agent unit2210, and special purpose logic2208. Thus, different implementations of the processor2200may include: 1) a CPU with the special purpose logic2208being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores2202A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores2202A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput); and 3) a coprocessor with the cores2202A-N being a large number of general purpose in-order cores. Thus, the processor2200may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor2200may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS. The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units2206, and external memory (not shown) coupled to the set of integrated memory controller units2214. The set of shared cache units2206may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit2212interconnects the integrated graphics logic2208, the set of shared cache units2206, and the system agent unit2210/integrated memory controller unit(s)2214, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units2206and cores2202A-N. In some embodiments, one or more of the cores2202A-N are capable of multi-threading. The system agent2210includes those components coordinating and operating cores2202A-N. The system agent unit2210may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores2202A-N and the integrated graphics logic2208. The display unit is for driving one or more externally connected displays.
The cores2202A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores2202A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set. Exemplary Computer Architectures FIGS.23-24are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable. Referring now toFIG.23, shown is a block diagram of a system2300in accordance with one embodiment of the present invention. The system2300may include one or more processors2310,2315, which are coupled to a controller hub2320. In one embodiment, the controller hub2320includes a graphics memory controller hub (GMCH)2390and an Input/Output Hub (IOH)2350(which may be on separate chips); the GMCH2390includes memory and graphics controllers to which are coupled memory2340and a coprocessor2345; the IOH2350couples input/output (I/O) devices2360to the GMCH2390. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory2340and the coprocessor2345are coupled directly to the processor2310, and the controller hub2320is in a single chip with the IOH2350. The optional nature of additional processors2315is denoted inFIG.23with broken lines. Each processor2310,2315may include one or more of the processing cores described herein and may be some version of the processor2200. The memory2340may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub2320communicates with the processor(s)2310,2315via a multi-drop bus, such as a frontside bus (FSB), point-to-point interface such as QuickPath Interconnect (QPI), or similar connection2395. In one embodiment, the coprocessor2345is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub2320may include an integrated graphics accelerator. There can be a variety of differences between the physical resources2310,2315in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like. In one embodiment, the processor2310executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor2310recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor2345. Accordingly, the processor2310issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor2345.
Coprocessor(s)2345accept and execute the received coprocessor instructions. Referring now toFIG.24, shown is a block diagram of a SoC2400in accordance with an embodiment of the present invention. Similar elements inFIG.22bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. InFIG.24, an interconnect unit(s)2402is coupled to: an application processor2410which includes a set of one or more cores2202A-N and shared cache unit(s)2206; a system agent unit2210; a bus controller unit(s)2216; an integrated memory controller unit(s)2214; a set of one or more coprocessors2420which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit2430; a direct memory access (DMA) unit2432; and a display unit2440for coupling to one or more external displays. In one embodiment, the coprocessor(s)2420include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like. Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor. The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language. One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as “IP cores,” may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. 
Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions. Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products. Emulation (Including Binary Translation, Code Morphing, Etc.) In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor. FIG.25is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof.FIG.25shows a program in a high level language2502may be compiled using an x86 compiler2504to generate x86 binary code2506that may be natively executed by a processor with at least one x86 instruction set core2516. The processor with at least one x86 instruction set core2516represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler2504represents a compiler that is operable to generate x86 binary code2506(e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core2516. 
Similarly,FIG.25shows the program in the high level language2502may be compiled using an alternative instruction set compiler2508to generate alternative instruction set binary code2510that may be natively executed by a processor without at least one x86 instruction set core2514(e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter2512is used to convert the x86 binary code2506into code that may be natively executed by the processor without an x86 instruction set core2514. This converted code is not likely to be the same as the alternative instruction set binary code2510because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter2512represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code2506. Generating Keys for Persistent Memory In one or more embodiments, a processor may include memory protection logic to provide encryption of data stored in memory. The memory protection logic may generate a non-persistent key and a persistent key during a system boot process (e.g., during system start up). The non-persistent key may be used for memory portions that operate as volatile memory (e.g., DRAM). The persistent key may be used for memory portions that operate as non-volatile storage (e.g., disk-based storage). Various details of some embodiments are described further below with reference toFIGS.26A-31. FIGS.26A-26B—Computing System Including Persistent Memory Referring now toFIG.26A, shown is a block diagram of a system2600in accordance with one or more embodiments. In some embodiments, the system2600may be all or a portion of an electronic device or component. For example, the system2600may be a cellular telephone, a computer, a server, a network device, a system on a chip (SoC), a controller, a wireless transceiver, a power supply unit, etc. Furthermore, in some embodiments, the system2600may be part of a grouping of related or interconnected devices, such as a datacenter, a computing cluster, etc. As shown inFIG.26A, the system2600may include a processor2610operatively coupled to a basic input/output system (BIOS) unit2615, persistent memory2640, and non-persistent memory2650. Further, although not shown inFIG.26A, the system2600may include other components. The BIOS unit2615may include non-volatile memory storing firmware instructions to perform hardware initialization during the booting process (e.g., power-on startup). In one or more embodiments, the non-persistent memory2650may include any type of volatile memory such as dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), and so forth. Further, the persistent memory2640may include non-volatile memory such as SCM, DAS memory, NVDIMM, and/or other forms of flash or solid-state storage. As shown, in some embodiments, the persistent memory2640may be partitioned into a memory expansion portion2660and a persistent storage portion2670. The memory expansion portion2660may function as an additional portion of the non-persistent memory2650. 
In particular, the data content of the memory expansion portion2660and the non-persistent memory2650is not expected to be preserved when the system is powered down or restarted. In contrast, the persistent storage portion2670may function in a similar manner to disk-based storage, and therefore its data content is expected to remain stored even after the system is powered down or restarted. Note thatFIG.26Aillustrates the portions2660,2670as two distinct blocks for the purpose of clarity, and embodiments are not limited in this regard. For example, each of the memory expansion portion2660and the persistent storage portion2670may include any number of sub-portions, and may be located in different physical locations of the persistent memory2640. In one or more embodiments, the processor2610may be a hardware processing device (e.g., a central processing unit (CPU), a System on a Chip (SoC), and so forth). As shown, the processor2610can include one or more processing engines2620(also referred to herein as “cores”), a static component2612, memory protection logic2630, memory controller(s)2632, and registers2635. Each processing engine2620can execute software instructions. The registers2635may be hardware control registers of the processor2610(e.g., architectural model-specific registers (MSRs)). In some embodiments, the static component2612may be a value that is hard-coded in the processor2610(e.g., in fuses or other components written into the processor2610during manufacture). Further, the static component2612may be a value that is hidden from unauthorized access and/or is only accessible to entities having valid privileges. The memory controller(s)2632may be used to control and/or manage access to the persistent memory2640and/or the non-persistent memory2650. In some examples, the memory controller(s)2632may be a single controller. In other examples, the memory controller(s)2632may be two controllers to separately control the persistent memory2640and the non-persistent memory2650. In some embodiments, the memory protection logic2630may be implemented in hardware, software, firmware, or a combination thereof. For example, referring toFIG.26B, shown is an example embodiment of the memory protection logic2630, including an encryption engine2636and protection microcode2638. The encryption engine2636may be a hardware unit included in processor2610that provides encryption and decryption of data. The protection microcode2638may be instructions (e.g., firmware) of the processor2610that are executable to provide memory protection in accordance with embodiments described herein. Note that the embodiment ofFIG.26Bis provided for the sake of illustration, and embodiments are not limited in this regard. For example, in some embodiments, the memory protection logic2630may be implemented in a single hardware unit, in executable instructions only, and so forth. Referring again toFIG.26A, the memory protection logic2630may provide protection of data stored in the persistent memory2640and/or the non-persistent memory2650. In some embodiments, the memory protection logic2630may generate a non-persistent key for use in encrypting data in the memory expansion portion2660(included in the persistent memory2640) and/or the non-persistent memory2650. The data content of the memory expansion portion2660and the non-persistent memory2650is not expected to be preserved when the system is powered down or restarted, and therefore the key used to encrypt this data content is not maintained after these events. 
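To make this division of responsibilities concrete, the following short Python sketch models which class of key protects which memory region. It is purely illustrative: the address ranges, the names, and the lookup table are assumptions made for the example, not part of the described hardware, which only requires that the persistent storage portion2670be protected by a persistent key and that the memory expansion portion2660and the non-persistent memory2650be protected by a non-persistent key.

from dataclasses import dataclass

# Illustrative model only: the address ranges below are invented for the
# example. Only the mapping itself follows the description (persistent
# storage portion -> persistent key; memory expansion portion and ordinary
# volatile memory -> non-persistent key).
@dataclass(frozen=True)
class Region:
    name: str
    start: int
    end: int
    key_class: str  # "persistent" or "non_persistent"

REGIONS = [
    Region("non_persistent_memory_2650", 0x0000_0000, 0x3FFF_FFFF, "non_persistent"),
    Region("memory_expansion_portion_2660", 0x4000_0000, 0x7FFF_FFFF, "non_persistent"),
    Region("persistent_storage_portion_2670", 0x8000_0000, 0xFFFF_FFFF, "persistent"),
]

def key_class_for(address: int) -> str:
    """Return which class of key would be used for data at this address."""
    for region in REGIONS:
        if region.start <= address <= region.end:
            return region.key_class
    raise ValueError("address not mapped")

print(key_class_for(0x4100_0000))  # non_persistent (memory expansion portion)
print(key_class_for(0x9000_0000))  # persistent (persistent storage portion)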
As used herein, the term “non-persistent key” refers to an encryption key that is not maintained across a shut-down or restart. Stated differently, a new non-persistent key is generated each time that the system boots up. In one or more embodiments, the memory protection logic2630may generate the non-persistent key using the static component2612and/or an ephemeral component (not shown inFIG.26A). For example, the non-persistent key may be generated using a hash function of an ephemeral component. In another example, the non-persistent key may be generated using a hash function of the static component2612and the ephemeral component. In some embodiments, the ephemeral component used to generate the non-persistent key may be a value provided by a user, an output value from a random number generator, a value derived from another source of entropy, or any combination thereof. Further, this ephemeral component may be stored in a persistent storage location (e.g., in persistent memory2640, within flash memory of the memory protection logic2630, or another storage location). In one or more embodiments, the memory protection logic2630may generate a persistent key for use in encrypting data in the persistent storage portion2670(included in the persistent memory2640). The data content of the persistent storage portion2670is expected to be preserved when the system is powered down or restarted, and therefore the key used to encrypt this data content is maintained after these events. As used herein, the term “persistent key” refers to an encryption key that is maintained across a shut-down or restart. Stated differently, the persistent key is generated in the first instance of system boot-up (e.g., during the first use of the system after manufacture), and is reused for all subsequent instances of system boot-up. In one or more embodiments, the memory protection logic2630may generate the persistent key using the static component2612and/or a different ephemeral component (i.e., different from the ephemeral component used to generate the non-persistent key). For example, the persistent key may be generated using a hash function of the static component2612and a second ephemeral component. In some embodiments, the ephemeral component used to generate the persistent key may be a value provided by a user, an output value from a random number generator, a value derived from another source of entropy, or any combination thereof. Further, this ephemeral component may be stored in a persistent storage location (e.g., in persistent memory2640, within flash memory of the memory protection logic2630, or another storage location). In one or more embodiments, instructions of the BIOS unit2615may be executed during a system boot process to cause the memory protection logic2630to generate the non-persistent key and/or the persistent key (e.g., using “WRMSR” commands). For example, instructions of the BIOS unit2615may be executed to populate input parameters (e.g., first and second ephemeral components, control settings, etc.) in the registers2635, and then cause microcode of the memory protection logic2630(e.g., protection microcode2638shown inFIG.26B) to generate the required keys directly or by invoking a hardware engine (e.g., encryption engine2636shown inFIG.26B). Note that, whileFIG.26Aillustrates the memory controller(s)2632and the memory protection logic2630as integrated into the processor2610, embodiments are not limited in this regard. 
For example, in some embodiments, the memory controller(s)2632and/or the memory protection logic2630may be implemented on a separate chip communicatively coupled or connected to the processor2610. In another example, in some embodiments, the system2600may include two memory controllers2632to separately control the persistent memory2640and the non-persistent memory2650, and each of the two memory controllers2632may include (or be coupled to) its own memory protection logic2630. In such embodiments, the two memory protection logics2630may use a mechanism or data structure(s) to specify which address ranges of the persistent memory are controlled by each memory protection logic2630. In this manner, each memory protection logic2630may exclude its memory range from access by the other memory protection logic2630. FIG.27—Example Registers for Controlling Memory Protection Referring now toFIG.27, shown is a diagram of example registers2700,2770, and2780, in accordance with one or more embodiments. The registers2700,2770, and2780may correspond generally to example implementations of the registers2635(shown inFIG.26A). In one or more embodiments, each of the registers2700,2770, and2780may be a hardware register included in a multi-core processor (e.g., in processor2610shown inFIG.26A). In some embodiments, the register2700may be a control register (e.g., “TME_ACTIVATE_MSR”) dedicated for activating and/or controlling the memory protection logic2630shown inFIGS.26A-26B(e.g., by setting the appropriate values in the register fields). As shown inFIG.27, the register2700may include various fields2710-2760. In some implementations, the Enable field2710may be used to enable or disable memory encryption. The Key select field2720may be used to specify whether to create a new key (e.g., after a system boot-up) or to restore the key from storage (e.g., when resuming from system standby). The Save Key field2730may be used to save the key into storage to be used when resuming from standby. The Encryption field2740may be used to specify a particular encryption algorithm to use (e.g., one selected from multiple available algorithms). The Other fields2760may include any other fields that may be used to control or configure memory encryption. In one or more embodiments, the Persistent field2750may be used to specify whether to create a new persistent key (e.g., in response to the first instance of booting the system), or to restore an existing persistent key (e.g., in response to any subsequent instance of booting the system after the first instance). In some embodiments, the register2770may be dedicated for storing a first ephemeral component used to generate the non-persistent key. Further, the register2780may be dedicated for storing a second ephemeral component used to generate the persistent key. In some embodiments, the system BIOS (e.g., BIOS2615shown inFIG.26A) may read the first and second ephemeral components from storage, and may populate these components into the registers2770and2780, respectively. Further, the system BIOS may cause the memory protection logic2630(shown inFIGS.26A-26B) to generate the required keys using the components stored in the registers2770and2780. FIG.28—Method for Generating Keys at Boot Time Referring now toFIG.28, shown is a flow diagram of a method2800for generating keys at boot time, in accordance with one or more embodiments. 
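As a rough illustration of how such a control register might be programmed, the Python sketch below packs the fields named above into a single value. The field names follow FIG.27, but the bit positions and widths are assumptions invented for the example; the actual layout of the register2700is not specified here.

from dataclasses import dataclass

# Hypothetical bit layout; only the field names come from the description.
ENABLE_BIT = 0        # Enable field 2710
KEY_SELECT_BIT = 1    # Key select field 2720: 0 = create new key, 1 = restore from storage
SAVE_KEY_BIT = 2      # Save Key field 2730
ENCRYPTION_LSB = 4    # Encryption field 2740, assumed 4 bits wide
PERSISTENT_BIT = 8    # Persistent field 2750

@dataclass
class MemoryProtectionControl:
    enable: bool
    restore_key: bool
    save_key: bool
    encryption_alg: int       # index of the selected encryption algorithm
    new_persistent_key: bool  # True = create new persistent key, False = restore existing

    def pack(self) -> int:
        """Pack the fields into a single register value."""
        value = int(self.enable) << ENABLE_BIT
        value |= int(self.restore_key) << KEY_SELECT_BIT
        value |= int(self.save_key) << SAVE_KEY_BIT
        value |= (self.encryption_alg & 0xF) << ENCRYPTION_LSB
        value |= int(self.new_persistent_key) << PERSISTENT_BIT
        return value

# First boot with protection enabled: create new keys, use algorithm 0.
print(hex(MemoryProtectionControl(True, False, True, 0, True).pack()))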
Assume that, in the example ofFIG.28, the method2800is performed for a system in which, if memory protection is activated, the persistent memory is used both for memory expansion and for persistent storage. In various embodiments, the method2800may be performed by processing logic that may include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device), or a combination thereof. In some implementations, the method2800may be performed using one or more components shown inFIGS.26A-26B(e.g., BIOS2615, memory protection logic2630, registers2635, etc.). In firmware or software embodiments, the method2800may be implemented by computer executed instructions stored in a non-transitory machine readable medium, such as an optical, semiconductor, or magnetic storage device. The machine-readable medium may store data, which if used by at least one machine, causes the at least one machine to fabricate at least one integrated circuit to perform a method. For the sake of illustration, the actions involved in the method2800may be described below with reference toFIGS.26A-27, which show examples in accordance with one or more embodiments. However, the scope of the various embodiments discussed herein is not limited in this regard. Block2810may include detecting a system boot event. Diamond2820may include determining whether memory protection is activated in the system. If it is determined at diamond2820that memory protection is not activated in the system, then the method2800may be completed. For example, referring toFIGS.26A-27, instructions of the BIOS2615may execute upon a system boot (e.g., during a start-up process), and may determine whether memory protection is activated for the system2600. In some examples, determining whether memory protection is activated is based on whether the BIOS2615is configured (e.g., via a user setting) to perform memory protection. However, if it is determined at diamond2820that memory protection is activated in the system, then the method2800may continue at diamond2830, including determining whether the current system boot (detected at block2810) is the initial instance of booting the system. If it is determined at2830that the current system boot-up is the first instance of booting the system, then the method2800may continue at block2840, including generating and storing a first ephemeral component. For example, referring toFIGS.26A-27, instructions of the BIOS2615may determine that the current boot-up is the first time that the system has ever booted (e.g., during the initial use of the system after manufacture), and in response may cause the memory protection logic2630to generate and store a first ephemeral component. In some embodiments, the first ephemeral component may be generated using a value provided by a user, a random number (e.g., from a random number generator), a value derived from another source of entropy, or any combination thereof. The first ephemeral component may be stored in a persistent storage location (e.g., in persistent memory2640, within flash memory of the memory protection logic2630, or another storage location). Further, in some embodiments, block2840may include populating the first ephemeral component into the register2770. After block2840, the method2800may continue at block2850(described below). 
However, if it is determined at diamond2830that the current system boot is not the first instance of booting the system, then the method2800may continue at block2845, including reading the first ephemeral component from storage. For example, referring toFIGS.26A-27, instructions of the BIOS2615may determine that the current boot-up is not the first time that the system has ever booted, and in response may read the first ephemeral component from the persistent storage location (i.e., as stored in block2840). Further, in some embodiments, block2845may include populating the first ephemeral component into the register2770. After block2845, the method2800may continue at block2850, including obtaining a static component from the processor. Block2860may include generating the persistent key using the static component and the first ephemeral component. For example, referring toFIGS.26A-27, instructions of the BIOS2615may cause the memory protection logic2630to generate the persistent key using the first ephemeral component (e.g., from register2770) and the static component2612. Note that, if it was determined at diamond2830that the current system boot-up is the first instance of booting the system, the persistent key is generated for the first time. In contrast, if it was determined at diamond2830that the current system boot-up is not the first instance of booting the system, the same persistent key is being regenerated (i.e., by using the same static and first ephemeral components as the first time that the persistent key was generated). In some embodiments, the memory protection logic2630may use the persistent key to encrypt/decrypt data in the persistent storage portion2670. Block2870may include generating and storing a second ephemeral component. Block2880may include generating the non-persistent key using the second ephemeral component. For example, referring toFIGS.26A-27, instructions of the BIOS2615may cause the memory protection logic2630to generate a new second ephemeral component, and to store the second ephemeral component in a persistent storage location. Further, in some embodiments, block2870may include populating the second ephemeral component into the register2780. The instructions of the BIOS2615may then cause the memory protection logic2630to generate a new non-persistent key using the second ephemeral component (e.g., from register2780). In some embodiments, the memory protection logic2630may use the non-persistent key to encrypt/decrypt data in the memory expansion portion2660(included in the persistent memory2640) and/or the non-persistent memory2650. Note that, in the method2800, a new non-persistent key is generated each time the system is booted up. After block2880, the method2800may be completed. FIG.29—Method for Generating Keys after a Standby State Referring now toFIG.29, shown is a flow diagram of a method2900for generating keys after a standby state, in accordance with one or more embodiments. In various embodiments, the method2900may be performed by processing logic that may include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device), or a combination thereof. In some implementations, the method2900may be performed using one or more components shown inFIGS.26A-26B(e.g., BIOS2615, memory protection logic2630, registers2635, etc.). 
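Before turning to the details ofFIG.29, the boot-time flow ofFIG.28can be summarized in a short Python sketch. This is a minimal model, not firmware: a dictionary stands in for persistent storage, a constant stands in for the fused-in static component2612, and SHA-256 stands in for the unspecified hash function; none of these choices is mandated by the description above.

import hashlib
import secrets

persistent_storage = {}
STATIC_COMPONENT = b"\x01" * 16

def derive(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def on_boot(memory_protection_enabled: bool):
    if not memory_protection_enabled:                 # diamond 2820
        return None, None
    if "first_ephemeral" not in persistent_storage:   # diamond 2830: initial boot
        persistent_storage["first_ephemeral"] = secrets.token_bytes(32)  # block 2840
    first_ephemeral = persistent_storage["first_ephemeral"]              # block 2845
    # Blocks 2850-2860: the same persistent key is (re)generated on every boot.
    persistent_key = derive(STATIC_COMPONENT, first_ephemeral)
    # Blocks 2870-2880: a fresh second ephemeral component, and therefore a
    # fresh non-persistent key, on every boot.
    second_ephemeral = secrets.token_bytes(32)
    persistent_storage["second_ephemeral"] = second_ephemeral
    non_persistent_key = derive(second_ephemeral)
    return persistent_key, non_persistent_key

first_boot = on_boot(True)
second_boot = on_boot(True)
assert first_boot[0] == second_boot[0]  # persistent key is stable across boots
assert first_boot[1] != second_boot[1]  # non-persistent key changes every boot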
In firmware or software embodiments, the method2900may be implemented by computer executed instructions stored in a non-transitory machine readable medium, such as an optical, semiconductor, or magnetic storage device. The machine-readable medium may store data, which if used by at least one machine, causes the at least one machine to fabricate at least one integrated circuit to perform a method. For the sake of illustration, the actions involved in the method2900may be described below with reference toFIGS.26A-27, which show examples in accordance with one or more embodiments. However, the scope of the various embodiments discussed herein is not limited in this regard. Block2910may include detecting that the system is returning from a standby state. Diamond2920may include determining whether memory protection is activated in the system. If it is determined at diamond2920that memory protection is not activated in the system, then the method2900may be completed. For example, referring toFIGS.26A-27, instructions of the BIOS2615may be executed to detect that system2600is returning or exiting from a standby state (e.g., sleep state, hibernation state, suspend state, etc.), and may determine whether memory protection is activated for the system2600. However, if it is determined at diamond2920that memory protection is activated in the system, then the method2900may continue at diamond2930, including determining whether the persistent memory includes a persistent storage portion. For example, referring toFIGS.26A-27, instructions of the BIOS2615may determine whether the persistent memory2640does not include any persistent storage portion2670(i.e., the “NO” option from diamond2930), the persistent memory2640includes an existing persistent storage portion2670(i.e., the “YES” option), or if the persistent storage portion2670was added during the standby state (i.e., the “NEW” option). If it is determined at diamond2930that the persistent memory does not include a persistent storage portion (i.e., the “NO” option), then the method2900continues at block2970(described below). However, if it is determined at diamond2930that the persistent storage portion was added to the persistent memory during the standby state (i.e., the “NEW” option), then the method2900continues at block2940, including generating and storing a new first ephemeral component. In some embodiments, block2940may be performed using the same (or similar) operation to that of block2840(shown inFIG.28and described above). After block2940, the method2900may continue at block2950(described below). Further, if it is determined at diamond2930that the persistent memory includes an existing persistent storage portion (i.e., the “YES” option), then the method2900continues at block2945, including reading the first ephemeral component from storage. In some embodiments, block2945may be performed using the same (or similar) operation to that of block2845(shown inFIG.28). After block2945, the method2900may continue at block2950, including obtaining a static component from the processor. Block2960may include generating the persistent key using the static component and the first ephemeral component. In some embodiments, blocks2950and2960may be performed using the same (or similar) operations to those of blocks2850and2860(shown inFIG.28), respectively. 
Note that, if it was determined at diamond2930that the persistent storage portion was added to the persistent memory during the standby state, then block2960includes generating a new persistent key to encrypt the persistent storage portion. In contrast, if it was determined at diamond2930that the persistent memory already included an existing persistent storage portion, then block2960includes regenerating the persistent key that was previously used to encrypt that existing persistent storage portion. Block2970may include reading a stored second ephemeral component. Block2980may include generating the non-persistent key using the second ephemeral component. For example, referring toFIGS.26A-27, instructions of the BIOS2615may cause the memory protection logic2630to read the second ephemeral component from the register2780(i.e., stored in block2870shown inFIG.28) and regenerate the previous non-persistent key (i.e., the same non-persistent key that was used prior to the standby state). After block2980, the method2900may be completed. FIG.30—Method for Handling Memory Requests Referring now toFIG.30, shown is a flow diagram of a method3000for handling memory requests, in accordance with one or more embodiments. In various embodiments, the method3000may be performed by processing logic that may include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device), or a combination thereof. In some implementations, the method3000may be performed using one or more components shown inFIGS.26A-26B(e.g., BIOS2615, memory protection logic2630, registers2635, etc.). In firmware or software embodiments, the method3000may be implemented by computer executed instructions stored in a non-transitory machine readable medium, such as an optical, semiconductor, or magnetic storage device. The machine-readable medium may store data, which if used by at least one machine, causes the at least one machine to fabricate at least one integrated circuit to perform a method. For the sake of illustration, the actions involved in the method3000may be described below with reference toFIGS.26A-27, which show examples in accordance with one or more embodiments. However, the scope of the various embodiments discussed herein is not limited in this regard. Block3010may include detecting a request for protected memory. Block3020may include obtaining a key identifier in the request. For example, referring toFIGS.26A-26B, the encryption engine2636may detect a request to access a memory location that is encrypted using a persistent key (e.g., in persistent storage portion2670) or a non-persistent key (e.g., in non-persistent memory2650or memory expansion portion2660), and may read or examine a key identifier in the request. In some embodiments, the key identifier may be a field or bit range of the address field in the request, and may identify a particular encryption key. The key identifier may be used to identify one of multiple encryption keys that are generated by memory protection software, and which are not generated by the memory protection logic2630. Diamond3030may include determining whether the value of the key identifier is greater than zero. If it is determined that the value of the key identifier is not greater than zero, then at block3070, the persistent key or the non-persistent key is used to handle the request. 
However, if it is determined that the value of the key identifier is greater than zero, then at block3080, the key associated with the key identifier is used to handle the request. For example, referring toFIG.26B, the encryption engine2636may determine whether the key identifier in the request has a binary value of zero (e.g., “00000”). In some embodiments, any request for a memory location encrypted using the persistent key or the non-persistent key will have a key identifier with a binary value of zero, and therefore the persistent key or the non-persistent key (generated by the memory protection logic2630) is used to encrypt or decrypt the data for that request. In contrast, any request for a memory location encrypted with a particular key provided by protection software will have a key identifier that identifies the particular key, and therefore has a binary value greater than zero (e.g., “11001,” “11011,” and so forth). Therefore, if the key identifier has a binary value greater than zero, the key identified by the key identifier (e.g., generated by protection software) is used to encrypt or decrypt the data for that request. After either block3070or block3080, the method3000may be completed. FIG.31—Method for Generating Keys Referring now toFIG.31, shown is a flow diagram of a method3100for generating keys, in accordance with one or more embodiments. In various embodiments, the method3100may be performed by processing logic that may include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device), or a combination thereof. In some implementations, the method3100may be performed using one or more components shown inFIGS.26A-26B(e.g., BIOS2615, memory protection logic2630, registers2635, etc.). In firmware or software embodiments, the method3100may be implemented by computer executed instructions stored in a non-transitory machine readable medium, such as an optical, semiconductor, or magnetic storage device. The machine-readable medium may store data, which if used by at least one machine, causes the at least one machine to fabricate at least one integrated circuit to perform a method. Block3110may include detecting an initialization of a computing system comprising a processor and persistent memory, where the persistent memory is partitioned into a persistent storage portion and a memory expansion portion. Block3120may include, in response to a detection of the initialization, obtaining a first ephemeral component associated with the persistent storage portion. Block3130may include generating a persistent key using the first ephemeral component. Block3140may include obtaining a second ephemeral component associated with the memory expansion portion. Block3150may include generating a non-persistent key using the second ephemeral component. Block3160may include handling memory requests using the persistent key and the non-persistent key. For example, referring toFIGS.26A-27, the memory controller(s)2632may use the persistent key to encrypt/decrypt data in the persistent storage portion2670(included in the persistent memory2640). Further, the memory controller(s)2632may use the generated non-persistent key to encrypt/decrypt data in the memory expansion portion2660(included in the persistent memory2640) and/or the non-persistent memory2650. After block3160, the method3100may be completed. Note that, whileFIGS.26A-31illustrate various example implementations, other variations are possible. 
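The key-selection check ofFIG.30amounts to a small dispatch on the key identifier, sketched below in Python. The identifier width, the software-key table, and the function signature are assumptions for the example; the description only requires that a zero key identifier select the persistent or non-persistent key generated by the memory protection logic2630, and that any non-zero value select the corresponding software-provided key.

# Sketch of the key selection of FIG. 30 (diamond 3030, blocks 3070 and 3080).
def select_key(key_id, targets_persistent_storage, persistent_key,
               non_persistent_key, software_keys):
    if key_id == 0:  # block 3070: hardware-generated keys
        return persistent_key if targets_persistent_storage else non_persistent_key
    return software_keys[key_id]  # block 3080: key provided by protection software

software_keys = {0b11001: b"k" * 32, 0b11011: b"m" * 32}
assert select_key(0, True, b"p" * 32, b"n" * 32, software_keys) == b"p" * 32
assert select_key(0b11001, False, b"p" * 32, b"n" * 32, software_keys) == b"k" * 32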
For example, it is contemplated that one or more embodiments may be implemented in the example devices and systems described with reference toFIGS.1-25. The following clauses and/or examples pertain to further embodiments. In Example 1, an apparatus for key generation includes a processor, persistent memory coupled to the processor, and a memory protection logic. The processor may include multiple processing engines. The persistent memory may include a persistent storage portion and a memory expansion portion. The memory protection logic is to: obtain a first ephemeral component associated with the persistent storage portion; generate a persistent key using the first ephemeral component; obtain a second ephemeral component associated with the memory expansion portion; and generate a non-persistent key using the second ephemeral component. In Example 2, the subject matter of Example 1 may optionally include a memory controller to: handle requests for the persistent storage portion using the persistent key; and handle requests for the memory expansion portion using the non-persistent key. In Example 3, the subject matter of Examples 1-2 may optionally include that the memory protection logic is to generate the persistent key based on a hash function of the first ephemeral component and a static component. In Example 4, the subject matter of Examples 1-3 may optionally include a memory storing firmware instructions, where the firmware instructions are executable to, in response to a detection of an initial boot of the apparatus: cause the memory protection logic to generate the first ephemeral component, store the first ephemeral component in a storage, and store the first ephemeral component in a first register of the processor; obtain a static component from the processor; and cause the memory protection logic to generate the persistent key using the static component and the first ephemeral component generated by the memory protection logic. In Example 5, the subject matter of Examples 1-4 may optionally include that the firmware instructions are executable to, in response to a detection of another boot of the apparatus that is subsequent to the initial boot: read the first ephemeral component from the storage; obtain the static component from the processor; and cause the memory protection logic to regenerate the persistent key using the static component and the first ephemeral component read from the storage. In Example 6, the subject matter of Examples 1-5 may optionally include that the firmware instructions are executable to, in response to the detection of the another boot: cause the memory protection logic to generate a new second ephemeral component, store the new second ephemeral component in the storage, and store the new second ephemeral component in a second register of the processor; and cause the memory protection logic to generate a new non-persistent key using the new second ephemeral component generated by the memory protection logic. 
In Example 7, the subject matter of Examples 1-6 may optionally include that the firmware instructions are executable to, in response to a detection of an exit of the apparatus from a standby state, wherein the standby state is subsequent to the another boot: read the first ephemeral component from the first register of the processor; cause the memory protection logic to regenerate the persistent key using the static component and the first ephemeral component read from the first register; read the new second ephemeral component from the second register of the processor; and cause the memory protection logic to regenerate the new non-persistent key using the new second ephemeral component read from the second register. In Example 8, the subject matter of Examples 1-7 may optionally include that the static component is obtained from one or more fuses written into the processor during manufacture, and that the static component is a hidden value that is only accessible with a valid privilege. In Example 9, a method for key generation may include: detecting an initialization of a computing system comprising a processor and persistent memory, where the persistent memory includes a persistent storage portion and a memory expansion portion; in response to a detection of the initialization, obtaining a first ephemeral component associated with the persistent storage portion; generating a persistent key using the first ephemeral component; obtaining a second ephemeral component associated with the memory expansion portion; generating a non-persistent key using the second ephemeral component; and handling memory requests using the persistent key and the non-persistent key. In Example 10, the subject matter of Example 9 may optionally include: obtaining a static component from the processor; and generating the persistent key based on a hash function of the first ephemeral component and the static component. In Example 11, the subject matter of Examples 9-10 may optionally include: detecting an initial boot of the computing device; and in response to a detection of the initial boot of the computing device: generating, by a memory protection logic of the processor, the first ephemeral component; storing the first ephemeral component in a storage and in a first register of the processor; obtaining a static component from the processor; and generating, by the memory protection logic, the persistent key using the static component and the first ephemeral component generated by the memory protection logic. In Example 12, the subject matter of Examples 9-11 may optionally include: detecting another boot of the computing device that is subsequent to the initial boot; and in response to a detection of the another boot: reading the first ephemeral component from the storage; obtaining the static component from the processor; and regenerating, by the memory protection logic, the persistent key using the static component and the first ephemeral component read from the storage. In Example 13, the subject matter of Examples 9-12 may optionally include, in response to the detection of the another boot: generating, by the memory protection logic, a new second ephemeral component; storing the new second ephemeral component in the storage and in a second register of the processor; and generating, by the memory protection logic, a new non-persistent key using the new second ephemeral component generated by the memory protection logic. 
In Example 14, the subject matter of Examples 9-13 may optionally include: detecting an exit of the computing device from a standby state, wherein the standby state is subsequent to the another boot; and in response to the detection of the exit: reading the first ephemeral component from the first register of the processor; regenerating, by the memory protection logic, the persistent key using the static component and the first ephemeral component read from the first register; reading the new second ephemeral component from the second register of the processor; and regenerating, by the memory protection logic, the new non-persistent key using the new second ephemeral component read from the second register. In Example 15, the subject matter of Examples 9-14 may optionally include: detecting a request for the persistent memory; obtaining a key identifier in the request; determining whether a value of the key identifier is greater than zero; in response to a determination that the value of the key identifier is greater than zero, handling the request using a particular key associated with the key identifier; and in response to a determination that the value of the key identifier is not greater than zero, handling the request using the persistent key instead of the particular key associated with the key identifier. In Example 16, a computing device may include one or more processors; and a memory having stored therein a plurality of instructions that, when executed by the one or more processors, cause the computing device to perform the method of any of Examples 9 to 15. In Example 17, at least one machine-readable medium having stored thereon data which, if used by at least one machine, causes the at least one machine to perform the method of any of Examples 9 to 15. In Example 18, an electronic device comprising means for performing the method of any of Examples 9 to 15. In Example 19, a non-transitory machine-readable medium stores instructions for key generation. The instructions may be executable to: detect an initialization of a computing system comprising a processor and persistent memory, wherein the persistent memory includes a persistent storage portion and a memory expansion portion, and wherein the processor includes a memory protection logic; and in response to a detection of the initialization: obtain a first ephemeral component associated with the persistent storage portion; cause the memory protection logic to generate a persistent key using the first ephemeral component; obtain a second ephemeral component associated with the memory expansion portion; and cause the memory protection logic to generate a non-persistent key using the second ephemeral component. In Example 20, the subject matter of Example 19 may optionally include instructions executable to, in response to a determination that the initialization is an initial boot of the computing system: cause the memory protection logic to generate the first ephemeral component, store the first ephemeral component in a storage, and store the first ephemeral component in a first register of the processor; obtain a static component from the processor; and cause the memory protection logic to generate the persistent key based on a hash function of the static component and the first ephemeral component generated by the memory protection logic. 
In Example 21, the subject matter of Examples 19-20 may optionally include instructions executable to, in response to a determination that the initialization is another boot of the computing system that is subsequent to the initial boot: read the first ephemeral component from the storage; obtain the static component from the processor; and cause the memory protection logic to regenerate the persistent key using the static component and the first ephemeral component read from the storage. In Example 22, the subject matter of Examples 19-21 may optionally include instructions executable to, in response to the determination that the initialization is the another boot of the computing system: cause the memory protection logic to generate a new second ephemeral component, store the new second ephemeral component in the storage, and store the new second ephemeral component in a second register of the processor; and cause the memory protection logic to generate a new non-persistent key using the new second ephemeral component generated by the memory protection logic. In Example 23, the subject matter of Examples 19-22 may optionally include instructions executable to, in response to a detection of an exit of the computing system from a standby state, wherein the standby state is subsequent to the another boot: read the first ephemeral component from the first register of the processor; cause the memory protection logic to regenerate the persistent key using the static component and the first ephemeral component read from the first register; read the new second ephemeral component from the second register of the processor; and cause the memory protection logic to regenerate the new non-persistent key using the new second ephemeral component read from the second register. In Example 24, an apparatus for key generation may include: means for detecting an initialization of a computing system comprising a processor and persistent memory, wherein the persistent memory includes a persistent storage portion and a memory expansion portion; means for, in response to a detection of the initialization, obtaining a first ephemeral component associated with the persistent storage portion; means for generating a persistent key using the first ephemeral component; means for obtaining a second ephemeral component associated with the memory expansion portion; means for generating a non-persistent key using the second ephemeral component; and means for handling memory requests using the persistent key and the non-persistent key. In Example 25, the subject matter of Example 24 may optionally include: means for obtaining a static component from the processor; and means for generating the persistent key based on a hash function of the first ephemeral component and the static component. In Example 26, the subject matter of Examples 24-25 may optionally include: means for detecting an initial boot of the computing device; and means for, in response to a detection of the initial boot of the computing device: generating the first ephemeral component; storing the first ephemeral component in a storage and in a first register of the processor; obtaining a static component from the processor; and generating the persistent key using the static component and the first ephemeral component generated by the memory protection logic. 
In Example 27, the subject matter of Examples 24-26 may optionally include: means for detecting another boot of the computing device that is subsequent to the initial boot; and means for, in response to a detection of the another boot: reading the first ephemeral component from the storage; obtaining the static component from the processor; and regenerating the persistent key using the static component and the first ephemeral component read from the storage. In Example 28, the subject matter of Examples 24-27 may optionally include: means for, in response to the detection of the another boot: generating a new second ephemeral component; storing the new second ephemeral component in the storage and in a second register of the processor; and generating a new non-persistent key using the new second ephemeral component generated by the memory protection logic. In Example 29, the subject matter of Examples 24-28 may optionally include: means for detecting an exit of the computing device from a standby state, wherein the standby state is subsequent to the another boot; and means for, in response to the detection of the exit: reading the first ephemeral component from the first register of the processor; regenerating the persistent key using the static component and the first ephemeral component read from the first register; reading the new second ephemeral component from the second register of the processor; and regenerating the new non-persistent key using the new second ephemeral component read from the second register. In Example 30, the subject matter of Examples 24-29 may optionally include: means for detecting a request for the persistent memory; means for obtaining a key identifier in the request; means for determining whether a value of the key identifier is greater than zero; means for, in response to a determination that the value of the key identifier is greater than zero, handling the request using a particular key associated with the key identifier; and means for, in response to a determination that the value of the key identifier is not greater than zero, handling the request using the persistent key instead of the particular key associated with the key identifier. Note that the examples shown inFIGS.1-31are provided for the sake of illustration, and are not intended to limit any embodiments. Specifically, while embodiments may be shown in simplified form for the sake of clarity, embodiments may include any number and/or arrangement of components. For example, it is contemplated that some embodiments may include any number of components in addition to those shown, and that different arrangements of the components shown may occur in certain implementations. Furthermore, it is contemplated that specifics in the examples shown inFIGS.1-31may be used anywhere in one or more embodiments. Understand that various combinations of the above examples are possible. Embodiments may be used in many different types of systems. For example, in one embodiment a communication device can be arranged to perform the various methods and techniques described herein. Of course, the scope of the present invention is not limited to a communication device, and instead other embodiments can be directed to other types of apparatus for processing instructions, or one or more machine readable media including instructions that in response to being executed on a computing device, cause the device to carry out one or more of the methods and techniques described herein. 
References throughout this specification to “one embodiment” or “an embodiment” mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation encompassed within the present invention. Thus, appearances of the phrase “one embodiment” or “in an embodiment” are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be instituted in other suitable forms other than the particular embodiment illustrated and all such forms may be encompassed within the claims of the present application. While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention. | 185,499 |
11861021 | DETAILED DESCRIPTION In order to make objects, technical details and advantages of the embodiments of the disclosure apparent, the technical solutions of the embodiments will be described in a clearly and fully understandable way in connection with the drawings related to the embodiments of the disclosure. Apparently, the described embodiments are just a part but not all of the embodiments of the disclosure. Based on the described embodiments herein, those skilled in the art can obtain other embodiment(s), without any inventive work, which should be within the scope of the disclosure. Unless otherwise defined, all the technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art to which the present disclosure belongs. The terms “first”, “second”, etc., which are used in the description and the claims of the present application for disclosure, are not intended to indicate any sequence, amount or importance, but distinguish various components. The terms “comprise”, “comprising”, “include”, “including”, etc., are intended to specify that the elements or the objects stated before these terms encompass the elements or the objects and equivalents thereof listed after these terms, but do not preclude the other elements or objects. The phrases “connect”, “connected”, “coupled”, etc., are not intended to define a physical connection or mechanical connection, but may include an electrical connection, directly or indirectly. “On”, “under”, “right”, “left” and the like are only used to indicate relative position relationship, and when the position of the object which is described is changed, the relative position relationship may be changed accordingly. With the rapid development of computer and internet technology, it becomes more and more easy to illegally produce, store, distribute, copy, modify, and trade digital works without permission, thereby bringing losses to copyright owners of digital works. Therefore, the issue of copyright protection of digital works becomes more and more important. Digital works include, for example, digital paintings, music, videos, etc. The digital paintings can be various types of pictures, including photos taken by digital cameras, digital copies of paper-based calligraphy and painting (such as scanning copies), machine-generated works (such as images generated by artificial intelligence (AI)), etc. For example, a painting transaction management system of a painted screen platform is used for transaction management of digital paintings. Because digital content is vulnerable to network monitoring and illegal copying and distribution, an effective copyright protection system is needed to protect digital paintings during transmission and storage, so as to prevent piracy. For example, a common digital rights management (DRM) system can be used to encrypt/decrypt digital content with encryption algorithm to ensure the security of the transmission process. However, in the common DRM system, for each mobile terminal device (such as a painted screen, a digital photo frame, a mobile phone, a tablet computer, etc.), the digital content is usually encrypted by using the same key, and a certain mobile terminal device can easily transmit the content key to other mobile terminal devices for usage after obtaining the decrypted content key, which leads to unauthorized devices acquiring the digital content (such as digital paintings), and thus the security is poor. 
At least one embodiment of the present disclosure provides a digital artwork display device, an electronic device, and a digital artwork management method. The digital artwork display device can achieve high security where one device corresponds to one key, thereby preventing unauthorized devices from acquiring digital files (for example, digital artwork files). In addition, some embodiments of the present disclosure further solve the problem of high coupling degree between transaction services and license services in the traditional digital copyright management system, thereby improving maintainability of the system, and implementing the efficient distribution of digital files (for example, digital artwork files). Hereinafter, the embodiments of the present disclosure will be described in detail with reference to the drawings. It should be noted that the same reference numerals in different drawings are used to refer to the same elements that have been described. At least one embodiment of the present disclosure provides a digital artwork display device, and the digital artwork display device includes a registration unit, a transaction unit, and a file decryption unit. The registration unit is configured to apply for a device identifier and a device public-private key pair, the device public-private key pair includes a device public key and a device private key that is corresponding to the device public key. The transaction unit is configured to acquire a use license, and the use license includes the device identifier and a content key ciphertext obtained by encrypting a content key by using the device public key. The file decryption unit is configured to decrypt the content key ciphertext in the use license by using the device private key so as to obtain the content key, and decrypt an encrypted file that is obtained by using the content key so as to obtain an original file. FIG.1is a schematic block diagram of a digital artwork display device provided by some embodiments of the present disclosure. As illustrated inFIG.1, a digital artwork display device10includes a registration unit110, a transaction unit120, and a file decryption unit130. According to needs, the digital artwork display device10may further include a computing device (for example, a central processing unit), a storage device, a communication device (for example, a wireless communication device or a wired communication device), a modem, a radio frequency device, an encoding and decoding device, a display device (for example, a liquid crystal display panel, an organic light-emitting diode display panel, a projection device, or the like), an input device (for example, a keyboard, a button, a mouse, a touch screen, or the like), a data transmission interface (for example, an HDMI interface, a USB interface, etc., so that other output devices and storage devices can be connected), a speaker, etc., and the embodiments of the present disclosure are not limited in this aspect. The digital artwork display device10is, for example, applied to the scenario illustrated inFIG.2. For example, the digital artwork display device10is a painted screen01, which can be connected to the internet through wireless means or wired means, and digital paintings can be purchased and can be displayed by a display device of the painted screen01. Each unit in the digital artwork display device10is described in detail below with reference to the application scenario illustrated inFIG.2. 
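Purely as a non-limiting orientation aid, the cooperation of the three units introduced above can be pictured with the following Python skeleton; the class and method names are hypothetical, and the bodies are placeholders for the operations detailed in the remainder of this description.

```python
# Hypothetical skeleton of the digital artwork display device and its three units;
# the method bodies are placeholders for the operations described in the text below.
class RegistrationUnit:
    def apply_for_identity(self):
        """Apply for a device identifier and a device public-private key pair."""
        raise NotImplementedError


class TransactionUnit:
    def acquire_use_license(self, device_id, content_id):
        """Perform the payment, obtain a transaction credential, and exchange it for a use license."""
        raise NotImplementedError


class FileDecryptionUnit:
    def recover_original_file(self, use_license, encrypted_file, device_private_key):
        """Unwrap the content key with the device private key and decrypt the encrypted file."""
        raise NotImplementedError


class DigitalArtworkDisplayDevice:
    """Composition of the registration unit, the transaction unit, and the file decryption unit."""
    def __init__(self):
        self.registration_unit = RegistrationUnit()
        self.transaction_unit = TransactionUnit()
        self.file_decryption_unit = FileDecryptionUnit()
```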
The registration unit110is configured to apply for a device identifier (DID) and a device public-private key pair. For example, the device identifier is in one-to-one correspondence with the digital artwork display device10, and different devices correspond to different device identifiers. For example, the device identifier may be a character string including numbers, letters (uppercase or lowercase), or special characters. For example, the device identifier may be an international mobile equipment identity (IMEI), a product serial number, a media access control (MAC) address, etc., of the digital artwork display device10, or may be obtained by performing a predetermined operation thereon, and for example, the device identifier may be obtained by performing a hash operation with the IMEI, the product serial number or the MAC. The device identifier may be, for example, incorporated into data related to the digital artwork display device10, so as to identify, verify, and determine the digital artwork display device10in transaction, permission and other affairs, and the related data may be, for example, a transaction credential and a use license. The device identifier is also in one-to-one correspondence with the device public-private key pair assigned to the digital artwork display device10, so the device identifier can be stored in association with the device public-private key pair when storing the device public-private key pair. For example, the device public-private key pair includes a device public key and a device private key that is corresponding to the device public key, and the device public key and the device private key can be used to encrypt or decrypt digital content. The device public-private key pair is also in one-to-one correspondence with the digital artwork display device10, and different devices correspond to different device public-private key pairs. For example, after the digital artwork display device10applies for the device public-private key pair, the device private key is stored in the digital artwork display device10(for example, the storage device of the digital artwork display device10) to facilitate subsequent decryption of the digital content, and the device private key can be stored, for example, in a secure region specifically divided by the system to achieve a higher level of protection. For example, the device public-private key pair adopts an asymmetric encryption algorithm, such as the RSA1024 or RSA2048 algorithm, and other applicable algorithms may also be adopted, which is not limited in the embodiments of the present disclosure. In the case where the digital artwork display device10is the painted screen01illustrated inFIG.2and the painted screen01is connected to a communication network, the registration unit110in the painted screen01can apply to a first server02for the device identifier and the device public-private key pair. For example, a DRM unit021in the first server02responds to the application of the painted screen01, assigns the device identifier and the device public-private key pair for the painted screen01, and transmits the device identifier and the device private key in the device public-private key pair that are assigned to the painted screen01, so that the device identifier and the device private key can be stored in the painted screen01. 
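As a non-limiting sketch of the registration step described above, the following Python fragment derives a device identifier by hashing hardware identifiers and generates a device public-private key pair. It assumes the third-party `cryptography` package, SHA-256 for the identifier derivation, and RSA-2048 for the key pair; the helper names and the sample inputs are hypothetical.

```python
# Illustrative sketch of the registration step (assumptions: Python with the
# "cryptography" package, SHA-256 for deriving the device identifier, RSA-2048
# for the device public-private key pair; names are hypothetical).
import hashlib

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa


def derive_device_identifier(imei: str, serial_number: str, mac: str) -> str:
    """Derive a device identifier (DID) by hashing hardware identifiers."""
    material = f"{imei}|{serial_number}|{mac}".encode("utf-8")
    return hashlib.sha256(material).hexdigest()


def assign_device_key_pair():
    """Generate a device public-private key pair (here RSA-2048)."""
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    private_pem = private_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption(),  # keep in a secure region in practice
    )
    public_pem = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return private_pem, public_pem


if __name__ == "__main__":
    did = derive_device_identifier("356938035643809", "PS-000123", "00:1A:2B:3C:4D:5E")
    device_private_pem, device_public_pem = assign_device_key_pair()
    print(did, device_public_pem.decode().splitlines()[0])
```

In a deployment, the key pair would be generated by or under the control of the DRM unit021, with the device public key retained on the server side and the device identifier and device private key stored in a secure region of the painted screen01, as described above.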
For example, the painted screen01and the first server02communicate with each other through a communication network (for example, any suitable network such as a wired LAN, a wireless LAN, a 3G/4G/5G communication network, etc.) and based on a corresponding communication protocol, so as to transmit data. For example, in an example, the first server02may be a server cluster, and the DRM unit021may be a DRM server. Of course, the embodiments of the present disclosure are not limited to this case, the first server02may also be a separate server, and the DRM unit021may be a DRM service process running in the separate server. For example, the first server02may also be a virtual server and run on any physical device or private cloud. For the specific implementation of the first server02, the embodiments of the present disclosure are not limited in this aspect. The transaction unit120is configured to acquire a use license. For example, the use license includes the device identifier of the digital artwork display device10and a content key ciphertext obtained by encrypting a content key by using the device public key. For example, the content key is used to decrypt an encrypted file (for example, a digital artwork file, such as a digital painting) that is obtained subsequently, and the content key is described in detail below and is not described in detail here. For example, after performing a payment operation, the transaction unit120acquires the use license, which is, for example, a digital file in a specific form. For example, the payment operation includes payment of a fee, which can be paid by the transaction unit120through electronic fund transfer. For example, the payment operation is realized through a payment service provided by a third party (for example, a bank, Alipay, WeChat Pay, etc.). The fee is used, for example, to purchase digital paintings, which can be in either actual currency (for example, RMB, US dollar, etc.) or various tokens (for example, Bitcoin, point, QQ coin, etc.). In the case where the digital artwork display device10is the painted screen01illustrated inFIG.2and the painted screen01is connected to the communication network, the transaction unit120in the painted screen01can acquire the use license issued by the first server02. For example, a license service unit022in the first server02generates the use license and transmits the use license to the painted screen01. Similar to the DRM unit021, the license service unit022may be a license server, a license service process, or the like, which is not limited in the embodiments of the present disclosure. The file decryption unit130is configured to decrypt the content key ciphertext in the use license by using the device private key so as to obtain the content key, and decrypt an encrypted file that is obtained by using the content key so as to obtain an original file. The original file is, for example, a digital artwork file, such as a painting. Because the content key ciphertext is obtained by encrypting the content key by using the device public key, and the device public key corresponds to the digital artwork display device10, only the corresponding device private key can decrypt the content key ciphertext. 
In combination with the unique device identifier corresponding to the digital artwork display device10, this ensures that the use license issued to the digital artwork display device10can only be decrypted by the digital artwork display device10so as to obtain the content key; even if another device acquires the use license through improper channels, that device cannot pass the verification or decrypt the use license, so the content key cannot be obtained. Therefore, the digital artwork display device10can achieve high security where one device corresponds to one key. For example, the digital artwork display device10can acquire the encrypted file from another server (for example, a storage server or a public cloud, etc.), and the encrypted file is obtained by encrypting the original file by using the content key. After obtaining the content key, the file decryption unit130decrypts the encrypted file by using the content key in combination with the unique device identifier corresponding to the digital artwork display device10, so that the original file can be obtained. In the case where the digital artwork display device10is the painted screen01illustrated inFIG.2and the painted screen01is connected to the communication network, the painted screen01can acquire the encrypted file from a second server03, or the painted screen01can acquire the encrypted file through other peripheral devices (for example, acquiring it from other storage devices through an HDMI interface or a USB interface). The file decryption unit130in the painted screen01can decrypt the use license that is obtained so as to obtain the content key, and then decrypt the encrypted file by using the content key, thereby obtaining the original file. The original file is, for example, a digital painting, so that the painted screen01can display the digital painting through the display device in the painted screen01. Similar to the first server02, the second server03may be a server cluster, a separate server or a virtual server, or may be a public cloud, which is not limited in the embodiments of the present disclosure. FIG.3is a schematic block diagram of another digital artwork display device provided by some embodiments of the present disclosure. As illustrated inFIG.3, the digital artwork display device10further includes a file acquisition unit140and an output unit150. The transaction unit120of the digital artwork display device10includes a transaction credential acquisition unit121and a license acquisition unit122. The other structures in the digital artwork display device10are basically the same as those of the digital artwork display device10illustrated inFIG.1. The file acquisition unit140is configured to acquire the encrypted file. For example, the encrypted file is stored in another server (for example, a storage server or a public cloud, etc.). The file acquisition unit140communicates with the server through a communication network and based on a corresponding communication protocol, submits a file acquisition request, and receives the file transmitted by the server after the request is approved by the server, thereby acquiring the encrypted file. The encrypted file is, for example, obtained by encrypting the original file by using the content key. Because the encrypted file is encrypted, even if another device acquires the encrypted file, that device cannot decrypt it because of the absence of the content key (and the corresponding device identifier), so the original file is not leaked.
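As a non-limiting sketch of the decryption path described above, the following Python fragment first unwraps the content key from the use license with the device private key and then decrypts the encrypted file with that content key. The `cryptography` package, RSA-OAEP padding, AES-GCM with a 12-byte nonce prepended to the ciphertext, and the helper names are assumptions made here for concreteness; the embodiments themselves only specify RSA1024/RSA2048 and AES128/AES256.

```python
# Illustrative sketch of the file decryption unit (assumptions: Python with the
# "cryptography" package, RSA-OAEP for the content key ciphertext, AES-GCM for
# the encrypted file with a 12-byte nonce prepended; names are hypothetical).
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)


def unwrap_content_key(content_key_ciphertext: bytes, device_private_pem: bytes) -> bytes:
    """Decrypt the content key ciphertext from the use license with the device private key."""
    device_private_key = serialization.load_pem_private_key(device_private_pem, password=None)
    return device_private_key.decrypt(content_key_ciphertext, OAEP)


def decrypt_file(encrypted_file: bytes, content_key: bytes) -> bytes:
    """Decrypt the encrypted file with the content key to recover the original file."""
    nonce, ciphertext = encrypted_file[:12], encrypted_file[12:]
    return AESGCM(content_key).decrypt(nonce, ciphertext, None)


if __name__ == "__main__":
    # Round-trip demo with a freshly generated device key and content key.
    import os
    from cryptography.hazmat.primitives.asymmetric import rsa

    device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    content_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    encrypted_file = nonce + AESGCM(content_key).encrypt(nonce, b"original digital painting bytes", None)
    wrapped = device_key.public_key().encrypt(content_key, OAEP)
    device_pem = device_key.private_bytes(
        serialization.Encoding.PEM, serialization.PrivateFormat.PKCS8, serialization.NoEncryption())
    assert decrypt_file(encrypted_file, unwrap_content_key(wrapped, device_pem)) == b"original digital painting bytes"
```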
In the case where the digital artwork display device10is the painted screen01illustrated inFIG.2and the painted screen01is connected to the communication network, the file acquisition unit140of the painted screen01can acquire the encrypted file from the second server03. For example, the encrypted file can be acquired by downloading directly from a link, or by file transfer protocol (FTP). The second server03is, for example, a storage server or a public cloud, and can allow any device to download the stored encrypted file. The output unit150is configured to output the original file. For example, the output unit150may be a display panel, a speaker, etc., so as to display or play the original file. For example, the original file may be a digital artwork file, such as a digital painting, a video, an audio file, an e-book, etc., which is not limited in the embodiments of the present disclosure. In the case where the digital artwork display device10is the painted screen01illustrated inFIG.2, the output unit150of the painted screen01may be a display panel, which is used to display the digital painting. For example, the display panel can switch among a plurality of digital paintings at a predetermined time interval, or display a plurality of digital paintings simultaneously in different regions, or continuously display a single digital painting, or display the digital painting in other ways. The embodiments of the present disclosure are not limited in this aspect. The transaction credential acquisition unit121is configured to request and acquire a transaction credential. For example, the transaction credential may be a token generated by using a token mechanism. The transaction credential includes the device identifier of the digital artwork display device10and may also include the content identifier (CID) of the encrypted file, a uses-permission, and a transaction credential digital signature. The transaction credential digital signature is generated by using, for example, a transaction credential private key, so as to prevent illegal users from forging or tampering with the content of the transaction credential. For example, the transaction credential acquisition unit121is further configured to perform a payment operation. For example, the payment operation includes payment of a fee, which can be paid by the transaction credential acquisition unit121through electronic fund transfer, and the fee is, for example, used to purchase digital paintings. After the transaction credential acquisition unit121performs the payment operation, the transaction credential can be acquired. It should be noted that the payment operation can also be performed by other units (for example, an online banking application, etc.) in the digital artwork display device10. For example, the transaction credential acquisition unit121can call other units to perform the payment operation and return the result of the payment operation to the transaction credential acquisition unit121, and the specific implementation of the payment operation is not limited in the embodiments of the present disclosure. In the case where the digital artwork display device10is the painted screen01illustrated inFIG.2and the painted screen01is connected to the communication network, the transaction credential acquisition unit121of the painted screen01can pay a fee to a transaction management unit023in the first server02and request the transaction credential.
The transaction management unit023generates the transaction credential after receiving the fee paid by the painted screen01and sends the transaction credential to the painted screen01. The transaction credential includes, for example, the device identifier of the painted screen01, the content identifier of the purchased digital painting, the uses-permission, and a transaction credential digital signature. The transaction credential digital signature is generated by using, for example, the transaction credential private key stored in the transaction management unit023. For example, the transaction management unit023may be a transaction management server, a transaction management process, or the like. The embodiments of the present disclosure are not limited in this aspect. The license acquisition unit122is configured to request and acquire the use license by using the transaction credential. For example, the license acquisition unit122sends the transaction credential to another server to request the use license. After that server verifies the integrity of the transaction credential, the use license is generated, and the use license is sent to the license acquisition unit122. For example, the server mentioned above may be the same server as the server that issues the transaction credential, or may be a different server from the server that issues the transaction credential, which is not limited in the embodiments of the present disclosure. For example, the use license includes the device identifier of the digital artwork display device10, and also includes the content identifier of the encrypted file and a license digital signature. For example, the license digital signature is generated by using the license private key, and the license acquisition unit122is further configured to verify the integrity of the license digital signature by using a license public key corresponding to the license private key after acquiring the use license. The license public key is stored in the digital artwork display device10(for example, the storage device of the digital artwork display device10), and for example, the license public key can be stored in a secure region specifically divided by the system to achieve a higher level of protection. In the case where the digital artwork display device10is the painted screen01illustrated inFIG.2and the painted screen01is connected to the communication network, the license acquisition unit122of the painted screen01sends the acquired transaction credential to the license service unit022in the first server02. After receiving the transaction credential, the license service unit022verifies the integrity of the transaction credential digital signature by using a transaction credential public key, and the transaction credential public key is stored in the license service unit022. In the case where the verification is passed, the license service unit022acquires a corresponding content key from a content key database024in the first server02according to the content identifier of the digital painting in the transaction credential, and then encrypts the content key by using the device public key corresponding to the painted screen01according to the device identifier of the painted screen01in the transaction credential, thereby generating the use license and sending the use license to the license acquisition unit122of the painted screen01.
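As a non-limiting sketch of the transaction credential described above, the following Python fragment builds a token containing the device identifier, the content identifier, and the uses-permission, and signs it with a transaction credential private key. The JSON encoding, RSA-PSS with SHA-256, and the field and function names are assumptions for illustration only.

```python
# Illustrative sketch of transaction credential (token) generation and signing by the
# transaction management unit (assumptions: Python with the "cryptography" package,
# a JSON token body, RSA-PSS with SHA-256 for the transaction credential digital
# signature; field and function names are hypothetical).
import base64
import json

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)


def issue_transaction_credential(device_id: str, content_id: str, uses_permission: str,
                                 credential_private_key) -> dict:
    """Build the credential body and sign it with the transaction credential private key."""
    body = {"device_id": device_id, "content_id": content_id, "uses_permission": uses_permission}
    payload = json.dumps(body, sort_keys=True).encode("utf-8")
    signature = credential_private_key.sign(payload, PSS, hashes.SHA256())
    return {"body": body, "signature": base64.b64encode(signature).decode("ascii")}


if __name__ == "__main__":
    credential_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    token = issue_transaction_credential("did-1234", "cid-5678", "display-only", credential_key)
    print(token["body"], token["signature"][:16], "...")
```

The license service unit022would later check this signature with the corresponding transaction credential public key before issuing the use license, as described above and further below.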
For example, the content key database024may adopt an appropriate database form, such as a relational database or a non-relational database. For example, the content key database024can run on the same computer or server as the DRM unit021, the license service unit022, or the transaction management unit023, or separately run on a database server in a local area network, or run on a database server (such as a cloud server) in the internet, and the embodiments of the present disclosure are not limited in this aspect. After acquiring the use license, the file decryption unit130in the digital artwork display device10extracts the content key ciphertext from the use license after verifying the use license with the device identifier, and the content key ciphertext is subsequently decrypted by using the device private key stored in the digital artwork display device10so as to obtain the content key. Because the device public key and the device private key are in one-to-one correspondence with the digital artwork display device10, the use license is equivalent to being bound to the digital artwork display device10. Even if other devices acquire the use license through improper ways, the other devices cannot obtain the content key by decrypting because of absence of the device identifier and the device private key. Thus, the digital artwork display device10can achieve high security where one device corresponds to one key, thereby preventing unauthorized devices from acquiring the content key, and thus preventing unauthorized devices from acquiring digital files (for example, digital artwork files, such as digital paintings) by using the content key. The digital artwork display device10adopts the token mechanism to issue the use license, so that the transaction management unit023and the license service unit022in the first server02are relatively independent from each other, thereby solving the problem of high coupling degree between the transaction services and the license services in the traditional digital copyright management system, and improving the maintainability of the system. The encrypted file is stored in the second server03. The second server03is, for example, a public cloud, and any device can acquire the encrypted file from the second server03. Due to the use of the token mechanism, the device identifier, and the device public-private key pair, the encrypted file cannot be illegally decrypted in the case where the public cloud is used to store and distribute the encrypted file. Therefore, the public cloud resources can be used to achieve efficient distribution, thereby improving the transaction efficiency. It should be noted that the digital artwork display device10is not limited to include the units described above, and may further include more units to achieve more comprehensive functions. Each unit can be implemented as hardware, firmware, or software modules, and these software modules can be run in the digital artwork display device10to provide corresponding application programs or service processes, which are not limited in the embodiments of the present disclosure. The digital artwork display device10is not limited to the painted screen01, but can also be other devices, such as a video play device, an audio play device, an e-book reading device, etc. Correspondingly, the above-described original file can be a digital artwork file such as a video, an audio, an e-book, etc. The embodiments of the present disclosure are not limited in this aspect. 
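As a non-limiting sketch of the device-side checks described above, namely verifying the license digital signature with the license public key stored on the device and confirming that the use license is bound to the expected device identifier, the following Python fragment may be used. The JSON license body, RSA-PSS with SHA-256, and the names are assumptions for illustration.

```python
# Illustrative sketch of use-license verification on the device (assumptions: Python with
# the "cryptography" package, a JSON license body, RSA-PSS with SHA-256; names hypothetical).
import base64
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)


def verify_use_license(license_doc: dict, license_public_pem: bytes, expected_device_id: str) -> bool:
    """Check the license digital signature and that the license is bound to this device identifier."""
    license_public_key = serialization.load_pem_public_key(license_public_pem)
    payload = json.dumps(license_doc["body"], sort_keys=True).encode("utf-8")
    signature = base64.b64decode(license_doc["signature"])
    try:
        license_public_key.verify(signature, payload, PSS, hashes.SHA256())
    except InvalidSignature:
        return False
    return license_doc["body"]["device_id"] == expected_device_id
```

Only after such a check succeeds would the file decryption unit130extract and decrypt the content key ciphertext, consistent with the verification described above.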
At least one embodiment of the present disclosure further provides a digital artwork management method for a digital artwork display device. The digital artwork management method can achieve high security where one device corresponds to one key, thereby preventing unauthorized devices from acquiring digital files (for example, digital artwork files). In addition, some embodiments of the present disclosure also solve the problem of high coupling degree between transaction services and license services in the traditional digital copyright management system, thereby improving the maintainability of the system, and implementing the efficient distribution of digital files (for example, digital artwork files). FIG.4is a schematic flowchart of a digital artwork management method provided by some embodiments of the present disclosure. The digital artwork management method is, for example, used for a digital artwork display device, and the digital artwork display device is assigned a device identifier. For example, the digital artwork management method can be used for the digital artwork display device10illustrated inFIG.1orFIG.3. As illustrated inFIG.4, the digital artwork management method includes the following steps. Step S101: acquiring a use license. Step S102: decrypting a content key ciphertext by using a device private key corresponding to a device public key so as to obtain a content key, and decrypting an encrypted file that is obtained by using the content key so as to obtain an original file. For example, in step S101, the use license includes the device identifier and the content key ciphertext obtained by encrypting the content key by using the device public key. The device identifier is in one-to-one correspondence with the digital artwork display device10, and different devices correspond to different device identifiers. For example, the device identifier is applied for and obtained by the digital artwork display device10before the digital artwork management method is performed. For example, the device public key is also in one-to-one correspondence with the digital artwork display device10, and different devices correspond to different device public keys. Accordingly, the device private key corresponding to the device public key is also in one-to-one correspondence with the digital artwork display device10, and the corresponding device private key is stored in the digital artwork display device10. The device public key and the device private key may adopt an asymmetric encryption algorithm, such as the RSA1024 or RSA2048 algorithm, or other applicable algorithms, which is not limited in the embodiments of the present disclosure. For example, the content key is used to subsequently decrypt the encrypted file that is acquired, and the content key may adopt AES128, AES256, or other applicable cryptographic algorithms. Step S101can be performed by, for example, the transaction unit120of the digital artwork display device10illustrated inFIG.1orFIG.3, and the related description can be referred to the foregoing content, which is not repeated here. For example, step S101may further include the following steps. Step S1011: requesting and acquiring a transaction credential. Step S1012: requesting and acquiring the use license by using the transaction credential. For example, in step S1011, the transaction credential may be a token generated by using a token mechanism.
The transaction credential includes the device identifier of the digital artwork display device10and may also include the content identifier of the encrypted file, a uses-permission, and a transaction credential digital signature. The transaction credential digital signature is generated by using, for example, a transaction credential private key, so as to prevent illegal users from forging and tampering with the content of the transaction credential. For example, the transaction credential can be requested and acquired by performing a payment operation. Step S1011may be performed by, for example, the transaction credential acquisition unit121of the digital artwork display device10illustrated inFIG.2, and the related description can be referred to the foregoing content, which is not repeated here. For example, in step S1012, the transaction credential can be sent to other server to request the use license. After the other server verifies the integrity of the transaction credential, the use license is generated, and the use license is sent to the digital artwork display device10. For example, the other server mentioned above may be the same server as the server which is used to issue the transaction credential, or may be a different server from the server which is used to issue the transaction credential, which is not limited in the embodiments of the present disclosure. For example, the use license may also include the content identifier of the encrypted file and a license digital signature. For example, the license digital signature is generated by using a license private key, and the digital artwork display device10can verify the license digital signature by using a license public key corresponding to the license private key after acquiring the use license. The license public key is stored in the digital artwork display device10(for example, the storage device of the digital artwork display device10). Step S1012may be performed by, for example, the license acquisition unit122of the digital artwork display device10illustrated inFIG.3, and the related description can be referred to the foregoing content, which is not repeated here. For example, as illustrated inFIG.4, in step S102, the content key ciphertext is decrypted by using the device private key stored in the digital artwork display device10, thereby obtaining the content key. For example, the digital artwork display device10can acquire the encrypted file from other server (for example, a storage server, a public cloud, or the like), and the encrypted file is obtained by encrypting the original file by using the content key. The original file is, for example, a digital artwork file, such as a digital painting. Therefore, after the content key is obtained, the encrypted file can be decrypted by using the content key, so that the original file can be obtained. Because the content key ciphertext is obtained by encrypting the content key by using the device public key, and the device public key corresponds to the digital artwork display device10, only the corresponding device private key can decrypt the content key ciphertext. In this way, it can be ensured that the use license issued to the digital artwork display device10can only be decrypted by the digital artwork display device10so as to obtain the content key, and even if other device acquires the use license through improper ways, the other device cannot decrypt the use license, so the content key cannot be obtained. 
Therefore, the digital artwork management method can achieve high security where one device corresponds to one key. Step S102can be performed by the file decryption unit130of the digital artwork display device10illustrated inFIG.1orFIG.3, and the related description can be referred to the foregoing content, which is not repeated here. FIG.5is a schematic flowchart of another digital artwork management method provided by some embodiments of the present disclosure. As illustrated inFIG.5, the digital artwork management method further includes steps S103and S104, and the remaining steps are basically the same as the digital artwork management method illustrated inFIG.4. Step S103: acquiring the encrypted file. Step S104: applying for the device identifier and a device public-private key pair. For example, in step S103, the encrypted file is stored in another server (for example, a storage server, a public cloud, or the like), and the encrypted file can be acquired through a communication network and based on a corresponding communication protocol. The encrypted file is, for example, obtained by encrypting the original file by using the content key. Because the encrypted file is encrypted, the leakage of the original file can be avoided. Step S103can be performed by the file acquisition unit140of the digital artwork display device10illustrated inFIG.2, and the related description can be referred to the foregoing content, which is not repeated here. For example, in step S104, the device public-private key pair includes a device public key and a device private key that is corresponding to the device public key, and the device public key and the device private key can be used to encrypt or decrypt digital content (for example, the obtained encrypted file). The device public-private key pair is in one-to-one correspondence with the digital artwork display device10, and different devices correspond to different device public-private key pairs. For example, after applying and obtaining the device identifier and the device public-private key pair, the device identifier and the device private key can be stored in the digital artwork display device10(such as the storage device of the digital artwork display device10) to facilitate subsequent decryption of the content key ciphertext. Step S104can be performed by, for example, the registration unit110of the digital artwork display device10illustrated inFIG.1orFIG.3, and the related description can be referred to the foregoing content, which is not repeated here. It should be noted that, in some embodiments of the present disclosure, the execution order of step S103, step S104, and step S101is not limited. AlthoughFIG.5illustrates the above steps in a specific order, it does not limit the embodiments of the present disclosure. For example, the above steps can be performed in the order of S104-S101-S103-S102, or in the order of S103-S104-S101-S102, or in the order of S104-S103-S101-S102, or in other order, which is not limited in the embodiments of the present disclosure. The digital artwork management method may further include more steps to achieve more functions, which is not limited by the embodiments of the present disclosure. At least one embodiment of the present disclosure further provides an electronic device, and the electronic device includes a license generation unit. The license generation unit is configured to generate a use license and send the use license to a requesting device. 
The use license includes a device identifier of the requesting device, a content identifier of an encrypted file, and a first content key ciphertext obtained by encrypting a content key by using a device public key, and the content key corresponds to the encrypted file. The electronic device can achieve high security where one device corresponds to one key, thereby preventing unauthorized devices from acquiring digital files (for example, digital artwork files). In addition, some embodiments of the present disclosure also solve the problem of high coupling degree between transaction services and license services in the traditional digital copyright management system, thereby improving the maintainability of the system, and implementing the efficient distribution of digital files (for example, digital artwork files). FIG.6is a schematic block diagram of an electronic device provided by some embodiments of the present disclosure. As illustrated inFIG.6, an electronic device20includes a license generation unit210. As needed, the electronic device20may further include a computing device (for example, a central processing unit), a storage device, a communication device (for example, a wireless communication device or a wired communication device), a modem, a radio frequency device, an encoding and decoding device, an input device (for example, a keyboard, a button, a mouse, or a touch screen), etc., and the embodiments of the present disclosure are not limited in this aspect. The electronic device20is, for example, applied to the scenario illustrated inFIG.2, and the electronic device20is, for example, the first server02, which can respond to the request of the painted screen01and provide corresponding services. Hereinafter, each unit in the electronic device20is described in detail with reference to the application scenario illustrated inFIG.2. The license generation unit210is configured to generate the use license and send the use license to the requesting device30. For example, the use license includes the device identifier of the requesting device30, the content identifier of the encrypted file, and the first content key ciphertext obtained by encrypting the content key by using the device public key. For example, the content key corresponds to the encrypted file, and can be used when the requesting device30decrypts the encrypted file. For example, in some examples, different encrypted files correspond to different content keys. In response to the request of the requesting device30, the license generation unit210encrypts the content key by using the device public key corresponding to the requesting device30, thereby generating the use license and sending the use license to the requesting device30. For example, the license generation unit210and the requesting device30communicate with each other through a communication network and based on a corresponding communication protocol, so as to transmit data. For example, the requesting device30may be the aforementioned digital artwork display device10. In the case where the electronic device20is the first server02illustrated inFIG.2, the license generation unit210of the electronic device20is implemented as, for example, the license service unit022of the first server02. In this case, the requesting device30is, for example, the painted screen01.
The license service unit022responds to the request of the painted screen01, encrypts the content key by using the device public key corresponding to the painted screen01so as to generate the first content key ciphertext, thereby generating the use license, and sending the use license to the painted screen01. The use license includes the device identifier of the painted screen01, the content identifier of the encrypted file, and the first content key ciphertext. The painted screen01can subsequently decrypt the first content key ciphertext by using the device private key stored in the painted screen01so as to obtain the content key, and decrypt the encrypted file by using the content key so as to obtain the original file. The original file is, for example, a digital painting purchased by the painted screen01. Because the first content key ciphertext can only be decrypted by using the device private key corresponding to the painted screen01, even if other devices acquire the use license through improper ways, the other devices cannot decrypt the first content key ciphertext because of absence of the corresponding device private key. Therefore, the electronic device20can achieve high security where one device corresponds to one key, thereby preventing unauthorized devices from acquiring digital artwork files (for example, digital paintings). FIG.7is a schematic block diagram of another electronic device provided by some embodiments of the present disclosure. As illustrated inFIG.7, the electronic device20is basically the same as the electronic device20illustrated inFIG.6except that a transaction processing unit220is further included. The transaction processing unit220is configured to receive a transaction request of the requesting device30, generate a transaction credential according to the transaction request, and send the transaction credential to the requesting device30. For example, the transaction request may be a payment operation initiated by the requesting device30, and the transaction request includes the device identifier of the requesting device30. The transaction credential includes the device identifier of the requesting device30and may also include the content identifier of the encrypted file and a transaction credential digital signature. The transaction credential digital signature is generated by using a transaction credential private key. The transaction processing unit220sends the transaction credential to the requesting device30after generating the transaction credential. In the case where the electronic device20is the first server02illustrated inFIG.2, the transaction processing unit220of the electronic device20is implemented as, for example, the transaction management unit023of the first server02. In this case, the requesting device30is, for example, the painted screen01. The painted screen01can pay a fee to the transaction management unit023in the first server02and request the transaction credential. The transaction management unit023generates the transaction credential after receiving the fee paid by painted screen01and sends the transaction credential to the painted screen01. The transaction credential includes, for example, the device identifier of the painted screen01, the content identifier of the purchased digital painting, the uses-permission, and a transaction credential digital signature. The transaction credential digital signature is generated, for example, by using a transaction credential private key stored in the transaction management unit023. 
For example, in an example, the license generation unit210is further configured to receive the transaction credential from the requesting device30and verify the transaction credential digital signature by using a transaction credential public key corresponding to the transaction credential private key. After the transaction processing unit220sends the transaction credential to the requesting device30, the requesting device30sends the transaction credential to the license generation unit210to request the use license. After receiving the transaction credential, the license generation unit210verifies the integrity of the transaction credential digital signature by using the transaction credential public key stored in the license generation unit210. After the verification is passed, the license generation unit210generates the use license and sends the use license to the requesting device30. This method can prevent illegal users from forging and tampering with the content of the transaction credential. In the case where the electronic device20is the first server02illustrated inFIG.2, the transaction processing unit220of the electronic device20is, for example, implemented as the transaction management unit023of the first server02, the license generation unit210of the electronic device20is, for example, implemented as the license service unit022of the first server02, and the requesting device30is, for example, the painted screen01. After the painted screen01pays a fee to the transaction management unit023and obtains the transaction credential, the painted screen01sends the transaction credential to the license service unit022. The license service unit022verifies the transaction credential digital signature by using the transaction credential public key, and then generates the use license and sends the use license to the painted screen01. FIG.8is a schematic block diagram of another electronic device provided by some embodiments of the present disclosure. As illustrated inFIG.8, the electronic device20is basically the same as the electronic device20illustrated inFIG.7except that the electronic device20further includes a content key library230and a device identifier assignment unit240. The content key library230is used to store the content key corresponding to the encrypted file, and the content key is stored in the content key library230in an encrypted manner. That is, a second content key ciphertext is stored in the content key library230, and the second content key ciphertext is obtained by encrypting the content key by using a service public key. By storing the content key in an encrypted manner, the security can be improved, and leakage of the content key can be avoided. For example, the content key library230may adopt an appropriate database form, such as a relational database or a non-relational database. For example, the content key library230can run on the same computer or server as other units in the electronic device20, or separately run on a database server in a local area network, or run on a database server (such as a cloud server) in the internet. The embodiments of the present disclosure are not limited in this aspect. 
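As a non-limiting sketch that combines the credential verification described above with the key re-wrapping detailed in the following paragraph, the Python fragment below verifies the transaction credential digital signature, fetches the second content key ciphertext by content identifier, decrypts it with the service private key, re-encrypts the content key with the device public key of the requesting device, and signs the resulting use license. The in-memory dictionary standing in for the content key library230, the RSA-OAEP and RSA-PSS choices, and all names are assumptions for illustration only.

```python
# Hypothetical license issuance flow: verify the token, then re-wrap the content key
# for the requesting device (assumptions: Python with the "cryptography" package,
# RSA-OAEP for key wrapping, RSA-PSS with SHA-256 for signatures; names hypothetical).
import base64
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)


def issue_use_license(credential: dict, credential_public_key, content_key_library: dict,
                      service_private_key, device_public_keys: dict, license_private_key) -> dict:
    """Verify the transaction credential, then build and sign a device-bound use license."""
    payload = json.dumps(credential["body"], sort_keys=True).encode("utf-8")
    try:
        credential_public_key.verify(base64.b64decode(credential["signature"]),
                                     payload, PSS, hashes.SHA256())
    except InvalidSignature:
        raise ValueError("transaction credential was forged or tampered with")

    device_id = credential["body"]["device_id"]
    content_id = credential["body"]["content_id"]
    # Second content key ciphertext: the content key wrapped with the service public key.
    wrapped = content_key_library[content_id]
    content_key = service_private_key.decrypt(wrapped, OAEP)
    # First content key ciphertext: the content key re-wrapped with the device public key.
    first_ciphertext = device_public_keys[device_id].encrypt(content_key, OAEP)

    body = {"device_id": device_id, "content_id": content_id,
            "content_key_ciphertext": base64.b64encode(first_ciphertext).decode("ascii")}
    signature = license_private_key.sign(json.dumps(body, sort_keys=True).encode("utf-8"),
                                         PSS, hashes.SHA256())
    return {"body": body, "signature": base64.b64encode(signature).decode("ascii")}
```

In this sketch the plaintext content key exists only transiently inside the license service; at rest it is held only as the second content key ciphertext, consistent with the content key library230described above.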
The license generation unit210is further configured to acquire, from the content key library230, the second content key ciphertext obtained by encrypting the content key by using the service public key, to decrypt the second content key ciphertext by using a service private key corresponding to the service public key so as to obtain the content key, and then to encrypt the content key by using the device public key so as to obtain the first content key ciphertext and thereby the use license. For example, after receiving the transaction credential sent by the requesting device30, the license generation unit210acquires the corresponding second content key ciphertext from the content key library230according to the content identifier corresponding to the encrypted file in the transaction credential, and the second content key ciphertext corresponds to the encrypted file. For example, the service private key is stored in the license generation unit210, so the license generation unit210can decrypt the second content key ciphertext by using the service private key. Then, according to the device identifier of the requesting device30contained in the transaction credential, the content key is encrypted by using the device public key corresponding to the requesting device30so as to obtain the first content key ciphertext, thereby generating the use license and sending the use license to the requesting device30. After obtaining the use license, the requesting device30decrypts the first content key ciphertext by using the device private key stored in the requesting device30so as to obtain the content key, and then decrypts the encrypted file that is obtained by using the content key so as to obtain the original file. In the case where the electronic device20is the first server02illustrated inFIG.2, the content key library230of the electronic device20is, for example, implemented as the content key database024of the first server02. The license service unit022of the first server02acquires the corresponding second content key ciphertext from the content key database024according to the content identifier corresponding to the encrypted file, decrypts the second content key ciphertext by using the service private key so as to obtain the corresponding content key, and then encrypts the content key by using the device public key corresponding to the painted screen01according to the device identifier of the painted screen01so as to obtain the first content key ciphertext, thereby generating the use license and sending the use license to the painted screen01. It should be noted that in some embodiments of the present disclosure, the key pair of the service public key and the service private key may be the same as the key pair of the license public key and the license private key, so as to lower the complexity of system processing. Of course, the embodiments of the present disclosure are not limited in this aspect, and the above two key pairs may also be different key pairs to improve the flexibility and security of system processing. As illustrated inFIG.8, the device identifier assignment unit240is configured to receive an application of the requesting device30and assign a device identifier and a device public-private key pair that correspond to the requesting device30. For example, the requesting device30sends a request to the device identifier assignment unit240when performing registration.
The device identifier assignment unit240receives the request and assigns the device identifier and the device public-private key pair that are corresponding to the requesting device30. The device identifier and the device public-private key pair of each requesting device30are unique, that is, the device identifiers of different requesting devices30are different, and the device public-private key pairs of different requesting devices30are also different. In this way, it can be ensured that the original files (for example, the digital paintings) purchased by the requesting device30can only be used on this requesting device30, thereby achieving high security where one device corresponds to one key. For example, the device identifier assignment unit240sends the assigned device identifier and the device private key of the device public-private key pair to the requesting device30, and the device identifier and the device private key are stored in the requesting device30. For example, the device identifier assignment unit240stores the assigned device public key of the device public-private key pair in the electronic device20, so that the license generation unit210can use the device public key when generating the use license. In the case where the electronic device20is the first server02illustrated inFIG.2, the device identifier assignment unit240of the electronic device20is implemented as, for example, the DRM unit021of the first server02. The DRM unit021responds to the application of the painted screen01and assigns the device identifier and the device public-private key pair to the painted screen01, and transmits the assigned device identifier and the device private key of the device public-private key pair to the painted screen01, so that the device identifier and the device private key can be stored in the painted screen01. The DRM unit021stores the device public key of the assigned device public-private key pair in the first server02(for example, stored in the license service unit022of the first server02), so that the license service unit022can use the device public key when generating the use license. It should be noted that in some embodiments of the present disclosure, the electronic device20may be a server cluster, a separate server, or a virtual server, and accordingly, the units in the electronic device20may be different servers, or different service processes running on the same server. Each unit may be implemented as hardware, firmware, or software modules, and these software modules may be run on the same hardware or firmware to provide different application programs or service processes, which are not limited in the embodiments of the present disclosure. At least one embodiment of the present disclosure further provides a digital artwork management method, which can achieve high security where one device corresponds to one key, and can prevent unauthorized devices from acquiring digital files (for example, digital artwork files). In addition, some embodiments of the present disclosure also solve the problem of high coupling degree between transaction services and license services in the traditional digital copyright management system, thereby improving the maintainability of the system, and implementing the efficient distribution of digital files (for example, digital artwork files). FIG.9is a schematic flowchart of a digital artwork management method provided by some embodiments of the present disclosure. 
For example, the digital artwork management method can be used for the electronic device20illustrated inFIG.6,FIG.7, orFIG.8. As illustrated inFIG.9, the digital artwork management method includes following steps. Step S201: generating a use license and sending the use license to a requesting device. For example, in step S201, the use license includes the device identifier of the requesting device30, the content identifier of the encrypted file, and the first content key ciphertext obtained by encrypting the content key by using the device public key. For example, the content key corresponds to the encrypted file, and can be used when the requesting device30decrypts the encrypted file. Step S201can be performed by, for example, the license generation unit210of the electronic device20illustrated inFIG.6,FIG.7, orFIG.8, and the related description can be referred to the foregoing content, which is not repeated here. FIG.10is a schematic flowchart of another digital artwork management method provided by some embodiments of the present disclosure. As illustrated inFIG.10, the digital artwork management method further includes steps S202and S203, and the remaining steps are basically the same as the digital artwork management method illustrated inFIG.9. Step S202: receiving a transaction request of the requesting device, generating a transaction credential according to the transaction request, and sending the transaction credential to the requesting device. Step S203: receiving the transaction credential from the requesting device, and verifying a transaction credential digital signature in the transaction credential. For example, in step S202, the transaction request may be a payment operation initiated by the requesting device30, and the transaction request includes the device identifier of the requesting device30. The transaction credential includes the device identifier of the requesting device30, and may also include the content identifier of the encrypted file and a transaction credential digital signature. The transaction credential digital signature is generated by using the transaction credential private key. Step S202may be performed by, for example, the transaction processing unit220of the electronic device20illustrated inFIG.7orFIG.8, and the related description can be referred to the foregoing content, which is not repeated here. For example, in step S203, after receiving the transaction credential sent by the requesting device30, the transaction credential digital signature is verified by using the transaction credential public key corresponding to the transaction credential private key. After the verification is passed, the use license is generated, and the use license is sent to the requesting device30. This method can prevent illegal users from forging and tampering with the content of the transaction credential. Step S203may be performed by, for example, the license generation unit210of the electronic device20illustrated inFIG.6,FIG.7, orFIG.8, and the related description can be referred to the foregoing content, which is not repeated here. For example, step S201illustrated inFIG.10can further include following steps. Step S2011: acquiring a second content key ciphertext obtained by encrypting the content key by using a service public key, and decrypting the second content key ciphertext by using a service private key corresponding to the service public key so as to obtain the content key. 
Step S2012: encrypting the content key by using the device public key so as to obtain the first content key ciphertext, thereby obtaining the use license. For example, in step S2011, after receiving the transaction credential sent by the requesting device30, the corresponding second content key ciphertext is acquired according to the content identifier corresponding to the encrypted file in the transaction credential. For example, the second content key ciphertext corresponds to the encrypted file, and the second content key ciphertext is obtained by encrypting the content key by using the service public key, and the content key can be used to decrypt the encrypted file so as to obtain the original file. Then, the service private key is used to decrypt the second content key ciphertext so as to obtain the content key. For example, in step S2012, the content key is encrypted by using the device public key corresponding to the requesting device30according to the device identifier of the requesting device30contained in the transaction credential, so as to obtain the first content key ciphertext, thereby generating the use license and sending the use license to the requesting device30. After obtaining the use license, the requesting device30decrypts the first content key ciphertext by using the device private key stored in the requesting device30to obtain the content key, and then decrypts the obtained encrypted file by using the content key to obtain the original file. Steps S2011and S2012can be performed by, for example, the license generation unit210of the electronic device20illustrated inFIG.6,FIG.7, orFIG.8, and the related description can be referred to the foregoing content, which is not repeated here. The digital artwork management method may further include more steps to achieve more functions, which are not limited in the embodiments of the present disclosure. At least one embodiment of the present disclosure further provides an electronic device, which includes a processor and a memory. The memory includes one or more computer program modules that are stored in the memory and configured to be executed by the processor, and the one or more computer program modules include instructions for implementing the digital artwork management method according to any one of the embodiments of the present disclosure. The electronic device can achieve high security where one device corresponds to one key, thereby preventing unauthorized devices from acquiring digital files (for example, digital artwork files). In addition, some embodiments of the present disclosure also solve the problem of high coupling degree between transaction services and license services in the traditional digital copyright management system, thereby improving the maintainability of the system, and implementing the efficient distribution of digital files (for example, digital artwork files). FIG.11is a schematic block diagram of an electronic device provided by some embodiments of the present disclosure. As illustrated inFIG.11, an electronic device40includes a processor410and a memory420. The memory420is used to store non-temporary computer-readable instructions (for example, one or more computer program modules). The processor410is configured to execute the non-temporary computer-readable instructions. When executed by the processor410, the non-temporary computer-readable instructions can perform one or more steps in the digital artwork management method described above. 
The memory420and the processor410can be interconnected by a bus system and/or other forms of connection mechanisms (not illustrated). For example, the electronic device40may also be the aforementioned digital artwork display device10or the electronic device20. For example, the memory420and the processor410may be provided on a user side, for example, can be provided in the painted screen01for performing one or more steps in the digital artwork management method described inFIG.4orFIG.5. For example, the memory420and the processor410may also be provided on a server side (or cloud) for performing one or more steps in the digital artwork management method described inFIG.9orFIG.10. For example, the processor410may be a central processing unit (CPU), a digital signal processor (DSP), or other forms of processing units with data processing capabilities and/or program execution capabilities, such as a field programmable gate array (FPGA), etc. For example, the central processing unit (CPU) may adopt an X86 or ARM architecture. The processor410may be a general-purpose processor or a dedicated processor, and may control other components in the electronic device40to perform desired functions. For example, the memory420may include any combination of one or more computer program products. The computer program products may include various forms of computer-readable storage media, e.g., volatile memory and/or nonvolatile memory. Volatile memory, for example, may include a random access memory (RAM) and/or a cache memory. Nonvolatile memory, for example, may include a read-only memory (ROM), a hard disk, an erasable programmable read-only memory (EPROM), a portable compact disk read-only memory (CD-ROM), a USB memory, a flash memory, and the like. One or more computer program modules can be stored in the computer-readable storage medium, and the processor410can execute the one or more computer program modules to implement various functions of the electronic device40. Various application programs and various data, various data used and/or generated by the application programs, and the like, can also be stored in the computer-readable storage medium. For the specific functions and technical effects of the electronic device40, reference can be made to the description about the digital artwork management methods above, which is not repeated here. At least one embodiment of the present disclosure further provides an electronic device, which includes an application unit, an encryption unit, and a transmission unit. The application unit is configured to apply for a content identifier for an original file. The encryption unit is configured to generate a content key, encrypt and encapsulate the original file by using the content key so as to obtain an encrypted file, and encrypt the content key so as to obtain a content key ciphertext. The encrypted file includes the content identifier and an original file ciphertext corresponding to the original file. The transmission unit is configured to transmit the encrypted file and the content key ciphertext to different servers for storage. The electronic device can be used to prevent unauthorized devices from acquiring digital files (for example, digital artwork files), and can realize the efficient distribution of digital files (for example, digital artwork files). FIG.12is a schematic block diagram of another electronic device provided by some embodiments of the present disclosure. 
As illustrated inFIG.12, an electronic device50includes an application unit510, an encryption unit520, and a transmission unit530. As needed, the electronic device50may further include a computing device (for example, a central processing unit), a storage device, a communication device (for example, a wireless communication device or a wired communication device), a modem, a radio frequency device, an encoding and decoding device, a display device (for example, a liquid crystal display panel, an organic light-emitting diode display panel, a projection device, or the like), an input device (for example, a keyboard, a button, a mouse, a touch screen, or the like), a data transmission interface (for example, an HDMI interface, a USB interface, etc., so that other output devices and storage devices can be connected), a speaker, etc., and the embodiments of the present disclosure are not limited in this aspect. The electronic device50is, for example, applied to the scenario illustrated inFIG.2, and the electronic device50is, for example, a mobile phone04and can upload digital paintings for sale. Hereinafter, each unit in the electronic device50is described in detail in conjunction with the application scenario illustrated inFIG.2. The application unit510is configured to apply for the content identifier for the original file. For example, the original file may be a digital artwork file, such as a digital painting, a video, an audio, an e-book, etc., which is not limited in the embodiments of the present disclosure. The content identifier is in one-to-one correspondence with the original file, and different original files correspond to different content identifiers, so that the original file can be identified and determined by the content identifier. For example, the content identifier can be incorporated into the data related to the original file, so as to identify, verify, and determine the original file in transactions, permits, and other affairs. These related data can be, for example, transaction credentials and use licenses. The content identifier is also in one-to-one correspondence with the content key, so the content identifier can be stored in association with the content key ciphertext when storing the content key ciphertext. In the case where the electronic device50is the mobile phone04illustrated inFIG.2, the mobile phone04can apply to the DRM unit021in the first server02for the content identifier of the original file. The DRM unit021transmits the assigned content identifier to the mobile phone04. For example, the content identifier can be a hash value obtained by performing a hash operation on the original file, the influencing factors for performing the hash operation may include the file size, file content, creation date, creator, etc., and the hash operation can be performed in the DRM unit021. Of course, the embodiments of the present disclosure are not limited to this case, and in some other embodiments, the hash operation can also be performed in the electronic device50under the control of the DRM unit021. The encryption unit520is configured to generate the content key, encrypt and encapsulate the original file by using the content key so as to obtain the encrypted file, and encrypt the content key so as to obtain the content key ciphertext. For example, the encryption unit520adopts an encryption algorithm to generate a random content key, and the encryption algorithm may adopt AES128, AES256, or other applicable cryptographic algorithms. 
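For illustration, a minimal sketch of the random content key generation just mentioned, together with the encrypt-and-encapsulate operation described in the following paragraphs, is given below. It assumes AES-256-GCM via the third-party `cryptography` package (one admissible reading of the AES256 option above); the encapsulation fields and function name are hypothetical, not mandated by the disclosure.

```python
# Illustrative sketch only: generating a random content key and encrypting the
# original file with it, assuming AES-256-GCM from the `cryptography` package.
import json, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_and_encapsulate(original_file: bytes, content_identifier: str):
    content_key = AESGCM.generate_key(bit_length=256)   # random content key
    nonce = os.urandom(12)                               # per-file nonce
    ciphertext = AESGCM(content_key).encrypt(nonce, original_file, None)
    # Hypothetical encapsulation: the content identifier plus the original file
    # ciphertext; the disclosure also allows metadata such as the name of the
    # painting, author, encryption algorithm, and version to be carried along.
    encrypted_file = {
        "content_identifier": content_identifier,
        "nonce": nonce.hex(),
        "original_file_ciphertext": ciphertext.hex(),
        "encryption_algorithm": "AES-256-GCM",
    }
    return json.dumps(encrypted_file).encode(), content_key

encapsulated, key = encrypt_and_encapsulate(b"digital painting bytes", "cid-001")
```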
The encryption unit520encrypts and encapsulates the original file by using the content key so as to obtain the encrypted file. The encrypted file includes, for example, the content identifier and the original file ciphertext corresponding to the original file, and may further include information such as the name of the painting, author, description of the painting, encryption algorithm, version of the encrypted file, and the like. The encryption unit520encrypts the content key by using the service public key so as to obtain the content key ciphertext, thereby improving the security of transmission of the content key. For example, the content key ciphertext can be the aforementioned second content key ciphertext. The transmission unit530is configured to transmit the encrypted file and the content key ciphertext to different servers for storage. For example, the encrypted file and the content key ciphertext are transmitted to different servers to be stored separately, so that the encrypted file and the content key ciphertext can be prevented from being leaked simultaneously, and the storage security can be improved. For example, the encrypted file is transmitted to a public cloud for storage, so that any device can quickly acquire the encrypted file so as to achieve efficient distribution of the encrypted file; and the content key ciphertext is transmitted to, for example, a private cloud and stored in the content key library. Because the encrypted file is encrypted, the original file cannot be leaked when stored in the public cloud. In the case where the electronic device50is the mobile phone04illustrated inFIG.2, the transmission unit530in the mobile phone04transmits the encrypted file to the second server03for storage, and transmits the content key ciphertext to the first server02and the content key ciphertext is stored in the content key database024. The electronic device50can complete the encrypted upload of the original file (for example, a digital painting) by the joint action of the application unit510, the encryption unit520, and the transmission unit530. For example, the process of the encrypted upload is completed by the copyright owner of the original file through the electronic device50. It should be noted that the electronic device50is not limited to include the units described above and can further include more units to achieve more comprehensive functions. Each unit can be implemented as hardware, firmware, or software modules, and these software modules can be run in the electronic device50to provide corresponding application programs or service processes, which are not limited in the embodiments of the present disclosure. The electronic device50is not limited to the mobile phone04, and can also be other devices, such as a tablet computer, a personal computer, a notebook computer, etc., which are not limited in the embodiments of the present disclosure. FIG.13is a schematic flowchart of a digital artwork management method provided by some embodiments of the present disclosure. The digital artwork management method can be used for the electronic device50illustrated inFIG.12. As illustrated inFIG.13, the digital artwork management method includes following steps. Step S301: applying for a content identifier for an original file. 
Step S302: generating a content key, encrypting and encapsulating the original file by using the content key so as to obtain an encrypted file, and encrypting the content key so as to obtain a content key ciphertext, where the encrypted file includes the content identifier and an original file ciphertext corresponding to the original file. Step S303: transmitting the encrypted file and the content key ciphertext to different servers for storage. For example, the above steps can be performed by the electronic device50illustrated inFIG.12. For example, step S301can be performed by the application unit510, step S302can be performed by the encryption unit520, and step S303can be performed by the transmission unit530. Relevant descriptions can be referred to the foregoing contents, and are not repeated here. FIG.14is a schematic flowchart of another digital artwork management method provided by some embodiments of the present disclosure. The digital artwork management method can be used for the first server02illustrated inFIG.2. As illustrated inFIG.14, the digital artwork management method includes following steps. Step S401: receiving an application from a requesting device and assigning a content identifier for an original file. Step S402: storing a content key ciphertext that is received. Step S403: decrypting the content key ciphertext so as to obtain a content key, and decrypting an encrypted file, which is acquired, by using the content key so as to obtain the original file. Step S404: reducing the resolution of the original file to generate a preview image, and transmitting the preview image for storage. For example, in step S401, the DRM unit021in the first server02receives an application from a requesting device (for example, the mobile phone04) and assigns the content identifier for the original file. The content identifier is in one-to-one correspondence with the original file. The DRM unit021transmits the assigned content identifier to the mobile phone04. For example, in step S402, after receiving the content key ciphertext, the DRM unit021stores the content key ciphertext in the content key database024. For example, in step S403, the DRM unit021decrypts the content key ciphertext by using the service private key stored in the DRM unit021, so as to obtain the content key. The DRM unit021acquires the corresponding encrypted file from the second server03according to the content identifier, and decrypts the encrypted file by using the content key, so as to obtain the original file. For example, in step S404, the original file is, for example, a digital painting, and the DRM unit021reduces the resolution of the digital painting to generate a preview image, and transmits the preview image to the second server03for storage, thereby facilitating the user who needs to purchase the digital painting to browse through the painted screen01. For example, a non-erasable visible watermark can also be embedded in the preview image. Because the preview image has a lower resolution and contains the non-erasable visible watermark, the use value is greatly reduced. Although the user can acquire the preview image from the second server03, the leakage of the original file can be avoided. At least one embodiment of the present disclosure further provides a digital file management system, and the system can achieve high security where one device corresponds to one key, thereby preventing unauthorized devices from acquiring digital files (for example, digital artwork files). 
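Returning briefly to step S404 above, the snippet below is a purely illustrative sketch of generating a low-resolution preview with a visible watermark, assuming the third-party Pillow imaging library; the target resolution, watermark text, and function name are arbitrary choices rather than requirements of the disclosure.

```python
# Illustrative only: generating a low-resolution preview with a visible
# watermark (step S404), assuming the Pillow imaging library.
from PIL import Image, ImageDraw

def make_preview(original_path: str, preview_path: str,
                 max_size=(640, 400), watermark="PREVIEW - NOT FOR RESALE"):
    img = Image.open(original_path)
    img.thumbnail(max_size)            # reduce the resolution of the painting
    draw = ImageDraw.Draw(img)
    # Stamp a simple visible watermark; a production system would typically
    # use a harder-to-remove overlay than plain text.
    draw.text((10, img.height - 20), watermark, fill=(255, 255, 255))
    img.save(preview_path)

# make_preview("painting.png", "painting_preview.png")
```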
In addition, some embodiments of the present disclosure also solve the problem of high coupling degree between transaction services and license services in the traditional digital copyright management system, thereby improving the maintainability of the system, and implementing the efficient distribution of digital files (for example, digital artwork files). For example, in some embodiments, the digital file management system includes the digital artwork display device10illustrated inFIG.3and the electronic device20illustrated inFIG.8, and in some embodiments, the digital file management system can further include the electronic device50illustrated inFIG.12. For example, the system service provider of the digital file management system can provide a corresponding software for digital painting copyright owners and digital painting copyright purchasers to download, install, and use, and provide a corresponding service platform (server) for digital painting copyright owners and digital painting copyright purchasers to connect and use. In this way, the digital painting copyright owners can download, install, and use the software according to the instructions of the system service provider, or directly run the corresponding executable codes in a browser, so as to realize the electronic device50. The digital painting copyright purchasers can implement the digital artwork display device10through a dedicated digital artwork display device (for example, a painted screen), or can download, install, and use the software in a general electronic device (for example, a tablet computer) according to the instructions of the system service provider, or directly run the corresponding executable codes in a browser so as to realize the digital artwork display device10. For the technical effects and detailed description of the digital file management system, reference can be made to the descriptions of the digital artwork display device and the electronic device in the foregoing, which are not repeated here. For example, the digital file management system is applied in the scenario illustrated inFIG.2, the digital artwork display device10can be implemented as the painted screen01, the electronic device20can be implemented as the first server02, and the electronic device50can be implemented as the mobile phone04. The operation flow of the digital file management system is described below in conjunction with the application scenario illustrated inFIG.2. First, a digital painting is encrypted and uploaded. The mobile phone04acquires the digital painting, connects to the internet, for example, logs in to a digital painting transaction platform, and applies to the DRM unit021of the first server02for a content identifier for the original file (for example, the digital painting). The mobile phone04generates a content key, encrypts and encapsulates the digital painting by using the content key so as to obtain an encrypted file, and encrypts the content key by using a service public key so as to obtain a content key ciphertext (for example, the aforementioned second content key ciphertext). The encrypted file includes the content identifier and an original file ciphertext corresponding to the digital painting. The mobile phone04transmits the encrypted file to the second server03(for example, a public cloud) for storage, and for example, the content identifier can be transmitted to the second server03for storage together with the encrypted file, so as to be used to retrieve and locate the encrypted file. 
In addition, the mobile phone04transmits the content key ciphertext to the first server02for storage, and for example, the content identifier can be transmitted to the first server02together with the content key ciphertext for storage, so as to be used to retrieve and locate the content key ciphertext. Second, the management of the digital painting is performed. After receiving the content key ciphertext and the content identifier sent by the mobile phone04, the DRM unit021stores the content key ciphertext in the content key database024. The DRM unit021decrypts the content key ciphertext by using a service private key so as to obtain the content key. The DRM unit021acquires the encrypted file from the second server03according to the content identifier and uses the content key to decrypt the encrypted file, thereby obtaining the digital painting. The DRM unit021reduces the resolution of the digital painting and embeds a visible watermark, so as to generate a preview image, and then transmits the preview image to the second server03for storage. The second server03can display the preview image to the public through, for example, a webpage, etc., so that the public can browse the preview image and choose whether to purchase or not. In addition, the registration of the painted screen01is performed. The painted screen01connects to the internet and applies to the DRM unit021for a device identifier and a device public-private key pair. After receiving an application of the painted screen01, the DRM unit021assigns the device identifier and the device public-private key pair for the painted screen01, and transmits the device identifier and the device private key of the device public-private key pair to the painted screen01, whereby these data are stored in the painted screen01for later use. The device public key of the device public-private key pair is stored in the first server02(for example, the license service unit022of the first server02). Then, the purchase and acquisition of the digital painting are performed. For example, the user browses the preview image of the digital painting stored in the second server03through the painted screen01, selects the digital painting to be purchased, and initiates a transaction request to the transaction management unit023of the first server02, and then performs a payment operation. The corresponding encrypted file can be acquired from the second server03by using the content identifier after the payment operation is completed, and for example, the transaction request includes the content identifier of the digital painting to be purchased and the device identifier of the painted screen01. After receiving the fee paid by the painted screen01, the transaction management unit023generates a transaction credential and sends the transaction credential to the painted screen01. The transaction credential includes the device identifier of the painted screen01, and further includes the content identifier corresponding to the purchased digital painting, the uses-permission, and a transaction credential digital signature. The transaction credential digital signature is generated by using a transaction credential private key. The painted screen01can confirm the received transaction credential through the device identifier and the content identifier, and then send the received transaction credential to the license service unit022of the first server02to request a use license. 
After verifying the transaction credential digital signature by using a transaction credential public key, the license service unit022acquires the corresponding content key ciphertext (for example, the aforementioned second content key ciphertext) from the content key database024according to the content identifier, decrypts the content key ciphertext by using the service private key so as to obtain the content key, and then encrypts the content key by using the device public key corresponding to the painted screen01to obtain a new content key ciphertext (for example, the aforementioned first content key ciphertext), so as to generate the use license. The use license includes the device identifier of the painted screen01, the content identifier corresponding to the purchased digital painting, the uses-permission, the new content key ciphertext, and a license digital signature. The license digital signature is generated by using a license private key. Finally, the purchased digital painting is displayed. After receiving the use license, the painted screen01verifies the license digital signature by using a license public key, and can confirm the received use license through the device identifier and the content identifier, and then decrypts the content key ciphertext in the use license by using the device private key, so as to obtain the content key. The painted screen01decrypts the obtained encrypted file by using the content key, thereby obtaining the digital painting. Therefore, the painted screen01completes the purchase and acquisition of the digital painting, the digital painting can be used by the painted screen01, for example, for display, etc. The usage follows certain terms included in the use license, such as deadline, method, authority, etc. It should be noted that, in some embodiments of the present disclosure, the operation flow of the digital file management system is not limited to the above-described method, and can also be other applicable methods, which can include more or fewer operation steps, and can be determined according to the actual needs. The following statements should be noted. (1) The accompanying drawings involve only the structure(s) in connection with the embodiment(s) of the present disclosure, and other structure(s) can be referred to common design(s). (2) In case of no conflict, features in one embodiment or in different embodiments can be combined to obtain new embodiments. What have been described above are only specific implementations of the present disclosure, the protection scope of the present disclosure is not limited thereto, and the protection scope of the present disclosure should be based on the protection scope of the claims. | 85,577 |
11861022 | DETAILED DESCRIPTION Reference is made in detail to embodiments of the invention, which are illustrated in the accompanying drawings. The same reference numbers may be used throughout the drawings to refer to the same or like parts, components, or operations. The present invention will be described with respect to particular embodiments and with reference to certain drawings, but the invention is not limited thereto and is only limited by the claims. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Use of ordinal terms such as “first”, “second”, “third”, etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed, but are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term) to distinguish the claim elements. It will be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent.” etc.) Refer toFIG.1. The electronic apparatus10includes the host device (also referred to as a host side)110, the flash controller130and the flash device150, and the flash controller130and the flash device150may be collectively referred to as a device side. The electronic apparatus10may be equipped with a Personal Computer (PC), a laptop PC, a tablet PC, a mobile phone, a digital camera, a digital recorder, or other consumer electronic products. The host side110and the host interface (I/F)131of the flash controller130may communicate with each other by Universal Flash Storage (UFS). Although the following embodiments describe the functionalities of Host Performance Booster (HPB) defined in the UFS specification, those artisans may apply the invention in similar functionalities defined in other specifications, and the invention should not be limited thereto. The control logic139of the flash controller130and the flash device150may communicate with each other by a Double Data Rate (DDR) protocol, such as Open NAND Flash Interface (ONFI), DDR Toggle, or others. The flash controller130includes a processing unit134and the processing unit134may be implemented in numerous ways, such as with general-purpose hardware (e.g., a microcontroller unit, a single processor, multiple processors or graphics processing units capable of parallel computations, or others) that is programmed using firmware and/or software instructions to perform the functions recited herein. The processing unit134receives HPB commands, such as HPB READ, HPB READ BUFFER, HPB WRITE BUFFER commands, through the host I/F131, schedules and executes these commands. 
The flash controller130includes the Random Access Memory (RAM)136and the RAM136may be implemented in a Dynamic Random Access Memory (DRAM), a Static Random Access Memory (SRAM), or the combination thereof, for allocating space as a data buffer. The RAM136stores necessary data in execution, such as variables, data tables, data abstracts, and so on. The flash controller130includes the Read Only Memory (ROM) for storing program code that is required to be executed in the system booting. The control logic139includes a NAND flash controller (NFC) to provide functions that are required to access to the flash device150, such as a command sequencer, a Low Density Parity Check (LDPC) encoder/decoder, etc. The flash controller130includes the coder-decoder (Codec)138being dedicated hardware. The Codec138includes an encoding logic for encrypting raw HPB entries; and a decoding logic for decrypting the encrypted content to recover the raw HPB entries. The following paragraphs will describe the details of the structures, the functionalities, and the interactions with other components for the Codec138. The bus architecture132may be configured in the flash controller130for coupling between components to transfer data, addresses, control signals, etc., which include the host I/F131, the processing unit134, the ROM135, the RAM136, the Codec138, the control logic139, and so on. In some embodiments, the host I/F131, the processing unit134, the ROM135, the RAM136, the Codec138, the control logic139are coupled to each other by a single bus. In alternative embodiments, a high-speed bus is configured in the flash controller130for coupling the processing unit134, the Codec138and the RAM136to each other and a low-speed bus is configured for coupling the processing unit134, the Codec138, the host I/F131and the control logic139to each other. The bus includes a set of parallel physical-wires connected to two or more components of the flash controller130. The flash device150provides huge storage space typically in hundred Gigabytes (GB), or even several Terabytes (TB), for storing a wide range of user data, such as high-resolution images, audio files, video files, etc. The flash device150includes control circuits and memory arrays containing memory cells that can be configured as Single Level Cells (SLCs), Multi-Level Cells (MLCs), Triple Level Cells (TLCs), Quad-Level Cells (QLCs), or any combinations thereof. The processing unit134programs user data into a designated address (a destination address) of the flash device150and reads user data from a designated address (a source address) thereof through the control logic139. The control logic139may use several electronic signals run on physical wires including data lines, a clock signal line and control signal lines for coordinating the command, address and data transfer with the flash device150. The data lines may be used to transfer commands, addresses, read data and data to be programmed; and the control signal lines may be used to transfer control signals, such as Chip Enable (CE), Address Latch Enable (ALE), Command Latch Enable (CLE), Write Enable (WE), etc. In alternative embodiments, refer toFIG.2. The electronic apparatus20includes the modified flash controller230, which does not include the Codec138as shown inFIG.1. 
In the flash controller230, the functions of the Codec138may be replaced by software or firmware instructions, and the processing unit134when loading and executing these instructions encrypts raw HPB entries and decrypts the encrypted content to recover the raw HPB entries. In other words,FIG.1encloses the hardware solutions whileFIG.2encloses software solutions for the encryption and the decryption. Refer toFIG.3. The I/F151of the flash device150may include four I/O channels (hereinafter referred to as channels) CH #0to CH #3and each is connected to four NAND flash units, for example, the channel CH #0is connected to the NAND flash units153#0,153#4,153#8and153#12. Each NAND flash unit can be packaged in an independent die. The control logic139may issue one of the CE signals CE #0to CE #3through the I/F151to activate the NAND flash units153#0to153#3, the NAND flash units153#4to153#7, the NAND flash units153#8to153#11, or the NAND flash units153#12to153#15, and read data from or program data into the activated NAND flash units in parallel. Since continuous data is distributed to be stored in flash units connected to multiple channels, the flash controller130uses a logical-to-physical (L2P) mapping table to record mapping relationships between logical addresses (managed by the host device110) and physical addresses (managed by the flash controller130) for user-data segments. The L2P table may be referred to as the host-to-flash (H2F) mapping table. The H2F mapping table includes multiple records arranged in the order of logical addresses and each record stores information indicating which physical address that user data of the corresponding logical address is physically stored in the flash module150. However, because the RAM136cannot provide enough space to store the whole H2F table for the processing unit134, the whole H2F table is divided into multiple Tables 1 (also referred to as T1 tables) and the T1 tables are stored in the non-volatile flash device150, so that only necessary T1 table or tables are read from the flash device150and stored in the RAM136for fast look-up when data read operations are performed in the future. Refer toFIG.4. The whole H2F table is divided into T1 tables430#0˜430/#15. The processing unit134further maintains a Table 2 (also referred to as a T2 table)410, which contains multiple records arranged in the order of the logical addresses. Each record stores information indicating which physical address that the corresponding T1 table for a designated logical address range is physically stored in. For example, the T1 table430#0associated with the 0thto the 4095thlogical block addresses (LBAs) is stored in the 0thphysical page of a designated physical block of a designated LUN (the letter “Z” represents the number of the designated physical block and the designated LUN), the T1 table430#1associated with the 4096thto the 8191thLBAs is stored in the 1stphysical page of the designated physical block of the designated LUN, and the remaining can be deduced by analogy. AlthoughFIG.4shows 16 T1 tables only, those artisans may modify the design to put more T1 tables depending on the capacity of the flash device150, and the invention should not be limited thereto. Space required by each T1 table may be 4 KB, 8 KB, 16 KB, or others. Each T1 table stores physical-address information corresponding to LBAs in the order of LBA, and each LBA corresponds to a fixed-length physical storage space, such as 4 KB. Refer toFIG.5. 
For example, the T1 table430#0stores physical-address information from LBA #0to LBA #4095sequentially. The physical-address information may be represented in four bytes: the two least-significant bytes530-0record a physical block number and the two most-significant bytes530-1record a physical page number. For example, the physical-address information530corresponding to LBA #2points to the physical page510of the physical block310#1. The bytes530-0record the number of the physical block310#1and the bytes530-1record the number of the physical page510. Refer toFIG.6. In the HPB specification, the host side110allocates space of its system memory as an HPB cache600for temporarily storing information of the H2F mapping table maintained by the device side. The HPB cache600stores multiple HPB entries received from the device side and each HPB entry records physical-address information corresponding to one LBA. Subsequently, the host side can issue read commands with the HPB entries to read user data of the corresponding LBAs. The device side can directly drive the control logic139to read user data of the designated LBAs according to the information of HPB entries, without spending time and computing resources to read the H2F mapping table from the flash device150and perform the L2P translation as before. The establishment and use of the HPB cache600may be divided into three stages: Stage I (HPB initiation): The host side110requests the device side (specifically, the flash controller130) to retrieve device capabilities and configure the HPB feature in the device side, including the HPB mode, and so on. Stage II (L2P cache management): The host side110allocates space of its system memory as the HPB cache600for storing the HPB entries. The host side110issues an HPB READ BUFFER command based on the configured HPB mode to the flash controller130to load the designated HPB entries from the device side at needed time points. Subsequently, the host side110stores the HPB entries in one or more Sub-Regions of the HPB cache600. In the HPB specification, the LBAs of each logic unit (for example, partition) are divided into multiple HPB Regions, and each HPB Region is further subdivided into multiple Sub-Regions. For example, the HPB cache600may include "N" HPB Regions, and each HPB Region may include "L" Sub-Regions for storing the HPB entries for an LBA range, where "N" and "L" are variables being positive integers. The partition range of the HPB cache600is shown in Table 1:

TABLE 1
HPB Region #0      HPB Sub-Region #0
                   HPB Sub-Region #1
                   . . .
                   HPB Sub-Region #L-1
. . .              . . .
HPB Region #N-1    HPB Sub-Region #0
                   HPB Sub-Region #1
                   . . .
                   HPB Sub-Region #L-1

In some embodiments, each Region and each Sub-Region may be set to have space of 32 MB individually, that is, each Region contains only one Sub-Region. In alternative embodiments, each Region may be set to have space of 32 MB, and each Sub-Region may be set to have space of 4 MB, 8 MB or 16 MB, that is, each Region contains eight, four, or two Sub-Regions. Stage III (HPB read command): The host side110searches the HPB entries of the HPB cache600to obtain physical block addresses (PBAs) corresponding to the user data of LBAs that are attempted to be read. Then, the host side110issues an HPB READ command, which includes the HPB entries in addition to the LBA, the TRANSFER LENGTH, etc., to the flash controller130to obtain the designated user data from the device side. However, the PBA information of the HPB entries is typically provided in plain code.
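To make the exposure concrete, the following sketch (illustrative only; the field widths follow the 4-byte layout of FIG. 5 described above, and the example value is hypothetical) shows how trivially a plain-code physical-address field can be decoded into a physical block number and a physical page number.

```python
# Illustrative sketch: a 4-byte plain-code physical-address field, as described
# for FIG. 5, directly reveals the physical block and page numbers.
def decode_plain_physical_address(info: int):
    block_number = info & 0xFFFF          # two least-significant bytes (530-0)
    page_number = (info >> 16) & 0xFFFF   # two most-significant bytes (530-1)
    return block_number, page_number

# Example with a hypothetical value: any observer of the plain-code entry can
# recover where the user data is physically placed.
print(decode_plain_physical_address(0x0003_0001))  # -> (1, 3): block #1, page #3
```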
The illegal persons may spy on the HPB information with the host side110to know the internal management performed in the device side and steal data (e.g. the system or management data) stored in the device side in an abnormal way. The HPB specification defines two modes for obtaining the HPB entries: the host control mode; and the device control mode. The host control mode is triggered by the host side110to determine which HPB Sub-Regions need to be stored in the HPB cache600while the device control mode is triggered by the flash controller to determine which HPB Sub-Regions need to be stored in the HPB cache600. Those artisans realize that the embodiments of the invention can be applied in the two modes, or the similar. Refer toFIG.7showing the diagram of the operation sequence applied in the host control mode. The detailed description is as follows: Operation711: The host side110identifies which Sub-Regions are to be activated. Operation713: The host side110issues an HPB READ BUFFER command to the flash controller130to request the flash controller130for the HPB entries of the identified Sub-Region. The HPB READ BUFFER command may contain 10 bytes, in which the 0thbyte records the operation code “F9h”, the 2ndand the 3rdbytes record information regarding the HPB Region to be activated, and the 4thand the 5thbytes record information regarding the Sub-Region to be activated. Operation715: The flash controller130reads the designated portion of the H2F mapping table from the flash device150, and arranges the read mapping information into HPB entries. In order to prevent the PBA information in the HPB entries from being snooped by illegal persons to know the internal management of data storage, the flash controller130encrypts the content of HPB entries. The following paragraphs will explain the reading operation in more detail. Operation717: The flash controller130delivers a DATA IN UFS Protocol Information Unit (UPIU) to the host side110, which includes the encrypted content of the HPB entries of the identified Sub-Regions, rather than plain code. Operation719: The host side110receives and stores the encrypted HPB entries in the activated Sub-Regions of the HPB cache600. Operation731: The host side110identifies which Regions to be deactivated. It is to be noted herein that in the HPB specification, the basic unit for activation is Sub-Region while the basic unit for de-activation is Region. The host side110may determine the activated Sub-Regions and the de-activated Region according to the requirements of its algorithm. Operation733: The host side110issues an HPB WRITE BUFFER command to the flash controller130to request the flash controller130for de-activating the identified Region. The HPB WRITE BUFFER command may contain 10 bytes, in which the 0thbyte records the operation code “FAh”, and the 2ndand the 3rdbytes record information regarding the HPB Region to be de-activated. Operation735: The flash controller130de-activates the HPB Region. For example, after delivering the HPB entries to the host side110, the flash controller130may perform an optimized operation on the read process of the subsequent read commands issued by the host side110for the activated Sub-Regions. Then, after receiving the notification of the de-activation of the Region including the previously activated Sub-Regions, the flash controller130may stop the optimized operation corresponding to the de-activated Region. 
Operation751: After executing one or more host write commands, or host erase commands, or performing a background operation (such as a garbage collection, a wear leveling, a read reclaim, or a read reflash process) completely, the flash controller130may update the content of H2F mapping table, which includes the content of activated Sub-Regions. Operation753: The flash controller130sends a RESPONSE UPIU to the host side110, which suggests updating the HPB entries of the modified Sub-Regions to the host side110. Operations755and757: The host side110issues an HPB READ BUFFER command to the flash controller130to request the flash controller130for the updated HPB entries of the recommended Sub-Regions. Operation771: The flash controller130reads the designated portion of the H2F mapping table from the flash device150, and arranges the read mapping information into HPB entries. Similarly, the flash controller130encrypts the content of HPB entries. The following paragraphs will explain the reading operation in more detail. Operation773: The flash controller110delivers a DATA IN UPIU to the host side110, which includes the encrypted content of the HPB entries of the updated Sub-Regions, rather than plain code. Operation775: The host side110overwrites the content of corresponding Sub-Region of the HPB cache600with the encrypted HPB entries received from the flash controller110. Refer toFIG.8showing the diagram of the operation sequence applied in the device control mode. The detailed description is as follows: Operation811: The flash controller130identifies which Sub-Regions are to be activated, and/or which Regions are to be de-activated. Operation813: The flash controller130sends a RESPONSE UPIU to the host side110, which suggests activating the aforementioned Sub-Regions and/or de-activating the aforementioned Regions to the host side110. Operation815: If required, the host side110discards the HPB entries of the de-activated Regions from the system memory. Operation831: If required, the host side110issues an HPB READ BUFFER command to the flash controller130to request the flash controller130for the HPB entries of the suggested Sub-Regions. Operation833: The flash controller130reads the designated portion of the H2F mapping table from the flash device150, and arranges the read mapping information into HPB entries. Similarly, the flash controller130encrypts the content of HPB entries. The following paragraphs will explain the reading operation in more detail. Operation835: The flash controller110delivers a DATA IN UPIU to the host side110, which includes the encrypted content of the HPB entries of the corresponding Sub-Regions, rather than plain code. Operation837: The host side110receives and stores the encrypted HPB entries in the activated Sub-Regions of the HPB cache600. Technical details regarding the reading operations715,771or833may refer toFIG.9showing a flowchart of a method for generating HPB entries. The method is performed by the processing unit134when loading and executing relevant software or firmware program code. Further description is as follows: Step S910: The aforementioned HPB READ BUFFER command is received from the host side110through the host I/F131, which includes information regarding Sub-Regions to be activated. The HPB READ BUFFER command is used to request the flash controller130for PBAs corresponding to an LBA range. Step S920: The T1 and T2 tables corresponding to the activated Sub-Region are read from the flash device150through the control logic139. 
Step S930: HPB entries are generated according to the content of T1 and T2 tables. Those artisans will realize that the length (for example, 8-byte) of each HPB entry defined in the HPB specification may be longer than the length (for example, 4-byte) of the physical-address information associated with each LBA recorded in the T1 table. Thus, in some embodiments, in addition to the physical-address information associated with each LBA (that is, the PBA information for this LBA recorded in the T1 table), the processing unit134may fill the remaining space of each HPB entry with dummy values. In alternative embodiments, in addition to the physical-address information associated with each LBA, the processing unit134may fill the remaining space of each HPB entry with other information, depending on different system requirements, to accelerate the future HPB read operations. In some embodiments, in each 8-byte HPB entry, the processing unit134may fill in the corresponding PBA information of the T1 table in 4 bytes and the corresponding PBA information of the T2 table in 4 bytes. The PBA information of the T1 table indicates where data of a specific LBA is actually stored in the flash device150. The PBA information of the T2 table will be used by the device side to inspect whether this HPB entry is invalid. If the PBA information of the T2 table included in the HPB entry obtained from a future HPB READ command does not match the address that the corresponding T1 table is actually stored in the flash device150, then the processing unit134determines that this HPB entry is invalid. Exemplary HPB entries are illustrated in Table 2:

TABLE 2
HPB Entry    PBA information of    PBA information of
Number       T2 table (4-byte)     T1 table (4-byte)
0            0x00004030            0x0000A000
1            0x00004030            0x0000A001
2            0x00004030            0x0000A002
3            0x00004030            0x0000A003
4            0x00004030            0x0000A004
5            0x00004030            0x0000A005
6            0x00004030            0x0000B009
7            0x00004030            0x0000A007
8            0x00004030            0x0000A008
9            0x00004030            0x0000A009
10           0x00004030            0x0000A00A
11           0x00004030            0x0000B00A
12           0x00004030            0x0000A00C
. . .        . . .                 . . .

In alternative embodiments, in each 8-byte HPB entry, the processing unit134may fill in the corresponding PBA information of the T1 table in 28 bits, the corresponding PBA information of the T2 table in 24 bits, and a continuous length of 12 bits. The continuous length indicates how many LBAs of user data after this LBA are stored in continuous physical addresses in the flash device150. Therefore, one HPB entry can describe information regarding multiple consecutive PBAs in the T1 table. Exemplary HPB entries are illustrated in Table 3:

TABLE 3
HPB Entry    Continuous Length    PBA information of    PBA information of
Number       (12-bit)             T2 table (24-bit)     T1 table (28-bit)
0            0x5                  0x004030              0x000A000
1            0x4                  0x004030              0x000A001
2            0x3                  0x004030              0x000A002
3            0x2                  0x004030              0x000A003
4            0x1                  0x004030              0x000A004
5            0x0                  0x004030              0x000A005
6            0x0                  0x004030              0x000B009
7            0x3                  0x004030              0x000A007
8            0x2                  0x004030              0x000A008
9            0x1                  0x004030              0x000A009
10           0x0                  0x004030              0x000A00A
11           0x0                  0x004030              0x000B00A
12           0x3                  0x004030              0x000A00C
13           0x2                  0x004030              0x000A00D
14           0x1                  0x004030              0x000A00E
15           0x0                  0x004030              0x000A00F
. . .        . . .                . . .                 . . .

Suppose that the 0thHPB entry of Table 3 is associated with LBA "0x001000": The 0thHPB entry indicates that five LBAs of user data after LBA "0x001000" are stored in continuous physical addresses in the flash device150. Specifically, the user data of LBAs "0x001000" to "0x001005" are stored in PBA "0x00A000" to "0x00A005" in the flash device150, respectively. The processing unit134will read user data of six LBAs "0x001000" to "0x001005" according to the information carried in the 0thHPB entry.
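A minimal sketch of this 28-bit/24-bit/12-bit packing is given below, purely for illustration: the disclosure specifies only the field widths, so the exact position of each field inside the 8-byte entry is an assumption here. The example values reproduce the 0th entry of Table 3.

```python
# Illustrative packing/unpacking of an 8-byte HPB entry that carries a 28-bit
# T1 PBA, a 24-bit T2 PBA, and a 12-bit continuous length.  The bit layout
# chosen here (T1 PBA in the low bits) is an assumption for illustration.
def pack_hpb_entry(t1_pba: int, t2_pba: int, cont_len: int) -> int:
    assert t1_pba < (1 << 28) and t2_pba < (1 << 24) and cont_len < (1 << 12)
    return (cont_len << 52) | (t2_pba << 28) | t1_pba

def unpack_hpb_entry(entry: int):
    t1_pba = entry & ((1 << 28) - 1)
    t2_pba = (entry >> 28) & ((1 << 24) - 1)
    cont_len = (entry >> 52) & ((1 << 12) - 1)
    return t1_pba, t2_pba, cont_len

# The 0th entry of Table 3: T1 PBA 0x000A000, T2 PBA 0x004030, length 0x5,
# meaning the five LBAs after the entry's LBA are stored contiguously.
entry = pack_hpb_entry(0x000A000, 0x004030, 0x5)
assert entry < (1 << 64)                      # fits in the 8-byte HPB entry
assert unpack_hpb_entry(entry) == (0x000A000, 0x004030, 0x5)
```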
If an HPB READ command indicates that user data of LBA "0x001000" is attempted to be read, and the TRANSFER LENGTH is equal to or shorter than "6", then the processing unit134will not need to read the corresponding portion of the H2F mapping table from the flash device150. In alternative embodiments, in each 8-byte HPB entry, the processing unit134may fill in the corresponding PBA information of the T1 table in 28 bits, the corresponding PBA information of the T2 table in 24 bits, and a continuous bit table of 12 bits. The continuous bit table is used to describe the PBA continuity of multiple LBAs after this LBA (such as 12 consecutive LBAs). For example, the 12 bits correspond to 12 consecutive LBAs, respectively. Exemplary HPB entries are illustrated in Table 4:

TABLE 4
HPB Entry    Continuous Bit Table    PBA information of    PBA information of
Number       (12-bit)                T2 table (24-bit)     T1 table (28-bit)
0            0xBDF (101111011111)    0x004030              0x000A000
1            0xDEF (110111101111)    0x004030              0x000A001
2            0xEF7 (111011110111)    0x004030              0x000A002
3            0xF7B (111101111011)    0x004030              0x000A003
4            . . .                   0x004030              0x000A004
. . .        . . .                   . . .                 . . .

Suppose that the 0thHPB entry of Table 4 is associated with LBA "0x001000": The continuous bit table of the 0thHPB entry indicates the PBA continuity of LBAs "0x001001" to "0x00100C". Ideally, the user data of LBAs "0x001001" to "0x00100C" are stored in the PBA "0x000A001" to "0x000A00C" in the flash device150, respectively. The value of each bit being "0" means that the user data of the corresponding LBA is not stored in the ideal PBA, while the value of each bit being "1" means that the user data of the corresponding LBA is stored in the ideal PBA. Thus, in light of the 0thHPB entry, the processing unit134can predict the PBAs for which the continuous bits are "1" and read the user data from the predicted PBAs of the flash device150, but ignores the PBAs for which the continuous bits are "0". For example, the host device110issues the HPB READ command including the parameters carrying the 0thHPB entry and the TRANSFER LENGTH being "9" to request the flash controller130for the user data of LBAs "0x001000" to "0x001008". The processing unit134obtains the continuous bit table from the 0thHPB entry in the HPB READ command, and predicts the PBAs in which the user data of LBAs "0x001000" to "0x001005" and LBAs "0x001007" to "0x001008" are physically stored after decoding the continuous bit table, without loading the H2F mapping table from the flash device150. In cases where there are only a few breakpoints, the number of times of loading specific PBA information of the T1 table from the flash device150would be reduced. Step S940: The raw HPB entries are stored in the RAM136. Refer toFIG.10. The RAM136may allocate continuous memory address space for the raw-entry area1010. The processing unit134may store the raw HPB entries in the raw-entry area1010in the order of LBA. Step S950: The HPB entries are encrypted and the encrypted HPB entries are stored in the RAM136. Refer toFIG.10. The RAM136may allocate continuous memory address space for the encrypted-entry area1020. With the architecture as shown inFIG.1, the processing unit134may set a register of the Codec138to drive the Codec138to read the aforementioned content of the HPB entries from the raw-entry area1010of the RAM136, encrypt the HPB entries according to set parameters, and store the encrypted HPB entries in the encrypted-entry area1020of the RAM136.
After completing the encryption on the HPB entries, the Codec138issues an interrupt to the processing unit134to notify the completion of the encryption, so that the processing unit134could continue to process the encrypted HPB entries. Or, with the architecture as shown inFIG.2, the processing unit134may load and execute program code of an encryption module to complete the aforementioned operations. The exemplary feasible encryption algorithms are listed below: In some embodiments, the processing unit134or the Codec138left or right circular shifts the content of an HPB entry by n bits, where n is an arbitrary integer ranging from 1 to 63. In alternative embodiments, the processing unit134or the Codec138adds a preset key value to the content of an HPB entry. In still another embodiment, the processing unit134or the Codec138XORs the content of an HPB entry with a preset key value. In still another embodiment, the processing unit134or the Codec138randomizes the content of an HPB entry in a preset rule. For example, the preset rule states that the ithbit of the HPB entry is swapped with the (63-i)thbit of the HPB entry, for i=0 to 31. To improve the data security, the HPB entries may be grouped into several groups according to their LBAs, and different encryption algorithms with relevant encryption parameters are applied to the groups of HPB entries, respectively. Exemplary grouping rules for the HPB entries are as follows: In some embodiments, the LBA associated with one HPB entry may be divided by a value first, and the HPB entry is grouped according to the quotient. Suppose the value is set to “100”: The first group includes the HPB entries of LBA #0-99, the second group includes the HPB entries of LBA #100˜199, and so on. In alternative embodiments, the LBA associated with one HPB entry may be divided by a value first, and the HPB entry is grouped according to the remainder. Suppose the value is set to “100”: The first group includes the HPB entries such as LBA #0, LBA #100, LBA #200, etc., the second group includes the HPB entries such as LBA #1, LBA #101, LBA #201, etc., and so on. In some embodiments, different groups of HPB entries may be applied by the same encryption algorithm with different encryption parameters, respectively. For example, each HPB entry content of the first group is left circular shifted by 1 bit, each HPB content of the second group is right circular shifted by 2 bits, each HPB entry content of the third group is left circular shifted by 3 bits, and so on. Or, each HPB entry content of the first group is added to or XORed with a first value, each HPB content of the second group is added to or XORed with a second value, each HPB entry content of the third group is added to or XORed with a third value, and so on. Or, each HPB entry content of the first group is randomized in a first rule, each HPB content of the second group is randomized in a second rule, each HPB entry content of the third group is randomized in a third rule, and so on. In alternative embodiments, different groups of HPB entries may be applied by different encryption algorithms with relevant encryption parameters, respectively. For example, each HPB entry content of the first group is left circular shifted by n bits, each HPB content of the second group is XORed with a preset key value, each HPB entry content of the third group is added to a preset key value, each HPB entry content of the fourth group is randomized in a preset rule, and so on. 
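Purely as an illustration of the lightweight transforms and LBA grouping just described, the sketch below implements 64-bit circular shifts, key addition, XOR with a key, and the i/(63-i) bit-swap randomization, and selects a per-group scheme by the remainder of the LBA. The group count, key values, and the particular assignment of algorithms to groups are hypothetical examples, not values taken from the disclosure.

```python
# Illustrative sketch of the per-group HPB-entry encryption described above.
# All values (keys, shift amounts, group count) are hypothetical examples.
MASK64 = (1 << 64) - 1

def rol64(x, n):                      # left circular shift of a 64-bit entry
    return ((x << n) | (x >> (64 - n))) & MASK64

def add_key(x, key):                  # add a preset key value (modulo 2**64)
    return (x + key) & MASK64

def xor_key(x, key):                  # XOR with a preset key value
    return x ^ key

def swap_bits(x):                     # swap bit i with bit (63 - i) for i = 0..31
    y = 0
    for i in range(32):
        lo = (x >> i) & 1
        hi = (x >> (63 - i)) & 1
        y |= (lo << (63 - i)) | (hi << i)
    return y

# Hypothetical group-and-encryption mapping: group = LBA % 4.
GROUP_SCHEMES = {
    0: lambda e: rol64(e, 3),
    1: lambda e: add_key(e, 0x0123_4567_89AB_CDEF),
    2: lambda e: xor_key(e, 0x0F0F_0F0F_0F0F_0F0F),
    3: swap_bits,
}

def encrypt_hpb_entry(lba: int, raw_entry: int) -> int:
    return GROUP_SCHEMES[lba % len(GROUP_SCHEMES)](raw_entry)
```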
In some embodiments, the processing unit134may store a group-and-encryption mapping table in the RAM136, which includes multiple configuration records. Each configuration record stores information indicating that a particular group of HPB entries is encrypted by a specific encryption algorithm with a specific encryption parameter. In alternative embodiments, similar information to that of the group-and-encryption mapping table may be embedded in the program logic executed by the processing unit134, and the invention should not be limited thereto. Step S960: The encrypted HPB entries are read from the encrypted-entry area1020of the RAM136, and a DATA IN UPIU is delivered to the host side110, which includes the encrypted HPB entries. Since the content of HPB entries is encrypted, illegal persons cannot comprehend the content of HPB entries through the host side110and know the internal management of data storage on the device side accordingly, so that sensitive data is prevented from being obtained by illegal persons in abnormal ways. Although the HPB entries are encrypted, as long as the host side110carries the encrypted HPB entries in the HPB READ command, the desired user data can still be obtained from the device side. Refer toFIG.11showing the diagram of the operation sequence for HPB data reads. The detailed description is as follows: Operation1110: The host side110obtains, from the HPB cache600, the HPB entries associated with the user data of LBAs that is attempted to be read. It is to be noted that the content of HPB entries is encrypted. Operation1120: The host side110issues HPB READ commands to the flash controller130to request the flash controller130for the user data of designated LBAs, and each HPB READ command includes information such as an LBA, a TRANSFER LENGTH, the corresponding HPB entry, etc. Operation1130: The flash controller130decrypts the content of HPB entries, and reads the requested user data from the flash device150according to the PBA information of the T1 table (if required, plus the continuous length or the continuous bit table) of the decrypted HPB entries. Operation1140: The flash controller130delivers a DATA IN UPIU to the host side110, which includes the requested user data. Operation1150: The host side110processes the user data according to the requirements of, for example, the Operating System (OS), the driver, the application, etc. Technical details regarding the read operation1130may refer toFIG.12showing a flowchart of a method for reading user data. The method is performed by the processing unit134when loading and executing relevant software or firmware program code. Further description is as follows: Step S1210: An HPB READ command including information such as an LBA, a TRANSFER LENGTH, an HPB entry, etc. is received from the host side110through the host I/F131. Refer toFIG.10. The RAM136may allocate continuous memory address space for the received-entry area1030for storing the received HPB entries. Step S1220: If the raw HPB entries have undergone the aforementioned encryption in groups, then the group to which the HPB entry belongs is obtained according to the LBA in the HPB READ command. Technical details for obtaining the group to which the LBA belongs may refer to the description of step S950, and are not repeated herein for brevity. If the raw HPB entries did not undergo the aforementioned encryption in groups, this step is omitted. Step S1230: The HPB entry is decrypted by using the corresponding decryption algorithm with relevant decryption parameters.
The decryption algorithm with relevant decryption parameters is the reverse process of the encryption algorithm with relevant encryption parameters that was applied to the raw HPB entries. For example, if the encryption algorithm left circular shifts the raw HPB entry content by 2 bits, then the decryption algorithm right circular shifts the encrypted HPB entry content by 2 bits. If the encryption algorithm adds a preset value to the raw HPB entry content, then the decryption algorithm subtracts the preset value from the encrypted HPB entry content. If the encryption algorithm XORs the raw HPB entry content with a preset value, then the decryption algorithm XORs the encrypted HPB entry content with the preset value again. If the encryption algorithm randomizes the HPB entry content according to a preset rule, then the decryption algorithm de-randomizes the encrypted HPB entry content according to the preset rule. In some embodiments, if the raw HPB entries have undergone the aforementioned encryption in groups, then the processing unit134searches the group-and-encryption mapping table in the RAM136to obtain the encryption algorithm with relevant encryption parameters for the group to which the LBA belongs, and decrypts the HPB entry content by using the corresponding decryption algorithm with relevant decryption parameters. Refer toFIG.10. The RAM136may allocate continuous memory address space for the decrypted-entry area1040. With the architecture as shown inFIG.1, the processing unit134may set a register of the Codec138to drive the Codec138to read the aforementioned content of the HPB entries from the received-entry area1030of the RAM136, decrypt the HPB entries according to set parameters, and store the decrypted HPB entries in the decrypted-entry area1040of the RAM136. After completing the decryption on the HPB entries, the Codec138issues an interrupt to the processing unit134to notify the completion of the decryption, so that the processing unit134could continue to process according to the decrypted HPB entries. Or, with the architecture as shown inFIG.2, the processing unit134may load and execute program code of a decryption module to complete the aforementioned operations. Step S1240: It is determined whether the decrypted HPB entry is valid. If so, then the process proceeds to step S1250. Otherwise, the process proceeds to step S1270. If the raw HPB entry does not include information of the T2 table, then this step is ignored. The processing unit134may determine whether the PBA information of the T2 table included in the decrypted HPB entry matches the address at which the corresponding T1 table is actually stored in the flash device150. If they match, it means that this decrypted HPB entry is valid. Step S1250: The user data of the requested LBA is read from the PBA of the flash device150through the control logic139according to the PBA information of the T1 table in the decrypted HPB entry. Step S1260: One or more DATA IN UPIUs are delivered to the host side110through the host I/F131, which include the read user data. Step S1270: A RESPONSE UPIU including a reading failure is sent to the host side110through the host I/F131. In alternative embodiments, the RESPONSE UPIU includes a suggestion for updating the HPB entries of the corresponding Sub-Regions to the host side110, so that the host side110could start the aforementioned operations755and757.
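Continuing the illustrative sketch above, the following shows how a group-and-encryption mapping table (with hypothetical record contents) could drive decryption as the exact inverse of each transform; the algorithms and parameters per group are examples, not the mapping actually stored by the controller.

    # Sketch of decryption driven by a group-and-encryption mapping table.
    # The records below (group -> algorithm and parameter) are illustrative.
    MASK64 = (1 << 64) - 1

    def rotl64(value, n):
        return ((value << n) | (value >> (64 - n))) & MASK64

    def rotr64(value, n):
        return rotl64(value, 64 - n)

    # Each configuration record: group id -> (encryption algorithm, parameter).
    mapping_table = {
        0: ("rotl", 1),                       # group 0: left circular shift by 1 bit
        1: ("xor", 0x5A5A5A5A5A5A5A5A),       # group 1: XOR with a preset key value
        2: ("add", 0x00000000DEADBEEF),       # group 2: add a preset key value
    }

    def decrypt_entry(encrypted, lba, divisor=100):
        group = (lba // divisor) % len(mapping_table)
        algorithm, parameter = mapping_table[group]
        if algorithm == "rotl":               # inverse: right circular shift
            return rotr64(encrypted, parameter)
        if algorithm == "xor":                # inverse: XOR with the same key again
            return encrypted ^ parameter
        if algorithm == "add":                # inverse: subtract the key modulo 2**64
            return (encrypted - parameter) & MASK64
        raise ValueError("unknown algorithm: " + algorithm)

    # Round trip for the "add" group: LBAs 200-299 map to group 2 in this sketch.
    raw_entry = 0x1122334455667788
    encrypted_entry = (raw_entry + 0x00000000DEADBEEF) & MASK64
    assert decrypt_entry(encrypted_entry, lba=250) == raw_entry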
Some or all of the aforementioned embodiments of the method of the invention may be implemented in a computer program, such as a driver for dedicated hardware, a flash translation layer (FTL) of a storage device, or others. Other types of programs may also be suitable, as previously explained. Since the implementation of the various embodiments of the present invention into a computer program can be achieved by the skilled person using routine skills, such an implementation will not be discussed for reasons of brevity. The computer program implementing one or more embodiments of the method of the present invention may be stored on a suitable computer-readable data carrier such as a DVD, a CD-ROM, a USB stick, or a hard disk, which may be located in a network server accessible via a network such as the Internet, or any other suitable carrier. Although the embodiment has been described as having specific elements inFIGS.1to3, it should be noted that additional elements may be included to achieve better performance without departing from the spirit of the invention. Each element ofFIGS.1to3is composed of various circuits and arranged operably to perform the aforementioned operations. While the process flows described inFIGS.9and12include a number of operations that appear to occur in a specific order, it should be apparent that these processes can include more or fewer operations, which can be executed serially or in parallel (e.g., using parallel processors or a multi-threading environment). While the invention has been described by way of example and in terms of the preferred embodiments, it should be understood that the invention is not limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements. | 40,272 |
11861023 | DETAILED DESCRIPTION For the sake of brevity, conventional techniques related to making and using aspects of the invention may or may not be described in detail herein. In particular, various aspects of computing systems and specific computer programs to implement the various technical features described herein are well known. Accordingly, in the interest of brevity, many conventional implementation details are only mentioned briefly herein or are omitted entirely without providing the well-known system and/or process details. Many of the function units of the systems described in this specification have been labeled as modules. Embodiments of the invention apply to a wide variety of module implementations. For example, a module can be implemented as a hardware circuit including custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module can also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like. Modules can also be implemented in software for execution by various types of processors. An identified module of executable code can, for instance, include one or more physical or logical blocks of computer instructions which can, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together but can include disparate instructions stored in different locations which, when joined logically together, function as the module and achieve the stated purpose for the module. The various components, modules, sub-function, and the like of the systems illustrated herein are depicted separately for ease of illustration and explanation. In embodiments of the invention, the operations performed by the various components, modules, sub-functions, and the like can be distributed differently than shown without departing from the scope of the various embodiments of the invention describe herein unless it is specifically stated otherwise. For convenience, some of the technical functions and/or operations described herein are conveyed using informal expressions. For example, a processor that has data stored in its cache memory can be described as the processor “knowing” the data. Similarly, a user sending a load-data command to a processor can be described as the user “telling” the processor to load data. It is understood that any such informal expressions in this detailed description should be read to cover, and a person skilled in the relevant art would understand such informal expressions to cover, the informal expression's corresponding more formal and technical description. 
Turning now to an overview of aspects of the invention, embodiments of the invention provide computer systems, computer-implemented methods, and computer program products that receive an encrypted message and associated resource-scaling (RS) data; use RS constraints in the RS data to determine that the encrypted message does not have mandatory cryptographic processing requirements; determine cryptographic metrics derived from cryptographic operations performed by a processor system; use the RS constraints and the cryptographic metrics to determine and/or predict the non-mandatory cryptographic processing requirements for the encrypted message; use the determined and/or predicted cryptographic processing requirements to identify and/or select a matching and/or customized set of cryptographic computing resources from among a set of available cryptographic computing resources; and use the matching and/or customized set of cryptographic computing resources to perform customized cryptographic operations (e.g., customized decryption) on the encrypted message. In embodiments of the invention, the set of available cryptographic computing resources can be selected to align with the various types of cryptographic processing requirements that are expected for the various types of encrypted message that can be transmitted. In accordance with aspects of the invention, the cryptographic computing resource(s) that align with a particular type of encrypted message are the cryptographic computing resources that efficiently apply cryptographic operations to the particular type of encrypted message without wasting cryptographic computing resources. Accordingly, embodiments of the invention can be used to match the predicted cryptographic processing requirements of the encrypted message to a tailored or customized subset of the available cryptographic computing resources, thereby improving the efficiency of how the available cryptographic computing resources are applied. Accordingly, embodiments of the invention perform cryptographic operations in a novel manner that improves the efficiency of such cryptographic operations, which results in the non-wasteful and efficient usage of cryptographic computing resources, including specifically high-security cryptographic computing resources. The above-described aspects of the invention, as well as other aspects of the invention, are described herein as RS (resource-scaling) functionality. Some embodiments of the invention can be implemented using an open-source container orchestration framework (OS-COF) that has been modified in accordance with aspects of the invention to incorporate the various novel RS functionalities and/or features described herein, thereby forming a novel RS-OS-COF embodying aspects of the invention. The novel RS-OS-COF can be configured to include open-source software (e.g., OpenSSL software) and a cluster of interconnected physical and/or virtual computing nodes (i.e., machines) configured to provide automatic deployment and management of containerized applications. The RS-OS-COF cluster of nodes can contain two types of nodes, namely a master node and one or more worker nodes. The master node is responsible for managing worker nodes in the cluster. The master node is the entry point for all operations to be executed in the cluster that are received from, for example, an application programming interface, a user interface, or an open-source software (OSS) interface. 
The worker node is responsible for running one or more workloads and networking with different workloads running on other worker nodes in the cluster. In accordance with aspects of the invention, the master node of the RS-OS-COF cluster of nodes is modified to include RS functionality, thereby forming an RS-OS-COF working node. The basic scheduling unit in the RS-OS-COF worker node is a container, such as, for example, a pod. The container can be co-located on the worker node and share resources. Each container in the RS-OS-COF is assigned a unique container internet protocol address within the cluster of worker nodes, which allows applications to use ports without the risk of conflict. A service is a set of RS-OS-COF containers that work together, such as one tier of a multi-tier application. The novel RS-OS-COF can be configured to receive the encrypted message, which can take a variety of forms, including, for example, an HTTPS connection request. An example of a suitable RS-OS-COF is a Kubernetes® open-source platform that has been modified to incorporate various aspects of the novel RS functionality described herein. In some aspects of the invention, the encrypted message is received at the RS-OS-COF node, and the node issues a request to the OSS to perform cryptographic operations on the encrypted message to decrypt or unpack the encrypted message and generate a decrypted or unpacked message for downstream processing. In some embodiments of the invention, the OSS can be a modified off-the-shelf OS-COF tool that has the capability of performing known cryptographic operations in a conventional (or known) manner but has been modified to, based at least in part on receiving a request to perform cryptographic operations on an encrypted message, route the cryptographic operations to a novel RS engine of the RS-OS-COF worker node. In embodiments of the invention, the RS engine is configured to provide a hardware implementation of cryptographic operations in accordance with aspects of the invention. In some embodiments of the invention, the RS engine's implementation and control of cryptographic algorithms and operations in accordance with aspects of the invention offer improved performance over known cryptographic operations, which result in the non-wasteful and efficient usage of cryptographic computing resources, including specifically high-security cryptographic computing resources. In embodiments of the invention, a novel set (or stream) of RS data having multiple fields is associated with the encrypted message and received at the RS-OS-COF. The RS data includes at least one field that defines an operation policy (OP), and the OP includes multiple sub-fields that define cryptographic-operation constraints that are applied or utilized by the RS engine when it determines, predicts, and/or estimates the cryptographic processing requirements for the encrypted message. In addition, the RS engine is configured to generate and analyze cryptographic metric that are associated with (and derived from) all cryptographic operations performed by the RS-OS-COF. In embodiments of the invention, the cryptographic metrics include information associated with the encrypted message, along with performance measurements that result from cryptographic operations that have been performed by the RS-OS-COF. 
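As a non-authoritative illustration of the data just described, the sketch below models the RS data's operation policy (with hypothetical constraint sub-fields) and a single cryptographic-metrics record; the field names are assumptions made for the sketch rather than the actual field layout.

    # Illustrative model of RS data (operation policy with constraint sub-fields)
    # and of one cryptographic-metrics record. Field names are assumptions.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class OperationPolicy:
        """Constraint sub-fields carried in the OP field of the RS data."""
        mandatory_resources: List[str] = field(default_factory=list)  # empty = non-mandatory
        safety_sensitive: bool = False       # must be handled by a compliance-certified module
        real_time_ms: Optional[int] = None   # real-time demand, if any
        needs_persistence: bool = False      # e.g., generated keys must be stored

    @dataclass
    class RSData:
        operation_policy: OperationPolicy
        message_type: str                    # e.g. "HTTPS connection request"

    @dataclass
    class CryptoMetric:
        """One performance measurement derived from a prior cryptographic operation."""
        resource: str
        response_time_ms: float
        succeeded: bool

    # Example: RS data for a latency-sensitive message with no mandatory resources,
    # plus two accumulated metrics the RS engine could analyze.
    rs_data = RSData(OperationPolicy(real_time_ms=100), "HTTPS connection request")
    history = [CryptoMetric("cpacf_coprocessor", 2.3, True),
               CryptoMetric("cloud_hsm", 9.1, True)]
    print(rs_data.operation_policy.mandatory_resources == [])   # True: predictive path applies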
If the cryptographic-operation constraints specify mandatory cryptographic computing resources that must be applied to the encrypted message, the RS engine performs a set of non-predictive RS operations, wherein the RS engine does not need to perform any analysis that predicts, determines, and/or estimates the cryptographic processing requirements for the encrypted message. In the non-predictive RS operations, the specified mandatory cryptographic computing resources are selected from among the set of available cryptographic computing resources then used to apply non-predictive cryptographic operations to the encrypted message. If the cryptographic-operation constraints do not specify mandatory cryptographic computing resources that must be applied to the encrypted message, the RS engine initiates a set of predictive RS operations that are configured to use the operation-policy constraints and the cryptographic metrics to predict, determine, and/or estimate the cryptographic processing requirement for the encrypted message. In embodiments of the invention, the predictive RS operations can include using the predicted cryptographic processing requirements to select “customized” or “scaled” cryptographic computing resources from among the set of available cryptographic computing resources; and can further include using the “selected,” “customized,” and/or “scaled” cryptographic computing resources to apply “selected,” “customized,” and/or “scaled” cryptographic operations to the encrypted message. In accordance with embodiments of the invention, the cryptographic-operation constraints of the RS data are specific to the encrypted message with which the RS data is associated, and different encrypted messages can have different RS data with different OPs. In some embodiments of the invention, some or all of the predictive RS operations can be performed using a configuration of modules that collect, dispatch, analyze, and/or manage the data traffic that moves through the RS engine in the course of the RS engine identifying the “selected,” “customized,” and/or “scaled” cryptographic computing resources that will apply “selected,” “customized,” and/or “scaled” cryptographic operations to the encrypted message. In some embodiments of the invention, the modules include an algorithm handling module, a resource management module, and a configuration of data-traffic-related modules. In some embodiments of the invention, the data-traffic-related modules can include a traffic dispatcher module, a traffic analyzer module, and a traffic metrics collector module. In embodiments of the invention, the traffic analyzer module is configured to analyze the encrypted message based at least in part on dispatch principles. Additional details of how the configuration of modules operates to implement some or all of the predictive RS operations are provided subsequently herein. In some embodiments of the invention, some or all of the predictive RS operations can be performed using a first classifier trained to use machine learning algorithms and models to perform classification and/or prediction tasks. In some embodiments of the invention, the first classifier can be trained to use the cryptographic-operation constraints and the cryptographic metrics to perform the task of predicting the non-mandatory cryptographic processing requirements of the encrypted message. 
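The branch just described can be summarized with the following sketch, which reuses the hypothetical policy fields from the previous example; the prediction and matching functions are toy stand-ins rather than the classifiers or analysis modules themselves.

    # Sketch of the non-predictive versus predictive branch described above.
    # The predict/match callables are toy stand-ins for the RS engine's analysis.
    from types import SimpleNamespace
    from typing import Callable, List

    def select_ccrs(policy, metrics, available: List[str],
                    predict: Callable, match: Callable) -> List[str]:
        if policy.mandatory_resources:
            # Non-predictive RS operations: use exactly the mandated resources.
            return [r for r in available if r in policy.mandatory_resources]
        # Predictive RS operations: estimate requirements, then scale the selection.
        requirements = predict(policy, metrics)
        return match(requirements, available)

    def predict(policy, metrics):
        """Toy requirement prediction from the policy constraints and past metrics."""
        return {"compliance": policy.safety_sensitive,
                "low_latency": (policy.real_time_ms or 10_000) <= 200}

    def match(requirements, available):
        """Toy matching of predicted requirements to a customized CCR subset."""
        if requirements["compliance"]:
            return ["cloud_hsm"]
        if requirements["low_latency"]:
            return ["cpacf_coprocessor"]
        return ["host_cpu"]

    available = ["host_cpu", "cpacf_coprocessor", "crypto_card", "cloud_hsm"]
    mandated = SimpleNamespace(mandatory_resources=["crypto_card"],
                               safety_sensitive=False, real_time_ms=None)
    flexible = SimpleNamespace(mandatory_resources=[],
                               safety_sensitive=False, real_time_ms=100)
    print(select_ccrs(mandated, [], available, predict, match))   # ['crypto_card']
    print(select_ccrs(flexible, [], available, predict, match))   # ['cpacf_coprocessor']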
In some embodiments of the invention, some or all of the predictive RS operations can be performed using a second classifier trained to use machine learning algorithms and models to perform classification and/or prediction tasks. In some embodiments of the invention, the second classifier can be trained to use the cryptographic processing requirements predicted by the first classifier, along with descriptions of the functional capabilities of the suite of available cryptographic computing resources, to perform the task of identifying the customized set of cryptographic computing resources (taken from among the suite of available cryptographic computing resources) that match or satisfy (without undue waste of cryptographic computing resources) the predicted cryptographic processing requirements. In accordance with aspects of the invention, the encrypted message can be implemented as a resource-scaling envelope (RS-EVP) data extension. In the OSS, an envelope is a symmetric key that has been encrypted with a public key. Because encryption and decryption with symmetric keys is computationally expensive, messages are not encrypted directly with such keys but are instead encrypted using a symmetric “session” key, and this key is itself then encrypted using the public key. This combination of a symmetric session key encrypted with a public key is referred to in the OSS as an envelope. Technical effects and benefits provided by the embodiments of the invention described herein include segmenting out any mandatory cryptographic resource requirements to ensure that the mandatory requirements are applied properly because mandatory cryptographic resource requirements are typically applied to encrypted messages with higher security levels that require more secure cryptographic resources. By segmenting out the messages with mandatory cryptographic resource requirements, the remaining message types can be more efficiently matched with the cryptographic computing resources they need. By using both cryptographic constraints and cryptographic metrics to match the remaining message types to the cryptographic computing resources they need, the actual historical performance of the cryptographic hardware can be leveraged to improve this matching operation. Technical effects and benefits provided by the embodiments of the invention described herein further include allowing existing open-source software to be efficiently modified to incorporate features and functionality of the various embodiments of the invention. Technical effects and benefits provided by the embodiments of the invention described herein further include allowing the computer-implemented method to accumulate data from each iteration of the computer-implemented method and using the accumulation data to improve subsequent iterations of the computer-implemented method. Turning now to a more detailed description of aspects of the invention,FIG.1depicts an RS-OS-COF system100in accordance with embodiments of the invention. As shown, the system100includes an external entity110in communication with an application pod120over a wired or wireless network connection112. The external entity110can be any communications element capable of transmitting over the network connection112an encrypted message such as the HTTPS message114having RS Data116. The application node120includes one or more nodes130, wherein at least one of the nodes includes an RS engine140in accordance with aspects of the invention. 
The node(s)130are configured to utilize the OSS160, the RS data116, and the crypto-metrics170to cognitively select one or more of the set of available cryptographic computing resources180(i.e., one or more of CCR-A182A, CCR-B182B, CCR-C182C, CCR-D182D); use the selected ones of the set of cryptographic computing resources180to apply customized cryptographic operations to the HTTPS message114in order to generate the decrypted or unpacked HTTP message131; and send the decrypted HTTP message131to an application container150for downstream processing. FIG.2depicts a computer-implemented method200in accordance with aspects of the invention. The method200is performed by the RS-OS-COF system100(shown inFIG.1). Where appropriate, the description of the method200will make reference to the corresponding elements of the system100shown inFIG.1. In accordance with aspects of the invention, the method200begins at blocks202,204where the application pod120receives an encrypted message (e.g., the HTTPS message114) (block202), and where the RS engine140accesses the RS data116associated with the received encrypted message (block204). At block206of the method200, the RS engine140extracts or accesses an operation policy (OP) of the RS data116, wherein the OP defines OP constraints on performing cryptographic operations on the received encrypted message. At decision block208, the RS engine140determines whether or not the OP constraints define or specify that the encrypted message must be cryptographically processed using certain ones of the CCRs180. If the answer to the inquiry at decision block208is yes, the method200moves to block210where the RS engine140selects the required one(s) of the CCRs180(e.g., CCR-D182D) and uses the selected one(s) of the CCRs180to perform the mandated cryptographic operations on the encrypted message. From block210, the method200moves to block212where the RS engine140develops cryptographic metrics (CMs) from the performance of the mandated cryptographic operations on the encrypted message. In accordance with embodiments of the invention, the CMs developed at block212are added to the CMs that are used at block218(not yet described). Accordingly, after multiple iterations of the method200, the CMs used at block218have been accumulated throughout the multiple iterations of the method200. From block212, the method200moves to decision block214to determine whether or not additional encrypted messages are being transmitted. If the answer to the inquiry at decision block214is no, the method200moves to block216and ends. If the answer to the inquiry at decision block214is yes, the method200returns to block202to receive a next encrypted message. Returning to decision block208, if the answer to the inquiry at decision block208is no, the method200moves to block218where the RS engine140accesses and analyzes accumulated CMs and/or the OP constraints. At decision block220, the RS engine140attempts to use the OP constraints and the results of the CM analysis (performed at block218) to predict the cryptographic requirements of the encrypted message and match the predicted cryptographic requirements to selected ones of the available CCRs180.
In accordance with some aspects of the invention, the operations described at decision block220can be performed by configuring the RS engine140to use a configuration of modules that collect, dispatch, analyze, and/or manage the data traffic that moves through the RS engine140to make the prediction of the cryptographic requirements of the encrypted message and/or match the predicted cryptographic requirements to selected ones of the available CCRs180. In some embodiments of the invention, the configuration of modules includes an algorithm handling module420(shown inFIG.4), a resource management module430(shown inFIG.4), and a configuration of data-traffic-related modules. In some embodiments of the invention, the data-traffic-related modules can include a traffic dispatcher module442(shown inFIG.4), a traffic analyzer module446(shown inFIG.4), and a traffic metrics collector module444(shown inFIG.4). In embodiments of the invention, the traffic analyzer module446is configured to analyze the encrypted message based at least in part on dispatch principles446A (shown inFIG.4). Additional details of how the configuration of modules operates to implement some or all of the operations depicted in decision block220are provided subsequently herein. In accordance with aspects of the invention, the operations described at decision block220can be performed by configuring the RS engine140to use a classifier810(shown inFIG.8) having machine learning algorithms812(shown inFIG.8) and models816(shown inFIG.8) to make the prediction of the cryptographic requirements of the encrypted message and/or match the predicted cryptographic requirements to selected ones of the available CCRs180. In some embodiments of the invention, decision block220can be implemented by training a first classifier to use the cryptographic-operation constraints and the cryptographic metrics to perform the task of predicting the non-mandatory cryptographic processing requirements of the encrypted message. A second classifier can be trained to use the cryptographic processing requirements predicted by the first classifier, along with descriptions of the functional capabilities of the suite of available cryptographic computing resources, to perform the task of identifying the customized set of cryptographic computing resources (taken from among the suite of available cryptographic computing resources) that match or satisfy (without undue waste of cryptographic computing resources) the predicted cryptographic processing requirements. Additional details of how a classifier810can be trained and used to perform prediction and/or classification operations that can be utilized to perform the operations defined in decision block220are provided subsequently herein. If the answer to the inquiry at decision block220is no, the method200moves to block226where the RS engine140selects a default set of the CCRs180(e.g., all non-mandatory ones of the CCRs180) and uses the default set of the CCRs180to perform default cryptographic operations on the encrypted message. From block226, the method200moves to block212where the RS engine140develops cryptographic metrics (CMs) from the performance of the default cryptographic operations at block226on the encrypted message. In accordance with embodiments of the invention, the CMs developed at block212are added to the CMs that are used at block218.
Accordingly, after multiple iterations of the method200, the CMs used at block218have been accumulated throughout the multiple iterations of the method200. From block212, the method200moves to decision block214to determine whether or not additional encrypted messages are being transmitted. If the answer to the inquiry at decision block214is no, the method200moves to block216and ends. If the answer to the inquiry at decision block214is yes, the method200returns to block202to receive a next encrypted message. Returning to decision block220, if the answer to the inquiry at decision block220is yes, the method200moves to block222where the RS engine140uses the matched or scaled one(s) of the CCR(s)180identified at decision block220to perform cryptographic operations on the encrypted message. From block222, the method200moves to block224where the RS engine140develops CMs from the performance of the matched and/or scaled cryptographic operations on the encrypted message. In accordance with embodiments of the invention, the CMs developed at block224are added to the CMs that are used at block218. From block224, the method200moves to decision block214to determine whether or not additional encrypted messages are being transmitted. If the answer to the inquiry at decision block214is no, the method200moves to block216and ends. If the answer to the inquiry at decision block214is yes, the method200returns to block202to receive a next encrypted message. FIG.3depicts an RS-OS-COF system100A in accordance with embodiments of the invention. The system100A leverages the functional principles of the system100(shown inFIG.1). However, the system100A depicts additional details of how the functional principles shown in the system100can be applied in a particular computing environment. As shown inFIG.3, the system100A includes an external entity (not shown) in wireless communication through an antenna302with a node cluster130A and a set of cryptographic computing resources (CCR)180A. The external entity can be any communications element capable of transmitting over a wireless communication path to the antenna302an encrypted message such as the HTTPS message114having RS data116in accordance with aspects of the invention. The node cluster130A includes a master node132A and multiple worker nodes132B,132C,132D, configured and arranged as shown. The master node132A includes an application pod that houses a web-server and application programs. The web-server of the master node132A can be implemented as an open-source HAProxy (high availability proxy) server310. The HAProxy server310is configured to receive the HTTPS message114with RS data116. The application programs of the master node132A include an RS engine312. The worker nodes132B,132C each include application pods and application programs, configured and arranged as shown to perform certain tasks of the node cluster130A under the control and direction of the master node132A. Worker node132D is specifically designed to support the resource-scaling operations performed by the master node132A in accordance with aspects of the invention. More specifically, the worker node132D includes an application pod322that houses cryptographic computer resource (CCR) application programs320configured to support the resource-scaling operations (e.g., method200shown inFIG.2) performed by the RS engine312.
The CCR320includes cryptographic resources that can be accessed and scaled in accordance with embodiments of the invention, which demonstrates that any resources can be utilized and scaled using the various embodiments of the invention. The node cluster130A is communicatively coupled to a set or suite of cryptographic computing resources (CCR)180A. In accordance with aspects of the invention, the RS engine312and the CCR application programs320are configured and arranged to evaluate the instructions in the RS data116and analyze the cryptographic metrics gathered by the node cluster130A in order to identify and select any combination of the CCRs180A, which can be any combination of a cloud HSM (hardware security module)360; a DBaaS (database as a service)362; a cryptographic express card364; a CPACF (central processor assist for cryptographic function) coprocessor366; and additional CCRs368. The selected combination of CCRs180A is customized for the particular cryptographic processing needs of the HTTPS message114as determined by the RS engine312and the CCR application programs320. In embodiments of the invention, the cloud HSM360is a dedicated cryptographic processor designed for the protection of the cryptographic key life cycle. The cloud HSM360is configured to generate, process, and store keys. The cloud HSM360can be used to build the user's own public key infrastructure to handle application and signing activities. The cloud HSM360protects the cryptographic infrastructure of the user by securely managing, processing, and storing cryptographic keys inside a hardened, tamper-resistant device. In embodiments of the invention, the DBaaS instance362is a cloud computing secondary service model and a key component of XaaS (anything as a service), which describes a general category of services related to cloud computing and remote access. XaaS recognizes the vast number of products, tools, and technologies that are now delivered to users as a service over the internet. In essence, the DBaaS instance362is a managed service configured to offer access to a database to be used with applications and their related data, which is a more structured approach compared to storage as a service. The DBaaS instance362can also include a database manager component, which controls all underlying database instances via an API. This API is accessible to the user via a management console, usually a web application, which the user can use to manage and configure the database and even provision or de-provision database instances. In embodiments of the invention, the crypto express card(s)364are I/O attached cards that implement additional cryptographic functions. The crypto express card364is a coprocessor and can support a wider range of callable services that include secure key and clear key support for PKA decrypt, digital signature verify, and digital signature generate (including RSA and ECC variants). Alternatively, the crypto express card364can be configured as an accelerator to provide better throughput at the expense of supporting fewer services. In embodiments of the invention, the CPACF (central processor assist for cryptographic function) coprocessor366is a coprocessor that uses the DES, TDES, AES-128, AES-256, SHA-1, and SHA-256 ciphers to perform symmetric key encryption and calculate message digests in hardware. DES, TDES, AES-128, and AES-256 are used for symmetric key encryption. SHA-1 and SHA-256 are used for message digests.
In embodiments of the invention, the additional CCRs368can include any other CCR that could be used to efficiently apply cryptographic operations to a type of HTTPS message that can be sent as the HTTPS message114. FIGS.4-7depict various aspects of an RS-OS-COF architecture100B in accordance with embodiments of the invention. Before describing details of the RS-OS-COF architecture, definitions and descriptions of some terms used inFIGS.4-7will be provided. The term “traffic” describes data movements associated with the performance of cryptographic operations, which includes cryptographic requests to the cryptographic computing resources (CCR)180A and cryptographic responses back from the cryptographic computing resources (CCR)180A. “Response time” can be the duration for which each cryptographic operation is handled. “Succeed/fail” is the result of the cryptographic operation that is currently being handled. “Succeed” indicates that the cryptographic operation has no errors and the cryptographic computing resources (CCR)180A return a “succeed” response. Crypto traffic metrics462contain the metrics and logs that are generated while the cryptographic operations are being processed. They contain “response time,” “succeed/fail,” and the like. The crypto resource pattern is the collection of “tenant profile,” “constraint,” “cost,” “control interface,” and “ability interface.” The crypto resource pattern is an input to the traffic analyzer446for analysis. “Static” represents the type of “tenant profile,” “constraint,” “cost,” “control interface,” and “ability interface.” These do not change frequently. “Tenant profile” contains general information about the resource the tenant owns. “Constraint” has the same meaning as compliance, which means a security standard such as FIPS140. “Cost” refers to pricing, such as the cost per algorithm; the cost per CPU/memory/machine/disk/crypto-card; and the cost per throughput. The “resource control interface”431uses a series of methods to control a resource, such as scaling the resource up or scaling the resource down. It can be used to increase/decrease resources in the system. The “ability interface”432includes crypto methods and persistence methods. Crypto methods are cryptographic algorithms that can be used to perform encryption, decryption, signing, verifying, and so on. They include but are not limited to EC methods, RSA methods, DH, Ciphers, Hash, and the like. The resource metric collector436collects the metrics and logs from cryptographic computing resources while the resources are running. “Dynamic” refers to the dynamic metrics and logs data that are generated while the resource is running, and it includes “current traffic workload,” “historical data and trends,” and “stability & reliability.” “Current traffic workload” refers to the metrics and logs of the workload of the cryptographic computing resources. The workload of the resource can be, for example, idle, normal, or overloaded. “Historical data and trends” refers to the metrics and logs of the historical traffic workload of a cryptographic resource. “Stability & reliability” refers to the metrics and logs that can be used to evaluate the stability and/or reliability of a cryptographic computing resource. If a resource is out of service frequently, its stability & reliability are poor. All of the above generate metrics and logs while the resource is running, which can be collected by the resource metric collector436.
The “module” referred to in the term “different computing module” represents one of the cryptographic computing resources (CCR)180A. “Cost model” is a pricing-related description. For example, the traffic dispatcher442can dispatch the cryptographic operation to a less costly (e.g., to a customer) one of the cryptographic computing resources (CCRs)180A. Dispatch policy446A is used for the traffic dispatcher module442to dispatch the traffic to the optimal cryptographic computing resource320,180A, as illustrated in the sketch following these definitions. It consists of “turning threshold,” “priority based on precondition,” “cost per algorithm,” “response time,” and “compliance resource.” With an increase in the workload, when the throughput reaches a threshold, the performance of a cryptographic computing resource can drop steeply. “Turning threshold” refers to that threshold. “Priority based on precondition” refers to a condition that must be satisfied before another algorithm's execution. For example, as shown inFIG.5, CIPHER510A represents a series of symmetric encryption/decryption algorithms such as AES and DES. HASH510B represents a series of hash algorithms such as SHA and MD5. Other algorithms include asymmetric cryptographic algorithms. In some situations, algorithm1 depends on algorithm2. For example, algorithm1 can need the outputs from algorithm2 as an input. Under this situation, algorithm2 is a PRE-CONDITION510of algorithm1. “Cost per Algorithm” refers to the cost (or price) per one cryptographic operation in a cryptographic computing resource. “Response time” refers to the duration of one cryptographic operation that is being handled in a cryptographic computing resource. “Compliance resource” refers to a situation where the cryptographic operation has a compliance requirement, which should be delivered to a compliance-certified cryptographic computing resource. The term “cost” refers to the pricing, such as the cost per algorithm, the cost per CPU/memory/machine/disk/crypto-card, and/or the cost per throughput. “Cost model” refers to the pricing choice, for example, an economic preference that can lead the traffic dispatcher442to dispatch the traffic to the target cryptographic computing resource that has a lower price than others. An “enterprise” preference can lead the traffic dispatcher442to dispatch the traffic to the target cryptographic computing resource that may have a higher price but a faster response time. The algorithms handling module420and the registration module422have the following purpose. When a cryptographic operation (e.g., a cryptographic request) comes in with EVP data, the algorithms handling module420attaches an RS-EVP data extension450to each cryptographic operation, and then sends the cryptographic operation to the traffic analyzer446. The registration module422is an existing module in OpenSSL that can be utilized to register the OpenSSL Engine into OpenSSL. The “compliance module” referenced in SAFETY_SENSITIVE512can be a module that meets security compliance requirements. For example, IBM Cloud® Hyper Protect Crypto Services is a FIPS140-2Level 4 certified hardware security module. The “real-time demand” referred to in REAL_TIME514can be a time requirement of a cryptographic operation. A cryptographic operation can have a time requirement indicating that it should be handled within a short time. For example, a cryptographic operation can have a real-time demand that requires complete handling and receipt of the response message within 100 milliseconds.
I/O516can be a cryptographic computing resource that can store persistence data, such as keys. I/O516emphasizes the ability to keep persistence data. It can be a database471or a cloud HSM475in RESOURCES518. However, not all of the RESOURCES518have the ability to keep persistence data. The EVP request is the envelope request that is sent from OpenSSL to the engine312A. EVP data contains parameters and functions about one algorithm. The RS-EVP data extension450is an extension of the EVP request. The resource control interface431consists of a series of methods to control a cryptographic computing resource, such as scaling the resource up or scaling the resource down. It can be used to increase/decrease resources in the system. Cloud scalability in cloud computing refers to the ability to increase or decrease IT resources as needed to meet changing demand. “Scale up” refers to adding a resource to the system. “Scale down” refers to removing resources from the system. The ability interface432includes crypto methods and persistence methods. Crypto methods are cryptographic algorithms that can be used to perform encryption, decryption, signing, verifying, and the like. Crypto methods include but are not limited to EC methods, RSA methods, DH, Ciphers, Hash, and the like. Persistence methods are load/store related methods, which can be used to persist data onto hardware. Persistence methods include but are not limited to loading/storing keys. The information interface433includes a series of methods to collect information about the tenant profile, constraint information, and cost information of a resource. The “tenant profile” contains general information about the cryptographic computing resource which the tenant owns. “Constraint” has the same meaning as compliance, which means some security standard such as FIPS140-2. “Cost” refers to the pricing, such as the cost per algorithm, the cost per CPU/memory/machine/disk/crypto-card, and/or the cost per throughput. The discovery module434collects registration information from the resource control interface431, the ability interface432, and the information interface433, registers this information within the discovery module434, and then delivers it to the pusher435. The pusher435collects static data from the discovery module434and collects dynamic data from the resource metric collector436, and then pushes the data to the traffic analyzer446for analysis. The static data includes data of the resource control interface431, the ability interface432, and the information interface433. The dynamic data includes data of the current traffic workload; historical data and trends; and stability & reliability. The resource metric collector436collects the metrics and logs from cryptographic computing resources while the resources are running. “Current traffic workload” refers to the metrics and logs of the workload of the cryptographic computing resources. The workload of the cryptographic computing resource can be idle, normal, or overloaded. “Historical data and trends” refers to the metrics and logs of the historical traffic workload of a cryptographic computing resource. “Stability & reliability” refers to the metrics and logs that can be used to evaluate the stability and/or reliability of a cryptographic computing resource. If a resource is out of service frequently, its stability & reliability are poor. All of the above generate metrics and logs while the resource is running, which can be collected by the resource metric collector436.
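The sketch below, referenced in the dispatch-policy definition above, shows one hypothetical way the dispatch-policy fields (turning threshold, cost per algorithm, response time, compliance resource) could be represented and consulted; the resource names, thresholds, and costs are invented for illustration.

    # Hypothetical representation of a dispatch policy and a simple dispatch rule:
    # honor compliance requirements, otherwise prefer the cheapest resource whose
    # observed throughput has not reached its turning threshold. Values are examples.
    from dataclasses import dataclass
    from typing import Dict

    @dataclass
    class DispatchPolicy:
        turning_threshold_ops: int          # throughput at which performance drops steeply
        cost_per_algorithm: Dict[str, int]  # resource name -> cost per cryptographic operation
        response_time_ms: Dict[str, float]  # resource name -> observed response time
        compliance_resource: str            # resource satisfying e.g. FIPS 140 requirements

    def dispatch(policy: DispatchPolicy, current_ops: Dict[str, int],
                 needs_compliance: bool) -> str:
        if needs_compliance:
            return policy.compliance_resource
        cheapest_first = sorted(policy.cost_per_algorithm, key=policy.cost_per_algorithm.get)
        for resource in cheapest_first:
            if current_ops.get(resource, 0) < policy.turning_threshold_ops:
                return resource                      # cheapest non-saturated resource
        return min(policy.response_time_ms, key=policy.response_time_ms.get)

    policy = DispatchPolicy(
        turning_threshold_ops=5000,
        cost_per_algorithm={"host_cpu": 1, "crypto_card": 3, "cloud_hsm": 5},
        response_time_ms={"host_cpu": 12.0, "crypto_card": 3.0, "cloud_hsm": 8.0},
        compliance_resource="cloud_hsm",
    )
    print(dispatch(policy, {"host_cpu": 6000, "crypto_card": 100}, needs_compliance=False))
    # -> 'crypto_card' in this example: host_cpu is past its turning threshold

In this sketch, an economic cost-model preference biases the ordering toward the lower-priced resource, while an enterprise preference could instead sort by response time.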
Turning now to a more detailed description ofFIGS.4-7,FIG.4is a block diagram that depicts the RS-OS-COF architecture100B in accordance with aspects of the invention. The architecture100B includes and leverages the functional principles of the system100(shown inFIG.1) and the system100A (shown inFIG.3). However, the architecture100B depicts additional details of how the functional principles shown in the systems100,100A can be applied in a particular computing environment. As shown inFIG.4, moving from top to bottom, the first layer of the architecture100B includes various web-servers that can be used to receive the HTTPS message114with RS data116. The first layer can include an Nginx® web-server410, an HAProxy web-server310A, SSL utilities412, and additional web-server support414. The second layer of the architecture100B includes a suite of OSS elements160A corresponding to the OSS160(shown inFIG.1). The OSS elements160A include a variety of algorithms (e.g., ECDHE (elliptic-curve Diffie-Hellman ephemeral), EC key generate, and the like), which are configured and arranged to perform encryption and decryption work. The third layer of the architecture100B includes an RS engine312A, a resource management module430, a traffic dispatcher module442, a traffic analyzer module446, and a traffic metrics collector module444, configured and arranged as shown. The RS engine312A includes an algorithm handling module420and a registration module422. The fourth layer of the architecture100B includes a set or suite of cryptographic computing resources (CCR)180B, which can include a database471, a file system472, a host CPU473, a container CPU474, a cloud HSM475, a cryptographic card476, a cryptographic co-processor477, and additional CCRs368. The database471and the file system472are the CCRs used for key storage. The cloud HSM475, the cryptographic card476, and the cryptographic co-processor477are the hardware CCRs used to perform cryptographic operations (e.g., encryption/decryption). The host CPU473and the container CPU474are the CPU resources used for cryptographic operations (e.g., encryption/decryption). The cloud HSM475is essentially a cryptographic card located on the cloud (e.g., cloud computing system50shown inFIG.11).FIG.4also depicts a resource-scaling envelope (RS-EVP) data extension450, along with cryptographic metrics160. The cryptographic metrics160include cryptographic traffic metrics462, cryptographic computing resource pattern data464, and cryptographic computing resource metrics466. The RS-EVP data extension450includes an operation policy that describes constraints on how the RS engine312A handles the HTTPS message114and the RS-EVP data extension450in the OpenSSL envelope of the OSS elements160A. The cryptographic traffic metrics462include response times, succeed instances, fail instances, and other metrics related to data traffic through the architecture100B when performing cryptographic operations. The cryptographic computing resource pattern464is static data defining a variety of parameters including, for example, tenant profile, constraints, cost, the control interface, and the ability interface. The cryptographic computing resource metric466is dynamic data defining a variety of dynamically changing data such as current traffic, workload, historical data, historical trends, stability, and reliability. The resource management module430is used to scale up and down the CCR module180B according to the traffic model and/or the cost model.
The traffic dispatcher442sends traffic to different computing modules according to the cost model. The traffic metrics collector module collects traffic metrics generated by the architecture100B based on the architecture performing the resource-scaling operations described herein (e.g., method200shown inFIG.2). The traffic analyzer module446analyzes encrypted messages (e.g., HTTPS message114) based on the dispatch policy, which defines policies such as the turning threshold, priority based on precondition, cost per algorithm, response time, and compliance resources. The traffic dispatcher442, the traffic metrics collector444, the traffic analyzer446, and the resource management430reside in the master Node132A. In some embodiments of the invention, the elements can either reside in RS Engine312or can be in the same layer with RS Engine312from an architecture perspective. FIG.5depicts a block diagram showing an RS-EVP data extension450A according to embodiments of the invention. The RS-EVP data extension450A corresponds to the RS-EVP data extension450(shown inFIG.4), however the RS-EVP data extension450A provides additional details of how the fields that define an operation policy and operation policy constraints can be implemented in some embodiments of the invention. As shown inFIG.5, the RS-EVP data extension450A has multiple fields, including OPT_POLICY451(operation policies), EC_GROUP452(curve related information), EC_POINT453(computed point in the curve), BIGNUM454(base point in the curve, namely the private key), POINT_CONV_FORM455(for the encoding of an elliptic curve point), and additional fields456. The OPT_POLICY451defines operation policy constraints502. The operation policy constraints502include PRE-CONDITION510(a condition that must be satisfied prior to the execution of other algorithms), SAFETY_SENSITIVE512(must be handled by a compliance module), REAL_TIME514(the HTTPS message114has a real-time demand), I/O516(permanent demand and target device), and RESOURCES518(mandates the CCRs that must be used to perform cryptographic operations on the HTTPS message114). The CIPHER510A, HASH510B, and additional precondition510C are examples of what the PRE-CONDITION510can define. The encryption/decryption hardware516A, the cryptographic coprocessor477, the cloud HSM475, the CPACF coprocessor366, the database471, and the additional CCRs368are examples of mandatory CCRs that can be called out the I/O516and/or the RESOURCES518. As described in greater detail subsequently herein, if the I/O516and RESOURCES518identify mandatory CCRs, the traffic analyzer446sends the HTTPS message114to the traffic dispatcher442to dispatch to the target CCRs. If the I/O516and RESOURCES518do not identify mandatory CCRs, the traffic analyzer446assess the HTTPS message114, the OPT_POLICY451, and the cryptographic metrics460to predict and/or determine the cryptographic requirements of the HTTPS message114then recommend a combination of the CCRs180A that match the predicted and/or determined cryptographic requirements of the HTTPS message114. FIG.6depicts a diagram illustrating both the architecture100B and a methodology602performed by the architecture100B in accordance with aspects of the invention. In general, the traffic metrics collector444collects the cryptographic traffic metrics462and sends it to the traffic analyzer446to generate cryptographic computing resource pattern462to instruct the traffic dispatcher442to dispatch traffic to different computing modules. 
The cryptographic traffic metrics462include a user level metric that represents the user level with which an encryption request is associated; a performance metric that represents the performance of the HTTPS message114, for example, its response time, CPU usage, memory cost, network usage, etc.; a security level metric that represents the security level of the resources being accessed; a succeed/fail metric which represents the historical success rate of an encryption service; and an access frequency metric which represents the access frequency of an encryption service, etc. Under the guidance from the traffic analyzer446, the traffic dispatcher442sends traffic to different computing modules according to the cost model. The inputs to the cryptographic traffic analyzer446include cryptographic traffic metrics from the cryptographic traffic metrics collector444; the RS-EVP data extension450,450A from the algorithms handling module420; and the cryptographic computing resource pattern/metric from the resource management module430. The outputs of the cryptographic traffic analyzer446include the dispatch policy, which includes the turning threshold; priority based on precondition; cost per algorithm call; response time; and compliance resource. The method602will now be described in the context of how steps S1-S7can proceed in a first example. In S1, the algorithms handling module420sends an ECDHE request to the traffic analyzer446with the RS-EVP data extension450. The value of I/O516in the RS-EVP data extension450A is “impermanence.” The value of the RESOURCES518variable in the RS-EVP data extension450A is the container CPU474, which is fixed. In S2, the traffic analyzer module446receives this request generated at S1and finds that the request specifies a mandatory resource, so there is no need for further analysis and the request is simply sent to the traffic dispatcher module442. In S3, the traffic dispatcher module442dispatches the request to the container CPU474of the CCR180B. In S4, the traffic metric collector module444collects information of the cryptographic metrics160, which includes performance metrics, success rate metrics, access frequency metrics of the container CPU474, and the like. In S5, the traffic metric collector444pushes the collected cryptographic metrics to the traffic analyzer446. In S6, the resource management module430pushes to the traffic analyzer446the cryptographic metrics160related to the execution of cryptographic operations by the container CPU474. In S7, the traffic analyzer446generates the dispatch policy446A based on inputs from S1, S4, and S6and uses the dispatch policy446A for further cryptographic request dispatch. The traffic analyzer446utilizes the interface provided by the resource management module430to scale up and/or scale down the CCRs180B to match the cryptographic operations required by the HTTPS message114. The method602will now be described in the context of how steps S1-S7can proceed in a second example. In S1, the algorithms handling module420sends an EC Key Gen request to the traffic analyzer446with the RS-EVP data extension450. The value of I/O516in the RS-EVP data extension450A is “permanence.” The value of the RESOURCES518variable in the RS-EVP data extension450A is the cloud HSM475, which is fixed. In S2, the traffic analyzer module446receives this request generated at S1and finds that the request specifies a mandatory resource, so there is no need for further analysis and the request is simply sent to the traffic dispatcher module442.
In S3, the traffic dispatcher module442dispatches the request to the cloud HSM475of the CCR180B. In S4, the traffic metric collector module444collects information of the cryptographic metrics160, which includes performance metrics, success rate metrics, access frequency metrics, and the like of the cloud HSM475. In S5, the traffic metric collector444pushes the collected cryptographic metrics to the traffic analyzer446. In S6, the resource management module430pushes to the traffic analyzer446the cryptographic metrics160related to the execution of cryptographic operations by the cloud HSM475. In S7, the traffic analyzer446generates the dispatch policy446A based on inputs from S1, S4, and S6and uses the dispatch policy446A for further cryptographic request dispatch. The traffic analyzer446utilizes the interface provided by the resource management module430to scale up and/or scale down the CCRs180B to match the cryptographic operations required by the HTTPS message114. FIG.7depicts a diagram illustrating both a subset of the architecture100B and a methodology702performed by the architecture100B in accordance with aspects of the invention. More specifically,FIG.7depicts a resource management module430A, which provides additional details about how the resource management module430(shown inFIG.4) can be implemented in accordance with embodiments of the invention. As shown, the resource management module430A includes a resource control interface431, an ability interface432, an information interface433, a discovery module434, a pusher435, and a resource metric collector436, configured and arranged as shown. The method702will now be described with reference to how steps S11-S15would be performed by the portion of the architecture100B shown inFIG.7. In S11, within the resource management module430A, the discovery module434receives data and information from the resource control interface431(scale up, scale down), the ability interface432(cryptographic methods, persistence methods), and the information interface433(tenant profile, constraint, cost). The cryptographic methods include EC methods, RSA methods, DH, Ciphers, Hash, and the like. The information interface433uses so-called “get” methods to retrieve resource-related information about tenant profiles, constraints, costs, and the like. In S12, the resource metric collector436collects dynamic metrics about current traffic workload; historical data; historical trends; and stability & reliability. In S13, the pusher435receives inputs from the resource metric collector436that result from the performance of S12. In S13′, the pusher435receives inputs from the discovery module434that result from the performance of S11. In S14, the pusher435pushes the cryptographic computing resource pattern464and the cryptographic computing resource metric466to the traffic analyzer module446. The cryptographic computing resource pattern464is statically pushed one time. The cryptographic computing resource metric466is dynamically pushed periodically. In S15, the traffic analyzer446utilizes the resource control interface431to scale up and/or scale down the CCRs180B to match the cryptographic operations required by the HTTPS message114. Additional details of machine learning techniques that can be used to implement aspects of the invention disclosed herein will now be provided.
The various prediction and/or determination functionality of the processors described herein can be implemented using machine learning and/or natural language processing techniques. In general, machine learning techniques are run on so-called "neural networks," which can be implemented as programmable computers configured to run sets of machine learning algorithms and/or natural language processing algorithms. Neural networks incorporate knowledge from a variety of disciplines, including neurophysiology, cognitive science/psychology, physics (statistical mechanics), control theory, computer science, artificial intelligence, statistics/mathematics, pattern recognition, computer vision, parallel processing and hardware (e.g., digital/analog/VLSI/optical). The basic function of neural networks and their machine learning algorithms is to recognize patterns by interpreting unstructured sensor data through a kind of machine perception. Unstructured real-world data in its native form (e.g., images, sound, text, or time series data) is converted to a numerical form (e.g., a vector having magnitude and direction) that can be understood and manipulated by a computer. The machine learning algorithm performs multiple iterations of learning-based analysis on the real-world data vectors until patterns (or relationships) contained in the real-world data vectors are uncovered and learned. The learned patterns/relationships function as predictive models that can be used to perform a variety of tasks, including, for example, classification (or labeling) of real-world data and clustering of real-world data. Classification tasks often depend on the use of labeled datasets to train the neural network (i.e., the model) to recognize the correlation between labels and data. This is known as supervised learning. Examples of classification tasks include identifying objects in images (e.g., stop signs, pedestrians, lane markers, etc.), recognizing gestures in video, detecting voices in audio, identifying particular speakers, transcribing speech into text, and the like. Clustering tasks identify similarities between objects, grouping the objects according to the characteristics they have in common that differentiate them from other groups of objects. These groups are known as "clusters." An example of machine learning techniques that can be used to implement aspects of the invention will be described with reference toFIGS.8and9. Machine learning models configured and arranged according to embodiments of the invention will be described with reference toFIG.8. Detailed descriptions of an example computing system and network architecture capable of implementing one or more of the embodiments of the invention described herein will be provided with reference toFIG.10. FIG.8depicts a block diagram showing a classifier system800capable of implementing various predicting and determining aspects of the invention described herein. More specifically, the functionality of the system800is used in embodiments of the invention to generate various models and/or sub-models that can be used to implement predicting and determining functionality in embodiments of the invention. The system800includes multiple data sources802in communication through a network804with a classifier810. In some aspects of the invention, the data sources802can bypass the network804and feed directly into the classifier810.
The data sources802provide data/information inputs that will be evaluated by the classifier810in accordance with embodiments of the invention. The data sources802also provide data/information inputs that can be used by the classifier810to train and/or update model(s)816created by the classifier810. The data sources802can be implemented as a wide variety of data sources, including but not limited to, sensors configured to gather real time data, data repositories (including training data repositories), and outputs from other classifiers. The network804can be any type of communications network, including but not limited to local networks, wide area networks, private networks, the Internet, and the like. The classifier810can be implemented as algorithms executed by a programmable computer such as a processing system1000(shown inFIG.10). As shown inFIG.8, the classifier810includes a suite of machine learning (ML) algorithms812; natural language processing (NLP) algorithms814; and model(s)816that are relationship (or prediction) algorithms generated (or learned) by the ML algorithms812. The algorithms812,814,816of the classifier810are depicted separately for ease of illustration and explanation. In embodiments of the invention, the functions performed by the various algorithms812,814,816of the classifier810can be distributed differently than shown. For example, where the classifier810is configured to perform an overall task having sub-tasks, the suite of ML algorithms812can be segmented such that a portion of the ML algorithms812executes each sub-task and a portion of the ML algorithms812executes the overall task. Additionally, in some embodiments of the invention, the NLP algorithms814can be integrated within the ML algorithms812. The NLP algorithms814include speech recognition functionality that allows the classifier810, and more specifically the ML algorithms812, to receive natural language data (text and audio) and apply elements of language processing, information retrieval, and machine learning to derive meaning from the natural language inputs and potentially take action based on the derived meaning. The NLP algorithms814used in accordance with aspects of the invention can also include speech synthesis functionality that allows the classifier810to translate the result(s)820into natural language (text and audio) to communicate aspects of the result(s)820as natural language communications. The NLP and ML algorithms814,812receive and evaluate input data (i.e., training data and data-under-analysis) from the data sources802. The ML algorithms812include functionality that is necessary to interpret and utilize the input data's format. For example, where the data sources802include image data, the ML algorithms812can include visual recognition software configured to interpret image data. The ML algorithms812apply machine learning techniques to received training data (e.g., data received from one or more of the data sources802) in order to, over time, create/train/update one or more models816that model the overall task and the sub-tasks that the classifier810is designed to complete. Referring now toFIGS.8and9collectively,FIG.9depicts an example of a learning phase900performed by the ML algorithms812to generate the above-described models816. In the learning phase900, the classifier810extracts features from the training data and converts the features to vector representations that can be recognized and analyzed by the ML algorithms812.
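As a concrete illustration of the learning phase900, the following Python fragment extracts features from a small set of training examples, converts them to vectors, and fits a simple model. It is a minimal sketch only: it assumes the third-party scikit-learn package is available, and the choice of TF-IDF features, logistic regression, and the toy training strings and labels are illustrative assumptions rather than requirements of the classifier810or the ML algorithms812.

```python
# Minimal sketch of the learning phase: features are extracted from training data,
# converted to vectors, and used to fit a model. All data shown is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

training_data = [
    "high latency on key generation",
    "request completed within budget",
    "repeated failures on signing operations",
    "normal response time and cpu usage",
]
labels = [1, 0, 1, 0]   # illustrative binary labels (1 = problem, 0 = normal)

vectorizer = TfidfVectorizer()                  # converts raw text into feature vectors
feature_vectors = vectorizer.fit_transform(training_data)

model = LogisticRegression()                    # one possible choice of ML algorithm
model.fit(feature_vectors, labels)              # the fitted model plays the role of a trained model

# Applying the trained model to new, "real world" data.
new_vectors = vectorizer.transform(["slow signing and frequent failures"])
print(model.predict(new_vectors))               # e.g., [1]
```

The fitted model stands in for a model816: new real-world text is vectorized with the same vectorizer and passed to predict() to obtain a classification.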
The feature vectors are analyzed by the ML algorithms812to "classify" the training data against the target model (or the model's task) and uncover relationships between and among the classified training data. Examples of suitable implementations of the ML algorithms812include but are not limited to neural networks, support vector machines (SVMs), logistic regression, decision trees, hidden Markov Models (HMMs), etc. The learning or training performed by the ML algorithms812can be supervised, unsupervised, or a hybrid that includes aspects of supervised and unsupervised learning. Supervised learning is when training data is already available and classified/labeled. Unsupervised learning is when training data is not classified/labeled, so the classifications must be developed through iterations of the classifier810and the ML algorithms812. Unsupervised learning can utilize additional learning/training methods including, for example, clustering, anomaly detection, neural networks, deep learning, and the like. When the models816are sufficiently trained by the ML algorithms812, the data sources802that generate "real world" data are accessed, and the "real world" data is applied to the models816to generate usable versions of the results820. In some embodiments of the invention, the results820can be fed back to the classifier810and used by the ML algorithms812as additional training data for updating and/or refining the models816. In aspects of the invention, the ML algorithms812and the models816can be configured to apply confidence levels (CLs) to various ones of their results/determinations (including the results820) in order to improve the overall accuracy of the particular result/determination. When the ML algorithms812and/or the models816make a determination or generate a result for which the value of CL is below a predetermined threshold (TH) (i.e., CL<TH), the result/determination can be classified as having sufficiently low "confidence" to justify a conclusion that the determination/result is not valid, and this conclusion can be used to determine when, how, and/or if the determinations/results are handled in downstream processing. If CL>TH, the determination/result can be considered valid, and this conclusion can be used to determine when, how, and/or if the determinations/results are handled in downstream processing. Many different predetermined TH levels can be provided. The determinations/results with CL>TH can be ranked from the highest CL>TH to the lowest CL>TH in order to prioritize when, how, and/or if the determinations/results are handled in downstream processing. In aspects of the invention, the classifier810can be configured to apply confidence levels (CLs) to the results820. When the classifier810determines that a CL in the results820is below a predetermined threshold (TH) (i.e., CL<TH), the results820can be classified as having sufficiently low confidence to justify a classification of "no confidence" in the results820. If CL>TH, the results820can be classified as having sufficiently high confidence to justify a determination that the results820are valid. Many different predetermined TH levels can be provided such that the results820with CL>TH can be ranked from the highest CL>TH to the lowest CL>TH.
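The confidence-level handling described above can be expressed compactly in code. The following Python fragment is a hypothetical sketch: the function name, the tuple representation of a determination, and the example threshold of 0.8 are assumptions made for illustration, and the boundary case of CL equal to TH (not specified above) is arbitrarily grouped with the low-confidence results.

```python
# Hypothetical sketch of confidence-level (CL) filtering and ranking against a
# predetermined threshold (TH).
def filter_and_rank(results, th=0.8):
    """results is a list of (determination, confidence_level) pairs."""
    valid = [r for r in results if r[1] > th]        # CL > TH: considered valid
    not_valid = [r for r in results if r[1] <= th]   # CL < TH: insufficient confidence
    valid.sort(key=lambda r: r[1], reverse=True)     # rank from highest CL to lowest CL
    return valid, not_valid

valid, not_valid = filter_and_rank(
    [("scale up", 0.95), ("scale down", 0.55), ("hold steady", 0.88)])
print(valid)      # [('scale up', 0.95), ('hold steady', 0.88)]
print(not_valid)  # [('scale down', 0.55)]
```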
FIG.10illustrates an example of a computer system1000that can be used to implement any of the computer-based components of the various embodiments of the invention described herein. The computer system1000includes an exemplary computing device ("computer")1002configured for performing various aspects of the content-based semantic monitoring operations described herein in accordance with aspects of the invention. In addition to computer1002, exemplary computer system1000includes network1014, which connects computer1002to additional systems (not depicted) and can include one or more wide area networks (WANs) and/or local area networks (LANs) such as the Internet, intranet(s), and/or wireless communication network(s). Computer1002and the additional systems are in communication via network1014, e.g., to communicate data between them. Exemplary computer1002includes processor cores1004, main memory ("memory")1010, and input/output component(s)1012, which are in communication via bus1003. Processor cores1004include cache memory ("cache")1006and controls1008, which include branch prediction structures and associated search, hit, detect and update logic, which will be described in more detail below. Cache1006can include multiple cache levels (not depicted) that are on or off-chip from processor1004. Memory1010can include various data stored therein, e.g., instructions, software, routines, etc., which, e.g., can be transferred to/from cache1006by controls1008for execution by processor1004. Input/output component(s)1012can include one or more components that facilitate local and/or remote input/output operations to/from computer1002, such as a display, keyboard, modem, network adapter, etc. (not depicted). It is understood in advance that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in.
To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service. Service Models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes. Referring now toFIG.11, illustrative cloud computing environment50is depicted. 
As shown, cloud computing environment50comprises one or more cloud computing nodes10with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone54A, desktop computer54B, laptop computer54C, and/or automobile computer system54N may communicate. Nodes10may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment50to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices54A-N shown inFIG.11are intended to be illustrative only and that computing nodes10and cloud computing environment50can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now toFIG.12, a set of functional abstraction layers provided by cloud computing environment50(FIG.11) is shown. It should be understood in advance that the components, layers, and functions shown inFIG.12are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer60includes hardware and software components. Examples of hardware components include: mainframes61; RISC (Reduced Instruction Set Computer) architecture based servers62; servers63; blade servers64; storage devices65; and networks and networking components66. In some embodiments, software components include network application server software67and database software68. Virtualization layer70provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers71; virtual storage72; virtual networks73, including virtual private networks; virtual applications and operating systems74; and virtual clients75. In one example, management layer80may provide the functions described below. Resource provisioning81provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing82provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may comprise application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal83provides access to the cloud computing environment for consumers and system administrators. Service level management84provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment85provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer90provides examples of functionality for which the cloud computing environment may be utilized. 
Examples of workloads and functions which may be provided from this layer include: mapping and navigation91; software development and lifecycle management92; virtual classroom education delivery93; data analytics processing94; transaction processing95; and automatically scaling cryptographic computing resources up or down to match the cryptographic processing requirements of encrypted communications96. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. 
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instruction by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. 
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. As previously noted herein, conventional techniques related to making and using aspects of the invention are well-known so may or may not be described in detail herein. However, to provide context, a more detailed description of various cryptography methods and definitions that can be utilized in implementing one or more embodiments of the present invention will now be provided. Digital certificates support public key cryptography in which each party involved in a communication or transaction has a pair of keys, called the public key and the private key. Each party's public key is published while the private key is kept secret. Public keys are numbers associated with a particular entity and are intended to be known to everyone who needs to have trusted interactions with that entity. Private keys are numbers that are supposed to be known only to a particular entity, i.e. kept secret. In a typical public key cryptographic system, a private key corresponds to exactly one public key. Within a public key cryptography system, because all communications involve only public keys and no private key is ever transmitted or shared, confidential messages can be generated using only public information and can be decrypted using only a private key that is in the sole possession of the intended recipient. Furthermore, public key cryptography can be used for authentication, i.e. digital signatures, as well as for privacy, i.e. encryption. Accordingly, public key cryptography is an asymmetric scheme that uses a pair of keys—specifically, a public key that is used to encrypt data, along with a corresponding private or secret key that is used to decrypt the data. The public key can be published to the world while the private key is kept secret. Any entity having a copy of the public key can then encrypt information that the entity in possession of the secret/private key can decrypt and read. Encryption is the transformation of data into a form unreadable by anyone without a secret decryption key; encryption ensures privacy by keeping the content of the information hidden from anyone for whom it is not intended, even those who can see the encrypted data. Authentication is a process whereby the receiver of a digital message can be confident of the identity of the sender and/or the integrity of the message. 
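The asymmetric encryption and decryption just described can be illustrated with a short code fragment. The following Python sketch is illustrative only and assumes the third-party pyca/cryptography package is installed; the RSA key size and OAEP padding parameters are example choices, not requirements of the embodiments described herein.

```python
# Minimal sketch: data encrypted with a public key can only be decrypted with the
# matching private key.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()          # published; the private key stays secret

ciphertext = public_key.encrypt(
    b"confidential message",
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

plaintext = private_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert plaintext == b"confidential message"
```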
For example, when a sender encrypts a message, the public key of the receiver is used to transform the data within the original message into the contents of the encrypted message. A sender uses a public key to encrypt data, and the receiver uses a private key to decrypt the encrypted message. A certificate is a digital document that vouches for the identity and key ownership of entities, such as an individual, a computer system, a specific server running on that system, etc. Certificates are issued by certificate authorities. A certificate authority (CA) is an entity, usually a trusted third party to a transaction, that is trusted to sign or issue certificates for other people or entities. The CA usually has some kind of legal responsibility for vouching for the binding between a public key and its owner, which allows one to trust the entity that signed a certificate. There are many such certificate authorities, and they are responsible for verifying the identity and key ownership of an entity when issuing the certificate. When a certificate authority issues a certificate for an entity, the entity provides a public key and some information about the entity. A software tool, such as a specially equipped web browser, can digitally sign this information and send it to the certificate authority. The certificate authority might be a company or other entity that provides trusted third-party certificate authority services. The certificate authority will then generate the certificate and return it. The certificate can contain other information, such as dates during which the certificate is valid and a serial number. One part of the value provided by a certificate authority is to serve as a neutral and trusted introduction service, based in part on their verification requirements, which are openly published in their certification service practices (CSP). Typically, after the CA has received a request for a new digital certificate, which contains the requesting entity's public key, the CA signs the requesting entity's public key with the CA's private key and places the signed public key within the digital certificate. Anyone who receives the digital certificate during a transaction or communication can then use the public key of the CA to verify the signed public key within the certificate. The intention is that an entity's certificate verifies that the entity owns a particular public key. There are several standards that define the information within a certificate and describe the data format of that information. The terms "cryptography," "cryptosystems," "encryption," and equivalents thereof are used herein to describe secure information and communication techniques derived from mathematical concepts, including, for example, rule-based calculations called algorithms configured to transform messages in ways that are hard to decipher without authorization. Cryptography uses a set of procedures known as cryptographic algorithms, encryption algorithms, or ciphers, to encrypt and decrypt messages in order to secure communications among computer systems and applications. A cryptography suite can use a first algorithm for encryption, a second algorithm for message authentication, and a third algorithm for key exchange.
Cryptographic algorithms, which can be embedded in protocols and written in software that runs on operating systems and networked computer systems, involve public and private key generation for data encryption/decryption; digital signing and verification for message authentication; and key exchange operations. The terms "asymmetric-key encryption algorithm" and equivalents thereof are used herein to describe public-key or asymmetric-key algorithms that use a pair of keys, a public key associated with the creator/sender for encrypting messages and a private key that only the originator knows for decrypting that information. The term "key" and equivalents thereof are used herein to describe a random string of bits created explicitly for scrambling and unscrambling data. Keys are designed with algorithms intended to ensure that every key is unpredictable and unique. The longer the key built in this manner, the harder it is to crack the encryption code. A key can be used to encrypt, decrypt, or carry out both functions based on the sort of encryption software used. The terms "private key" and equivalents thereof are used herein to describe a key that is paired with a public key to set off algorithms for text encryption and decryption. A private key is created as part of public key cryptography during asymmetric-key encryption and used to decrypt and transform a message to a readable format. Public and private keys are paired for secure communication. A private key is shared only with the key's initiator, ensuring security. For example, A and B represent a message sender and message recipient, respectively. Each has its own pair of public and private keys. A, the message initiator or sender, sends a message to B. A's message is encrypted with B's public key, while B uses its private key to decrypt A's received message. A digital signature, or digital certificate, is used to ensure that A is the original message sender. To verify this, B uses the following steps: B uses A's public key to decrypt the digital signature, as A must have previously used its private key to encrypt the digital signature or certificate; and, if readable, the digital signature is authenticated with a certification authority (CA). Thus, sending encrypted messages requires that the sender use the recipient's public key and its own private key for encryption of the digital certificate. In turn, the recipient uses its own private key for message decryption, whereas the sender's public key is used for digital certificate decryption. The terms "public key" and equivalents thereof are used herein to describe a type of encryption key that is created in public key cryptography that uses asymmetric-key encryption algorithms. Public keys are used to convert a message into an unreadable format. Decryption is carried out using a different, but matching, private key. Public and private keys are paired to enable secure communication. The terms "digital signature" and equivalents thereof are used herein to describe techniques that incorporate public-key cryptography methodologies to allow consumers of digitally signed data to validate that the data has not been changed, deleted, or added to. In an example digital signature technique/configuration, a "signer" hashes the record data and encrypts the hash with the signer's private key. The encrypted hash is the signature. The consumer of the record data can hash the same record data, and then use the public key to decrypt the signature and obtain the signer's hash.
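This signing and validation flow can be sketched as follows. The fragment below is a minimal, hypothetical illustration that assumes the third-party pyca/cryptography package; the SECP256R1 curve and SHA-256 hash are illustrative choices, and the verify() call performs the hash comparison internally rather than exposing the two hash values.

```python
# Minimal sketch: the signer hashes and signs the record data with the private key;
# the consumer verifies the signature with the signer's public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

signer_private_key = ec.generate_private_key(ec.SECP256R1())
signer_public_key = signer_private_key.public_key()

record_data = b"record contents"
signature = signer_private_key.sign(record_data, ec.ECDSA(hashes.SHA256()))

# The consumer validates the record: verify() recomputes the hash of the record data
# and checks it against the signed hash, raising InvalidSignature on a mismatch.
try:
    signer_public_key.verify(signature, record_data, ec.ECDSA(hashes.SHA256()))
    print("record verified")
except InvalidSignature:
    print("record was altered or was not signed by this key")
```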
A consumer attempting to validate a record can compare the consumer's hash with the signer's hash. When the two hash values match, the data content and source(s) of the record are verified. The term "elliptic curve cryptography" (ECC) describes algorithms that use the mathematical properties of elliptic curves to produce public key cryptographic systems. Like all public-key cryptography, ECC is based on mathematical functions that are simple to compute in one direction but very difficult to reverse. In the case of ECC, this difficulty resides in the infeasibility of computing the discrete logarithm of a random elliptic curve element with respect to a publicly known base point, or the "elliptic curve discrete logarithm problem" (ECDLP). The elliptic curve digital signature algorithm (ECDSA) is a widely-used signing algorithm for public key cryptography that uses EC. Coprocessors are supplementary processors that take over the responsibility for performing selected processor-intensive tasks of an associated central processing unit (CPU) in order to allow the CPU to focus its computing resources on tasks that are essential to the overall system. A coprocessor's tasks can include input/output (I/O) interfacing, encryption, string processing, floating-point arithmetic, signal processing, and the like. Coprocessors can include one or more embedded systems (ES). An ES is a computer system that performs one or more dedicated functions within a larger mechanical and/or electronic system. An example of an ES is a bootstrap loader (or boot loader), which serves as a mediator between the computer's hardware and the operating system. In some computer configurations, the coprocessor itself can be considered an embedded system. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, unless the context clearly indicates otherwise, the singular forms "a", "an" and "the" are intended to include the plural forms. The terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, element components, and/or groups thereof. The term "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. The terms "at least one" and "one or more" can include any integer number greater than or equal to one, i.e. one, two, three, four, etc. The term "a plurality" can include any integer number greater than or equal to two, i.e. two, three, four, five, etc. The term "connection" can include both an indirect "connection" and a direct "connection." The terms "about," "substantially" and equivalents thereof are intended to include the degree of error associated with measurement of the particular quantity based upon the equipment available at the time of filing the application. For example, "about," "substantially" and equivalents thereof can include a range of ±8% or 5%, or 2% of a given value. While the present invention has been described in detail in connection with only a limited number of embodiments, it should be readily understood that the present invention is not limited to such disclosed embodiments.
Rather, the present invention can be modified to incorporate any number of variations, alterations, substitutions or equivalent arrangements not heretofore described, but which are commensurate with the spirit and scope of the present invention. Additionally, while various embodiments of the present invention have been described, it is to be understood that aspects of the present invention can include only some of the described embodiments. Accordingly, the present invention is not to be seen as limited by the foregoing description but is only limited by the scope of the appended claims.
11861024
DETAILED DESCRIPTION
Some embodiments of the present disclosure will now be described more fully hereinafter with reference to the accompanying figures, in which some, but not all, embodiments of the disclosures are shown. Indeed, these disclosures may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like numbers refer to like elements throughout.
Overview
As noted above, methods, apparatuses, systems, and computer program products are described herein that provide for managing data usage. Traditionally, there has been no reliable process for determining whether data is being used legally and/or correctly (e.g., in accordance with pre-determined metadata attributes governing the use of particular business elements) and identifying potential risks arising from the use of that data in various computing environments by various users and user devices. In some embodiments, the present disclosure relates to a data management architecture that enables monitoring usage of data and flagging of improper or illegal uses of the data. In some embodiments, the data management architecture involves manipulating database systems to add a new metadata attribute for specific business elements. In some embodiments, the architecture further deploys data compliance "bots" throughout the system environment that can monitor movement of data from various local nodes within the environment. Each data compliance bot may be configured to evaluate whether data entering its local environment is in compliance with the metadata attributes governing use of its constituent business elements, and if not, the data compliance bot may generate an alert regarding potentially improper use of the data or, in some instances, disallow or prevent transmission or use of the data. In one illustrative example embodiment, the present disclosure relates to a data management architecture that enables the monitoring of data to flag improper or illegal data usage by identifying business elements to track, developing rules regulating use of the identified business elements, and adding metadata attributes to the identified business elements that outline allowable uses of the business elements. The data management architecture then may disseminate data compliance "bots" throughout an entity's computing infrastructure. Each data compliance bot may comprise a beacon, plugin, agent, or standalone app that can monitor data entering or exiting a computing environment (e.g., the data compliance bot's local environment). In some instances, the data management architecture may require dissemination of data compliance bots to an external system before allowing access to data by devices within that external system. In some embodiments, the data compliance bots may always or periodically monitor the use of data for compliance by intercepting data entering or exiting a computing environment (e.g., the data compliance bot's local environment), extracting the rules (e.g., metadata attributes) associated with each business element, and evaluating governed business elements included in the data to ensure compliance.
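As an illustration of this evaluation step, the following Python fragment sketches how a data compliance bot might check a governed business element against its metadata attribute. It is a hypothetical sketch only; the class name, the rule representation, and the returned record format are assumptions introduced for illustration rather than a definitive implementation of the embodiments described herein.

```python
# Hypothetical compliance check performed on data entering or exiting a bot's
# local environment.
from dataclasses import dataclass

@dataclass
class GovernedBusinessElement:
    name: str                  # e.g., "customer_ssn"
    value: object
    allowed_uses: frozenset    # metadata attribute outlining allowable uses

def evaluate_usage(element: GovernedBusinessElement, requested_use: str) -> dict:
    """Return a compliance record; non-compliant uses produce an alert/block action."""
    if requested_use in element.allowed_uses:
        return {"element": element.name, "use": requested_use, "compliant": True}
    # Non-compliant: document the action and generate an alert; a bot could also
    # prevent the transmission or use of the data at this point.
    return {"element": element.name, "use": requested_use, "compliant": False,
            "action": "alert_and_block"}

ssn = GovernedBusinessElement("customer_ssn", "XXX-XX-XXXX",
                              frozenset({"account_verification"}))
print(evaluate_usage(ssn, "email_export"))   # flagged as non-compliant
```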
The data compliance bot may also monitor common triggers for potential noncompliant uses of data even when rules appear to be followed, such as: a user in a computing environment (e.g., a local environment) emails a data set; a user in a computing environment employs the print screen function while viewing a data set; a data set is not used in a computing environment for its ostensible purpose; or a broader monitoring of users accessing a data set, frequency of access, and the like reveals anomalous activity. In some embodiments, if a data compliance bot detects non-compliance, the data compliance bot may take remedial action by, for example, documenting the action, generating an alert including data regarding the non-compliance (e.g., the data regarding the non-compliance may include the business element used improperly, the systems involved, and any users involved), preventing the action, or a combination thereof. There are many advantages of these and other embodiments described herein, such as: facilitating determination of whether data is used legally and/or correctly; facilitating identification of risks arising from improvident uses of data; improving data quality; and educating users on the proper use of data.
Definitions
As used herein, the terms "data," "content," "information," "electronic information," "signal," "command," and similar terms may be used interchangeably to refer to data capable of being transmitted, received, and/or stored in accordance with embodiments of the present disclosure. Thus, use of any such terms should not be taken to limit the spirit or scope of embodiments of the present disclosure. Further, where a first computing device or circuitry is described herein as receiving data from a second computing device or circuitry, it will be appreciated that the data may be received directly from the second computing device or circuitry or may be received indirectly via one or more intermediary computing devices or circuitries, such as, for example, one or more servers, relays, routers, network access points, base stations, hosts, and/or the like, sometimes referred to herein as a "network." Similarly, where a first computing device or circuitry is described herein as sending data to a second computing device or circuitry, it will be appreciated that the data may be sent directly to the second computing device or circuitry or may be sent indirectly via one or more intermediary computing devices or circuitries, such as, for example, one or more servers, remote servers, cloud-based servers (e.g., cloud utilities), relays, routers, network access points, base stations, hosts, and/or the like. The term "comprising" means including but not limited to, and should be interpreted in the manner it is typically used in the patent context. Use of broader terms such as comprises, includes, and having should be understood to provide support for narrower terms such as consisting of, consisting essentially of, and comprised substantially of. The phrases "in one embodiment," "according to one embodiment," and the like generally mean that the particular feature, structure, or characteristic following the phrase may be included in at least one embodiment of the present disclosure, and may be included in more than one embodiment of the present disclosure (importantly, such phrases do not necessarily refer to the same embodiment).
The word "example" is used herein to mean "serving as an example, instance, or illustration." Any implementation described herein as "example" is not necessarily to be construed as preferred or advantageous over other implementations. If the specification states a component or feature "may," "can," "could," "should," "would," "preferably," "possibly," "typically," "optionally," "for example," "often," or "might" (or other such language) be included or have a characteristic, that particular component or feature is not required to be included or to have the characteristic. Such component or feature may be optionally included in some embodiments, or it may be excluded. The terms "processor" and "processing circuitry" are used herein to refer to any programmable microprocessor, microcomputer or multiple processor chip or chips that can be configured by software instructions (applications) to perform a variety of functions, including the functions of the various embodiments described above. In some devices, multiple processors may be provided, such as one processor dedicated to wireless communication functions and one processor dedicated to running other applications. Software applications may be stored in the internal memory before they are accessed and loaded into the processors. The processors may include internal memory sufficient to store the application software instructions. In many devices the internal memory may be a volatile or nonvolatile memory, such as flash memory, or a mixture of both. The memory may also be located internal to another computing resource (e.g., enabling computer readable instructions to be downloaded over the Internet or another wired or wireless connection). For the purposes of this description, a general reference to "memory" refers to memory accessible by the processors including internal memory or removable memory plugged into the device, remote memory (e.g., cloud storage), and/or memory within the processors themselves. For instance, memory may be any non-transitory computer readable medium having computer readable instructions (e.g., computer program instructions) stored thereon that are executable by a processor. The term "computing device" is used herein to refer to any one or all of programmable logic controllers (PLCs), programmable automation controllers (PACs), industrial computers, desktop computers, personal data assistants (PDAs), laptop computers, tablet computers, smart books, palm-top computers, personal computers, smartphones, headsets, smartwatches, and similar electronic devices equipped with at least a processor configured to perform the various operations described herein. Devices such as smartphones, laptop computers, tablet computers, headsets, and smartwatches are generally collectively referred to as mobile devices. The term "server" is used to refer to any computing device capable of functioning as a server, such as a master exchange server, web server, mail server, document server, or any other type of server. A server may be a dedicated computing device or a computing device including a server module (e.g., an application which may cause the computing device to operate as a server). A server module (e.g., server application) may be a full function server module, or a light or secondary server module (e.g., light or secondary server application) that is configured to provide synchronization services among the dynamic databases on computing devices.
A light server or secondary server may be a slimmed-down version of server type functionality that can be implemented on a computing device, such as a smart phone, thereby enabling it to function as an Internet server (e.g., an enterprise e-mail server) only to the extent necessary to provide the functionality described herein. The terms “bot,” “circuitry,” “module,” “software module,” “utility,” “cloud utility,” “suite,” and “software suite” (or other such terms) should be understood broadly to include hardware. In some embodiments, these terms may also include software for configuring the hardware. For example, in some embodiments, “circuitry” may include processing circuitry, memory, communications circuitry, and/or input-output circuitry. In another example, in some embodiments, a “bot” may include one or more beacons, plugins, agents, or standalone apps. In some embodiments, other elements of the present disclosure may provide or supplement the functionality of particular circuitry, modules, utilities, or suites. The term “business element” refers to any data element included in a data set, such as user or customer information (e.g., name, address, age, social security number, preferences, etc.), account information (e.g., account number, age of account, account activity, etc.), a value (e.g., an account balance; a property value; an interest rate; a projected or future value; an average, median, or mean value; a standard deviation value; etc.), a matter requiring attention (MRA), protected health information (PHI), any other suitable data element, or any combination thereof. Having set forth a series of definitions called-upon throughout this application, an example system architecture is described below for implementing example embodiments and features of the present disclosure. System Architecture Methods, systems, apparatuses, and computer program products of the present disclosure may be embodied by any of a variety of devices. For example, the method, system, apparatus, and computer program product of an example embodiment may be embodied by a networked device, such as one or more servers, remote servers, cloud-based servers (e.g., cloud utilities), “bots,” or other network entities, configured to communicate with one or more devices, such as one or more server devices, user devices, data compliance bots, or a combination thereof. Example embodiments of the user devices include any of a variety of stationary or mobile computing devices, such as a portable digital assistant (PDA), mobile telephone, smartphone, laptop computer, tablet computer, a desktop computer, an electronic workstation, or any combination of the aforementioned devices. FIG.1illustrates a system diagram of a set of devices that may be involved in some example embodiments described herein. In this regard,FIG.1discloses an example environment100within which embodiments of the present disclosure may operate to govern, monitor, and, in some instances, enforce compliance of data sets. As illustrated, a data management system102may be connected to one or more data management system server devices104in communication with one or more data management system databases106. The data management system102may further be connected to one or more data compliance bots120. 
The data management system102may be connected to one or more server devices110A-110N (which may provide data sets, and possibly accompanying data regarding the data sets, to the data management system102for monitoring) and one or more user devices112A-112N (by which information about data sets can be retrieved or provided by users or other entities that utilize the data sets) through one or more communications networks108. In some embodiments, the data management system102may be configured to monitor and control electronic use of a data set provided by a server device110as described in further detail below. The data management system102may be embodied as one or more computers or computing systems as known in the art. In some embodiments, the data management system102may provide for receiving a data set from various sources, including but not necessarily limited to the server devices110A-110N, the user devices112A-112N, or both. The data set may comprise one or more business elements. The data management system102may further provide for generating one or more metadata attributes configured to govern electronic usage of some or all of the one or more business elements in the data set. The data management system102may further provide for generating one or more governed business elements, wherein each governed business element comprises the business element and the metadata attribute generated for that business element. In some embodiments, the data management system102may provide for storing the governed data set in various sources, including but not necessarily limited to the server devices110A-110N, the user devices112A-112N, or both. In some instances, the data management system102may provide for storing the governed data set by linking the business elements and generated metadata attributes together using, for example, a linked list, struct, or other data structure that demonstrates the existence of an expressly inserted connection between the metadata attributes and the business elements. In some embodiments, the data management system102may provide for monitoring electronic usage of the governed data set in a computing environment. The data management system102may provide for monitoring electronic usage of a plurality of governed data sets in a plurality of computing environments by deploying a plurality of data compliance bots (e.g., data compliance bots120), wherein each of the plurality of data compliance bots is configured to monitor electronic usage of a respective governed data set in a respective computing environment. The data management system102may provide for identifying, via a data compliance bot (e.g., one of one or more data compliance bots120), transmission of an electronic usage request from a user device. The electronic usage request may comprise a request for a user of the user device to electronically use the business element in the computing environment. The data management system102may further provide for identifying the metadata attribute based on the business element. The data management system102may further provide for, in response to identification of the transmission of the electronic usage request and identification of the metadata attribute, determining whether electronic use of the business element is allowed. In some embodiments, the data management system102may further provide for generating an electronic control signal based on the determination of whether electronic use of the business element is allowed.
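As with the compliance check sketched earlier, the linking of business elements to their generated metadata attributes and the adjudication of an electronic usage request into an electronic control signal can be illustrated in code. The following Python fragment is a hypothetical sketch only; the class names, rule fields, and control-signal format are assumptions made for illustration and are not limited to what is shown.

```python
# Hypothetical sketch: each business element is expressly linked to its generated
# metadata attribute, and a usage request is adjudicated into a control signal.
from dataclasses import dataclass

@dataclass
class MetadataAttribute:
    allowed_environments: set
    allowed_uses: set

@dataclass
class GovernedElement:
    element_name: str
    element_value: object
    metadata: MetadataAttribute        # expressly inserted link to the governing attribute

def build_governed_data_set(data_set: dict, rules: dict) -> list:
    """Pair each business element with its generated metadata attribute."""
    return [GovernedElement(name, value, rules[name]) for name, value in data_set.items()]

def adjudicate(governed: list, element_name: str, environment: str, use: str) -> dict:
    """Produce an electronic control signal for an electronic usage request."""
    for item in governed:
        if item.element_name == element_name:
            allowed = (environment in item.metadata.allowed_environments
                       and use in item.metadata.allowed_uses)
            return {"element": element_name, "allowed": allowed,
                    "signal": "authorize" if allowed else "alert"}
    return {"element": element_name, "allowed": False, "signal": "alert"}

rules = {"account_number": MetadataAttribute({"billing"}, {"invoice_generation"})}
governed = build_governed_data_set({"account_number": "0042"}, rules)
print(adjudicate(governed, "account_number", "marketing", "email_campaign"))  # alert
```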
In some embodiments, the electronic control signal may be an authorization signal, a transmission including a set of rules, or any other suitable electronic signal or data. In some embodiments, the electronic control signal may be configured to control an electronic use of the business element in the computing environment. In some embodiments, the electronic control signal may be interpreted or executed by a processor (e.g., processing circuitry202) on the user device to effect the governance of the user device's actions. In some embodiments, the electronic control signal may update a usage policy stored in a memory (e.g., memory204) of the user device. In some embodiments, the electronic control signal may include an authentication key enabling subsequent requests by the user device to be granted by the data management system. In some embodiments, the electronic control signal may comprise electronic notification content configured for display on a display device in communication with the user device. The electronic notification may comprise, for example, an alert (e.g., an audio alarm, a pop-up display screen overlay, an electronic message, an e-mail, a report, a log) including data regarding the non-compliance (e.g., the data regarding the non-compliance may include the business element used improperly, the systems involved, and any users involved). The data management system102may further provide for transmitting the electronic control signal to various devices, including but not necessarily limited to the server devices110A-110N, the user devices112A-112N, or both. In some embodiments, in response to identification of the transmission of the electronic usage request by a first user device112A and identification of the metadata attribute, the data management system102may further provide for generating an electronic reporting signal and transmitting the electronic reporting signal to a second user device112B. The one or more data management system server devices104may be embodied as one or more servers, remote servers, cloud-based servers (e.g., cloud utilities), processors, "bots," or any other suitable server devices, or any combination thereof. The one or more data management system server devices104receive, process, generate, and transmit data, signals, and electronic information to facilitate the operations of the data management system102. The one or more data management system databases106may be embodied as one or more data storage devices, such as a Network Attached Storage (NAS) device or devices, or as one or more separate databases or servers. The one or more data management system databases106include information accessed and stored by the data management system102to facilitate the operations of the data management system102. For example, the one or more data management system databases106may store user account credentials for users of one or more server devices110A-110N, one or more user devices112A-112N, or both. In another example, the one or more data management system databases106may store data regarding device characteristics of various server devices110A-110N, user devices112A-112N, or both. The one or more data compliance bots120may be embodied as one or more processors, circuitries, servers, remote servers, cloud-based servers (e.g., cloud utilities), or any other suitable bots, or any combination thereof. 
The one or more data compliance bots120may monitor, receive, process, generate, and transmit data, signals, and electronic information to facilitate the operations of the data management system102. In some embodiments, the one or more data compliance bots120may comprise one or more beacons, plugins, agents, or standalone apps that can monitor data entering or exiting its local environment. In some embodiments, the one or more data compliance bots120may be active participants in system operations that receive electronic usage requests from user devices and transmit adjudicated responses. In some embodiments, the one or more data compliance bots120may be passive observers (or enabling intermediaries that are not part of the system) capable of seeing that an electronic usage request has been generated although they are not the direct recipients of the request. The one or more data compliance bots120are shown inFIG.1as being elements of data management system102. In other embodiments (not shown inFIG.1for brevity), one or more of the one or more data compliance bots120may be elements of the one or more server devices110A-110N, the one or more user devices112A-112N, or a combination thereof. In some embodiments, the one or more data compliance bots120may be configured to crawl through various networks and computing environments using, in some instances, artificial intelligence. The one or more server devices110A-110N may be embodied by any computing device known in the art. In some embodiments, the one or more server devices110A-110N may be embodied as one or more data storage devices, such as one or more NAS devices, or as one or more separate databases or database servers. In some embodiments, the one or more server devices110A-110N may be embodied as one or more servers, remote servers, cloud-based servers (e.g., cloud utilities), processors, or any other suitable devices, or any combination thereof. In some embodiments, the one or more server devices110A-110N may receive, process, generate, and transmit data, signals, and electronic information to facilitate the operations of the data management system102. Information received by the data management system102from one or more server devices110A-110N may be provided in various forms and via various methods. It will be understood, however, that in some embodiments, the one or more server devices110A-110N need not themselves be databases or database servers, but may be peripheral devices communicatively coupled to databases or database servers. In some embodiments, the one or more server devices110A-110N may include or store various data and electronic information associated with one or more data sets. For example, the one or more server devices110A-110N may include or store one or more data sets or one or more links or pointers thereto. In another example, the one or more server devices110A-110N may include or store one or more governed data sets or one or more links or pointers thereto. The one or more user devices112A-112N may be embodied by any computing device known in the art. Information received by the data management system102from the one or more user devices112A-112N may be provided in various forms and via various methods. For example, the one or more user devices112A-112N may be laptop computers, smartphones, netbooks, tablet computers, wearable devices, desktop computers, electronic workstations, or the like, and the information may be provided through various modes of data transmission provided by these user devices. 
In embodiments where a user device112is a mobile device, such as a smartphone or tablet, the mobile device may execute an “app” (e.g., a thin-client application) to interact with the data management system102and/or one or more server devices110A-110N. Such apps are typically designed to execute on mobile devices, such as tablets or smartphones. For example, an app may be provided that executes on mobile device operating systems such as Apple Inc.'s iOS, Google LLC's Android®, or Microsoft Corporation's Windows®. These platforms typically provide frameworks that allow apps to communicate with one another and with particular hardware and software components of mobile devices. For example, the mobile operating systems named above each provide frameworks for interacting with location services circuitry, wired and wireless network interfaces, user contacts, and other applications in a manner that allows for improved interactions between apps while also preserving the privacy and security of individual users. In some embodiments, a mobile operating system may also provide for improved communication interfaces for interacting with external devices (e.g., server devices, user devices). Communication with hardware and software modules executing outside of the app is typically provided via application programming interfaces (APIs) provided by the mobile device operating system. Additionally or alternatively, the one or more server devices110A-110N, the one or more user devices112A-112N, or any combination thereof may interact with the data management system102over one or more communications networks108. As yet another example, the one or more server devices110A-110N and/or the one or more user devices112A-112N may include various hardware or firmware designed to interface with the data management system102. For example, an example server device110A may be a database server modified to communicate with the data management system102, and another example server device110B may be a purpose-built device offered for the primary purpose of communicating with the data management system102. As another example, an example user device112A may be a user's workstation and may have an application, such as a data compliance bot, stored thereon facilitating communication with the data management system102. Example Implementing Apparatus The data management system102described with reference toFIG.1may be embodied by one or more computing systems, such as apparatus200shown inFIG.2. As illustrated inFIG.2, the apparatus200may include processing circuitry202, memory204, input-output circuitry206, communications circuitry208, data governance circuitry210, data monitoring circuitry212, and data compliance circuitry214. The apparatus200may be configured to execute the operations described above with respect toFIG.1and below with respect toFIGS.3-7. Although some of these components202-212are described with respect to their functional capabilities, it should be understood that the particular implementations necessarily include the use of particular hardware to implement such functional capabilities. It should also be understood that certain of these components202-212may include similar or common hardware. For example, two sets of circuitry may both leverage use of the same processor, network interface, storage medium, or the like to perform their associated functions, such that duplicate hardware is not required for each set of circuitry. 
It should also be appreciated that, in some embodiments, one or more of these components202-212may include a separate processor, specially configured field programmable gate array (FPGA), application specific integrated circuit (ASIC), or cloud utility to perform the functions described herein. The use of the terms "circuitry" and "bot" as used herein with respect to components of the apparatus200therefore includes particular hardware configured to perform the functions associated with respective circuitry or bot described herein. Of course, while the terms "circuitry" and "bot" should be understood broadly to include hardware, in some embodiments, circuitry or bots may also include software for configuring the hardware. For example, in some embodiments, "circuitry" may include processing circuitry, storage media, network interfaces, input-output devices, and other components. In some embodiments, other elements of the apparatus200may provide or supplement the functionality of particular circuitry. For example, the processing circuitry202may provide processing functionality, memory204may provide storage functionality, and communications circuitry208may provide network interface functionality, among other features. In some embodiments, the processing circuitry202(and/or co-processor or any other processing circuitry assisting or otherwise associated with the processor) may be in communication with the memory204via a bus for passing information among components of the apparatus. The memory204may be non-transitory and may include, for example, one or more volatile and/or non-volatile memories. In other words, for example, the memory may be an electronic storage device (e.g., a computer readable storage medium). The memory204may be configured to store information, data, content, applications, instructions, or the like, for enabling the apparatus to carry out various functions in accordance with example embodiments of the present disclosure. For example, the memory204may be configured to store data and electronic information associated with one or more data sets and updates or revisions thereof. In some instances, the memory204may be configured to store one or more data sets or one or more links or pointers thereto. In some instances, the memory204may be configured to store one or more governed data sets comprising one or more governed business elements that each comprise a business element and one or more metadata attributes configured to govern electronic usage of the business element. It will be understood that the memory204may be configured to store any electronic information, data, metadata, business elements, metadata attributes, content, users, uses, applications, deployments, outcomes, embodiments, examples, figures, techniques, processes, operations, methods, systems, apparatuses, or computer program products described herein, or any combination thereof. The processing circuitry202may be embodied in a number of different ways and may, for example, include one or more processing devices configured to perform independently. Additionally or alternatively, the processing circuitry202may include one or more processors configured in tandem via a bus to enable independent execution of instructions, pipelining, and/or multithreading. The use of the term "processing circuitry" may be understood to include a single core processor, a multi-core processor, multiple processors internal to the apparatus, and/or remote or "cloud" processors. 
In an example embodiment, the processing circuitry202may be configured to execute instructions stored in the memory204or otherwise accessible to the processor. Alternatively or additionally, the processor may be configured to execute hard-coded functionality. As such, whether configured by hardware or software methods, or by a combination of hardware with software, the processor may represent an entity (e.g., physically embodied in circuitry) capable of performing operations according to an embodiment of the present disclosure while configured accordingly. As another example, when the processor is embodied as an executor of software instructions, the instructions may specifically configure the processor to perform the algorithms and/or operations described herein when the instructions are executed. In some embodiments, the apparatus200may include input-output circuitry206that may, in turn, be in communication with processing circuitry202to provide output to the user and, in some embodiments, to receive an indication of a user input such as an electronic usage request provided by a user. The input-output circuitry206may comprise a user interface and may include a display that may include a web user interface, a mobile application, a client device, or any other suitable hardware or software. In some embodiments, the input-output circuitry206may also include a keyboard, a mouse, a joystick, a touch screen, touch areas, soft keys, a microphone, a speaker, or other input-output mechanisms. The processing circuitry202and/or input-output circuitry206(which may utilize the processing circuitry202) may be configured to control one or more functions of one or more user interface elements through computer program instructions (e.g., software, firmware) stored on a memory (e.g., memory204). Input-output circuitry206is optional and, in some embodiments, the apparatus200may not include input-output circuitry. For example, where the apparatus200does not interact directly with the user, the apparatus200may generate electronic notification content, electronic reporting content, or both for display by one or more other devices with which one or more users directly interact and transmit the generated content to one or more of those devices. The communications circuitry208may be any device or circuitry embodied in either hardware or a combination of hardware and software that is configured to receive and/or transmit data from or to a network and/or any other device, circuitry, or module in communication with the apparatus200. In this regard, the communications circuitry208may include, for example, a network interface for enabling communications with a wired or wireless communication network. For example, the communications circuitry208may include one or more network interface cards, antennae, buses, switches, routers, modems, and supporting hardware and/or software, or any other device suitable for enabling communications via a network. In some embodiments, the communication interface may include the circuitry for interacting with the antenna(s) to cause transmission of signals via the antenna(s) or to handle receipt of signals received via the antenna(s). These signals may be transmitted by the apparatus200using any of a number of wireless personal area network (PAN) technologies, such as Bluetooth® v1.0 through v3.0, Bluetooth Low Energy (BLE), infrared wireless (e.g., IrDA), ultra-wideband (UWB), induction wireless transmission, or any other suitable technologies. 
In addition, it should be understood that these signals may be transmitted using Wi-Fi, Near Field Communications (NFC), Worldwide Interoperability for Microwave Access (WiMAX) or other proximity-based communications protocols. The data governance circuitry210includes hardware components designed or configured to receive a data set comprising one or more business elements. These hardware components may, for instance, utilize processing circuitry202to perform various computing operations and may utilize memory204for storage of data sets and/or other data received or generated by the data governance circuitry210. The hardware components may further utilize communications circuitry208or any suitable wired or wireless communications path to communicate with a server device (e.g., one or more of server devices110A-110N), a user device (e.g., one or more of user devices112A-112N), data monitoring circuitry212, data compliance circuitry214, or any other suitable circuitry or device. For example, the data governance circuitry210may be in communication with one or more server devices (e.g., one or more server devices110A-110N), and thus configured to receive the data set from the one or more server devices. In some embodiments, the data governance circuitry210may be configured to receive the data set from memory204. In some embodiments, the data governance circuitry210may include hardware components designed or configured to generate one or more metadata attributes configured to govern electronic usage of some or all of the one or more business elements in the data set. For example, the data governance circuitry210may generate one or more metadata attributes configured to govern electronic usage of business elements having permissions metadata (e.g., restricted access, confidential). Other circuitry (e.g., data monitoring circuitry212) may automatically identify attempts to combine business elements having permissions metadata with other business elements, and other circuitry (e.g., data compliance circuitry214) may determine, based on the metadata attributes, that the identified attempts to combine business elements having permissions metadata with other business elements are inappropriate uses of the business elements and disallow those attempts. In another example, the data governance circuitry210may generate one or more metadata attributes configured to govern electronic usage of business elements having personally identifiable information (PII) data (e.g., social security number, birth place, race, religious beliefs, any other data that might uniquely identify someone in a data set). When queries are run that contain PII data, other circuitry (e.g., data monitoring circuitry212) may automatically identify and apply flags to those queries, and other circuitry (e.g., data compliance circuitry214) may determine, based on the metadata attributes, that the identified queries are inappropriate uses of the business elements and disallow those queries. For instance, data monitoring circuitry212may search for ten digit numbers in a data set and, when a ten digit number is identified, apply a flag to that ten digit number. Similarly, data monitoring circuitry212may search for nine digit numbers in the data set and, when a nine digit number is identified, apply a flag to that nine digit number. Subsequently, data compliance circuitry214may evaluate the flagged ten digit numbers and nine digit numbers to ensure that no phone numbers or social security numbers are included in the data set. 
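As a concrete, non-limiting illustration of the ten digit and nine digit scans just described, the following Python sketch flags candidate phone numbers and social security numbers for later compliance review. The regular expressions and function names are illustrative assumptions, not a required implementation.

```python
import re
from typing import Dict, List

# Hypothetical patterns for the ten digit and nine digit scans described above.
TEN_DIGIT = re.compile(r"\b\d{10}\b")   # possible phone numbers
NINE_DIGIT = re.compile(r"\b\d{9}\b")   # possible social security numbers


def flag_possible_pii(records: List[str]) -> Dict[str, List[str]]:
    # Monitoring step: flag candidate PII values for later compliance review.
    flags: Dict[str, List[str]] = {"ten_digit": [], "nine_digit": []}
    for record in records:
        flags["ten_digit"].extend(TEN_DIGIT.findall(record))
        flags["nine_digit"].extend(NINE_DIGIT.findall(record))
    return flags


def review_flags(flags: Dict[str, List[str]]) -> bool:
    # Compliance step: disallow the data set if any candidate PII was flagged.
    return not (flags["ten_digit"] or flags["nine_digit"])


if __name__ == "__main__":
    sample = ["call 5551234567 about account 42", "no identifiers here"]
    flagged = flag_possible_pii(sample)
    print(flagged, "allowed:", review_flags(flagged))
```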
In some embodiments, the data governance circuitry210may use usage pattern recognition to identify inappropriate uses of business elements (e.g., in one or more computing environments, by one or more user devices, by one or more users, or a combination thereof) and generate one or more metadata attributes based on the identification of the inappropriate uses of the business elements. In some embodiments, the data governance circuitry210may include hardware components designed or configured to generate one or more governed business elements, wherein each governed business element comprises both the business element itself and the metadata attribute generated for that business element. In some embodiments, the data governance circuitry210may include hardware components designed or configured to store the governed data set in various sources, including but not necessarily limited to the server devices110A-110N, the user devices112A-112N, or both. In some instances, the data management system102may provide for storing the governed data set by linking the business elements and generated metadata attributes together using, for example, a linked list, struct, or other data structure that demonstrates the existence of an expressly inserted connection between the metadata attributes and the business elements. For example, the data governance circuitry210may be in communication with one or more server devices (e.g., one or more server devices110A-110N), and thus configured to store the governed data set in the one or more server devices. In some embodiments, the data governance circuitry210may be configured to store the governed data set in memory204. The data monitoring circuitry212includes hardware components designed or configured to monitor electronic usage of the governed data set in a computing environment. These hardware components may, for instance, utilize processing circuitry202to perform various computing operations and may utilize memory204for storage of data sets and/or other data received or generated by the data monitoring circuitry212. The hardware components may further utilize communications circuitry208or any suitable wired or wireless communications path to communicate with a server device (e.g., one or more of server devices110A-110N), a user device (e.g., one or more of user devices112A-112N), data governance circuitry210, data compliance circuitry214, or any other suitable circuitry or device. In some embodiments, the data monitoring circuitry212may include hardware components designed or configured to monitor electronic usage of a plurality of governed data sets in a plurality of computing environments by deploying a plurality of data compliance bots (e.g., data compliance bots120), wherein each of the plurality of data compliance bots is configured to monitor electronic usage of a respective governed data set in a respective computing environment. For example, the data monitoring circuitry212may deploy a plurality of data compliance bots by installing and activating a respective data compliance bot in each of a plurality of server devices or user devices where a governed data set is configured or expected to be used or accessed by a user in a computing environment (e.g., local environment). In some embodiments, the data monitoring circuitry212may include hardware components designed or configured to identify, via a data compliance bot, transmission of an electronic usage request from a user device. 
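One minimal, hypothetical way to deploy a data compliance bot per monitored computing environment is sketched below in Python; the DataComplianceBot class and deploy_bots helper are illustrative names only. Each bot records the electronic usage requests it observes in its own environment and forwards them to a central handler, which could be the data management system.

```python
from typing import Callable, Dict, List


class DataComplianceBot:
    """Hypothetical bot that observes electronic usage requests within one computing environment."""

    def __init__(self, environment: str, on_request: Callable[[dict], None]):
        self.environment = environment
        self.on_request = on_request
        self.observed: List[dict] = []

    def observe(self, usage_request: dict) -> None:
        # Record the request seen in this environment and report it upstream.
        request = {**usage_request, "environment": self.environment}
        self.observed.append(request)
        self.on_request(request)


def deploy_bots(environments: List[str],
                on_request: Callable[[dict], None]) -> Dict[str, DataComplianceBot]:
    # Install and activate one bot for each monitored computing environment.
    return {env: DataComplianceBot(env, on_request) for env in environments}


bots = deploy_bots(["analytic", "reporting"], on_request=print)
bots["analytic"].observe({"user": "user_a", "business_element": "account_balance", "use": "print"})
```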
The electronic usage request may comprise a request for a user of the user device to electronically use the business element in the computing environment. In some embodiments, the data monitoring circuitry212may include hardware components designed or configured to identify the metadata attribute based on the business element. The data compliance circuitry214includes hardware components designed or configured to, in response to identification of the transmission of the electronic usage request and identification of the metadata attribute, determine whether electronic use of the business element is allowed. These hardware components may, for instance, utilize processing circuitry202to perform various computing operations and may utilize memory204for storage of data sets and/or other data received or generated by the data compliance circuitry214. The hardware components may further utilize communications circuitry208or any suitable wired or wireless communications path to communicate with a server device (e.g., one or more of server devices110A-110N), a user device (e.g., one or more of user devices112A-112N), data governance circuitry210, data monitoring circuitry212, or any other suitable circuitry or device. In some embodiments, the data compliance circuitry214may include hardware components designed or configured to generate an electronic control signal based on the determination of whether electronic use of the business element is allowed. The electronic control signal may be configured to control an electronic use of the business element in the computing environment. For example, the data compliance circuitry214may determine, based on the metadata attribute, that electronic use of the business element is allowed in the computing environment by the user of the user device. In another example, the data compliance circuitry214may determine, based on the metadata attribute, that electronic use of the business element is disallowed in the computing environment, by the user device, the user of the user device, or a combination thereof. In some embodiments, the electronic control signal may comprise electronic notification content configured for display on a display device in communication with the user device. In some embodiments, the data compliance circuitry214may include hardware components designed or configured to, in response to identification of the transmission of the electronic usage request, determine, based on the metadata attribute, that electronic use of the business element is allowed, and generate an electronic control signal configured to allow the user of the user device to electronically use the business element in the computing environment. For example, in response to identification of an electronic usage request indicative of a request for a high-level user (e.g., an administrator) to print the business element, the data compliance circuitry214may determine that printing of the business element is allowed for the high-level user and generate an electronic control signal configured to allow printing of the business element in the computing environment (e.g., local environment) by the high-level user. 
In another example, in response to identification of an electronic usage request indicative of a request for an executive-level user (e.g., a senior vice president) to generate a value (e.g., an average value) based on the business element, the data compliance circuitry214may determine that generating a value based on the business element is allowed for the executive-level user and generate an electronic control signal configured to allow generation of the value in the computing environment by the executive-level user. In yet another example, in response to identification of an electronic usage request indicative of a request for an executive-level user (e.g., a senior vice president) to generate a value (e.g., an average value) based on the business element in an analytic computing environment, the data compliance circuitry214may determine that generating a value based on the business element is allowed in the analytic computing environment and generate an electronic control signal configured to allow generation of the value in the analytic computing environment by the user of the user device. In some embodiments, the data compliance circuitry214may include hardware components designed or configured to, in response to identification of the transmission of the electronic usage request, determine, based on the metadata attribute, that the electronic use of the business element is disallowed, and generate an electronic control signal configured to disallow the user of the user device to electronically use the business element in the computing environment. For example, in response to identification of an electronic usage request indicative of a request for a low-level user (e.g., a customer service agent) to print the business element, the data compliance circuitry214may determine that printing of the business element is disallowed for the low-level user and generate an electronic control signal configured to disallow printing of the business element in the computing environment by the low-level user. In another example, in response to identification of an electronic usage request indicative of a request for a publicly accessible user device (e.g., a public computer located at a school or library, a common computer located at an office or bank and commonly used by multiple employees) to generate a value (e.g., an average value) based on the business element, the data compliance circuitry214may determine that generating a value based on the business element is disallowed for the publicly accessible user device and generate an electronic control signal configured to disallow generation of the value in the computing environment by the user of the publicly accessible user device. In some embodiments, the data compliance circuitry214may include hardware components designed or configured to transmit the electronic control signal to various devices, including but not necessarily limited to a server device (e.g., one or more server devices110A-110N), a user device (e.g., one or more user devices112A-112N), a data compliance bot (e.g., one or more data compliance bots120), or any other suitable device or combination thereof. In some embodiments, the data compliance circuitry214may be configured to transmit a generated electronic control signal comprising electronic notification content to the input-output circuitry206, and the input-output circuitry206may be configured to receive the electronic control signal and display the electronic notification content on one or more display screens. 
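A minimal Python sketch of this allow/disallow determination and the resulting electronic control signal is shown below. The policy representation (a mapping from computing environment to permitted use and role pairs) and all identifiers are assumptions made for illustration; an actual metadata attribute could encode its rules differently.

```python
from typing import Dict, Set, Tuple

# Hypothetical metadata attribute: for each computing environment, the (use, role) pairs permitted.
MetadataAttribute = Dict[str, Set[Tuple[str, str]]]

example_attribute: MetadataAttribute = {
    "local": {("print", "administrator"), ("view", "administrator"), ("view", "customer_service_agent")},
    "analytic": {("generate_value", "senior_vice_president")},
}


def is_use_allowed(attribute: MetadataAttribute, environment: str, use: str, role: str) -> bool:
    # Compliance determination reduced to a set-membership test.
    return (use, role) in attribute.get(environment, set())


def generate_control_signal(attribute: MetadataAttribute, environment: str, use: str, role: str) -> dict:
    # Build an electronic control signal allowing or disallowing the requested use.
    allowed = is_use_allowed(attribute, environment, use, role)
    signal = {"allow": allowed}
    if not allowed:
        signal["notification"] = (
            f"You are not authorized to {use} this business element in this computing environment"
        )
    return signal


print(generate_control_signal(example_attribute, "local", "print", "customer_service_agent"))
```

In this sketch a disallowed request also yields notification content of the kind shown in the example display screens described below.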
In some embodiments, the data compliance circuitry214may include hardware components designed or configured to, in response to identification of the transmission of the electronic usage request by a first user device (e.g., user device112A) and identification of the metadata attribute, generate an electronic reporting signal and transmit the electronic reporting signal to a second user device (e.g., user device112B). In some embodiments, the data compliance circuitry214may be configured to transmit a generated electronic reporting signal to the input-output circuitry206, and the input-output circuitry206may be configured to receive the electronic reporting signal and generate a display comprising one or more portions of the electronic reporting signal on one or more display screens. In some embodiments, one or more of the data governance circuitry210, data monitoring circuitry212, and data compliance circuitry214may be hosted locally by the apparatus200. In some embodiments, one or more of the data governance circuitry210, data monitoring circuitry212, and data compliance circuitry214may be hosted remotely (e.g., by one or more cloud servers) and thus need not physically reside on the apparatus200. Thus, some or all of the functionality described herein may be provided by a third party circuitry. For example, the apparatus200may access one or more third party circuitries via any sort of networked connection that facilitates transmission of data and electronic information between the apparatus200and the third party circuitries. In turn, the apparatus200may be in remote communication with one or more of the data governance circuitry210, data monitoring circuitry212, and data compliance circuitry214. In another example, the data governance circuitry210may be deployed as a first cloud utility, the data monitoring circuitry212may be deployed as a second cloud utility, and the data compliance circuitry214may be deployed as a third cloud utility. In some embodiments, one or more of the data governance circuitry210, data monitoring circuitry212, and data compliance circuitry214may be deployed as part of a data compliance bot. In some embodiments, the apparatus200may be partially or wholly implemented as a data compliance bot, a server device, or a combination thereof. For example, a data compliance bot may comprise the data monitoring circuitry212and the data compliance circuitry214. In another example, a server device may comprise the data monitoring circuitry212and the data compliance circuitry214. In yet another example, the data compliance bot may comprise the data monitoring circuitry212, and a server device may comprise the data compliance circuitry214. As will be appreciated, any such computer program instructions and/or other type of code may be loaded onto a computer, processor or other programmable apparatus's circuitry to produce a machine, such that the computer, processor, or other programmable circuitry that executes the code on the machine creates the means for implementing various functions, including those described herein. As described above and as will be appreciated based on this disclosure, embodiments of the present disclosure may be configured as systems, apparatuses, methods, bots, mobile devices, backend network devices, computer program products, other suitable devices, and combinations thereof. Accordingly, embodiments may comprise various means including entirely of hardware or any combination of software with hardware. 
Furthermore, embodiments may take the form of a computer program product on at least one non-transitory computer-readable storage medium having computer-readable program instructions (e.g., computer software) embodied in the storage medium. Any suitable computer-readable storage medium may be utilized including non-transitory hard disks, CD-ROMs, flash memory, optical storage devices, or magnetic storage devices. The server devices110A-110N and user devices112A-112N may be embodied by one or more computing devices or systems that also may include processing circuitry, memory, input-output circuitry, and communications circuitry. For example, a server device110may be a database server on which computer code (e.g., C, C++, C#, Java, a structured query language (SQL), a data query language (DQL), a data definition language (DDL), a data control language (DCL), a data manipulation language (DML)) is running or otherwise being executed by processing circuitry. In another example, a user device112may be a smartphone on which an app (e.g., a mobile database app) is running or otherwise being executed by processing circuitry. As it relates to operations described in the present disclosure, the functioning of these devices may utilize components similar to the similarly named components described above with respect toFIG.2. Additional description of the mechanics of these components is omitted for the sake of brevity. These device elements, operating together, provide the respective computing systems with the functionality necessary to facilitate the communication of data (e.g., electronic marketing information, business analytic data, or the like) with the data management system described herein. FIG.3illustrates example electronic information300comprising a governed data set302. The governed data set302may comprise one or more governed business elements304A-304N. Each of the one or more governed business elements304A-304N may respectively comprise a business element306A-306N and one or more metadata attributes308A-308N. Each of the one or more business elements306A-306N may comprise, for example, a data element such as user or customer information (e.g., name, address, age, social security number, preferences, etc.), account information (e.g., account number, age of account, account activity, etc.), a value (e.g., an account balance; a property value; an interest rate; a projected or future value; an average, median, or mean value; a standard deviation value; etc.), a matter requiring attention (MRA), protected health information (PHI), any other suitable data element, or any combination thereof. Each of the one or more metadata attributes308A-308N may be respectively configured to govern electronic usage of the business element306A-306N. The governed data set302may additionally comprise one or more business elements310(i.e., "non-governed" business elements), and metadata312(e.g., pointers, linked lists, structs, data structures, identification data indicative of an identity of governed data set302). In some embodiments, each of the one or more metadata attributes308A-308N may respectively indicate that electronic use of each of the business elements306A-306N is allowed or disallowed in each of a plurality of computing environments, by each of a plurality of user devices, by each of a plurality of users, or any combination thereof. 
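To make this arrangement concrete, the following Python sketch gives one hypothetical, dictionary-based rendering of a governed data set, together with a simple index from each governed business element to its governing metadata attributes, one possible basis for the metadata attribute lookup described later in this disclosure. The element names and attribute values are illustrative only.

```python
# A hypothetical, dictionary-based rendering of a governed data set: two governed
# business elements (each a business element linked to its metadata attributes),
# one non-governed business element, and set-level metadata.
governed_data_set = {
    "governed_business_elements": [
        {
            "business_element": {"name": "account_balance", "value": 1234.56},
            "metadata_attributes": {"view": ["*"], "print": ["administrator"]},
        },
        {
            "business_element": {"name": "customer_ssn", "value": "XXX-XX-XXXX"},
            "metadata_attributes": {"view": ["data_steward"], "print": []},
        },
    ],
    "business_elements": [{"name": "branch_hours", "value": "9-5"}],  # non-governed
    "metadata": {"data_set_id": "X", "linkage": "expressly inserted"},
}

# A simple lookup table mapping each governed business element to its metadata
# attributes, one possible way to identify the attribute from the element.
attribute_index = {
    governed["business_element"]["name"]: governed["metadata_attributes"]
    for governed in governed_data_set["governed_business_elements"]
}

print(attribute_index["customer_ssn"])  # {'view': ['data_steward'], 'print': []}
```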
For example, the one or more metadata attributes308A may indicate that a first electronic use (e.g., viewing) of the business element306A is allowed in some or all computing environments, by some or all user devices, by some or all users, or a combination thereof. The one or more metadata attributes308A may further indicate, for example, that a second electronic use (e.g., printing) of the business element306A is disallowed in some or all computing environments, by some or all user devices, by some or all users, or a combination thereof. In one illustrative example, the one or more metadata attributes308A may indicate that printing of the business element306A is allowed for a high-level user (e.g., an administrator) and disallowed for a low-level user (e.g., a customer service agent). In another illustrative example, the one or more metadata attributes308B may indicate that generating a value (e.g., an average value) based on the business element306B is allowed for an executive-level user (e.g., a senior vice president) and disallowed for any user of any publicly accessible user device (e.g., a public computer located at a school or library, a common computer located at an office or bank and commonly used by multiple employees). In another illustrative example, the one or more metadata attributes308C (not shown) may indicate that sharing the business element306C (not shown) (e.g., comprising a matter requiring attention (MRA) or protected health information (PHI)) with a contractor or vendor is disallowed for all user devices and all users. The one or more metadata attributes308C may further indicate, for example, that transmitting the business element306C via e-mail is allowed for an executive-level user and disallowed for all other users. The one or more metadata attributes308C may further indicate, for example, that initiating a screen sharing application or program on a user device presently displaying or having access to the business element306C is disallowed for all user devices and all users. The one or more metadata attributes308C may further indicate, for example, that taking a screenshot (e.g., a digital image) of a display screen presently displaying or having access to the business element306C is allowed for a data steward and disallowed for all other users. The one or more metadata attributes308C may further indicate, for example, that using the business element306C is allowed in an analytic computing environment and downloading the business element306C is disallowed in the analytic computing environment. FIG.4illustrates an example user interface display screen400in accordance with some example embodiments described herein. In some embodiments, generated electronic notification content may be configured to be displayed by a display device in display screen400. As shown inFIG.4, display screen400may comprise a header402for displaying an Internet Protocol (IP) address, a title, a computing environment name (e.g., “Data Set Revision Environment—Monitored”), any other suitable information, or any combination thereof. As further shown inFIG.4, display screen400may comprise electronic notification content404(e.g., “You are not authorized to print this business element in this computing environment”; “You are not using this business element properly”; “WARNING: Possible disclosure of MRA to external contractor”; “WARNING: There may be legal implications arising from the requested use of this business element”). 
In some embodiments, electronic notification content404may be configured to be displayed by a display device as a display screen overlay. The display screen400may further comprise a button406(e.g., “OK”) configured to close the electronic notification content404, or otherwise alter the display screen400, when clicked or selected by a user. FIG.5illustrates an example user interface display screen500in accordance with some example embodiments described herein. In some embodiments, generated electronic notification content may be configured to be displayed by a display device in display screen500. As shown inFIG.5, display screen500may comprise a header502for displaying an Internet Protocol (IP) address, a title, a computing environment name (e.g., “Data Set Viewing Environment—Monitored”), any other suitable information, or any combination thereof. As further shown inFIG.5, display screen500may comprise electronic notification content504(e.g., “Warning: This computing environment typically is not used to generate average values. Would you still like to generate an average value based on this business element?”). In some embodiments, electronic notification content504may be configured to be displayed by a display device as a display screen overlay. The display screen500may further comprise a button506(e.g., “Yes”) configured to allow the electronic use requested by the user when clicked or selected by the user. The display screen500may further comprise a button508(e.g., “No”) configured to disallow the electronic use requested by the user when clicked or selected by the user. FIG.6illustrates an example user interface display screen600in accordance with some example embodiments described herein. In some embodiments, one or more portions of electronic notification content, a generated electronic reporting signal, or both may be configured to be displayed by a display device in display screen600. As shown inFIG.6, display screen600may comprise a header602for displaying an Internet Protocol (IP) address, a title, a computing environment name (e.g., “Data Set Administrator Environment”), any other suitable information, or any combination thereof. As further shown inFIG.6, display screen600may comprise electronic notification content604(e.g., “User A requested to use Business Element 1 of Dataset X in Computing Environment Y. The Data Compliance Bot monitoring this computing environment disallowed the requested use based on Metadata Attribute M.”). In some embodiments, electronic notification content604may be configured to be displayed by a display device as a display screen overlay. In some embodiments, electronic notification content604may comprise one or more selectable portions configured to provide additional information when clicked or selected by a user. For example, electronic notification content604may comprise the selectable text “User A” configured to provide, when clicked or selected by the second user (e.g., an administrator or data steward), a pop up display screen comprising identification data, access levels, and/or activity logs for the user associated with the electronic usage request (i.e., the first user that requested to electronically use the governed business element). In another example, electronic notification content604may comprise the selectable text “use” configured to provide, when clicked or selected by the second user, a pop up display screen comprising information indicative of the use associated with the electronic usage request. 
In another example, electronic notification content604may comprise the selectable text “Dataset X” configured to provide, when clicked or selected by the second user, a pop up display screen comprising identification data, access levels, and/or activity logs for the governed data set comprising the governed business element associated with the electronic usage request. In another example, electronic notification content604may comprise the selectable text “Computing Environment Y” configured to provide, when clicked or selected by the second user, a pop up display screen comprising identification data, access levels, and/or activity logs for the computing environment associated with the electronic usage request. The display screen600may further comprise a button606(e.g., “Allow”) configured to allow the electronic use requested by the user of the first user device when clicked or selected by the user (e.g., an administrator or data steward) of the second user device. The display screen600may further comprise a button608(e.g., “Deny”) configured to disallow the electronic use requested by the user of the first user device when clicked or selected by the user of the second user device. The display screen600may further comprise a button610(e.g., “Forward”) configured to transmit the electronic reporting signal to a third user device (e.g., a user device used by a higher level administrator or data steward). There are many advantages provided by the display screens described herein with reference toFIGS.4-6, such as: facilitating determination of whether data is used legally and/or correctly; facilitating identification of risks arising from improvident uses of data; improving data quality; and educating users on the proper use of data. Having described specific components of example devices and display screens involved in various embodiments contemplated herein, example procedures for managing data usage are described below in connection withFIG.7. Example Operations for Managing Data Usage Turning toFIG.7, an example flowchart700is illustrated that contains example operations for managing electronic usage of a governed data set according to an example embodiment. The operations illustrated inFIG.7may, for example, be performed by one or more components described with reference to data management system102shown inFIG.1, by a server device110or by a user device112in communication with data management system102. In any case, the respective devices may be embodied by an apparatus200, as shown inFIG.2, by a data compliance bot218in communication with apparatus200, or by any combination thereof. In some embodiments, the various operations described in connection withFIG.7may be performed by the apparatus200by or through the use of one or more of processing circuitry202, memory204, input-output circuitry206, communications circuitry208, data governance circuitry210, data monitoring circuitry212, data compliance circuitry214, any other suitable circuitry, and any combination thereof. As shown by operation702, the apparatus200includes means, such as data monitoring circuitry212described with reference toFIG.2or the like, for monitoring electronic usage of a governed data set in a computing environment. 
The governed data set (e.g., governed data set302described with reference toFIG.3) may comprise a governed business element (e.g., governed business element304A), and the governed business element may comprise a business element (e.g., business element306A) and a metadata attribute (e.g., one or more metadata attributes308A) configured to govern electronic usage of the business element. For example, the apparatus200may actively monitor electronic usage of the governed data set in the computing environment by receiving electronic usage requests from user devices and transmitting adjudicated responses (e.g., electronic control signals as described below with reference to optional operations710and712). In another example, the apparatus200may passively monitor electronic usage of the governed data set in the computing environment by seeing that an electronic usage request has been generated, although it is not the direct recipient of the electronic usage requests, and transmitting electronic reporting signals to other user devices (e.g., user devices used by administrators or data stewards). In some embodiments, the apparatus200may actively or passively monitor a governed data set in a tool (e.g., a tool used to house business elements for risk and regulatory reporting) by tracking from where the business elements are coming and to where the business elements are going. For example, if a business element is being sourced from an application that has a poor data quality rating, or that has not been certified for the ultimate usage of the business element, then the apparatus200may disallow the electronic usage of that business element for the intended report or system. As shown by operation704, the apparatus200includes means, such as the data monitoring circuitry212or the like, for identifying, via a data compliance bot (e.g., one of one or more data compliance bots120described with reference toFIG.1), transmission of an electronic usage request from a user device. The electronic usage request may comprise a request for a user of the user device to electronically use the business element in the computing environment. For example, the apparatus200may identify, via a data compliance bot, transmission of an electronic usage request indicative of a request for a user of a user device to print the business element in the computing environment. In another example, the apparatus200may identify, via a data compliance bot, transmission of an electronic usage request indicative of a request for a user of a user device to generate a value in the computing environment based on the business element. In some embodiments, the apparatus200may itself comprise the data compliance bot, while in other embodiments, the apparatus200and the data compliance bot are distinct devices (in which case the data compliance bot may be a stand-alone device, a component of a third party device, or an agent or plugin hosted by the user device itself). As shown by operation706, the apparatus200includes means, such as the data monitoring circuitry212or the like, for identifying the metadata attribute based on the business element. For example, the apparatus200may identify one or more metadata attributes308A based on a request to electronically use business element306A. 
In some embodiments, identification of the metadata attribute may occur via reference to a lookup table or other data structure storing a data set that includes the business element and the corresponding metadata attribute, or that stores a mapping of the business element to its corresponding metadata attribute. In other embodiments, the business element itself may contain a pointer to a relevant data storage location at which the metadata element can be found. As shown by operation708, the apparatus200includes means, such as data compliance circuitry214described with reference toFIG.2or the like, for determining, in response to identification of the transmission of the electronic usage request and identification of the metadata attribute, whether electronic use of the business element is allowed. In some embodiments, the data compliance circuitry214at operation708may determine, based on the metadata attribute, that electronic use of the business element is allowed in the computing environment by the user of the user device. For example, in response to identification of an electronic usage request indicative of a request for a high-level user (e.g., an administrator) to print the business element, the data compliance circuitry214may determine that printing of the business element is allowed for the high-level user. In another example, in response to identification of an electronic usage request indicative of a request for an executive-level user (e.g., a senior vice president) to generate a value (e.g., an average value) based on the business element, the data compliance circuitry214may determine that generating a value based on the business element is allowed for the executive-level user. In some embodiments, the data compliance circuitry214at operation708may determine, based on the metadata attribute, that electronic use of the business element is disallowed in the computing environment, by the user device, the user of the user device, or a combination thereof. For example, in response to identification of an electronic usage request indicative of a request for a low-level user (e.g., a customer service agent) to print the business element, the data compliance circuitry214may determine that printing of the business element is disallowed for the low-level user. In another example, in response to identification of an electronic usage request indicative of a request for a publicly accessible user device (e.g., a public computer located at a school or library, a common computer located at an office or bank and commonly used by multiple employees) to generate a value (e.g., an average value) based on the business element, the data compliance circuitry214may determine that generating a value based on the business element is disallowed for the publicly available user device. In another example, in response to identification of an electronic usage request indicative of a request to share a governed data set (e.g., comprising governed business elements such as matters requiring attention (MRAs) or protected health information (PHI)) with a contractor or vendor, the data compliance circuitry214may determine that sharing the governed data set with the contractor or vendor is disallowed. In another example, in response to identification of an electronic usage request indicative of a request to transmit a governed business element via e-mail, the data compliance circuitry214may determine that transmitting the governed business element via e-mail is disallowed. 
In another example, in response to identification of an electronic usage request indicative of a request to initiate a screen sharing application or program on a user device presently displaying or having access to a governed business element, the data compliance circuitry214may determine that initiating the screen sharing application is disallowed. In another example, in response to identification of an electronic usage request indicative of a request to take a screenshot (e.g., a digital image) of a display screen presently displaying or having access to a governed business element, the data compliance circuitry214may determine that taking the screenshot is disallowed. Optionally, as shown by operation710, the apparatus200may include means, such as the data compliance circuitry or the like, for generating an electronic control signal based on the determination of whether electronic use of the business element is allowed. The electronic control signal may be configured to control an electronic use of the business element in the computing environment. In some embodiments, in response to a determination that electronic use of the business element is allowed, the data compliance circuitry214may generate an electronic control signal configured to allow the user of the user device to electronically use the business element in the computing environment. In some embodiments, in response to a determination that electronic use of the business element is disallowed, the data compliance circuitry214may generate an electronic control signal configured to disallow the user of the user device to electronically use the business element in the computing environment. In one example, the electronic usage request may be indicative of a request to print the business element, and the electronic control signal may be configured to disallow printing of the business element in the computing environment by the user of the user device. In another example, the electronic usage request may be indicative of a request to generate a value based on the business element, and the electronic control signal may be configured to disallow generation of the value in the computing environment by the user of the user device. In some embodiments, the electronic control signal may comprise electronic notification content configured for display on a display device in communication with the user device, or with one or more other user devices (e.g., user devices used by administrators or data stewards). The electronic notification content may comprise any suitable content, such as one or more portions of display screen400, display screen500, or display screen600respectively described with reference toFIGS.4-6. Optionally, as shown by operation712, the apparatus200may include means, such as the data compliance circuitry or the like, for transmitting the electronic control signal. For example, the data compliance circuitry may transmit the electronic control signal to the user device, to a server device, or to a data compliance bot to control an electronic use of the business element in the computing environment. In some embodiments in which the user directly interacts with the apparatus200and wherein the electronic control signal comprises electronic notification content configured for display on a display device in communication with the user device, the data compliance circuitry may further produce a graphic, audio, or multimedia output of the electronic control signal via input-output circuitry206. 
In other embodiments in which the user does not directly interact with the apparatus200(e.g., the apparatus200comprises a data management system102, but the user interacts with a server device110or a user device112that is in communication with the data management system102), the data compliance circuitry may utilize means, such as communications circuitry, for transmitting the electronic control signal. For example, the data compliance circuitry may transmit the electronic control signal to a server device110or a user device112for graphic, audio, or multimedia output via input-output circuitry of the server device110or the user device112. In some embodiments,FIG.7provides a reliable process for determining whether data is being used legally and/or correctly (e.g., in accordance with pre-determined metadata attributes governing the use of particular business elements) and identifying potential risks arising from the use of that data in various computing environments by various users and user devices. The flowchart operations generally provide for, in some embodiments: adding metadata attributes to data sets that outline the allowable use of the corresponding business elements; using metadata attributes to store rules governing use of data elements; using a distributed set of data compliance bots throughout a system to monitor proper utilization of data elements; monitoring data utilization through both changes in the metadata for a data set and the addition of data compliance bots throughout a system; requiring installation of bots as a condition for accessing data being provided to a system (which enables enforcement of compliance data use policies for external systems); and detecting certain transmission triggers (e.g., printing, saving to portable media, print screen usage) and analyzing those triggers for authorization based on the user performing the triggering function. There are many advantages of these and other operations described herein, such as: facilitating determination of whether data is used legally and/or correctly; facilitating identification of risks arising from improvident uses of data; improving data quality; and educating users on the proper use of data. FIG.7thus illustrates an example flowchart describing the operation of various systems (e.g., data management system102described with reference toFIG.1), apparatuses (e.g., apparatus200described with reference toFIG.2), methods, and computer program products according to example embodiments contemplated herein. It will be understood that each operation of the flowchart, and combinations of operations in the flowchart, may be implemented by various means, such as hardware, firmware, processor, circuitry, and/or other devices associated with execution of software including one or more computer program instructions. For example, one or more of the procedures described above may be performed by execution of computer program instructions. In this regard, the computer program instructions that, when executed, cause performance of the procedures described above may be stored by a memory (e.g., memory204) of an apparatus (e.g., apparatus200) and executed by a processor (e.g., processing circuitry202) of the apparatus. As will be appreciated, any such computer program instructions may be loaded onto a computer or other programmable apparatus (e.g., hardware) to produce a machine, such that the resulting computer or other programmable apparatus implements the functions specified in the flowchart operations. 
These computer program instructions may also be stored in a computer-readable memory that may direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture, the execution of which implements the functions specified in the flowchart operations. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions executed on the computer or other programmable apparatus provide operations for implementing the functions specified in the flowchart operations. The flowchart operations described with reference toFIG.7support combinations of means for performing the specified functions and combinations of operations for performing the specified functions. It will be understood that one or more operations of the flowchart, and combinations of operations in the flowchart, can be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and computer instructions. In some embodiments, one or more operations of the flowchart, and combinations of operations in the flowchart, may be implemented by one or more data compliance bots (e.g., one or more data compliance bots120described with reference toFIG.1). In some embodiments, one or more operations of the flowchart, and combinations of operations in the flowchart, may be implemented by a server device (e.g., data management system server device104or server device110described with reference toFIG.1), wherein the one or more data compliance bots are thin clients that pass-through indications of user actions in their respective settings. In one example, a data compliance bot may comprise the data monitoring circuitry and the data compliance circuitry and implement the flowchart operations described with reference thereto. In another example, a server device may comprise the data monitoring circuitry and the data compliance circuitry and implement the flowchart operations described with reference thereto. In yet another example, a data compliance bot may comprise the data monitoring circuitry and implement the flowchart operations described with reference thereto, and a server device may comprise the data compliance circuitry and implement the flowchart operations described with reference thereto. Conclusion While various embodiments in accordance with the principles disclosed herein have been shown and described above, modifications thereof may be made by one skilled in the art without departing from the teachings of the disclosure. The embodiments described herein are representative only and are not intended to be limiting. Many variations, combinations, and modifications are possible and are within the scope of the disclosure. Alternative embodiments that result from combining, integrating, and/or omitting features of the embodiment(s) are also within the scope of the disclosure. Accordingly, the scope of protection is not limited by the description set out above, but is defined by the claims which follow, that scope including all equivalents of the subject matter of the claims. Each and every claim is incorporated as further disclosure into the specification and the claims are embodiment(s) of the present disclosure. 
Furthermore, any advantages and features described above may relate to specific embodiments, but shall not limit the application of such issued claims to processes and structures accomplishing any or all of the above advantages or having any or all of the above features. In addition, the section headings used herein are provided for consistency with the suggestions under 37 C.F.R. 1.77 or to otherwise provide organizational cues. These headings shall not limit or characterize the disclosure set out in any claims that may issue from this disclosure. For instance, a description of a technology in the “Background” is not to be construed as an admission that certain technology is prior art to any disclosure in this disclosure. Neither is the “Summary” to be considered as a limiting characterization of the disclosure set forth in issued claims. Furthermore, any reference in this disclosure to “disclosure” or “embodiment” in the singular should not be used to argue that there is only a single point of novelty in this disclosure. Multiple embodiments of the present disclosure may be set forth according to the limitations of the multiple claims issuing from this disclosure, and such claims accordingly define the disclosure, and their equivalents, that are protected thereby. In all instances, the scope of the claims shall be considered on their own merits in light of this disclosure, but should not be constrained by the headings set forth herein. Also, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other devices or components shown or discussed as coupled to, or in communication with, each other may be indirectly coupled through some intermediate device or component, whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the scope disclosed herein. Many modifications and other embodiments of the disclosure set forth herein will come to mind to one skilled in the art to which these embodiments pertain having the benefit of teachings presented in the foregoing descriptions and the associated figures. Although the figures only show certain components of the apparatus and systems described herein, it is understood that various other components may be used in conjunction with the supply management system. Therefore, it is to be understood that the disclosure is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. For example, the various elements or components may be combined, rearranged, or integrated in another system or certain features may be omitted or not implemented. Moreover, the steps in any method described above may not necessarily occur in the order depicted in the accompanying figures, and in some cases one or more of the steps depicted may occur substantially simultaneously, or additional steps may be involved. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation. | 86,067 |
11861025 | DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENT(S) Various embodiments of the present invention will now be described in detail with reference to the accompanying drawings. In the following description, specific details such as detailed configuration and components are merely provided to assist the overall understanding of these embodiments of the present invention. Therefore, it should be apparent to those skilled in the art that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness. Embodiments of the invention are described herein with reference to illustrations of idealized embodiments (and intermediate structures) of the invention. As such, variations from the shapes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances, are to be expected. Thus, embodiments of the invention should not be construed as limited to the particular shapes of regions illustrated herein but are to include deviations in shapes that result, for example, from manufacturing. FIG.1andFIG.2are visual representations of what a typical TCP/IP stack and IP layer functions may entail. Data Flow An incoming payload or portion of an IP datagram that contains covert data310may arrive through an electrical interface and pass to the IP layer210for processing. This covert data310may enter the IP layer210and appear the same as other network traffic312. Covert insertion of information into a remote system may require the active operation of a robust implementation of a TCP/IP protocol stack110. This implementation may be one where the source is tightly controlled, either through proprietary means, or by careful and undisclosed modifications to open source availability. All data flow in and out of the TCP/IP stack110must conform correctly to all known standards of operation in order to allow such data flow to pass therethrough successfully. By providing consistent operation, the containment of covert data310may be masked by normal operations. Inbound and outbound data consists first and foremost of properly structured IP datagrams, or is low-level address resolution in nature. These datagrams may contain higher level requirements and may match and conform to layers such as TCP or UDP, for example without limitation. Any arriving covert payload310must be contained within the normal dataflow312of the TCP/IP stack110. Covert data310that does not conform to recognized protocols will normally be isolated and exposed. While there are numerous methods that may be used for transmission, the covert payload310that appears part of normal data flow312will be unnoticed and unobtrusive. It is possible to contain covert information310directly within a standard transmission312destined for an application; however, that application and data transmission will be recorded and observable to monitoring systems and facilities. Therefore, the transmissions of covert data310must be contained within standard data traffic312, and yet be extracted and processed via unexpected portions of the IP layer210protocols that exist below the TCP layer protocols. Covert Interception As illustrated inFIG.3, a packet of covert data310may be intercepted by a covert extraction314routine inserted within one of the normal IP layer210routines of the TCP/IP protocol stack110. 
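The embodiments described below vary where such an interception hook is inserted. As a purely conceptual sketch of the interception idea just described, and not the disclosed implementation, the following simulates an IP-layer receive path in which a hook placed before the normal discard step checks apparently malformed datagrams for a covert marker; the marker value, framing, and handler names are illustrative assumptions.

    # Conceptual simulation only; a real implementation would live inside the
    # operating system's IP layer routines rather than in application code.
    COVERT_MARKER = b"\x7f\x31"   # hypothetical 2-byte tag inside the IP payload

    def covert_extract(payload: bytes):
        """Return the covert payload if the marker is present, else None."""
        if payload.startswith(COVERT_MARKER):
            return payload[len(COVERT_MARKER):]
        return None

    def covert_process(covert_payload: bytes) -> None:
        # Placeholder for decoding/decryption performed as an extension of the
        # interception routine rather than as a separately observable process.
        print("covert bytes received:", covert_payload.hex())

    def deliver_to_upper_layers(payload: bytes) -> None:
        pass  # stand-in for TCP/UDP demultiplexing of normal traffic

    def ip_receive(datagram_payload: bytes, conforms_to_known_protocol: bool) -> None:
        """Simplified IP-layer handling: deliver normal traffic, or intercept before discard."""
        if conforms_to_known_protocol:
            deliver_to_upper_layers(datagram_payload)   # normal traffic (312)
            return
        covert = covert_extract(datagram_payload)       # interception (314)
        if covert is not None:
            covert_process(covert)                      # processing (316)
        # In either case the nonconforming datagram is then discarded as apparent nonsense.

    ip_receive(COVERT_MARKER + b"hello", conforms_to_known_protocol=False)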
In other exemplary embodiments, the covert extraction314may be a routine inserted above, below, or between normal IP layer210routines. In still other embodiments, the covert extraction314routine may be an extension of one or more layers of the normal IP layer210routines. Regardless, by intercepting the covert data310at this level of processing, the covert interception314mechanism can isolate, extract, and perform a covert processing316routine on the extracted covert data310. The covert data310may be further treated as nonsense for the rest of normal operation and processing through the TCP/IP protocol stack110. TCP/IP protocol stacks110are typically designed to deal with the potential of erroneous data produced by normal IP network traffic312, and these failed pieces of information are normally discarded without further notation or observation. Therefore, the covert interception process314, which may be inserted between normal operations and the discard mechanism, may be used to read what otherwise appears to be nonsense, but which actually contains the covert payload310. Covert Processing Once a covert payload310has been intercepted314within the IP layer210of a TCP/IP protocol stack110, the covert payload310may be passed to a covert processing routine316for processing. Such covert processing316may comprise, for example without limitation, decoding of the covert data310, which may be encrypted. While it is possible to spawn a process to handle this operation, the existence of an observable process may expose the existence of the covert interception314routine. Therefore, any processing316of covert data310may occur as an extension of the interception314routine, which may be an extension of, or inserted between, normal routines in the IP layer210. In this way, the existence of the covert processing316routine is further obfuscated. As this process may extend the IP layer210, it is possible that a wide range of operations within the operating system may occur. Any embodiment of the present invention may include any of the optional or preferred features of the other embodiments of the present invention. The exemplary embodiments herein disclosed are not intended to be exhaustive or to unnecessarily limit the scope of the invention. The exemplary embodiments were chosen and described in order to explain some of the principles of the present invention so that others skilled in the art may practice the invention. Having shown and described exemplary embodiments of the present invention, those skilled in the art will realize that many variations and modifications may be made to the described invention. Many of those variations and modifications will provide the same result and fall within the spirit of the claimed invention. It is the intention, therefore, to limit the invention only as indicated by the scope of the claims. | 6,360 
11861026 | It should be noted that the figures are not drawn to scale and that elements of similar structures or functions are generally represented by like reference numerals for illustrative purposes throughout the figures. It also should be noted that the figures are only intended to facilitate the description of the preferred embodiments. The figures do not illustrate every aspect of the described embodiments and do not limit the scope of the present disclosure. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Currently-available digital manufacturing systems are deficient because, among other reasons, they fail to securely transfer design/build files and receive feedback data; accordingly, a digital manufacturing system that provides secure exchange, transform, delegation of digital design and build files, adherence to defined manufacturing parameters, ease of auditability and customer insight into the history of all stages of a product, and secure manufacturing process data feedback to stakeholders in a digital supply chain can prove desirable and provide a basis for a wide range of digital manufacturing applications, such as manufacturing, using, and selling designs and consumables according to a creator's/owner's minimum criteria and quality level. Various benefits of the systems and methods disclosed herein will be readily apparent to one of ordinary skill in the art. For example, the systems and methods disclosed herein allow for the integration of various security measures into a digital manufacturing system in an automated fashion without disturbing existing digital workflows. The applications disclosed herein need not replace any applications currently used in the workflow, but may instead integrate with those existing applications to provide security and access control of digital files based on customer requirements. The systems and methods disclosed herein also may improve the technical performance of digital manufacturing devices and solve specific technical problems that have plagued those devices. For example, the systems and methods disclosed herein minimize or prevent the introduction of a variety of manufacturing defects, whether introduced through intentional misconduct or otherwise. This solves recognized problems of high failure rates and high vulnerability to manufacturing defects, unauthorized production, and cyber-attacks affecting digital manufacturing devices. The systems and methods thus improve the reliability of digital manufacturing devices, and the security and efficiency of digital manufacturing processes. By allowing for fully automated processes, along with the ability for the user to exercise precise and customizable control over various aspects of the processes disclosed herein, the systems and methods disclosed herein provide further solutions to the technical problems of inefficiency, increased production time, and lack of scalability specific to conventional digital manufacturing systems. These and other beneficial results can be achieved, according to one embodiment disclosed herein, by a digital manufacturing system100as illustrated inFIG.1. Turning toFIG.1, the digital manufacturing system100includes a network device101, which can be a computer, a mobile phone, a handheld tablet device, or any other mobile network device capable of accessing a network. The network device101can be used to produce data, such as a source data file105(e.g., 3D object data file, design file, build file, and so on) that is suitable for digital manufacturing. 
The network device101can also run a protection application110. In some embodiments, the protection application110provides encryption of the source data file105(such as by creating an encrypted file115) and documents, manufacturing and licensing policies (e.g., predefined rules). In a preferred embodiment, the encrypted file115includes one or more digital supply item (“DSI”) files, which can include the corresponding manufacturing and licensing policies and will be further discussed below with reference toFIG.5. In a preferred embodiment, the protection application110can also include the modeling application that is used to create the object data file105. The protection application110can be installed on the network device101or accessed through an interface with a cloud based hosting solution (not shown). When generating the encrypted file115, the modeling application110can also produce a lock certificate or license, such as a protection authorized policy list (APL) (not shown), which can be moved or sent to any storage database120. In one embodiment, the protection APL is unique and associated with the encrypted file115. In some embodiments, the protection APL includes a configuration file that exists within the protection application110. In some embodiments, the protection APL can include a certificate, a license, and/or an APL file. In a preferred embodiment, a selected APL is an extensible markup language (XML) file (such as defined by the W3C's XML 1.0 Specification or other open standards), representing various policy parameters and values (discussed below), and includes a digital cryptographic signature of all information in the APL to maintain data integrity. The protection APL may comprise information that describes features and parameters of an instance of the protection application110. For example, in some embodiments, parameters of the protection APL can describe signing key information (e.g., public key SN of a signing key, a company name, and a key role). A signing key from the protection application110can be used to create the digital signature of the protection APL. The digital signature can include an asymmetric public/private key pair, such as an RSA. The encrypted file115can be sent to a delivery portal150for future production, which includes, but is not limited to, a public or private web based marketplace, a secured library of designs internal to a private network, or any system enabling the storage and retrieval of files. An authorization APL116is sent to a management application130that authorizes production to an enforcement application160via the delivery portal150if all criteria defined in the authorization APL are met. The authorization APL116can be generated when the encrypted file115is created by the protection application110. In some embodiments, the authorization APL116may comprise information from the protection APL, for example, that describes features and parameters of the instance of the protection application110. For example, in some embodiments, parameters of the authorization APL can describe APL information, authorizer identification, information on the protection application110(which was included in the protection APL), transform identification, information on the manage application130, trace identification, information regarding the encrypted file115, manufacturing parameters, and licensing parameters. A signing key from the protection application110or the manage application130can be used to create the digital signature of the authorization APL. 
The digital signature can include an asymmetric public/private key pair, such as an RSA. In other words, the authorization APL116provides rights for the enforcement application160to access encrypted files within the encrypted file115as well as to set manufacturing parameters and enforce licensing rules for access of the encrypted file115. In an alternative embodiment, the authorization APL116sets manufacturing parameters and enforces licensing rules for machine features and processes that do not involve accessing files from the encrypted file115. For instance, the authorization APL116, when processed by the management application130, controls whether or not the machine accepts the encrypted file115for producing parts. In another example, when the management application130processes the authorization APL116, the management application130controls whether the machine produces parts or under what circumstances the machine can produce parts (e.g., a limit on number of parts or time allowed to produce parts). In another example, the authorization APL116, when processed by the management application130, controls whether certain features of a machine are enabled when producing parts from a build file that is not based on the encrypted file115. The mechanism by which the management application130controls these processes, via generation of a second authorization APL sent to an enforcement application160, is described further below. Although not shown, a manage APL can also be used to describe all features and parameters of an instance of the manage application130. In some embodiments, the manage APL includes a configuration file maintained in the respective manage application130it represents. The protection application110can request an updated manage APL from a selected manage application130at any time. In some examples, the parameters of the manage APL include a manage site name, signing key information, encryption key information, manage location (e.g., URL), DAM URL, trace URL, manage type, and information on the registered machine and model list. A signing key from the manage application130can be used to create the digital signature of the manage APL. Similarly, although not shown, an enforce APL can be used to describe all features and parameters of the enforcement application160. In order to authorize production to the enforcement application160, a manufacturing device170is registered and/or identified in a device database140through its unique identifier. Once the management application130matches the requirement of the authorization APL116to the device certificate146, the management application130authorizes production on the manufacturing device170by providing a second authorization APL136to the enforcement application160. The second authorization APL136can be created when a license of the encrypted file115is distributed by the management application130(either for delegation to another instance of the management application130or for authorization of production by the enforcement application160). The second authorization APL136can be linked or associated to the encrypted file115by a universally unique identifier (UUID) of the encrypted file115. The protection application110can define all parameters included in the second authorization APL136. When the management application130licenses the encrypted file115to another application, any optional parameters that are not defined by the current management application130can be set. Additionally, any values set by the current management application130can be restricted. 
In some embodiments, parameters of the authorization APL136can describe parameters from the protection application110, encrypted file115identification (e.g., file information block), manufacturing parameters (e.g., machine manufacturer, a machine model, and so on), and licensing information (e.g., an authorized user, expiration date, quantity, owner of the encrypted file115, and so on). The encrypted file115identification (e.g., file information block) can represent public identifier items such as the UUID of the encrypted file115, the design name, user customizable identifiers, design description, and so on. Accordingly, a non-trusted storage database120or delivery portal150can read this information and display it to the user. The file information block includes details on the encryption key used, the file names, the compression method, a hash digest of each file, and so on. All information in the file information block is treated as confidential and can be encrypted. A signing key from the protection application110or the management application130can be used to create the digital signature of the authorization APL136. Once the authorization APL136and its associated encrypted file115are sent to the enforcement application160, the enforcement application160verifies parameters in the authorization APL136to authenticate the device to be used and provide the ultimate authorization to manufacture. If successfully authorized, an enforcement APL146, and its associated encrypted files, is decrypted and sent from the enforcement application160to the manufacturing device170if the manufacturing device170is set to the parameters established by the protection application110. Generally, the device on which the enforcement application160resides can be referred to as a network client. In one embodiment, the enforcement application160can be embedded in the firmware of the manufacturing device170; in others, it is embedded in the controller of the manufacturing device or provided as a standalone set-top box. The enforcement APL146may comprise information that describes features and parameters of an instance of the enforcement application160. For example, in some embodiments, parameters of the enforcement APL146can describe signing key information (e.g., public key SN of a signing key, a company name, and a key role), encryption key information, machine manufacturer, machine model, machine serial number, family, machine type, device name, device ID, and users. A signing key from the enforcement application160can be used to create the digital signature of the enforcement APL146. The encrypted file115is decrypted and the manufacturing device170can produce the object180designed in the encrypted file115. Turning toFIG.2, an exemplary top-level diagram of the protection application110ofFIG.1is shown as a protection component200. The protection component200provides the main interface from post processors, product lifecycle management (PLM) systems, and/or other design products to a secure system for protecting the encrypted file115. For example, the protection component200can create and/or edit the encrypted file115to provide a container of files and communicate with a management license server300(shown inFIG.3), such as the management application130, for all licensing, key storage, and reporting processes. 
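As a minimal sketch only of how an APL-style XML document might be signed and verified with an RSA key pair, as the digital signatures described above contemplate, the following assumes the Python "cryptography" package; the element names, the simple serialize-then-sign approach, and the example parameters are illustrative assumptions rather than the actual APL schema.

    import xml.etree.ElementTree as ET
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    PSS_PADDING = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                              salt_length=padding.PSS.MAX_LENGTH)

    def build_apl(parameters: dict) -> bytes:
        """Serialize APL-style parameters as a simple XML document (illustrative layout)."""
        root = ET.Element("apl")
        for name, value in parameters.items():
            ET.SubElement(root, name).text = str(value)
        return ET.tostring(root)

    def sign_apl(apl_bytes: bytes, private_key) -> bytes:
        return private_key.sign(apl_bytes, PSS_PADDING, hashes.SHA256())

    def verify_apl(apl_bytes: bytes, signature: bytes, public_key) -> bool:
        try:
            public_key.verify(signature, apl_bytes, PSS_PADDING, hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    apl = build_apl({"uuid": "1234", "machine_model": "X200", "quantity": 25})
    signature = sign_apl(apl, signing_key)
    assert verify_apl(apl, signature, signing_key.public_key())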
Further regardingFIG.2, design files and policy data can be imported into the protection application110by any means, such as being received automatically through a network socket, a command line, scripting interface, a graphical user interface (GUI), and/or directly through manual importation. Existing digital supply item (“DSI”) files can also be loaded into the protection application110for modification. The protection application110can also create the encrypted file115by taking one or more design files and generating a symmetric key. A benefit of the systems and methods disclosed herein is that they may integrate with existing applications, allowing any application that creates digital files to incorporate those digital files into the encrypted file115. For any application that consumes digital files, the systems and methods disclosed herein may decrypt only the files required by the application from the encrypted file115and only allow the application to perform operations allowed by the authorized user. The protection component200includes a high entropy key generation module, such as a cryptographic engine110, and a random number generator (RNG)221for generating the unique symmetric encryption keys. The protection component200further includes a storage device (such as the storage210shown inFIG.2) for maintaining the signature generation and encryption keys. As shown inFIG.2, a security parameter represents those components that can be pushed to a removable backup storage device (not shown) in the event of a security threat and/or based on predefined requirements. As an example, an individual smart card can be used for each user so that each user is responsible for the credentials to unlock their respective smart card. The protection component200can run in a stand-alone mode or as a “plug-in” to CAM/post processing, PLM, and/or CAD products. Accordingly, not all features shown inFIG.2are necessary (e.g., optional GUI interfaces). Once the encrypted file115has been created, the digital manufacturing system100can register the encrypted file115with a license server, such as the management application130. The license server can provide access, distribution, and reporting policy control for digital assets. For example, a license is created by the content owner and issued to a specific target to be stored in association with that target. In some embodiments, the license is only transported among the components with the authorization APL136. Stated in another way, once the authorization APL136is used, the digital manufacturing system100no longer recognizes the authorization APL136as valid, and it cannot be used to re-confer rights. In a preferred embodiment, the license server can maintain a licensing network as a node network. In this example, asset licenses can be sent downstream in the node network. Nodes can also interact with one another only when they register with each other. This registration can reflect a contract between two nodes and also sets a policy on how these nodes interact with one another (e.g., how asset licenses flow between them). The license server supports at least three levels of trust between nodes: (1) most trusted link; (2) semi-trusted link; and (3) untrusted link. For the most-trusted link level, as licenses are issued downstream, the symmetric key can be sent with the license and stored in the corresponding node. 
For the semi-trusted link level, a license is issued downstream along with the symmetric key; however, a heartbeat is required to be received within a predetermined amount of time or the license is revoked. This can address those recipient systems that are offline or have limited access. For the untrusted link level, the license is issued downstream, but without the symmetric key. Even further, a link back to the previous license holder is included and the previous license holder must approve all transactions before providing a symmetric key directly to the requestor. The ability to support varying levels of trust is a novel and advantageous feature of the license server. Turning toFIG.3, a top-level diagram of the management application130, such as the management license server300, is shown. The management license server300communicates with the protection component200, for example, through an application program interface (API). The management license server300can register upstream protection applications110or instances of manage applications130, register downstream instances of the management application130or the enforcement application160, receive licenses, update licensing policies, process requests to issue licenses, and process requests to re-issue or renew licenses, such as shown inFIGS.6-7. Although not shown inFIG.1, multiple instances of the manage application130can be advantageous for creating local instances of the management application130, which can reside closer to the hardware without the need for overcoming network restrictions. In some embodiments, automatic templates may be set up so that the licensing flow is fully automatic. For example, a user may set up a template such that all designs uploaded to a PLM system are initialized without any control of quantity and no expiration date. When an order is received from an ERP system, the quantity and expiration date can automatically be pulled from the ERP and the DSI can be authorized for production to the supplier defined by the ERP transaction. In this way, the full transaction does not require any approvals. This automated licensing advantageously embeds digital rights management (DRM) directly into the digital manufacturing workflow without requiring an additional application for controlling DRM settings. In some embodiments, approval flow may be defined so that a manual approval is required for certain transactions. For example, the approval flow could be defined to require that all high value parts must be approved manually before a license is issued for manufacturing to certain suppliers. In this scenario, the management license server300sends an approval request to the defined set of approvers before issuing the license to the supplier(s). In some embodiments, security requirements of downstream systems may be specified as part of the policy. Different security levels may be implemented for downstream instances of the management license server300associated with a particular digital manufacturing device and for different instances of the enforce component400. Accordingly, as part of the policy a user may define which downstream systems are allowed to authorize or produce a DSI based on their implemented security level. Furthermore, the policy language may allow a user to specify which types of private files can be accessed by which types of applications. For instance, a user may specify that a CAD file can be accessed by a build program, but a build file can only be accessed by an authorized machine. 
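A rough sketch of how the three trust levels described above might affect what is issued downstream follows; the field names, heartbeat interval, and key-request URL are assumptions for illustration, not the disclosed license format.

    from datetime import datetime, timedelta

    def issue_license(dsi_uuid: str, symmetric_key: bytes, trust_level: str,
                      quantity: int, expires: datetime) -> dict:
        license_record = {
            "dsi_uuid": dsi_uuid,
            "quantity": quantity,
            "expires": expires.isoformat(),
            "symmetric_key": None,
            "heartbeat_due": None,
            "key_request_url": None,
        }
        if trust_level == "most_trusted":
            license_record["symmetric_key"] = symmetric_key        # key travels with the license
        elif trust_level == "semi_trusted":
            license_record["symmetric_key"] = symmetric_key
            license_record["heartbeat_due"] = (datetime.utcnow() + timedelta(days=7)).isoformat()
        elif trust_level == "untrusted":
            # Key withheld; the recipient must ask the previous license holder per transaction.
            license_record["key_request_url"] = "https://manage.example/keys/" + dsi_uuid
        else:
            raise ValueError("unknown trust level: " + trust_level)
        return license_record

    lic = issue_license("uuid-1234", b"\x00" * 32, "semi_trusted",
                        quantity=100, expires=datetime(2025, 12, 31))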
Advantageously, in some embodiments a user may control revision licensing through the management license server300. Often, parts produced by a manufacturer will have several revisions or engineering changes (ECs). This creates the potential for the manufacturer to select the wrong version or EC of a part. The systems and methods disclosed herein can solve this problem by allowing the IP owner (or distributor) to issue a license for the specific revision or EC required. When producing the part, the enforce component400will extract the proper version of the build file from the DSI or reject a wrong version of the DSI according to the license rules. Upon moving to manufacturing, the enforcement application160, such as an enforcement component400shown inFIG.4, receives both the encrypted file115and the license from the management license server300. The enforcement component400ensures manufacturing device authorization and adherence to upstream licensing, receives, stores, and enforces device certifications, and initiates and/or updates a supply ledger. The supply ledger may store all operations and transactions by the applications described herein, with privacy and integrity cryptographically protected. Supply ledger data may be stored in, for example, a centralized database, or in a decentralized system such as a blockchain. An authorized user, for example the owner of intellectual property contained within the ledger data, may specify which data can be accessed by other participants in the ecosystem (for example, distributors and manufacturers). A policy associated with the ledger may specify the type and amount of data collected in the ledger. The ledger may also store identifier codes. These identifier codes may be tags on physical parts with a tracking mechanism, such as a barcode or RFID. The identifier code stored in the ledger may be linked back to the digital file corresponding with the part. A design creator registers with the system100and provides credentials, their design and/or build file(s), and a description into the encryption software. Additional items can also be added such as a reduced quality model for display purposes (i.e., a digital image or degraded design file). Subsequently, the design owner documents licensing rights, such as minimum and maximum numbers of units to be produced, a period of production, and manufacturing rules (e.g., material, color, type of manufacturing device, layer resolution, use of supports, delegation and transform rights in the policy language of the encryption software). The design creator then encrypts the file(s) and policies, creating a digital asset that is then transmitted to a distribution platform. An authorized user can select the design and a pre-registered manufacturing device. The system will check the licensing and manufacturing requirements of the digital asset against the profile of the user and the settings and capabilities of the selected manufacturing device. If there is a match, the manufacturing of the object is authorized, and the file is then transmitted to the manufacturing device along with a certificate that enables only that device to decrypt the file. 
Finally, an authorized operator can order manufacture on the device, at which point the device will ensure that it is indeed the target of the asset, that an authorized operator is making the request, and that all of the correct manufacturing rules and parameters of the asset are adhered to including, but not limited to: machine manufacturer and/or model, correct consumable loaded, machine tooling parameters, machine inspections and certification up to date, and an authorized quantity that is not expired. If all checks pass, then the digital build file can be decrypted and the production can occur. The data resulting from the production process, such as, but not limited to, the number of units, failure rate, and duration of the manufacturing process, are compiled and securely sent back to the creator/owner of the design. If there is no match, a message with the reason will be sent to the user. Accordingly, the digital manufacturing system100advantageously provides encryption of digital design/build files with licensing and manufacturing rules, authorizes and authenticates manufacture on digital manufacturing devices, selectively transforms files, delegates with or without additional restrictions, and decrypts the design/build files for manufacturing on an authenticated manufacturing device. Advantageously, the enforce component400in some embodiments may have the ability to pull a DSI directly from a repository (such as a PLM/ERP/DAM/DAS system) according to the license received from the management license server300. In this way the systems and methods disclosed herein may integrate with a manufacturing execution system (MES) to receive directions from the MES for initiating jobs on a machine. The MES would not need to talk to the machine directly and would not need to send files to the machine. An enforce component400may manage files on the machine based on the license from the management license server300according to instructions from the MES. Build files are large files; accordingly, in some embodiments the enforce component depicted inFIG.4may feed small segments of the build file to the digital manufacturing device rather than providing the full file. The segmented build file approach allows the decrypted file to never be fully contained on the disk, being decrypted only within memory buffers. Advantageously, this enables the file to be segmented and buffered from a secure application directly embedded in the machine without the need for a hardware protected file storage system. It is feasible to segment the file into buffer sections that are encrypted with different keys. The decryption keys could be delivered to the machine as needed along with the buffer segments. Note that this embodiment would require an online connection between the machine and the dedicated manage component (depicted inFIG.3) for the digital manufacturing device. With reference now toFIG.5, an exemplary encrypted file115, such as a digital supply item (DSI)500, which can be created using the digital manufacturing system100, is shown. The digital supply item500is a digital container that includes the information required for the production of a digital asset within the digital manufacturing system100. In some embodiments, the DSI500is a single file for securing private data for transport to the digital manufacturing device. 
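A minimal sketch of the enforcement-side checks enumerated above, performed before any decryption, is shown below; the license is matched against the device certificate and the production request. All field names and values are illustrative assumptions, not the disclosed APL or certificate formats.

    from datetime import date

    def production_allowed(license_apl: dict, device_cert: dict, request: dict) -> tuple[bool, str]:
        if license_apl["machine_manufacturer"] != device_cert["manufacturer"]:
            return False, "wrong machine manufacturer"
        if license_apl["machine_model"] != device_cert["model"]:
            return False, "wrong machine model"
        if request["material"] not in license_apl["allowed_materials"]:
            return False, "consumable not permitted by the license"
        if request["operator"] not in license_apl["authorized_users"]:
            return False, "operator not authorized"
        if date.today() > license_apl["expiration"]:
            return False, "license expired"
        if request["quantity"] > license_apl["quantity_remaining"]:
            return False, "authorized quantity exhausted"
        return True, "authorized"

    license_apl = {
        "machine_manufacturer": "AcmeAM", "machine_model": "X200",
        "allowed_materials": {"Ti64"}, "authorized_users": {"operator-7"},
        "expiration": date(2025, 6, 30), "quantity_remaining": 10,
    }
    device_cert = {"manufacturer": "AcmeAM", "model": "X200", "serial": "SN-0042"}
    request = {"material": "Ti64", "operator": "operator-7", "quantity": 2}
    print(production_allowed(license_apl, device_cert, request))   # (True, "authorized")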
Within the system disclosed inFIGS.1-4, the DSI500may be created by the protection component200illustrated inFIG.2, while the enforcement component illustrated inFIG.4may decrypt the contents of the DSI and provide appropriate build files to the digital manufacturing device. Access controls and rights to the DSI may be governed by the management component illustrated inFIG.3. The separation of the DSI500from the authorization APL116is an advantage in that the DSI500can be stored and handled without any special security considerations and allows for less expensive cybersecurity protections around the large data sets, while the authorization APL116can be highly protected by the management application130. This separation of the DSI500and authorization APL116also allows for much easier integration with common digital manufacturing workflow applications, as compared to systems that require the protected files to be stored in special secure hardware, which is more expensive and difficult to integrate into the workflow. As shown inFIG.5, the digital supply item500includes an encrypted section for all confidential information and an unencrypted portion for any non-confidential information. In some embodiments, the digital supply item500can be compressed and/or uncompressed. An advantageous feature of digital supply item500is the ability to include both an encrypted section and an unencrypted section that allows any number of files in each section. Additionally, any application in the digital manufacturing workflow can read any public file from the container without restrictions. This allows for an easy method of transferring files between each application. Moreover, even though the public files are not encrypted, the integrity of each file is protected with cryptographic digital signatures. The unencrypted portion of the digital supply item500can be used for identification of a design and include one or more data fields that are organized, for example, as an extensible markup language (XML) file. The one or more data fields of the unencrypted portion can include a version field, a unique ID field, a design name field, a user name field, a date created field, a public key field, a company name field, a key role field, a custom ID field, a description field, a multi-platen field, an image field, and so on. Any identifying or revision parameters, such as IDs or version numbering, can be stored in the digital supply item500by the PLM/ERP/DAM/DAS system and used by the workflow application to manage access of the proper data revision. Additionally, the digital supply item500can be licensed such that only the correct revision can be accessed by the machine manufacturing the device, advantageously preventing the manufacture of the incorrect part revision. The encrypted portion of the digital supply item500can include one or more encrypted files. In some embodiments, each file can be encrypted separately with unique keys and initialization vectors (IVs). Each encrypted file of the digital supply item500can be identified by a file information block (FIB) that includes an unencrypted parameter that identifies the file number of the encrypted file. The FIB can be used to decrypt and recreate the encrypted file in the original format. Parameters for the FIB can include, for example, a key (used to encrypt the file), an IV, a hash digest, a file number, a compression method, an uncompressed size, a file name, and so on. 
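The examples that follow illustrate concrete DSI contents. As a rough sketch only of how such a container might be assembled, the following builds a ZIP archive holding public files in the clear plus a private file encrypted with a unique symmetric key, keeping a FIB-like record for later decryption; it assumes the Python "cryptography" package, and the file names, FIB fields, and JSON identifier are illustrative assumptions rather than the DSI specification.

    import hashlib, json, os, zipfile
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def add_private_file(archive: zipfile.ZipFile, member_name: str, data: bytes) -> dict:
        key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        stored = nonce + AESGCM(key).encrypt(nonce, data, None)
        archive.writestr(member_name, stored)
        # FIB-style record: enough to locate, verify, and decrypt the member later.
        return {
            "file_number": member_name,
            "key": key.hex(),
            "iv": nonce.hex(),
            "hash_digest": hashlib.sha256(stored).hexdigest(),   # digest of the encrypted member
            "uncompressed_size": len(data),
        }

    def build_dsi(path: str, public_files: dict, private_files: dict) -> list:
        fibs = []
        with zipfile.ZipFile(path, "w") as archive:
            archive.writestr("identifier.json", json.dumps({"design_name": "bracket", "uuid": "1234"}))
            for name, data in public_files.items():
                archive.writestr(name, data)
            for name, data in private_files.items():
                fibs.append(add_private_file(archive, name, data))
        return fibs   # FIBs are handed to the license server, not stored in the clear

    fibs = build_dsi("part.dsi",
                     {"image.jpg": b"...jpeg bytes..."},
                     {"E1.ecad": b"...build file bytes..."})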
For example, a selected payload file shown inFIG.5can include an identifier file, an image file, a schematic file, and one or more encrypted CAD drawings. An advantage of the architecture of the digital supply item500is that individual files may be either encrypted or not encrypted with the ability to define what applications and/or machines can access those files based on the individual file or the file type. As one example, a DSI500could be named part.dsi. Part.dsi is an uncompressed ZIP archive with a .dsi extension that may store both encrypted and unencrypted files. Part.dsi would show the following files in a zip archive: Identifier.apl (an XML based Authorized Policy List or APL file); Part1.jpg (an image file); Schematic.pdf (a customer file); E1:ecad; and E2:ecad (the latter two being encrypted data files). In this exemplary embodiment, Identifier.apl contains all non-private identification information necessary to describe the DSI to Digital Asset Managers (DAM) including PLM and ERP systems as well as identify the DSI to the management license server300depicted inFIG.3. The management license server300may provide the DSI FIB for E2:ecad. The enforcement component depicted inFIG.4may then decrypt E2:ecad into a part.job file. As another example, a DSI500could be named E47892.dsi. E47892.dsi is an uncompressed ZIP archive with a .dsi extension that may store both encrypted and unencrypted files. E47892.dsi would show the following files in a zip archive: Identifier.apl (an XML based Authorized Policy List or APL file); image.jpg (an image file); E1:ecad; E2:ecad; and E3:ecad (the latter three being encrypted data files). In this exemplary embodiment, Identifier.apl contains all non-private identification information necessary to describe the DSI to Digital Asset Managers (DAM) including PLM and ERP systems as well as identify the DSI to the management license server300depicted inFIG.3. The management license server300may provide the DSI FIB for E1:ecad, E2:ecad, and E3:ecad. The enforcement component depicted inFIG.4may then decrypt E1:ecad, E2:ecad, and E3:ecad into E47892.sli, support.sli, and E47892.job files, respectively. In some embodiments, a digital manufacturing device such as an additive manufacturing device may build multiple physical parts on a single build plate. In some embodiments, parts being built on a single build plate may have separate characteristics such as separate licensing policies. To support this use case, a special DSI may be defined that may support machine original equipment manufacturers (“OEMs”) that allow this case without flattening the part information into a single build file. It will be assumed that each part may have a unique DSI and APL, both of which may be created using the standard DSI and APL specifications. A new DSI may be created for the entire build plate. The individual parts files may be stored as DSI files in the public file section. The build file that contains orientation and placement information on the individual parts on the build plate may be stored as a private file. The Multi-Platen field discussed above may be set to an appropriate value to indicate this situation, e.g., the field could be a binary field. An image may be supported in the same way as a standard DSI. Advantageously, the digital manufacturing device of the embodiments described above can thus build multiple physical parts with different DRM characteristics on a single build plate. 
As an example of these embodiments, a DSI500could be named plate1.dsi. Plate1.dsi could contain the following files: identifier.apl; image.jpg; part1.dsi; part2.dsi; turbine.dsi (the three preceding files containing information for individual parts on a build plate); and E1:ecad (which contains a file plate.job which describes the part orientation on the build plate). Authorization APLs for part1.dsi, part2.dsi, turbine.dsi and plate1.dsi are received, and verified to be authentic and to have a policy that allows production, by the enforce component shown inFIG.4, and in turn the enforce component authorizes the build. As the preceding description makes apparent to one of ordinary skill in the art, these multiple platen build embodiments improve the efficiency of an additive manufacturing process and are technical improvements to additive manufacturing machines, by providing heretofore unknown capabilities for the machines to build multiple parts at once, and thereby increase throughput, while at the same time maintaining part-specific protection of the digital design files. The entire encrypted portion and the corresponding FIB can be encrypted, for example, with an RSA encryption key of an authorized receiver. The encrypted portion can also be hashed using a hash function such that the hash digest will be stored in the corresponding FIB. Prior to decrypting any of the encrypted portion, the hash digest can be verified with the contents of the FIB. Although described with RSA and hash encryption, any suitable cryptography can be used herein, such as secure hash (including secure hash algorithms (SHA), SHA-1, SHA-2, SHA-3, and so on), RSA key pairs, Advanced Encryption Standard (AES) keys, IV generation, file padding, and so on. Multiple benefits may be realized from using a DSI container such as the one described herein and depicted inFIG.5as DSI500. A DSI500may protect any number of private (encrypted) and public files normally used in the digital workflow. Any PLM/ERP/DAM/DAS within the workflow may read public identifying information and public files from the DSI container. Any identifying or revision parameters, such as IDs, S/Ns, or version numbering, may be stored in the DSI container by the PLM/ERP/DAM/DAS. Advantageously, each private file may be encrypted with a unique symmetric key. The management license server300depicted inFIG.3may issue licenses (APLs) that will enable the enforcement component depicted inFIG.4to decrypt and provide access control to any private file within the DSI container. In one embodiment, a DSI container could provide both CAD files that were used to create a part and build files that contain machine specific directions on how to manufacture the part. The management license server300depicted inFIG.3may then allow build programs that use CAD files as input to create build files for additional machines and store the encrypted build files back in the DSI container. The management license server300depicted inFIG.3may then, for instance, assign the enforcement component depicted inFIG.4on a first machine created by a first type of manufacturer to have access only to a first type of build files compatible with that first machine, while also assigning the enforcement component inFIG.4on a second machine created by a second type of manufacturer to have access only to a second type of build files compatible with that second machine. In this way, a manufacturing site with multiple machines could utilize a single DSI file to build parts on any machine type. 
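As a companion sketch to the container example above, and following the passage on digest verification, the digest recorded in the FIB is checked against the stored (encrypted) member before any decryption is attempted; a digest of the plaintext could be checked after decryption instead. Names mirror the earlier sketch and remain illustrative assumptions.

    import hashlib, zipfile
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def open_private_file(dsi_path: str, fib: dict) -> bytes:
        with zipfile.ZipFile(dsi_path) as archive:
            stored = archive.read(fib["file_number"])
        if hashlib.sha256(stored).hexdigest() != fib["hash_digest"]:
            raise ValueError("FIB digest mismatch; refusing to decrypt " + fib["file_number"])
        nonce, ciphertext = stored[:12], stored[12:]
        plaintext = AESGCM(bytes.fromhex(fib["key"])).decrypt(nonce, ciphertext, None)
        if len(plaintext) != fib["uncompressed_size"]:
            raise ValueError("unexpected plaintext size for " + fib["file_number"])
        return plaintext

    # build_dsi() from the earlier sketch returned one FIB per private file:
    # plaintext = open_private_file("part.dsi", fibs[0])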
The management license server300depicted inFIG.3may direct the specific machine to have access only to the build files required of that machine. Advantageously, DSI500may have the capability to store multiple protected files and the management license server300can direct specific files to only applications and hardware devices (for example, additive manufacturing machines) that are allowed to have access to those files, with the latter providing a particularly advantageous feature. Advantageously, access to files can be controlled by the file type as well as by the specific file. For example, CAD files can only be accessible by CAD applications and build files can be accessible by machines. Furthermore, as previously mentioned, tracking the provenance of each part throughout the supply chain is very difficult if not impossible when the subcontractors use different data collection methods that are often proprietary and are physically spread across the globe. A distributed ledger advantageously replaces centralized and proprietary databases with a decentralized open data repository. For example, within a blockchain, each node participating has the opportunity to add to the ongoing and constantly updated shared ledger. The shared ledger has strong cryptographic integrity protection which preserves the entire recorded history of transactions within a given blockchain. Additionally, each node can vote on the authenticity of any transaction and reject those transactions which are fraudulent. The decentralized nature of the blockchain means that no single company will have ownership or undue influence on the data recorded in the ledger. Accordingly, in some embodiments, each of the applications of the digital manufacturing system100is executed on an independent node and can be integrated within a distributed ledger, such as a supply chain blockchain. The blockchain is used to store all records of a created part and associated transactions. Each block in the chain outputs the hash of all the transaction records recorded since the previous block was issued along with the hash of the previous block. In this way, this output is a function of all transactions of the digital manufacturing system100. In some embodiments, the transactions are broadcast globally through a peer-to-peer network, thereby allowing all participants in the digital manufacturing system100to observe all transactions presently and in the past. For instance, a subcontractor could report on each part added to an assembly and record the transaction when the subassembly is sold to another contractor in the supply chain. The receiving contractor would then report on the receipt of the subassembly and provide records of operations and transactions associated with the subassembly. Therefore, the final assembly should have a full and open record of the creation of all parts within that assembly. A cryptographic work function can be required for the creation of each block to prevent the forking of the blockchain by an attacker, who is motivated, for example, to create double transactions. In some embodiments, the cryptographic work function includes a mathematical problem to be solved that receives the previous block as input and uses a cryptographically strong hash function to generate a hash of the mathematical problem. Advantageously, the cryptographic work function then prevents an attacker from creating forked blocks faster than distributed miners can create main blocks of the blockchain. 
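A toy sketch of the block structure just described follows: each block commits to the hash of the previous block and to the transaction records accumulated since that block, and a simple proof-of-work requirement stands in for the cryptographic work function. The difficulty target and record layout are illustrative assumptions only.

    import hashlib, json

    def block_digest(previous_hash: str, transactions: list, nonce: int) -> str:
        payload = json.dumps({"prev": previous_hash, "txs": transactions, "nonce": nonce},
                             sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def mine_block(previous_hash: str, transactions: list, difficulty: int = 4) -> dict:
        nonce = 0
        while True:
            digest = block_digest(previous_hash, transactions, nonce)
            if digest.startswith("0" * difficulty):        # the work requirement
                return {"prev": previous_hash, "txs": transactions, "nonce": nonce, "hash": digest}
            nonce += 1

    genesis = mine_block("0" * 64, [{"event": "dsi_created", "uuid": "1234"}])
    block_2 = mine_block(genesis["hash"], [{"event": "license_issued", "uuid": "1234", "qty": 25}])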
The digital manufacturing system100can be implemented within a distributed ledger in any suitable manner, depending on the specific computing platform. For example, for a blockchain-based distributed computing platform featuring smart contract scripting functionality (e.g., Ethereum), the authorization APL created by the digital manufacturing system100is stored in the blockchain and the digital contract is used for licensing. For simpler implementations as described below, a digitally signed hash record of the authorization APL is stored on the blockchain. In either case, a record of the license transaction is preserved by the blockchain. If the entire license is embedded in the blockchain, then the blockchain itself serves as the transportation mechanism for transporting each license between nodes. Any confidential information in the authorization APL is encrypted with the licensee cryptographic key (e.g., the encryption key of the selected protection application110, management application130, and so on), so that the integrity of the transaction is preserved while retaining the confidential portion of the authorization APL. In some embodiments, the digital manufacturing system100enhances the supply chain blockchain by adding records of a digital twin to each physical part created with digital manufacturing technology. As used herein, the digital twin describes the digital representation of a physical part. When digital supply chain tracking is married with physical supply chain tracking, the digital twin refers to the digital files that represent the physical part. This allows the final assembled product to have a complete record of the manufacturing operations as well as the digital design and the encrypted file115included in the completed product. In one embodiment, a receipt of each transaction is stored to the distributed ledger, such as a blockchain. Each new authorization APL created by a selected application (e.g., the protection application110, management application130, and so on) will store the hash of the authorization APL and a unique identifier for the authorization APL to the blockchain. When a downstream application receives the authorization APL, the application can verify the authenticity of the authorization APL by reading the hash of the authorization APL stored on the blockchain. Additionally and/or alternatively, the sending and receiving can be stored as data elements of the authorization APL. For example, the parameter that describes the manufacturing machine that is allowed to produce the part can be stored directly on the blockchain as a parameter. In another embodiment, the full license information can be stored on the distributed ledger, an advantageous capability not present in known systems. For example, the entire plaintext of the authorization APL is stored on the blockchain in the standard format. The blockchain provides permanent storage, authenticity, and transportation of the license information. However, the blockchain need not perform any operations based on information contained in the authorization APL. All applications that receive an authorization APL continuously monitor the blockchain for a new authorization APL. When an Authorization (e.g., the authorization APL116that is targeted for a specific application in the digital manufacturing system100) licensed for the monitoring application is identified, the application will download the selected authorization APL and perform the operations specified by the authorization APL.
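The hash-record variant described above, in which a unique identifier and the hash of the authorization APL are published to the chain and a downstream application checks a received APL against that record, is easy to sketch. The ledger dictionary, identifier, and APL payload below are hypothetical placeholders; only standard-library hashing is assumed.

```python
# Illustrative sketch only: publish (identifier, SHA-256 digest) of an authorization
# APL to a ledger, and let a downstream application verify a received APL against it.
import hashlib

ledger: dict[str, str] = {}   # stands in for hash records stored on the blockchain


def publish_apl_record(apl_id: str, apl_bytes: bytes) -> None:
    ledger[apl_id] = hashlib.sha256(apl_bytes).hexdigest()


def verify_received_apl(apl_id: str, apl_bytes: bytes) -> bool:
    recorded = ledger.get(apl_id)
    return recorded is not None and recorded == hashlib.sha256(apl_bytes).hexdigest()


apl = b'{"policy": "allow production", "quantity": 10}'   # hypothetical APL payload
publish_apl_record("APL-0001", apl)
assert verify_received_apl("APL-0001", apl)
assert not verify_received_apl("APL-0001", apl + b" tampered")
```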
In another embodiment, the quantity authorized can be stored on the distributed ledger. When the authorization APL is issued, the total quantity authorized is initialized and stored as a parameter in the blockchain. The total quantity available will be a public parameter along with a unique identifier for the part. The current licensee of the part is designated as a parameter in the blockchain. The current licensee can transfer ownership of a number of parts to a blockchain account as long as the amount of quantity transferred is less than or equal to the amount specified in the blockchain. The amount of quantity stored in the blockchain under the current owner's account is decremented to reflect the amount transferred. The new owner then receives an authorization APL stored with the quantity transferred. Therefore, there can be multiple owners of a part specified on the blockchain. When the licensee receiving the quantity of parts is a manufacturing machine and the machine produces physical parts, the amount of quantity stored on the blockchain is decremented by the amount of physical parts produced. In another embodiment, any of the license or manufacturing parameters stored in the authorization APL can be stored as parameters in the blockchain. When a quantity of parts is transferred to a new licensee, any of the license or manufacturing parameters can be modified by the current owner. The new owner will then have the authorization APL embedded in the blockchain with the number of parts transferred by the previous owner and license and manufacturing parameters as specified by the previous owner. Confidentiality protections and key distribution within a blockchain can also be handled in any manner described herein. For example, in one embodiment, any parameter of the authorization APL that is confidential can be encrypted. The encryption is performed with the licensee's public key so that only the licensee can decrypt the parameter using their private key. The confidential information is unavailable on the blockchain, but the licensee can view the parameter after performing decryption. In another embodiment, the blockchain distributes cryptographic keys. The cryptographic keys used to encrypt files within the encrypted file115can be encrypted with the licensee's public key so that only the licensee can decrypt the cryptographic keys using their private key. In this way, the current owner can identify which specific files or file types the licensee will have access to by providing to the licensee only the cryptographic keys for those specific files or file types. Transactions can also be reported within a blockchain in any manner described herein. As an example, each transaction performed within the digital manufacturing system100is reported to the blockchain. A report is generated each time a transaction is processed by an application within the digital manufacturing system100. The full details of the report are stored on the blockchain, by the node running the selected application. In another embodiment, a receipt of each transaction within the digital manufacturing system100is reported to the blockchain. A report is generated each time a transaction is processed by a selected application within the digital manufacturing system100. Only the hash of the report is stored on the blockchain, by the node running the selected application.
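A minimal sketch of the quantity-tracking embodiment described above follows; the part identifier and account names are hypothetical, and a plain dictionary stands in for the on-chain parameters. Transfers are refused when they exceed the quantity recorded for the current owner, and a machine's balance is decremented as physical parts are produced.

```python
# Illustrative sketch only: the authorized quantity for a part is tracked per owner
# account and decremented on transfer or on physical production.
balances = {"PART-77": {"owner-A": 100}}   # hypothetical part id and accounts


def transfer(part_id: str, current_owner: str, new_owner: str, qty: int) -> None:
    accounts = balances[part_id]
    if qty <= 0 or accounts.get(current_owner, 0) < qty:
        raise ValueError("transfer exceeds the quantity recorded for the current owner")
    accounts[current_owner] -= qty
    accounts[new_owner] = accounts.get(new_owner, 0) + qty


def produce(part_id: str, machine_account: str, qty_built: int) -> None:
    """A manufacturing machine consumes its authorized quantity as parts are built."""
    accounts = balances[part_id]
    if accounts.get(machine_account, 0) < qty_built:
        raise ValueError("machine is not authorized to build that many parts")
    accounts[machine_account] -= qty_built


transfer("PART-77", "owner-A", "machine-1", 10)
produce("PART-77", "machine-1", 4)
assert balances["PART-77"] == {"owner-A": 90, "machine-1": 6}
```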
The report details are transmitted to other parties through a channel outside of the blockchain. The receiver of the report can hash the report and confirm the hash stored on the blockchain to authenticate the report. In another embodiment, each transaction performed within the digital manufacturing system100is encrypted and reported to the blockchain. A report is generated each time a transaction is processed by a selected application within the digital manufacturing system100. The report is encrypted using symmetric encryption keys. The symmetric encryption keys will be encrypted with the receiver's public asymmetric key. The receiver's public keys can be associated with accounts on the blockchain. Additionally and/or alternatively, the receiver's public keys can be public keys held outside the blockchain. The encrypted report is stored on the blockchain, by the node running the selected application. Turning toFIG.8, an exemplary supply chain that can be used with the digital manufacturing system100is shown. Subcontractors820manufacture subcomponents and/or provide raw materials to be used in a final product that is shipped to customers830. Subcontractors820that ship directly to an integrator that creates the final product (not shown) are considered Tier 1. Subcontractors820that ship to the Tier 1 subcontractors820are considered Tier 2. Digital supply chains810are protected by the digital manufacturing system100. Within each subcontractor820, there are a multitude of manufacturing operations where subcomponents are assembled or created. Each operation on a part is tracked as defined by the product specification of the digital manufacturing system100described herein. Certain operations that are considered highly confidential to a selected subcontractor820need to be protected from outside visibility. However, for this example, it is assumed that each operation is tracked in the ledger as a transaction. In some embodiments, anyone can register and participate in the blockchain and receive the full transaction record. In a preferred embodiment, only approved vendors/suppliers can participate in the transaction ledgers, thereby allowing a certificate authority to certify each vendor and potentially each manufacturing machine within the ecosystem of the supply chain. The supply chain ecosystem preferably includes all subcontractors820of the final integrator who ship parts to the customer830. In the supply chain shown inFIG.8, the following transactions are defined for exemplary purposes only: Transfer: The ownership of a component is transferred from one subcontractor820to another. This occurs when a part is transported to the custody of another entity. The ownership transference could occur between machines within a manufacturing line. Transformation: The part is physically transformed in some manner by a manufacturing device. This could be a mechanical operation where the part is physically modified or an electrical operation where the part is programmed with electronic data. Integration: In this operation, multiple parts are combined together to form a new device. When any of these transactions are performed, the device performing these transactions broadcasts the transaction details to all participants. Each transaction is cryptographically signed by the device performing the transaction. Since each device must be certified, the receiver verifies the integrity of the public key of the device sending the transaction information. Thereby, only certified devices can broadcast transactions.
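The encrypted-report embodiment described above is essentially hybrid encryption: a fresh symmetric key protects the report, and that key is wrapped with the receiver's public asymmetric key. The sketch below is illustrative only and assumes RSA-OAEP plus AES-GCM from the third-party cryptography package as stand-ins for whatever algorithms a deployment actually selects; the report body is hypothetical.

```python
# Illustrative sketch only: a transaction report is encrypted with a fresh symmetric
# key, and that key is wrapped with the receiver's public RSA key before the
# encrypted report is stored on the ledger.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_public = receiver_private.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender side: encrypt the report, wrap the report key for the receiver.
report = b"Transform 324 completed on machine M7"      # hypothetical report body
report_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
encrypted_report = AESGCM(report_key).encrypt(nonce, report, None)
wrapped_key = receiver_public.encrypt(report_key, oaep)

# Receiver side: unwrap the report key, then decrypt the report.
unwrapped_key = receiver_private.decrypt(wrapped_key, oaep)
assert AESGCM(unwrapped_key).decrypt(nonce, encrypted_report, None) == report
```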
Since transactions may be considered confidential, the device can protect the information in any manner. For example, as each block creates an output hash to send to the new block in the chain, this hash is created from a Merkle tree hash of all transactions, such as shown inFIG.9. The transaction details are not required to create the block, only the hash of the transaction. To protect the details of the transaction, the device broadcasts the hash of the transaction to the network. Additionally and/or alternatively, the device encrypts the transaction and broadcasts the encrypted blob of data. Each node in the network then hashes the encrypted blob in order to create the Merkle tree shown inFIG.9. Using the manage application130, the keys to decrypt the transaction information are provided to trusted users of the data. The table below illustrates an exemplary four transactions per block. The first two transactions are in plain text, while the last two are encrypted. All parties with access to the ledger can verify the Merkle tree and, therefore, the blockchain, but only parties with the cryptographic key can decrypt the contents of the hidden transactions.

Transaction            Transaction Hash    Transaction branch Hash    Merkle Root Hash    Block Hash
Transform 324 . . .    F34D87C . . .       7DF8791 . . .              783F7AB . . .       F7893641 . . .
Transfer 3489 . . .    1C889EA . . .
asdfhipfdhfas . . .    7F79C7D . . .       FD34FC . . .
dpoiafsdndl . . .      D79C78B . . .

In an alternative embodiment, to provide privacy, an entity stores all transactions on a part within their process and all previous operations from upstream entities on a ledger. Then, when that part is physically sent to a downstream party in the supply chain, the entity will encrypt that ledger information with the downstream party's public key (which is used for identification in the blockchain). The entity would publish the encrypted transactions (ledger) on the blockchain and the hash of the encrypted data would be used to build the hash tree. The downstream party would then be able to decrypt the ledger and create a new ledger which includes their operations on the part; when they ship the part downstream, they would encrypt the ledger to the next downstream party. The final assembly and ship entity would have access to the entire history of the component, but parallel parties (competitors) would only be able to access the encrypted data. However, a public record of the history of the transactions would be available to everyone and allow audit trails. This method advantageously encrypts parameters within the blockchain and enables key distribution to trusted nodes through blockchain transactions. For example, with reference again toFIG.8, when subcontractor T4A creates a device, there will be a digital ledger based on the digital supply chain of DB. When T4A ships the final assembled part to T3A, the output ledger will contain all of the ledger information in DB and T4A. This ledger information is encrypted with T3A's public key, and on the blockchain, a transaction is added from T4A to T3A, but the ledger details will be stored in encrypted form on the ledger. As T3A transforms their product, they will add ledger details about the operations. Then when they ship the final assembled part to T2C, all subcomponents from DB, T4A and T4B will be decrypted by T2C, added to the ledger and then encrypted with T2C's public key. A transaction will be added to the blockchain for this transaction and again the details will be stored in encrypted form.
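The Merkle construction that the table above illustrates can be sketched directly: each transaction (or, for a hidden transaction, its encrypted blob) is hashed to a leaf, pairs of hashes are combined up to the Merkle root, and the block hash commits to the previous block and that root. This is an illustrative sketch using only standard-library hashing; the leaf contents are placeholders echoing the table, not real transaction encodings.

```python
# Illustrative sketch only: compute the Merkle root over four transaction hashes;
# encrypted transactions contribute the hash of their encrypted blob, so the tree
# verifies without revealing their contents.
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaf_hashes: list[bytes]) -> bytes:
    level = leaf_hashes[:]
    while len(level) > 1:
        if len(level) % 2 == 1:            # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]


leaves = [h(b"Transform 324 ..."),         # plain-text transaction
          h(b"Transfer 3489 ..."),         # plain-text transaction
          h(b"<encrypted blob 1>"),        # hash of an encrypted transaction
          h(b"<encrypted blob 2>")]        # hash of an encrypted transaction
root = merkle_root(leaves)
block_hash = h(b"<previous block hash>" + root)
```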
This process will continue all the way to the final assembly. The final integrator830should have a full ledger for the complete provenance of all subcomponents that only the final integrator830has access to decrypt. Further embodiments of the inventions disclosed herein allow for the ability to enable and control multiple digital workflow processes required to produce a physical part. In advanced manufacturing workflows there are often multiple processing steps involved in the manufacturing of a part. These workflow processing steps may be implemented by software or hardware and may require multiple digital manufacturing devices. As further described in the paragraphs which follow, embodiments of the invention address these situations via systems and methods that allow an engineer to specify all digital workflow processes required for manufacturing from the state of the current digital workflow files stored in a digital supply item (such as digital supply item500). These systems provide assurance that each workflow process is completed according to the licensing policy. Additionally, both the confidentiality and integrity of the data flowing between workflow processes is protected by the digital supply item's secure container. In digital manufacturing there are often multiple digital workflow processes required to prepare the files that are consumed by a manufacturing device. For example, in certain types of additive manufacturing digital workflows the geometry of a three-dimensional part will be created using a CAD software program. That 3D geometric representation will then be converted into a STL file with only information regarding the surface geometry retained. Then the STL is converted into the separate layers that will be processed by the manufacturing device and device specific code required to produce those layers is created. At this point, a build file has been created that supports a particular model of a manufacturing device. In some cases, further conversion is required if a particular manufacturing device has special calibration settings that have to be included in the build file. Additionally, if multiple parts are produced at the same time on a single build plate, then another conversion is required to produce a build file including all parts. Other digital manufacturing processes will require a different process, but in most cases, multiple distinct processing steps will be required to produce a build file. Additionally, the exact processing required may not be known at the time of the creation of the geometry files. In addition to processing steps required prior to production at the manufacturing device, there may be post-processing steps required using additional physical machines. For instance, in certain types of workflows used in the manufacture of metal objects by additive manufacturing, the parts must be cut from the build plate. For some metal additive manufacturing processes, there is a heat stage that is required to create certain material properties. Additionally, physical machining may be required to produce the appropriate surface finish required for the end part. Depending on the specific use case as well as the capabilities of the part designer, a designer may have to perform different levels of build file preparation. For instance, the part designer may not have access to the build file preparation software used for certain machines. 
Therefore, the design owner may require the manufacturer to perform several digital workflow processing steps prior to the part being produced by a machine. However, the design owner wants assurance that the confidentiality and integrity of the part is maintained through the pre-production digital processing steps. To enable protection of confidentiality and integrity across all digital workflow processes, embodiments of the inventions disclosed herein may extend the previously described policy language in the Authorization APL (such as Authorization APL116) to include multiple digital workflow processes required to product a physical part. A new workflow process section is added to the Authorization APL in which each workflow process can be defined. The definition may include allowed inputs, the required user, the allowed outputs, the settings allowed, the features that can be used, the device in which the process is operating and details of process steps allowed. The inputs to a workflow process may be a set of unprotected files or may be stored in a digital supply item. If the input is stored in a digital supply item, then the specific encrypted files that must be extracted will be defined and the Authorization APL will include the DSI keys for those files wrapped with the key of the workflow process. There will be a parameter in the Authorization APL that defines where workflow process must look for the input files. The outputs of a workflow process may include workflow files required by a downstream process, log files used for analysis of the process or ledger files that record the steps performed by the process. These output files may be a set of unprotected files or may be confidential files that should be stored in a digital supply item. If the output files are to be stored in a digital supply item, then there will be an Authorization APL created for that DSI. An Authorization APL parameter is created that defines where the output files, or DSI container, should be stored. The allowed user(s) of the workflow process is defined as either a pre-defined user or set of users, a class of user, a certified user or no restrictions on the user. The workflow process may be defined as a software application running on a client PC, a server, or special hardware, a hardware device with embedded software running on the device or a hardware device with process controls executing entirely in hardware. Any controllable setting or feature of the workflow process will be defined in the Authorization APL. These settings may include static settings that will remain the same for the entire process, or even dynamic settings that may change with processing steps. The exact sub-processing steps within a software or hardware application can be defined, including any settings required for each sub-processing step. The implementation of the workflow process, whether software or hardware based, must be executed within a trusted environment. There are many possible embodiments for securing the workflow processes. A few possible embodiments will be described herein. In one embodiment, the workflow process will be implemented on a hardware device with embedded software applications and an embedded software operating system. The hardware device may be a manufacturing device that performs a physical operation, a server that hosts software applications or a stand-alone computer. A secure enclave will be created within the embedded system of the hardware device. 
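One possible concrete shape for the workflow process section just described is sketched below in Python. Every field name, location string, and setting here is an assumption invented for the example rather than a defined format; the helper simply checks that a requested run stays within the allowed users, device, and settings.

```python
# Illustrative sketch only: one possible shape for a workflow-process section of an
# Authorization APL, plus a check that a requested run stays inside the policy.
workflow_process = {
    "process_id": "slice-and-generate-build-file",
    "allowed_inputs": ["geometry.stl"],            # files to extract from the DSI
    "input_dsi_location": "das://site-a/turbine.dsi",
    "allowed_outputs": ["turbine.build"],
    "output_dsi_location": "das://site-a/turbine-build.dsi",
    "allowed_users": ["certified-operator"],
    "allowed_device": "slicer-server-01",
    "allowed_settings": {"layer_height_mm": [0.02, 0.03]},
    "wrapped_dsi_keys": {"geometry.stl": "<key wrapped to this process>"},
}


def run_is_authorized(policy: dict, user: str, device: str, settings: dict) -> bool:
    """Return True only if user, device, and every requested setting are allowed."""
    return (user in policy["allowed_users"]
            and device == policy["allowed_device"]
            and all(k in policy["allowed_settings"]
                    and v in policy["allowed_settings"][k]
                    for k, v in settings.items()))


assert run_is_authorized(workflow_process, "certified-operator",
                         "slicer-server-01", {"layer_height_mm": 0.03})
```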
A secure enclave may be implemented within a compute core using, as an example, Intel Software Guard Extensions (SGX) or ARM Trust Zone. Outside of the compute core, a secure enclave may be implemented using a smart card, secure element or Hardware Security Module (HSM). Additionally, the secure enclave may be implemented by security hardening a PC by restricting interfaces and operation access through the operating system. In that enclave, a workflow management application will execute to enforce the defined workflow process based on the restrictions of the Authorization APL. This application will take in defined inputs, which may require decryption and extraction of files from the digital supply item and pass those inputs to the workflow process. The workflow process will then be directed to take the processing steps and enforce the settings defined by the Authorization APL. Once the workflow process is complete, any output files generated will be handled according to the requirements of the Authorization APL. In another embodiment, there may be multiple workflow processes implemented on a single hardware device with embedded software applications and an embedded software operating system. As with the previous embodiment, the workflow processes will be controlled from a secure enclave. The input and output operations will work the same as the previous embodiment, but instead of controlling a single software or hardware process, there will be multiple processes to control. The output of one process may be applied as input to the next process in the workflow as defined by the Authorization APL. In one embodiment, multiple workflow processes execute within a security boundary with an enforcement application (such as enforcement application160) instance implemented for each workflow processes. The security boundary may be a secure enclave within a single electronic device, or a collection of discrete processes running on different systems, but implemented within a physically secure boundary such that there is no risk of loss of data or attacks on the integrity of the data within the security boundary. To control the data flow and process steps, a workflow manager application is used to control each workflow process and the dataflow between components. The process steps defined by the Authorization APL will be utilized by the workflow manager to enforce all workflow processing as desired by the user. In this embodiment, each workflow process has a unique enforcement application instance used to enforce the security requirements for that workflow process. However, in another embodiment a single enforcement application instance is used to enforce the security of multiple workflow processes. Ultimately any workflow process can be assigned an instance of enforcement application, but there must be at least one instance of enforcement application within the security boundary. When the final workflow process within the security boundary completes, the protection application (such as protection application110) will be used to create a secure container for the storage and transport of any output files from the workflow process. These output files may be used by other workflow processes or may contain data generated by the workflow processes that can be used for analysis and confirmation of the process steps. In other embodiments, multiple workflow processes may exist each within separate security boundaries. 
Within each security process there must be an instance of enforcement application160used to verify the Authorization APL, extract and decrypt private files, enforce the parameters of the Authorization APL and control the processing steps of the workflow process. Additionally, the output files generated by the workflow process must be protected in a secure container using the protection application. The output digital supply item from the security boundary can either be stored in a DAS, with no security requirements, or be sent to the next workflow processing step. The Authorization APL generated by the protection application will be imported to a management application (such as management application130), then management application will issue a new Authorization APL for the next processing step. Management application will control the workflow by issuing an Authorization APL for the next workflow process only when the previous workflow process has completed successfully. Additionally, the location of the DSI container for the next workflow process will be included in the Authorization APL. Advantageously, these embodiments provide a digital rights management solution that can provide multiple applications/operations to be performed in sequence, and furthermore, do so with the ability to encrypt information between the sequential applications/operations. In addition to workflow process steps involved in the manufacturing of a part, the embodiments disclosed herein apply to quality control process steps as well. To fully enable the benefits of digital manufacturing, the quality control process must be secured so that a customer can be assured the end part conforms to the exact product requirements without leaking confidential information. Even in today's most advanced digital manufacturing systems, human operators are tasked with determining part quality and adherence to the process standards specified by the customer. This dependence on human operators can allow for variation in the reproducibility of quality control standards as well as introduce opportunities for purposeful bias. Additionally, in order to perform quality analysis, the operator must have information about the technical specifications the part must meet. These specifications will likely contain intellectual property of the parts including geometric, material and process information. Providing this intellectual property directly to operations increases the risk of intellectual property loss or theft. In the quality control embodiment, an inspection profile may be added to the DSI container so that the confidentiality and integrity of the inspection profile is protected. The inspection profile will be used in conjunction with sensors to determine whether a physical part has defects. Because the DSI container supports any number of files and any file type, the inspection profile may contain a data set for a pre-defined algorithm, a data-set with an algorithm, or an application that will execute an algorithm. Both in situ monitoring and post-process measurement are supported by the inspection profile. The workflow application enables the extraction of the inspection profile within a trusted and secure execution space. It is advantageous to protect the implementation of the inspection process. Both in situ and offline processes will be supported. The trusted process monitoring space may be implemented within a trusted hardware device, within a secure enclave, or within a physically secure computing system. 
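Returning to the sequencing behavior described above, in which the management application issues the Authorization APL for the next workflow process only after the previous process has completed successfully, a minimal illustrative sketch follows; the step names and DSI locations are hypothetical.

```python
# Illustrative sketch only: the management application issues the Authorization APL
# for step N+1 only after it has seen the completion record for step N.
completed: set[str] = set()                 # step ids reported complete (assumed names)
pipeline = ["prepare-build-file", "print-part", "heat-treat", "inspect"]


def report_completion(step_id: str) -> None:
    completed.add(step_id)


def issue_next_apl(step_id: str) -> dict:
    index = pipeline.index(step_id)
    if index > 0 and pipeline[index - 1] not in completed:
        raise PermissionError("previous workflow process has not completed")
    return {"step": step_id, "dsi_location": f"das://site-a/{step_id}.dsi"}


report_completion("prepare-build-file")
apl = issue_next_apl("print-part")          # allowed: its predecessor is complete
```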
All sensor data used by the inspection process may be inputs to the inspection process either through file transfer, or real-time data capture. The inspection algorithm may be implemented as a pre-installed executable program that can be selected by the workflow application or as an executable program that is transported within the DSI container and installed by the workflow application. Transporting the inspection process algorithms or applications within the DSI container will allow for unique inspection processes to be implemented for each part. The process steps implemented as part of the inspection process will be defined as part of the workflow process within the Authorization APL as described previously. For example, the inspection process may require acquiring X sensor data from Y machine via in situ monitoring, then will require providing the data to an inspection application stored in the DSI container, then sending the output of the inspection application to the digital certification system. In order to provide an auditable record of the digital manufacturing process, a digital certificate of conformity (COC) can be created. This certificate may contain a chain of trust from all processes, including machines and applications, used to produce the part in the digital manufacturing workflow and can provide information on the completion status of each. The workflow application is tasked to create the COC once all workflow operations are complete. In addition to the inspection processes, any pre-manufacturing processes, such as adding a serial number to the build file, as well as the manufacturing process will be addressed by the COC. In this way, the customer will have a trusted and auditable record of the successful completion of each workflow and inspection operation required to produce a part. The described embodiments are susceptible to various modifications and alternative forms, and specific examples thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the described embodiments are not to be limited to the particular forms or methods disclosed, but to the contrary, the present disclosure is to cover all modifications, equivalents, and alternatives. | 71,836 |
11861027 | DESCRIPTION OF EXAMPLE EMBODIMENTS 1. Overview Disclosed are, inter alia, methods, apparatus, computer-storage media, mechanisms, and means associated with enhanced securing of data at rest using an immutable “data safe” to protect information stored in an external storage system. The data safe encrypts information subsequently stored in the storage system and decrypts encrypted information retrieved from the storage system, without exposing outside of the data safe cryptographic “pilot keys” maintained in non-volatile storage within the data safe. Each of these pilot keys is typically used for decrypting a small amount of encrypted information, such that any computational discovery of a pilot key will only allow a small amount of information to be decrypted. Further, by implementing the data safe in a manner that is immutable to processing-related modifications, the data safe cannot be “hacked” to expose any of these pilot keys nor perform unauthorized decryption of information that requires one or more of the pilot keys maintained internal to the data safe. In one embodiment, these pilot keys are directly used in encrypting data and decrypting encrypted data. In one embodiment, these pilot keys are used in encrypting data cryptographic keys and decrypting the cryptographically-wrapped data cryptographic keys, with the data cryptographic keys used in encrypting data and decrypting encrypted data. In one embodiment, the cryptographically-wrapped data cryptographic key and encrypted data are stored in the storage system. 2. Description Disclosed are, inter alia, methods, apparatus, computer-storage media, mechanisms, and means associated with enhanced securing of data at rest, such as stored in a database. As used herein, a “database” refers to an organized collection of data, stored and accessed electronically, which includes, but is not limited to, buckets, tables, relational databases, non-relational databases, object databases, sequential databases, and filesystems. As used herein, a “database management system (DBMS)” refers to a entity that provides an interface between a client and the database itself, which includes, but is not limited to, relational DBMS, email systems, and special and general purpose DBMS's, and filesystem handlers. As used herein, a “storage system” or “data storage” refers to a directly coupled (e.g., disk, flash memory) or networked storage (e.g., cloud storage, network disks or fileservers), that could be standalone or part of another system (e.g., computer, mobile device, smartphone, disk, solid state device). As used herein “data storage locator information” refers to an identification retrieval or storage information (e.g., real or virtual address, database identification, table, record, and/or hash of location information) where the data is to be read or written. As used herein, “data plane processing” refers to the processing of database requests, while “control plane processing” refers to configuration and other management processing. As used herein, the terms “cryptographically-wrapped” and “wrapped” are used interchangeably, with both meaning cryptographically-wrapped. As described herein, embodiments include various elements and limitations, with no one element or limitation contemplated as being a critical element or limitation. Each of the claims individually recites an aspect of the embodiment in its entirety. 
Moreover, some embodiments described may include, but are not limited to, inter alia, systems, networks, integrated circuit chips, embedded processors, ASICs, methods, and computer-readable media containing instructions. One or multiple systems, devices, components, etc., may comprise one or more embodiments, which may include some elements or limitations of a claim being performed by the same or different systems, devices, components, etc. A processing element may be a general processor, task-specific processor, a core of one or more processors, or other co-located, resource-sharing implementation for performing the corresponding processing. The embodiments described hereinafter embody various aspects and configurations, with the figures illustrating exemplary and non-limiting configurations. Computer-readable media and means for performing methods and processing block operations (e.g., a processor and memory or other apparatus configured to perform such operations) are disclosed and are in keeping with the extensible scope of the embodiments. The term “apparatus” is used consistently herein with its common definition of an appliance or device. The steps, connections, and processing of signals and information illustrated in the figures, including, but not limited to, any block and flow diagrams and message sequence charts, may typically be performed in the same or in a different serial or parallel ordering and/or by different components and/or processes, threads, etc., and/or over different connections and be combined with other functions in other embodiments, unless this disables the embodiment or a sequence is explicitly or implicitly required (e.g., for a sequence of read the value, process said read value—the value must be obtained prior to processing it, although some of the associated processing may be performed prior to, concurrently with, and/or after the read operation). Also, nothing described or referenced in this document is admitted as prior art to this application unless explicitly so stated. The term “one embodiment” is used herein to reference a particular embodiment, wherein each reference to “one embodiment” may refer to a different embodiment, and the use of the term repeatedly herein in describing associated features, elements and/or limitations does not establish a cumulative set of associated features, elements and/or limitations that each and every embodiment must include, although an embodiment typically may include all these features, elements and/or limitations. In addition, the terms “first,” “second,” etc., as well as “particular” and “specific” are typically used herein to denote different units (e.g., a first widget or operation, a second widget or operation, a particular widget or operation, a specific widget or operation). The use of these terms herein does not necessarily connote an ordering such as one unit, operation or event occurring or coming before another or another characterization, but rather provides a mechanism to distinguish between elements units. Moreover, the phrases “based on x” and “in response to x” are used to indicate a minimum set of items “x” from which something is derived or caused, wherein “x” is extensible and does not necessarily describe a complete list of items on which the operation is performed, etc. 
Additionally, the phrase "coupled to" is used to indicate some level of direct or indirect connection between two elements or devices, with the coupling device or devices modifying or not modifying the coupled signal or communicated information. As used herein, the term processing in "parallel" is used in the general sense that at least a portion of two or more operations are performed overlapping in time. Moreover, the term "or" is used herein to identify a selection of one or more, including all, of the conjunctive items. Additionally, the transitional term "comprising," which is synonymous with "including," "containing," or "characterized by," is inclusive or open-ended and does not exclude additional, unrecited elements or method steps. Finally, the term "particular machine," when recited in a method claim for performing steps, refers to a particular machine within the 35 USC § 101 machine statutory class. FIG.1Aillustrates a network100operating according to one embodiment. National intelligence-grade protection of the confidentiality and integrity of data in transit is provided by Q-net technology, including by Q-nodes disclosed in Cox, Jr. et al., U.S. Pat. No. 9,614,669 B1 issued Apr. 4, 2017, which is incorporated by reference in its entirety. Q-nodes communicate between themselves using authorized and authenticated encryption communications. One embodiment achieves national intelligence-grade protection of data at rest in a database using immutable data safe(s). As used herein, a "data safe" refers to an entity that performs encryption and decryption of information in protecting data stored in a storage system. Cryptographic "pilot keys," maintained in non-volatile storage within the data safe, are used to decrypt encrypted information received from a storage system. Typically, these pilot keys are symmetric cryptographic keys and, therefore, are also used in encrypting information to generate the encrypted information. The pilot keys are not exposed outside of the data safe by data plane processing of database requests, as the encryption and decryption operations are performed within the data safe. In one embodiment, the encryption and decryption performed by a data safe operate according to a version of the Advanced Encryption Standard (AES), or other encryption/decryption methodology. In one embodiment, the pilot keys are asymmetric cryptographic keys used in the decryption of information, with corresponding asymmetric encryption keys used to encrypt the information. For ease of reader understanding, typically described herein is the use of symmetric cryptographic pilot keys for both encryption and decryption, with the understanding that asymmetric decryption pilot keys and their corresponding asymmetric encryption keys are used in place of symmetric pilot keys in one embodiment. In one embodiment, these pilot keys are directly used in encrypting data and decrypting encrypted data. In one embodiment, these pilot keys are used in encrypting data cryptographic keys and decrypting the cryptographically-wrapped data cryptographic keys, with the data cryptographic keys used in encrypting data and decrypting encrypted data. In one embodiment, the cryptographically-wrapped data cryptographic key and encrypted data are stored in the storage system.
One embodiment uses an individual pilot key or data cryptographic key for at most encrypting w different units of data, with w being a positive integer less than or equal to some number such as, but not limited to a number ranging from one to two hundred and fifty-five. In one embodiment, each unit of data is a database record, file, or some small data unit. In one embodiment, the allocation of pilot keys and/or data cryptographic keys is done regardless of client or user information. Rather, encrypting only small amounts of data using a same cryptographic key limits the exposure for a compromised key, and greatly increases the computing barrier that would need to be overcome for decrypting an entire stolen disk or acquired data. As shown,FIG.1Aillustrates a public, private, and/or hybrid network100operating according to one embodiment. Shown are multiple data clients111-119(e.g., computers, mobile devices, smartphones, servers) that will access data at rest in a data storage system (125,130,145,150) protected by a data safe that is part of a data vault120,135. As shown and in one embodiment, network(s)110provide communication for data clients111-119to access protected data stored in one or more of data storage systems125,145,150. As used herein, a “data vault” is an apparatus that includes one or more data safes and provides communications and/or other functionality for the data safe to interface client(s), storage system(s), and/or other entities. Embodiments of a data safe are used to protect data at rest in an unlimited number of storage systems, some of which have different architectures and/or interfaces. Additionally, a data safe receive data requests from an unlimited number of clients, some of which may be directly or remotely connected using a variety of different interfaces. Hence, the entity of a data vault is used to describe a data safe and corresponding interface(s). In one embodiment, a data vault is a Q-node or other node that provides secure communications and/or provides non-secure communications, other interfaces and/or functionality. In one embodiment, a data vault provides secure communications between a client and the data safe and/or communications with a storage system. In one embodiment, a data vault includes the storage system, such as, but not limited to, a disk, solid state device, RAID system, network attached storage (NAS), etc., that typically includes a database management system (DBMS) (e.g., a traditional DBMS, filesystem handler). In one embodiment, one or more of networked devices111-160in network100are Q-nodes that communicate via secure communications via immutable hardware, including with Q-node Centralized Authority Node(s) that authorizes communications between pairs of networked devices111-160. In one embodiment, data vault120includes a data safe that protects data at rest in data storage system125and/or data storage system150. In one embodiment, the data safe of data vault120encrypts and decrypts data associated with data storage system125and/or150based on pilot keys that are stored in the data safe of data vault120. In one embodiment, the data safe of data vault120encrypts and decrypts information including data decrypting keys and possibly other data associated with data storage system125and/or150based on the pilot keys that are stored in the data safe of data vault120. In one embodiment, these data decrypting keys are cryptographically-wrapped and stored along with encrypted data in data storage system125and/or150. 
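The per-key limit described above, in which any one pilot key or data cryptographic key protects at most w units of data, can be illustrated with a small allocator; the class name and the choice of w below are assumptions made only for this sketch.

```python
# Illustrative sketch only: a key allocator that hands out a fresh data cryptographic
# key once a key has protected w units of data, limiting the exposure of any one key.
import os


class KeyAllocator:
    def __init__(self, w: int):
        self.w = w                   # maximum units of data per key
        self.used = 0
        self.current = os.urandom(32)

    def key_for_next_unit(self) -> bytes:
        if self.used >= self.w:      # rotate once w units have been encrypted
            self.current = os.urandom(32)
            self.used = 0
        self.used += 1
        return self.current


alloc = KeyAllocator(w=4)
keys = [alloc.key_for_next_unit() for _ in range(10)]
assert len(set(keys)) == 3           # 10 units, at most 4 units per key
```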
In one embodiment, the DBMS of data storage system125and/or150retrieves, modifies and stores database records including encrypted data and/or information in the database of data storage system125and/or150. In one embodiment, data vault135includes a data safe that protects data at rest in data storage system130and/or145(communicatively coupled via network140). In contrast to data vault120, data vault135is positioned logically or physically between the DBMS in data storage system130and the physical storage in data storage system130and/or145that actually stores the encrypted data and possibly wrapped data decryption keys for non-temporary durations. In this manner and in one embodiment, the DBMS of data storage system130initiates retrieving, modification and storing of clear-text, non-encrypted database records, which are protected by data vault135with data safe. In one embodiment, the data safe of data vault135and the DBMS in data storage system130communicate encryption and decryption requests and responses. The associated encryption and decryption operations, as discussed herein including in relation to the data safe of data vault120, are performed by the data safe of data vault135. The DBMS of data storage system130retrieves, modifies and stores database records, that include encrypted data and/or information, in the database of data storage system150(e.g., cloud storage, NAS). In one embodiment, each of data vaults120and135(each including a data safe) are Q-nodes that employ secure communication (e.g., using authenticated encryption) with data clients111-119. In one embodiment, Q-node data vaults120and135accept only trusted queries encrypted with unique keys and by employing its own hardware communications security barrier and by employing its data safe with its own encryption system for protecting data at rest. In one embodiment, hardware security barriers use immutable hardware in accomplishing cybersecurity activities including generating and distributing cryptographically-wrapped secure numbers, encryption, decryption, source authentication, and packet integrity verification. FIG.1Billustrates a process performed in one embodiment. Processing begins with process block170. In process block172, a secure link, between a client and a data vault or DBMS is authorized and provisioned by a Q-node centralized authority node. In process block174, the client generates a read or write request. In process block176, the request is securely communicated over the secure link through a private, public or hybrid network to the data vault or DBMS (e.g., depending on the embodiment). As determined in process block181, if the request is authorized, then processing proceeds to process block185; otherwise, processing proceeds to process block182. In one embodiment, the Q-nodes of a data client and a data vault use authenticated encryption communication in data request and response packets, with the communication having been authorized by a centralized authority node. In one embodiment, a data safe performs additional authorization processing such as, but not limited to, security filtering responsive to authorization information received from a centralized authority node. In one embodiment, this authorization information indicates for a particular data client that one or more particular data requests are authorized or a scope of authorization for data requests is established; otherwise, the request is dropped in process block182. 
In one embodiment, determining that a received request is authorized is further based on a type of the request (i.e., a read request, write request, and/or other type of request) and/or data storage locator information associated with the request. In one embodiment, the DBMS performs file/data-access permission checking associated with the database. Continuing to process block182, the request is dropped as the data safe (or data vault) or the DBMS determined that it was not authorized in process block181. Processing of the flow diagram ofFIG.1Bis complete as indicated by process block183. Continuing and as determined in process block185, if the request is a read request, then processing proceeds to process block186; otherwise processing proceeds to process block190to process the write request. Continuing with process block190as an authorized write request was received, pilot key(s) and data cryptographic key(s) (if to be used) are acquired, such as, but not limited to, based on a random number or other entropy generating mechanism. These pilot key(s) and any used data cryptographic key(s) will be required for decryption of the information (e.g., performed in process blocks186and187for a subsequently received, corresponding read data operation). Continuing with process block192, the information to be written to storage is encrypted using pilot key(s) and possibly data cryptographic key(s). In one embodiment, the resulting encrypted information includes one or more wrapped data cryptographic key(s) generated using the pilot key(s). The pilot key(s) on which a subsequent decryption operation will be based are stored in the non-volatile storage (e.g., non-volatile memory, non-volatile registers) within the data safe at a position retrievable based on data storage locator information associated with the subsequent read request (which is typically the same data storage locator information associated with the write request). In process block194, the encrypted information is stored in the storage system, typically in a secure manner such as, but not limited to, using secure communications using a Q-node when transported over a network that might be compromised or is not secret. Processing continues to process block199. Continuing with process block186as an authorized read request was received, corresponding information is retrieved from data storage, directly or via a DBMS, and is provided to the data safe. The data safe also acquires one or more pilot key(s) from non-volatile storage within the data safe. In process block187, the information is decrypted based on the retrieved pilot key(s). In one embodiment, decrypting the information (e.g., data) based on the pilot key includes using the pilot key directly in decrypting the retrieved data. In one embodiment, decrypting the information (e.g., encrypted data, wrapped data cryptographic key(s)) based on the pilot key includes using the pilot key to decrypt the data cryptographic (decrypting) key(s) and then using the data cryptographic key(s) in decrypting the retrieved encrypted data. In process block188, the retrieved and decrypted data is sent to the requesting data client, typically in a secure manner such as, but not limited to, using secure communications using a Q-node, especially when transporting the information over a network that might be compromised or is not secret. Processing continues to process block199. Continuing with process block199, processing of the flow diagram ofFIG.1Bis complete.
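The write and read paths just described can be sketched end to end. This is an illustrative model only, not the data safe's actual implementation: plain dictionaries stand in for the data safe's non-volatile key store and for the external storage system, the locator string is hypothetical, and AES-GCM from the third-party cryptography package stands in for whatever cipher is used. The point it illustrates is that only the encrypted data and the wrapped data cryptographic key reach storage, while the pilot key stays inside the data safe, indexed by the data storage locator.

```python
# Illustrative sketch only: the data safe keeps pilot keys in its own key store
# indexed by the data storage locator; storage sees only encrypted data plus the
# wrapped data cryptographic key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

pilot_keys: dict[str, bytes] = {}     # non-volatile storage inside the data safe
storage: dict[str, dict] = {}         # the external storage system


def write(locator: str, plaintext: bytes) -> None:
    pilot_key = AESGCM.generate_key(bit_length=256)
    data_key = AESGCM.generate_key(bit_length=256)
    n1, n2 = os.urandom(12), os.urandom(12)
    storage[locator] = {
        "data": AESGCM(data_key).encrypt(n1, plaintext, None), "data_nonce": n1,
        "wrapped_key": AESGCM(pilot_key).encrypt(n2, data_key, None), "key_nonce": n2,
    }
    pilot_keys[locator] = pilot_key   # never leaves the data safe on the data plane


def read(locator: str) -> bytes:
    record = storage[locator]
    pilot_key = pilot_keys[locator]
    data_key = AESGCM(pilot_key).decrypt(record["key_nonce"], record["wrapped_key"], None)
    return AESGCM(data_key).decrypt(record["data_nonce"], record["data"], None)


write("db1/bucket7/record42", b"sensitive record")
assert read("db1/bucket7/record42") == b"sensitive record"
```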
Thus in one embodiment consistent with the processing of the flow diagram ofFIG.1B, no pilot key (e.g., that will potentially be used for a future decrypting operation by a data safe) is exposed outside of the data safe during the data path processing of a read request nor write request. However, in one embodiment, control plane processing allows the pilot keys to be securely communicated (e.g., using a Q-node) as part of a backup process. In one embodiment, control plane processing allows the pilot keys to be securely communicated (e.g., using a Q-node) for scalability or load balancing, so that multiple data safes, data vaults including a data safe, and/or redundant storage systems can be used for reading and decrypting the same information. Pilot key(s) in the non-volatile storage and any wrapped data cryptographic key(s) need to be maintained as long as the corresponding encrypted information is stored in the storage system. In one embodiment, when encrypted information is permanently removed from the storage system, the corresponding pilot key(s) are removed from the non-volatile storage in the data safe. FIG.2Aillustrates a database200used in a data storage system according to one embodiment. As shown, each record of a bucket (201,202) of database200is decryptable based on a same pilot key maintained in a data safe; while records of different buckets (201,202) of database200are decryptable based on different pilot keys maintained in a data safe. Also, the number of records per bucket (201,202) of database200differs or is the same in one embodiment. FIG.2Billustrates a database210used in a data storage system according to one embodiment. As shown, a header, metadata or other location (211A,212A) associated with a corresponding data bucket (211A-N,212A-M) is used to store wrapped data cryptographic keys. In one embodiment, data in each record (211B-N,212B-M) is encrypted and decrypted by a data safe using a different data cryptographic key (i.e., one of the wrapped data cryptographic keys (stored in211A,212A) before encryption or after decryption by the data safe). In one embodiment, all wrapped data cryptographic keys stored in a header, metadata or other location (211A,212A) associated with a corresponding data bucket (211A-N,212A-M) are decryptable using the same pilot key; while in one embodiment, each wrapped data cryptographic key stored in a header, metadata or other location (211A,212A) is decryptable based on a different pilot key. FIG.2Cillustrates a database bucket220used in a data storage system according to one embodiment. A header, metadata or other location221associated with bucket220stores N wrapped data cryptographic keys, each of which are decryptable by a data safe based on a corresponding pilot key maintained within the data safe. In one embodiment, all N wrapped data cryptographic keys are decryptable based on a single pilot key maintained in the data safe. In one embodiment, some or all of the N wrapped data cryptographic keys are decryptable based on a different pilot key maintained in the data safe. FIG.2Calso illustrates that in one embodiment, a same or different number of records within bucket220are decryptable based on each of the decrypted wrapped data cryptographic keys stored in bucket220. In one embodiment, W+1 records (222) are decryptable based on Key-1, and Y+1 records (223) are decryptable based on Key-2, with each of W and Y being a non-negative integer. 
In one embodiment, at least one of W and Y has a value of one, such that at least one of the data cryptographic keys is used in decrypting multiple records. In one embodiment, at least one of W and Y has a value of zero, such that at least one of the data cryptographic keys is used in decrypting only one record. FIG.2Dillustrates a database bucket230used in a data storage system according to one embodiment in which each wrapped data cryptographic key is stored in a record of records231-232of bucket230. In one embodiment, a wrapped data cryptographic key is stored in a record (231-232) without any other encrypted data. In one embodiment, a record (231-232) stores encrypted data and the wrapped version of the data cryptographic key that will be used by the data safe in decrypting the encrypted data. In one embodiment, a single record of records231contains the wrapped data cryptographic key that will be used in the decryption of encrypted data stored in each of records231. In one embodiment, a single record of records232contains the wrapped data cryptographic key that will be used in the decryption of encrypted data stored in each of records232. In one embodiment, this ordering allows a single read operation to read the corresponding record(s) (231,232) containing encrypted data and the corresponding wrapped data cryptographic key. FIG.2Eillustrates a process performed by a data safe according to one embodiment. Processing begins with process block250. In process block252, the data safe receives an authorized write request. In process block254, K+1 cryptographic keys are acquired, with K being a positive integer. These K+1 cryptographic keys include one pilot key and K data cryptographic keys, with Z records decryptable based on each of the K data cryptographic keys, with each of K and Z being a positive integer. In process block256, each of the K encryption keys is used in order to encrypt a corresponding Z records of data, with the encrypted data records stored in the storage system at the corresponding write positions. In process block258, each of the K data cryptographic keys is encrypted so it can be decrypted based on the pilot key, with the K wrapped data cryptographic keys being stored in the data storage system (e.g., in bucket header(s), metadata, or elsewhere). The pilot key is stored in non-volatile storage in the data vault at a position retrievable based on a locator of the stored data in the storage system (which is also the location to be used as part of a read request). Processing of the flow diagram ofFIG.2Eis complete as indicated by process block259. FIG.2Fillustrates a process performed according to one embodiment. Processing begins with process block270. In process block272, the data safe receives an authorized read record request. In process block274, the corresponding encrypted record and wrapped data cryptographic key(s) are acquired. In process block276, the pilot key is acquired from non-volatile storage within the data safe based on a locator of the stored data in the storage system (e.g., a locator of the bucket or records thereof). In process block278, the data safe decrypts the data cryptographic key(s) based on the pilot key, then uses these data cryptographic key(s) to decrypt the retrieved data from record(s) of the bucket.
In process block280, the data safe communicates the decrypted data to a secure communications interface (e.g., Q-node interface) of the data vault containing the data safe, with the clear (e.g., decrypted) data corresponding to the read request being securely communicated to the data client. Processing of the flow diagram ofFIG.2Fis complete as indicated by process block289. Each ofFIGS.3A-Cillustrate a network architecture according to one embodiment, such as, but not limited to, a same or different embodiment illustrated byFIG.1Aand discussed herein. FIG.3Aillustrates a network300operating according to one embodiment. As shown, data client302interfaces storage system309(e.g., DBMS306, local and/or remote data storage308) through data vault304that includes a data safe. Data vault304protects storage system309(e.g., databases) from attacks launched over the network303. The data safe encrypts all data/records for secure storage so that this data can be decrypted based on pilot keys (i.e., stored in non-volatile storage in the data safe); hence, providing further protection in case of a data breach (e.g., remotely acquiring data or physically acquiring storage). In one embodiment, each of data client302and data vault304is a Q-node, thus, data requests and responses (e.g., read and write requests and responses) transmitted between data client302and data vault304are encrypted with volatile keys to ensure record confidentiality and are provided with authentication tags to ensure record authenticity. In one embodiment shown inFIG.3A, the interface between the DBMS306and storage308is affected to the extent that wrapped data cryptographic key(s) and encrypted data/records are stored (e.g., more storage space might be required). FIG.3Billustrates a network310operating according to one embodiment. As shown, data client312interfaces network-based storage318(e.g., NAS, cloud storage) through data vault314that includes a data safe. In one embodiment, data vault314is built into a network-based storage device (318). Data vault314protects data storage318from attacks launched over network313. The data safe encrypts all data/records for secure storage so that this data can be decrypted based on pilot keys (i.e., stored in non-volatile storage in the data safe); hence, providing further protection in case of a data breach (e.g., remotely acquiring data or physically acquiring storage). In one embodiment, each of data client312and data vault314is a Q-node, thus, data requests and responses (e.g., read and write requests and responses) transmitted between data client312and data vault314are encrypted with volatile keys to ensure record confidentiality and are provided with authentication tags to ensure record authenticity. In one embodiment shown inFIG.3B, the interface between the data client312and storage318is affected to the extent that wrapped data cryptographic key(s) and encrypted data/records are stored (e.g., more storage space might be required). FIG.3Cillustrates a network320operating according to one embodiment. As shown, data client322interfaces storage system329(e.g., DBMS326, local and/or remote data storage328) over network323. In one embodiment, each of data client322and DBMS326is a Q-node, thus, data requests and responses (e.g., read and write requests and responses) transmitted between data client322and DBMS326are encrypted with volatile keys to ensure record confidentiality and are provided with authentication tags to ensure record authenticity. 
This protects storage system329(e.g., databases) from attacks launched over the network323. Further, data vault324with data safe encrypts all data/records for secure storage so that this data can be decrypted based on pilot keys (i.e., stored in non-volatile storage in the data safe); hence, providing further protection in case of a data breach (e.g., remotely acquiring data or physically acquiring storage). In one embodiment shown inFIG.3C, DBMS326communicates clear write data requests to data vault324and receives back encrypted information (e.g., data, wrapped data cryptographic key(s)) in a write data response that DBMS326then stores in storage328. In one embodiment shown inFIG.3C, DBMS326communicates encrypted information (e.g., data, data cryptographic key(s)) received in a read response from storage328to data vault324and receives back a decrypted version of the data read from storage328. In this manner, DBMS326allocates space and manages storage of the encrypted data and any wrapped data cryptographic key(s). Further, DBMS326operates on clear, decrypted data, which may provide enhanced database searching capabilities. FIG.3Dillustrates data vault330including data safe340according to one embodiment. Data vault330provides communications interfaces331and339for data safe340. In one embodiment, data client interface(s)331provide secure communications to a data client (e.g., provide the Q-node functionality). In one embodiment, storage system interface(s)339provide communications to directly connected or networked storage systems. Data safe340is implemented in a manner to be immutable to data plane processing modifications. In one embodiment, data safe340is implemented in a field-programmable gate array. In one embodiment, data safe340is implemented in one or more application-specific integrated circuits (ASICs). In one embodiment, data safe340is an ASIC core stored in a non-transitory computer-readable medium for incorporation into storage, communication, and/or other devices. In one embodiment, data safe340is implemented in hardware that has no read-write instruction memory. In one embodiment, data safe340is implemented using a microprocessor (or other processing unit) with a fixed set of instructions (e.g., in storage that is not modifiable based on data plane processing by data safe340). An implementation on a processor running on top of an operating system is not immutable, as operating systems are prone to data plane processing modifications and other vulnerabilities. In one embodiment, an immutable data safe340is implemented in state-machine form with absolutely no stored program functionality. As shown inFIG.3D, a database request343(e.g., read or write request) is received by data safe340and provided to distributor342for distributing a read request345to storage system interface(s)339to acquire the desired data, and distributing a write request349to encryption module350. In one embodiment, data safe340performs additional authorization processing such as, but not limited to, additional communications-based security filtering by distributor342, in which the database request must be confirmed as authorized by a centralized authority node (e.g., via communications341and using interface(s)331) based on an identification of the corresponding data client; otherwise the request is dropped.
In one embodiment, this determination of whether a received request is authorized is further based on a type of said received request (i.e., whether it is a read request, a write request, or another type of request) and data storage locator information associated with the request. Distributor342communicates a valid/authorized write request349(e.g., including the data to be stored and where to store it) to encryption module350. Cryptographic key generator352creates the cryptographic keys353used for encryption and decryption, such as, but not limited to, according to a version of the Advanced Encryption Standard (AES). For purposes of the description ofFIG.3D, use of symmetric cryptographic keys (i.e., a same key is used for encryption and decryption of information) is discussed. However, asymmetric cryptographic keys are used in one embodiment of data safe340. In one embodiment, cryptographic key generator352uses a true random number generator (or other entropy generation mechanism) in creating the pilot and data cryptographic keys (353), which are provided to queue354for storage and for future immediate availability of keys355to encryption module350. In one embodiment, the generated pilot and data cryptographic keys353are of a same length. In one embodiment, encryption module350modifies some or all of cryptographic keys355before using them for encryption. In one embodiment, encryption module350encrypts the data to be stored using one or more data cryptographic keys355, and also encrypts the one or more data cryptographic keys355using one or more pilot keys355to generate wrapped data cryptographic key(s). In one embodiment, encryption module350encrypts the data to be stored using one or more pilot keys353. Encryption module350also provides a pilot key storage request361that causes the used pilot key(s) (355) to be stored in non-volatile pilot key storage360at location(s) corresponding to storage locator information of the write request (349). Encryption module350generates a corresponding write request357that includes the encrypted information (e.g., encrypted data, wrapped data cryptographic key(s)). In response, storage system interface339communicates a corresponding storage system write request to the storage system. In one embodiment, prior to acquiring a pilot key355from queue354, encryption module350performs a read operation on non-volatile pilot key storage360to see if a corresponding one or more pilot keys363have already been allocated for encrypting/decrypting the corresponding database record(s) (e.g., based on storage locator information of the write request (349)). If one or more valid pilot keys363are returned to encryption module350, these pilot key(s)363are used instead of acquiring one or more new pilot keys (355). However, in one embodiment, if one or more pilot keys363are returned to encryption module350, data safe340causes all data from the storage system which is decryptable based on these one or more pilot keys363to be read, and then rewrites this data, together with the data of the write request, after encryption using one or more new pilot keys355(e.g., instead of reusing the previous pilot key(s)363). In one embodiment, and in response to storage system interface(s)339receiving a write confirmation for the write request provided to the storage system, a database write acknowledgement response379is communicated to client interface(s)331, which sends a write acknowledgement to the data client.
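The lookup that encryption module350performs before drawing a fresh pilot key from queue354can be summarized as a small decision routine. The sketch below is illustrative only; choose_pilot_key, rotate_on_reuse, and reencrypt_under_new_key are invented names standing in for the behavior described above (reusing an existing pilot key in one embodiment, or rewriting the affected data under a new pilot key in another).

```python
# Illustrative sketch of the pilot-key lookup performed before encryption.
# `pilot_store` stands in for non-volatile pilot key storage (360) and
# `key_queue` for the queue (354) of freshly generated keys (355).
def choose_pilot_key(pilot_store, key_queue, locator,
                     rotate_on_reuse, reencrypt_under_new_key):
    existing = pilot_store.get(locator)      # read of the non-volatile key store
    if existing is None:
        pilot = key_queue.pop(0)             # no key yet: take a fresh pilot key
    elif rotate_on_reuse:
        pilot = key_queue.pop(0)             # rotation embodiment: new pilot key,
        reencrypt_under_new_key(locator,     # previously stored data is read back
                                old_pilot=existing, new_pilot=pilot)
    else:
        pilot = existing                     # reuse embodiment: keep the old key
    pilot_store[locator] = pilot
    return pilot
```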
In one embodiment, distributor342communicates a valid/authorized read request345to acquire the desired data to storage system interface(s)339, which communicates a corresponding data read request to the storage system. Reactive to the returned (read) information response365, storage system interface(s)339provides the encrypted information369to decryption module370, and provides locator information367to non-volatile pilot key storage360that causes corresponding one or more pilot keys371to be provided to decryption module370. In one embodiment and such as for increasing an operating rate, read request (locator information)345is also provided to non-volatile pilot key storage360that causes corresponding one or more pilot keys371to be provided to decryption module370prior to receiving the returned (read) information365. Decryption module370, based on pilot key(s)371decrypts encrypted information369. In one embodiment, pilot key(s)371are used in decrypting one or more wrapped data cryptographic key(s), with the revealed data cryptographic key(s) used in decrypting the read encrypted data (369). In one embodiment, pilot key(s)371are used in decrypting the read encrypted data (369). Decryption module370provides a database read response (e.g., clear data) to interface(s)331, which then, typically securely, communicates the read data to the data client. In one embodiment, interface(s)331correlates received database requests (343) with data clients and database read responses373and database write responses379so that the appropriate data client can be sent a response. In one embodiment, client information and database request information accompanies the data plane processing of a database request, which is provided to interface(s)331along with the database response (373,379) so that the appropriate data client can be sent a response. FIG.3Eillustrates a Q-node data vault390including data safe392according to one embodiment. Data vault390provides communications interfaces381and391for data safe392. In one embodiment, data client interface(s)381provide secure communications to a data client (i.e., provide the Q-node functionality). In one embodiment, storage system interface(s)390provide communications to directly connected or networked storages systems. As shown, network interface380includes a network handler381(e.g., performing according to network protocols), decryption module382, decryption key queues383, cryptographic key generation module384(typically using a true random number generator), cryptographic key queues385, and encryption module386. One embodiment of the national intelligence-grade protection of the confidentiality and integrity of data in transit is provided by Q-net technology, including by Q-nodes disclosed in Cox, Jr. et al., U.S. Pat. No. 9,614,669 B1 issued Apr. 4, 2017, which is incorporated by reference in its entirety. In one embodiment, cryptographic key queues383,385are non-volatile so that secure data communication can be directly resumed from a power outage, from a low-power network interface380that only intermittently operates (e.g., for a low power Internet of Things device, to reduce bandwidth usages, etc.). In one embodiment, network interface380resumes communication by synchronizing with another network device (e.g., a centralized authority node (Q-node), client or server Q-node). FIG.3Fillustrates a data vault396including data safe398and network interface(s)397according to one embodiment. 
In one embodiment, interface(s)397provide (typically secure) communications to both data clients and storage systems. FIG.4Aillustrates a network400operating according to one embodiment. As shown, data client402(typically a Q-node) accesses DBMS405over network403and through a data request modifier node404(typically a Q-node). Also, DBMS405accesses local or remote storage408through data vault406with a data safe. In one embodiment, data request modifier404securely communicates with data client402. Data request modifier404modifies data requests from client402to DBMS405so that read and write requests generated by DBMS405accommodate the storage and retrieval of wrapped data cryptographic key(s) to and from storage408. In one embodiment, the data safe of data vault406inserts these wrapped data cryptographic key(s) in a write information request from DBMS405to storage408. In one embodiment, the data safe of data vault406removes these wrapped data cryptographic key(s) from a database read response from storage408to DBMS405. In one embodiment, data request modifier404also modifies responses being sent to data client402from DBMS405to reflect the original database request (e.g., so as not to expose to a data client any modification of a database request). In addition, network400(including data vault406with data safe between DBMS405and storage408) provides DBMS405plaintext versions of read and write requests so that many search actions can be carried out using the built-in search capabilities of DBMS405. FIG.4Billustrates a network410operating according to one embodiment. As shown, data client412accesses DBMS416over network413and through a data request modifier node414(typically a Q-node). In one embodiment, data request modifier414operates as data request modifier404described in relation toFIG.4A. In one embodiment, data vault417with data safe operates as data vault324ofFIG.3C. In one embodiment, the data safe of data vault417modifies database requests as described in relation to the data safe of data vault406ofFIG.4A. As with one embodiment shown and described in relation to each ofFIGS.3C and4A, the configuration of storage system419(with the data safe of data vault417being accessed by DBMS416) provides DBMS416plaintext versions of read and write requests so that many search actions can be carried out using the built-in search capabilities of DBMS416. FIG.4Cillustrates a data request modifier node440according to one embodiment. Network interface441provides communications with data clients, such as, but not limited to, that described in relation to network interface380ofFIG.3E. Database interface442provides communication with a DBMS. In one embodiment, network interface441performs the modification of database requests and/or responses. In one embodiment, DBMS interface442performs the modification of database requests and/or responses. FIG.4Dillustrates a process according to one embodiment. Processing begins with process block445. In process block446, received database requests and/or responses are adjusted for the accommodation of extra storage space for storing wrapped data cryptographic key(s) in storage. In process block448, the modified database request or response is forwarded accordingly. Processing of the flow diagram ofFIG.4Dis complete as indicated by process block449. FIG.4Eillustrates a data vault450including a data safe454according to one embodiment. Data vault450includes a DBMS handler and interface(s)452for communicating with one or more DBMS(s).
Data vault450includes a memory address handler and interface(s)456for communicating with storage. As shown, data safe454exchanges clear data (453) with DBMS handler and interface(s)452, and exchanges encrypted information (455) with memory address handler and interface(s)456. FIG.5illustrates a network500operating according to one embodiment. As shown, network500includes data client510(i.e., a Q-node), network503, data vault514(i.e., a Q-node with a data safe), and DBMS-1516. In one embodiment, data client510, network503, data vault514, DBMS-1516, and storage508operate as described in relation to network300ofFIG.3A. However, network500also includes memory address controller506that provides access to storage508to both DBMS-1516and DBMS-2526. Network500ofFIG.5also includes an insecure data client520(e.g., not a Q-node). Data client520interacts with DBMS-2526over network503. Because DBMS-1516and DBMS-2526are separate from each other, malware in DBMS-2526cannot compromise DBMS-1516. In one embodiment, memory address controller506guarantees that no insecure records are stored in secure areas of data storage508. Thus, malware arriving from a compromised client (e.g., data client520) cannot reach secure areas of data storage508, nor can such malware work its way back to a Q-node (514,510). In one embodiment, this architectural separation technique is used in a network described in relation toFIGS.1A,3A,3B,3C,4A and/or4B. In view of the many possible embodiments to which the principles of the disclosure may be applied, it will be appreciated that the embodiments and aspects thereof described herein with respect to the drawings/figures are only illustrative and should not be taken as limiting the scope of the disclosure. For example, and as would be apparent to one skilled in the art, many of the process block operations can be re-ordered to be performed before, after, or substantially concurrent with other operations. Also, many different forms of data structures could be used in various embodiments. The disclosure as described herein contemplates all such embodiments as may come within the scope of the following claims and equivalents thereof. | 46,922
11861028 | DETAILED DESCRIPTION In the present disclosure, all terms not defined herein have their common art-recognized meanings. To the extent that the following description is of a specific embodiment or a particular use of the subject matter of the present disclosure, it is intended to be illustrative only, and not limiting of the claimed subject matter. The following description is intended to cover all alternatives, modifications and equivalents that are included in the spirit and scope of the present disclosure, as defined in the appended claims. The present disclosure provides a sensitive information storage device (also referred to herein as an “SIS device”), systems, and methods for securely storing and managing sensitive information such as login credentials, Social Insurance Numbers (SINs), Social Security Numbers (SSNs), healthcare numbers, bank account numbers, lock combinations, passport numbers, cryptocurrency, tokens, certificates, any digital data or file, etc. In some embodiments, the system can also act as a two-factor authentication device as the SIS device can store tokens along with login credentials to verify that the user is a real person. The SIS device provides selective access to the sensitive information upon demonstration of the intention of the SIS device's user to retrieve the information. A system for storing and managing sensitive information comprises: a master controller and a sensitive information storage device (“SIS device”). The SIS device has an island that can be activated by user interaction with the SIS device. In embodiments, the island is a physical component (or non-software-based component) of the SIS device. In general, the island is deactivated by default, and when the island is deactivated, sensitive information that is stored on the SIS device cannot be accessed. Only when the island is activated by user interaction can the stored sensitive information be accessed. The user interaction may be in the form of a switch that the user can turn on to activate the island. In one embodiment, when the master controller sends a request to the island for retrieving sensitive information stored on the SIS device, the SIS device cannot retrieve the request from the island and process same until user interaction occurs. Once user intent is demonstrated, the SIS device processes the request and sends a response to the island. The master controller can then read the response and react accordingly. In this manner, the master controller does not have direct access to the sensitive information stored on the SIS device and only the information the user intends to retrieve is revealed to the master controller. In one embodiment, when the user demonstrates user intent, the master controller may have access to the sensitive information stored on the SIS device and the SIS device may rely on the passcode, read/write restrictions, and/or encryption of the SIS device chipset to control the master controller's access to the sensitive information. Referring toFIG.1, this block diagram illustrates a system100comprising an SIS device140and a master controller105, according to an embodiment of the present disclosure. SIS device140is for storing and managing sensitive information and comprises a wireless communication unit152connected to a communication controller158, which is the “island” in this embodiment, and a microcontroller154. The microcontroller154is connected to a memory160. SIS device140also has a “switch”150which has an “on” position and an “off” position.
In the on position, the switch150connects the wireless communication unit152with the island158, thereby activating the island158. In the off position, the switch150disconnects the wireless communication unit152from the island158, thereby deactivating the island158. Still referring toFIG.1, in some embodiments, two or more of the microcontroller154, the communication controller158, the memory160, and the wireless communication unit152may be integral parts of a chip, such as a secure element. Alternatively, one or more of the microcontroller154, the communication controller158, the memory160, and the wireless communication unit152may be implemented in separate chips, as in a chipset. Still referring toFIG.1, the master controller105is a communication device that is configured to communicate with SIS device140upon a request to manage and/or access sensitive information. Master controller105generally comprises a wireless communication unit112and a control unit114in communication with a memory120. The master controller105may also have an internal battery123which may be recharged by an external power source125. Alternatively or additionally, the master controller105may be powered by the external power source125. For example, master controller105may be a mobile device such as a laptop computer, notebook computer, tablet computer, netbook computer, mobile computer, feature phone, smartphone, palmtop computer, smartwatch, fitness tracker, virtual or augmented reality headset, virtual or augmented reality glasses, or other computing device such as a desktop computer, server, etc., or a combination thereof. Still referring toFIG.1, the memory120of master controller105may have software applications and data stored therein, such as application122and data128as illustrated inFIG.1. In some embodiments, the control unit114and the wireless communication unit112may be integral parts of a chip. Alternatively, the control unit114and the wireless communication unit112may be implemented in separate chips, as in a chipset. In some embodiments, the wireless communication unit112and/or the control unit114may include hardware, firmware, software, or a combination thereof. In some embodiments, some of the hardware components of the communication unit112and/or the control unit114may include discrete electronic components on a printed circuit board. Still referring toFIG.1, the master controller105may also include an output device130and an input device132. The output device130outputs information to a user visually, audibly, or both. The input device132receives input from the user. Although the output device130and the input device132are illustrated as being separate from each other inFIG.1, in some embodiments, the output device130and the input device132may be integral parts of an input/output device. Although the output device130and the input device132are illustrated as being detachably coupled to the master controller105inFIG.1, in some embodiments, the output device130and/or the input device132may be an integrated component of the master controller105. Input device132may include for example a keyboard, mouse, pen, voice input device, touch input device, etc. Output device130may include for example a display, speakers, printer, etc. Still referring toFIG.1, the memory120,160may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or a combination thereof. 
In one embodiment, the memory component160of SIS device140comprises the combined non-volatile memory of two or more of the wireless communication unit152, the communication controller158, and the microcontroller154. In another embodiment, the memory component160comprises one or more non-volatile memory storing chips and optionally the combined non-volatile memory of two or more of the wireless communication unit152, the communication controller158, and the microcontroller154. The memory160may be part of a secure element. Still referring toFIG.1, both the SIS device140and master controller105are equipped with necessary hardware and/or software to enable them to establish a communication link170therebetween. In one embodiment, master controller105has a program or application122installed thereon for communicating with and controlling SIS device140. In embodiments, the communication link170is not an Internet-based communication protocol. The communication link170may be based on a wireless communication protocol, including for example Wi-Fi®, Bluetooth®, radio-frequency identification (“RFID”), near-field communication (“NFC”), etc., or a combination thereof. The wireless communication unit112can establish wireless communication with the wireless communication unit152of SIS device140, to receive information from and transmit information to the SIS device140via the communication link170. Alternatively or additionally, the communication link170may be based on a wired connection that allows communication between communication units152and112. In some embodiments, the communication link170provides power to the SIS device140. In further embodiments, the SIS device140may use one or more communication protocols and/or wired connections for its power supply and/or communication function. For example, the SIS device140may use NFC for power while using Bluetooth or Wi-Fi for communication. In other embodiments, the SIS device140uses NFC for both power and communication. In yet another embodiment, the SIS device140uses Wi-Fi for power while using Bluetooth for communication. Of course, other variations are possible. Still referring toFIG.1, as an example for illustrative purposes, both SIS device140and master controller105are NFC-enabled devices and communication link170is based on NFC. NFC is a standards-based connectivity technology that establishes wireless connection between two devices in close proximity of each other, typically in the order of a few centimeters. NFC allows users to transfer information by touching, or tapping, one device with another device. As with proximity card technology, NFC uses magnetic induction between two loop antennas located within two NFC-enabled devices that are within close proximity of each other, effectively forming an air-core transformer. The act of bringing one NFC-enabled device to close proximity of another NFC-enabled device with or without the two devices physically contacting each other, is referred to as an “NFC tap” or “tapping” operation hereinafter. With an NFC tap operation, a user can conveniently perform a variety of tasks, including mobile payment, secure login, wireless pairing, triggering peer-to-peer data exchange, file transfer, file sharing, mobile gaming, user identification, and so on. Many smartphones currently on the market already contain embedded NFC chips that can send encrypted data a short distance to a reader located next to a retail cash register. 
Still referring toFIG.1, the NFC is an open platform technology standardized in ECMA-340 and ISO/IEC 18092. With these standards, ECMA is European Computer Manufacturers Association, ISO is International Organization for Standardization and IEC is for the International Electrotechnical Commission. Generally, these standards specify the modulation schemes, coding, transfer speeds and frame format of the RF interface of NFC devices, as well as initialization schemes and conditions required for data collision-control during initialization for both passive and active NFC modes. Furthermore, these standards also define the transport protocol, including protocol activation and data-exchange methods. Still referring toFIG.1, when SIS device140is in close proximity of master controller105and the island is activated, information exchange between SIS device140and master controller105occurs through the NFC-based communication link170. This is the NFC tap. In some embodiments, the NFC tap is sufficient to demonstrate user intent. Still referring toFIG.1, the switch150may not necessarily be a physical switch. For example, for an SIS device140that is NFC-enabled, the position of the switch150may simply be determined by whether the device140is within NFC range of the master controller105. More specifically, in this sample embodiment, the switch150is off when the device140is out of NFC range of the master controller105, and the switch150is on when the device140is within NFC range of the master controller105. Still referring toFIG.1, in a sample embodiment, the SIS device140comprises an NFC memory tag or microprocessor, such as a memory smart card. In a further embodiment, the switch150is on the communication unit152and is in the off position by default, wherein the switch150is open such that communication unit152is disconnected from the NFC memory tag or microprocessor. When the switch is off, the SIS device is not in communication mode and remains so unless the user demonstrates intent by turning the switch150on. When the switch150is in the on position, the switch150is closed to connect communication unit152with the NFC memory tag or microprocessor, thereby allowing communication from the master controller105to the SIS device140. The master controller105can then read and write directly to the memory160of the SIS device140. Still referring toFIG.1, in one embodiment, the wireless communication unit152is an NFC antenna and the island158is an NFC controller with an NFC energy harvesting chip. The NFC energy harvesting chip allows SIS device140to operate without the need for an internal power source. The switch150, which may be a physical switch, allows the SIS device's user to control when the NFC energy harvesting chip can harvest power from another NFC device, such as the master controller105, to help prevent a malicious user within range from communicating with the SIS device140without the authorized user's knowledge. By turning on switch150, thereby connecting the NFC antenna with the NFC controller, the user enables the NFC energy harvesting chip to harvest energy from the NFC signal of the master controller when the SIS device is within NFC range of the master controller105to power the components of SIS device140. Once powered, the SIS device140can communicate with the master controller, using any of the above-mentioned communication protocols and/or wired connection or any combination thereof, and process any requests from the master controller. 
In other words, the SIS device140has no power and cannot operate or be detected unless it is within NFC range of the master controller105(or another NFC-emitting device) and the switch150is turned on. Still referring toFIG.1, the “island” of the SIS device does not necessarily have to be the communication controller158. In other embodiments, the island may be the communication unit152, the microcontroller154, or the memory160. The position of the switch150may determine the location of the island in the SIS device140. In some embodiments, which are not shown, the switch150may be positioned elsewhere in the SIS device140. In one example, the switch is connected to the communication unit152to activate and deactivate same, such that signals cannot be received or transmitted unless the switch is on. In another example, the switch is connected to the NFC energy harvesting chip to activate and deactivate same, such that the chip cannot harvest energy unless the switch is on. In yet another example, the switch150may be used to connect and disconnect the communication controller158from the microcontroller154. In another example, the switch150may be used to connect and disconnect the microcontroller154from the memory160. In another example, where the SIS device comprises an NFC memory tag or microprocessor, the SIS device itself may be considered the “island” and the switch150may be a physical switch on the communication unit152or may be turned on when the SIS device is within NFC range of the master controller105. Other switch and/or island configurations may be possible. Still referring toFIG.1, the SIS device140may optionally include a visual, audible, and/or sensory range indicator, such as an LED light, sound, or vibration, which turns on when the SIS device140is within range of another NFC device, to let the user know, for example, where to place the SIS device140relative to the NFC-enabled master controller105. The range indicator may be part of a range indication NFC antenna circuit separate from the wireless communication unit152and island158, which are normally disconnected while the switch150is in its default off position. The range indication NFC antenna circuit comprises its own NFC antenna, and, optionally, an NFC controller, permanently connected to the indicator, so that when the range indication NFC antenna circuit is within range of another NFC-enabled device, the range indication NFC antenna circuit provides power to the indicator to activate the same (e.g., light it up), regardless of whether the switch150is turned on. In other words, even when the switch150is off and the main SIS device components have no power, the indicator can still be activated, because it is powered by the separate range indication NFC antenna circuit. Still referring toFIG.1, in an alternative or additional embodiment, both the SIS device140and master controller105are configured to communicate via connected communication protocols such as Bluetooth, Wi-Fi, etc. “Bluetooth” used herein refers to Bluetooth, Bluetooth LE, and all variations and future versions thereof. When using connected communication protocols, there is more potential for malicious attempts to access the SIS device140without the user's knowledge. To mitigate this problem, the island158acts as an intermediary between the wireless communication unit152and the memory160where the sensitive information is stored.
When switch150is off, the island is in the deactivated or “communication” position, wherein the island is in communication with the wireless communication unit152but is disconnected from the memory160. When the switch150is turned on, the island158is in the activated or “memory” position, wherein the island is disconnected from the wireless communication unit152and is in communication with the memory160. Suitable hardware, firmware, and/or software are used for the island158. Still referring toFIG.1, by default, the switch150is in the off position and the island158is deactivated or in the communication position. When the wireless communication unit152receives a request from the master controller105for sensitive information, the request is sent to the island158and is stored there until the user shows intent by turning on the switch150. When the switch150is on, the island is activated or placed in the memory position, and the island disconnects from the wireless communication unit152, connects to the memory160, and then processes the request and retrieves the requested sensitive information from the memory160. The switch150is turned off automatically thereafter so that communication between the island158and the wireless communication unit152can resume. Once island158reconnects with wireless communication unit152, the sensitive information is sent out by the wireless communication unit152via communication link170to the master controller105. Still referring toFIG.1, in a sample embodiment, the island158may comprise an island memory chip. When the switch is in its default off position, the island memory chip is connected to the wireless communication unit152but not to the microcontroller154, thus allowing the communication unit152to write data to the island memory chip. The microcontroller154, therefore, acts as the gate keeper to the memory160. When the switch is turned on, the island memory chip is disconnected from the wireless communication unit152and is connected to the microcontroller154. Therefore, when the switch150is on, the microcontroller154can read and process the data on the island memory chip and can also write to the island memory chip. When the switch is turned off again, the connection between communication unit152and the island memory chip is re-established and the wireless communication unit152can then read the data on the island memory chip and transmit same to the master controller105. Still referring toFIG.1, in another sample embodiment, the island may be part of the microcontroller154such that when the switch is off, the microcontroller154is connected to the wireless communication unit152but disconnected from the memory160, and when the switch is on, the microcontroller154is disconnected from the wireless communication unit152and connected to the memory160. When the switch is off, the wireless communication unit152can communicate directly with the microcontroller154. When the switch is on, the microcontroller154can retrieve information from memory160. Still referring toFIG.1, in some embodiments, especially where the communication link170is based on connected communication protocols, the communication may be encrypted for an added layer of security. Still referring toFIG.1, the SIS device140is preferably small enough to be easily portable or embedded in a portable object. For example, SIS device140may be integrated into or in the form of a credit card, key fob, coin, ring, sticker, ID badge, watch strap, implant, phone case, bracelet, etc. 
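The island's two positions described above behave like a small state machine: with the switch off, only the wireless communication unit can reach the island; with the switch on, only the microcontroller and memory can. The following Python sketch is purely illustrative (the Island class and its methods are invented names) and compresses the hardware gating into method guards.

```python
# Illustrative state-machine model of the island/switch gating: requests
# park on the island while the switch is off, are processed against memory
# only while the switch is on, and responses are readable again only after
# the switch returns to the "communication" position.
class Island:
    def __init__(self, memory):
        self._memory = memory            # sensitive information store (160)
        self._buffer = None              # request/response parked on the island
        self.switch_on = False           # off by default: "communication" position

    # --- reachable only from the wireless communication unit (switch off) ---
    def post_request(self, account_id):
        if self.switch_on:
            raise RuntimeError("island disconnected from communication unit")
        self._buffer = ("request", account_id)

    def collect_response(self):
        if self.switch_on or not self._buffer or self._buffer[0] != "response":
            return None                  # nothing ready for the master controller
        _, payload = self._buffer
        self._buffer = None
        return payload

    # --- reachable only from the microcontroller (switch on) ----------------
    def process(self):
        if not self.switch_on or not self._buffer or self._buffer[0] != "request":
            return
        _, account_id = self._buffer
        self._buffer = ("response", self._memory.get(account_id))

island = Island({"acct-7": "user:hunter2"})
island.post_request("acct-7")    # master controller's request parks on the island
island.switch_on = True          # user demonstrates intent
island.process()                 # credentials copied from memory to the island
island.switch_on = False         # switch returns to the communication position
print(island.collect_response()) # -> "user:hunter2"
```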
The SIS device may itself be a computing device with a processor, input device, output device, etc. The size and form factors are only limited by the size of the components used for the SIS device. Still referring toFIG.1, in some embodiments, two or more different devices may operate together to act as the master controller. For example, a Bluetooth and NFC-enabled smartwatch can act as a conduit for a smartphone to communicate with the SIS device. The smartphone can send a request to the smartwatch to retrieve login credentials for an account, and the smartwatch can then be used to retrieve the requested login credentials from the SIS device with a simple NFC tap. The smartwatch can thereafter transmit the login credentials back to the smartphone to be used by the smartphone itself or to be forwarded to an online source that requested the login credentials. In a further embodiment, the SIS device may be built into the watch band of the smartwatch. Still referring toFIG.1, to protect sensitive information, the sensitive information is preferably stored in SIS device140only and none of the sensitive information is stored in master controller105. The master controller's main function is to provide a gateway for the user to add, retrieve, update, and delete information on SIS device140. The master controller105helps the user manage and track the information stored on SIS device140, without storing the sensitive information itself. Still referring toFIG.1, in some embodiments, memory120of master controller105stores a list of online sources for which the user has previously created an account. The master controller105may also store the account(s) associated with each online source. The master controller may assign an alias to an account if more than one account is associated with the same online source, in order to help the user distinguish between different accounts. Memory160of SIS device140stores a list of accounts and the login credentials associated therewith. Still referring toFIG.1, in some embodiments, memory160stores data on the identity of the online sources. In other embodiments, for added security, memory160does not store the identity of the online sources, such that if the SIS device140is hacked, the hacker cannot link the stored login credentials to their corresponding online sources. In some embodiments, the identity of the online sources is stored in another device that is separate from the SIS device140. Still referring toFIG.1, in some embodiments, for each account's login credentials, master controller105generates an Account ID and stores the Account ID with the identity of the corresponding online source account. The master controller also sends the Account ID to the SIS device140so that the SIS device can store the Account ID in association with the corresponding login credentials. Accordingly, when the master controller needs to retrieve the login credentials for a particular account, the master controller sends a request to SIS device140with the corresponding Account ID and the SIS device140uses the Account ID to look up the login credentials. Still referring toFIG.1, in some embodiments, instead of using an Account ID to look up the login credentials, the master controller may use the memory location of the login credentials to store and retrieve same from the SIS device.
This may help simplify the programming of the SIS device, as the master controller handles more of the actual management of the sensitive information by tracking the memory location of each account and/or login credentials associated therewith on the SIS device. In this manner, the master controller can simply request the SIS device to retrieve a specific memory location and, in response, the SIS grabs the specific memory location rather than performing a search for a particular Account ID. Still referring toFIG.1, in some embodiments, the sensitive information on the SIS device140is encrypted, which can only be decrypted by the master controller. Still referring toFIG.1, in some embodiments, the master controller may, upon user request, randomly generate passwords for the user and the randomly generated passwords may be based on user defined criteria such as length, permitted characters, etc. Still referring toFIG.1, the SIS device140can verify whether the master controller105is an authorized master controller that is allowed to access the sensitive information stored on SIS device140. In some embodiments, an identification code is associated with each master controller and the identification code may be based on a plain language description, a name given by its user, a serial number, a phone number, a Media Access Control (MAC) number, etc., or a combination thereof. The SIS device stores the identification code of the master controller that is authorized by the user to establish communication with the SIS device. Any request and/or information sent to the SIS device from the master controller105contains the identification code and, upon receipt of the request and/or information, the SIS device140compares the identification code with the stored identification code of the authorized master controller. When there is a match between the identification code of the master controller105and the identification code of the authorized master controller, the SIS device140verifies the master controller as an authorized master controller, and further communication and operations may ensue. In an event that the master controller is not verified as an authorized master controller, the SIS device140will not allow the unauthorized master controller to access the sensitive information. In some embodiments, SIS device140may store a list of identification codes of multiple authorized master controllers so that more than one master controller may access the same SIS device. Still referring toFIG.1, in alternative or additional embodiments, the communication unit152is passcode protected and only a master controller with the correct passcode can communicate with the SIS device140. In some embodiments, the master controller has an encryption key and the SIS device is configured to only communicate with master controllers that have a specific encryption key. The SIS device140can use the passcode or the encryption key to determine whether a master controller is an authorized master controller. In other words, the SIS device can authenticate a master controller using the encryption key, identification code, and/or passcode. Still referring toFIG.1, in some embodiments, the master controller may store an identification number unique to the SIS device, such as a UID, as a way to control and/or track which SIS device(s) can communicate with the master controller. 
For example, if the UID of a particular SIS device is not stored in the master controller, the master controller does not communicate with that particular SIS device even if the SIS device is within NFC range and/or the switch is turned on. Therefore, the master controller may be configured to only communicate with SIS devices with UIDs that are recognizable by the master controller. This allows the master controller to authenticate an SIS device as an authorized SIS device and to only communicate with authorized SIS devices. Still referring toFIG.1, the master controller105may receive information from and provide information to one or more online sources, for example a web server184and a personal computer186via connections183,188, respectively. In a sample embodiment, the web server184hosts a website185. Connections183,188may be network connections via a network, which may be one or more wired and wireless networks and the Internet, or a combination thereof. In other embodiments, connections183,188are direct connections (i.e., machine-to-machine connections) using a wireless communication protocol (e.g., Bluetooth) or a wired connection. A direct connection allows communication where no mobile cellular service or Internet access is available, such as in remote work environments. A direct connection also keeps the transmission of sensitive information strictly between master controller105and the online source, thereby minimizing the risk of malicious interception or attack by a third party. Still referring toFIG.1, in some embodiments, master controller105may be configured to selectively store less sensitive information in its memory120when convenience is preferred over security. A benefit of this embodiment is that the user is not required to access the SIS device140, which may be stored away for transportation or security. This may be the preferred option where the effort to access the SIS device140outweighs the security risk of that particular online source, such as an account relating to an online blog, or where the user has limited direct access to the SIS device. In these embodiments, the user would still authenticate herself with the master controller105to retrieve the less sensitive information, for example with a password, passcode, fingerprint, biometric scan, etc., or a combination thereof, which may or may not be already stored in the master controller105. In further embodiments, the user may choose to forgo the need to authenticate in order to retrieve the less sensitive information from the master controller105. In this case, if a request is made to the master controller105from an online source184,186and the master controller105is within range to communicate with the online source through connections183,188, then the information can be automatically retrieved from the master controller without any user interaction. The user may choose this option for convenience, with the understanding that the only barrier to the information stored in the master controller105is being out of communication range of the online source184,186. Still referring toFIG.1, in some embodiments, the master controller105acts as an input peripheral for an online source, such as, for example, a USB HID keyboard. In other words, the master controller105can automatically sign into the online source by simulating the keystrokes required for the username and password.
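Putting together the Account ID bookkeeping, the identification-code check, and the UID check of the preceding paragraphs, a rough software model might look like the sketch below. It is illustrative only: SISDevice, MasterController, and their fields are invented, and the island/switch gating shown earlier is omitted here for brevity.

```python
# Illustrative model of the split bookkeeping and mutual checks: the SIS
# device answers only master controllers whose identification code it has
# stored, the master controller talks only to SIS devices whose UID it
# recognizes, and only the SIS device ever holds the credentials.
import secrets

class SISDevice:
    def __init__(self, uid, authorized_controller_ids):
        self.uid = uid
        self._authorized = set(authorized_controller_ids)
        self._memory = {}                                  # Account ID -> credentials

    def store(self, controller_id, account_id, credentials):
        if controller_id not in self._authorized:
            return False                                   # unauthorized master controller
        self._memory[account_id] = credentials
        return True

    def retrieve(self, controller_id, account_id):
        if controller_id not in self._authorized:
            return None
        return self._memory.get(account_id)

class MasterController:
    def __init__(self, controller_id, known_sis_uids):
        self.controller_id = controller_id
        self._known_sis_uids = set(known_sis_uids)
        self._accounts = {}                                # online source -> Account ID only

    def register(self, sis, source, credentials):
        if sis.uid not in self._known_sis_uids:
            return None                                    # unrecognized SIS device
        account_id = secrets.token_hex(8)
        if sis.store(self.controller_id, account_id, credentials):
            self._accounts[source] = account_id
            return account_id
        return None

    def login(self, sis, source):
        return sis.retrieve(self.controller_id, self._accounts[source])

sis = SISDevice(uid="SIS-001", authorized_controller_ids={"MC-7"})
mc = MasterController(controller_id="MC-7", known_sis_uids={"SIS-001"})
mc.register(sis, "bank.example", "alice:correct-horse")
print(mc.login(sis, "bank.example"))                       # -> "alice:correct-horse"
```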
Still referring toFIG.1, in one embodiment, the master controller105may store one or more recovery contacts, such as phone numbers, in the SIS device140, so that in the event that the master controller105is lost, stolen, or damaged, a new master controller having one of the recovery contacts can connect to the SIS device140. New master controller105′ may have all the same components, features, and functions as master controller105as described above. In a sample embodiment, when a new master controller105′ tries to communicate with SIS device140, the SIS device140recognizes that the new master controller is not the old one. The SIS device then sends, via the new master controller, a message containing a temporary passcode to the recovery phone number. If the new master controller105′ is associated with the recovery phone number (e.g., the new master controller has a SIM card with the recovery phone number), then the new master controller receives the message with the passcode. The SIS device140then requests that the passcode be entered on the new master controller via the input device132. If the passcode entered on the new master controller105′ matches the passcode sent by the SIS device140, the SIS device140allows the new master controller to access the sensitive information stored therein. In another embodiment, the control unit114of the new master controller105′ may directly and automatically enter the received passcode without using the input device132. In some embodiments, when a new master controller with a new phone number is trying to access the SIS device140, the temporary passcode may be entered into the new master controller to render it an authorized master controller for the SIS device. While the above is described with respect to phone numbers, it can be appreciated that the recovery contacts may include other forms of communication, such as email addresses, social media accounts, etc. Still referring toFIG.1, in one embodiment, a backup SIS device140′ is required for connecting the main SIS device140with the new master controller105′. The backup SIS device140′ may have all the same components, features, and functions as the SIS device140as described above. The backup SIS device140′ is intended to be kept in a secured place separate from the main SIS device140, while the main SIS device140is intended to be carried around by the user. For example, the backup SIS device140′ may be a docking and/or charging station for the master controller105. The backup SIS device140′ has in its memory the encryption key, identification code, and/or passcode of the master controller105so that when the new master controller105′ communicates with the backup SIS device140′, the new master controller can obtain the necessary encryption key, identification code, and/or passcode to communicate with and access the main SIS device140. In some embodiments, the backup SIS device140′ may be associated with and support more than one main SIS device140.
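The temporary-passcode recovery handshake described above can be modeled roughly as follows. This is an illustrative sketch only; RecoverableSIS and the simulated message transport are invented, and a real device would deliver the passcode over the new master controller's messaging channel rather than a Python callback.

```python
# Illustrative sketch of the recovery handshake: the SIS device issues a
# single-use temporary passcode to the stored recovery contact and grants
# the new master controller access only if the same passcode is entered back.
import hmac
import secrets

class RecoverableSIS:
    def __init__(self, recovery_contact):
        self._recovery_contact = recovery_contact
        self._pending = None

    def begin_recovery(self, send_message):
        self._pending = f"{secrets.randbelow(10**6):06d}"      # temporary passcode
        send_message(self._recovery_contact, self._pending)    # e.g., SMS via the new controller

    def finish_recovery(self, entered_passcode):
        ok = self._pending is not None and hmac.compare_digest(self._pending, entered_passcode)
        self._pending = None                                   # single use
        return ok                                              # True -> authorize new controller

inbox = {}
sis = RecoverableSIS(recovery_contact="+1-555-0100")
sis.begin_recovery(lambda contact, code: inbox.__setitem__(contact, code))
assert sis.finish_recovery(inbox["+1-555-0100"]) is True
```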
In this embodiment, only the backup SIS device140′ can give the new master controller105′ access to the sensitive information and only the backup SIS device140′ can send a passcode to the recovery phone number. Requiring access to the backup SIS device140′ may prevent a malicious master controller from trying to connect to the main SIS device140under false pretenses, as the main SIS device140is more likely to be lost or stolen on its own. Still referring toFIG.1, while being reset, the SIS device140may compare the sensitive information stored thereon with those stored on the backup SIS device140′ to determine the number of accounts that were in the main SIS device140but are not in the backup SIS device140′. The information stored on the backup SIS device140′ might not be up-to-date, as some accounts may have been added, modified, or deleted since the backup SIS device140′ was last updated. While the comparison does not provide the user the details of any discrepancy, the user is at least made aware of the number of accounts that is missing in the backup SIS device140′. Still referring toFIG.1, the backup SIS device140′ may have the same or different firmware programs than the main SIS device140. In some embodiments, the backup SIS device140′ is configured to have different program structure than the main SIS device140to allow the backup SIS device to more efficiently store and update the sensitive information. For example, the backup SIS device140′ may be defragmented with no memory gaps and may store information, such as serial numbers, about the SIS device140being backed up. This may help avoid overwriting or updating backup SIS device140′ with the wrong main SIS device. In another example, the SIS device140and the backup SIS device140′ may have the same firmware programs but with different parts being implemented. Still referring toFIG.1, in some embodiments, the master controller105can convert the backup SIS device140′ to function as the main SIS device140. For example, the master controller105can reformat the information stored on the backup SIS device140′ into a structure that can be searched and change the settings on the backup SIS device140′ to authenticate and handle requests from the master controller. In another example, the master controller can retrieve all the stored information from the backup SIS device140′ and convert the information into a structure that can be searched, and send the information back to the backup SIS device140′, thereby converting the backup SIS device140′ into a main SIS device140. Of course, other ways of converting the backup SIS device140′ into a main SIS device are possible. Still referring toFIG.1, in one embodiment, the backup SIS device140′ may be set up and/or updated using the master controller. For example, the master controller105is configured to be able to read and temporarily store the data stored on the main SIS device140and transfer same to the backup SIS device140′. Once the transfer is complete, the master controller105deletes the temporarily stored data from its memory120for added security. The master controller may keep track of any account that has been modified since the backup SIS device140′ was last updated and notify the user accordingly. Still referring toFIG.1, when the main SIS device140is lost, damaged, or stolen, the master controller may use the backup SIS device140′ to help the user retrieve her sensitive information. 
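The discrepancy check described above amounts to a set comparison that reveals counts rather than credentials. A minimal illustrative sketch, with invented names and inputs:

```python
# Illustrative sketch: report how many accounts on the main device are
# absent from the backup, plus which accounts the master controller knows
# were modified since the last backup, without exposing any credentials.
def backup_discrepancies(main_ids, backup_ids, modified_since_backup):
    missing_count = len(set(main_ids) - set(backup_ids))   # reported as a count only
    needs_attention = sorted(modified_since_backup)        # tracked by the master controller
    return missing_count, needs_attention

count, flagged = backup_discrepancies({"a1", "a2", "a3"}, {"a1", "a3"}, {"a2"})
print(count, flagged)   # -> 1 ['a2']
```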
There may be discrepancies between the information stored on the main SIS device140and the backup SIS device140′ since the information stored on the backup SIS device140′ might not be up-to-date, as some accounts may have been added, modified, or deleted since the backup SIS device140′ was last updated. The master controller can compare its account information with that of the backup SIS device140′ to identify any new, modified, or deleted accounts. If an account was added or updated, then the user will have to resolve the account with the corresponding online source, since the master controller only knows which accounts are new or modified but does not have the corresponding login credentials. Still referring toFIG.1, in some embodiments, the master controller can export all data stored thereon in a single export file. The master controller may provide the user the option to include the sensitive information stored on the SIS device in the export file. In embodiments, the user is prompted to connect the SIS device with the master controller (e.g., by turning on the switch150) to allow the master controller to retrieve the sensitive information. The export file may be a plain text file and may or may not be encrypted. In further embodiments, the master controller may provide the user the option to send the export file to an online source of the user's choice, such as a cloud storage service, a server, a personal computer, etc., or to another SIS device, another master controller, or a portable media storage device. The export file may be used to restore data on a new master controller or new SIS device, and to provide the user with a complete copy of the sensitive information, which may be converted to a hardcopy by printing. Still referring toFIG.1, in one embodiment, online sources186,184have a login request program stored thereon to allow them to send a request to the master controller105for sensitive information, such as login credentials for signing into an account. The login request program may be, for example, a browser extension, an operating system program, etc. When the master controller105receives such a request, a notification is displayed or otherwise shown on the master controller. The notification may include an identification of the online source that is requesting sensitive information and the type of information that is requested, etc. The user then has the option to activate the island158to allow the master controller105to access the information stored on the SIS device140. In one embodiment, the user shows intent by turning switch150on to activate the island158. If the SIS device140and master controller105are relying on NFC to connect, the switch is turned on by bringing the SIS device within NFC range of master controller105. If the SIS device140and master controller105are connecting by other communication protocols, then SIS device140may not necessarily have to be in close proximity to the master controller. While the island is activated, the sensitive information stored on the SIS device140can be accessed and the master controller105can then retrieve the sensitive information that is associated with the specific account of the online source from which the request was sent and send the requested sensitive information to the online source to log into the account automatically.
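The request-and-intent sequence in the preceding paragraph might be organized roughly as follows. This Python sketch is illustrative only; the shape of the login request, the user-interface object, and the island_is_active and retrieve calls are assumed names rather than an actual application programming interface.

    import time

    def handle_login_request(request, sis_device, ui, timeout_s=30):
        """Notify the user of a request, wait for user intent, then retrieve and forward credentials."""
        # Show which online source is asking and what kind of information it wants.
        ui.notify(f"{request['source']} is requesting: {request['type']}")

        # Wait for the user to demonstrate intent, e.g. by turning switch 150 on
        # (with NFC, by bringing the SIS device within range of the master controller).
        deadline = time.monotonic() + timeout_s
        while not sis_device.island_is_active():
            if time.monotonic() > deadline:
                return None                                  # no intent shown, so nothing is retrieved
            time.sleep(0.1)

        credentials = sis_device.retrieve(request["account_id"])   # assumed retrieval call
        request["reply_channel"].send(credentials)                  # forwarded so the source can log in automatically
        return credentials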
In one embodiment, the master controller105may convert the requested sensitive information into the keystrokes required for logging in, such as the username and password, to allow automatic login. In another embodiment, the login request program stored on the online sources184,186may convert the retrieved sensitive information into the keystrokes required for logging in to allow automatic login. Still referring toFIG.1, a request for sensitive information may be generated when the user is trying to log in to a locked personal computer having the login request program installed thereon. For example, a program may be installed on the personal computer that would generate a request whenever the user login screen is displayed. In another example, the request may be generated when the user clicks on the login screen or the login input box. In yet another example, a computer may include a passive NFC sticker that contains the identity of the computer and when the master controller scans the sticker, the master controller receives a request to log into that computer. Still referring toFIG.1, a request for sensitive information may also be generated when the user is trying to access a website185via an online communication device having the login request program installed thereon. The communication device may be the master controller itself or another device such as a smartphone, tablet, personal computer, etc. For example, the user may surf the Internet through the browser application122, using the output device130and the input device132of the master controller105. The website185may require the user to log in to a specific user account. In some embodiments, the request may be automatically generated by a program that can recognize online service access points, such as a login page identified by its website address. In other embodiments, the request is generated by the user. In some embodiments, a browser extension may monitor which websites are visited so that it can send requests based on the websites. In other embodiments, an operating system program may monitor which programs are active on the computer and may generate requests based on this knowledge. Still referring toFIG.1, once the master controller105receives the request for sensitive information, the master controller checks if the online source from which the request was generated is a recognizable one, i.e., the online source is one for which the SIS device140contains corresponding sensitive information. If the master controller105recognizes the online source and user intent has been demonstrated, the master controller automatically retrieves the relevant sensitive information from the SIS device140and sends same to the online source. After receiving the login credentials, the online source may automatically log in by simulating the keystrokes required for the login credentials. Still referring toFIG.1, if the master controller105recognizes the online source but there is more than one account on the SIS device140that corresponds to the online source, the master controller105will ask the user to select which account to retrieve the sensitive information from, via input device132and output device130. Once the user makes the selection and shows user intent, the master controller105retrieves the sensitive information associated with the selected account from the SIS device140and sends same to the online source.
Still referring toFIG.1, if the master controller105does not recognize the online source, the master controller will ask the user to choose the online source on the SIS device140from which to retrieve the sensitive information. This may be done using the input and output devices132,130of the master controller. If there is more than one account associated with the chosen online source, the master controller will further ask the user to choose the account from which to retrieve the sensitive information. Once the online source and/or account is selected and user intention is demonstrated, the master controller retrieves the sensitive information from the SIS device140and sends same to the online source. After retrieving the sensitive information, the master controller may ask the user whether to associate the online source from which the request was generated with the selected online source stored on SIS device140going forward. Still referring toFIG.1, in one embodiment, the SIS device140may be integrated with the master controller105to form a single integrated SIS device141. For example, the SIS device140may be connected to the master controller105by a wired connection170in one body. Optionally, a secondary computing device, such as a computer or smartphone, may be configured to manage the information on the integrated SIS device141and/or provide full communication, input, and/or output functions for the integrated SIS device141if necessary. For example, an external input and/or output device may be the secondary computing device. In this embodiment, provided that user intent has been demonstrated (i.e. the island has been activated), the integrated SIS device141can directly look up the sensitive information stored internally and can automatically send the sensitive information to any online source upon request via a communication link, to allow automatic log in where applicable. In other words, just like the SIS device140, the integrated SIS device141does not allow access to the sensitive information stored therein in the absence of user intent. The integrated SIS device141may be paired (for example, via Bluetooth) with various online sources, to allow the integrated SIS device141to receive sensitive information requests from same. The user can show intent by turning on switch150to allow the sensitive information to be accessed by the integrated SIS device141through its island. Still referring toFIG.1, in some embodiments, the SIS device140or the integrated SIS device141may use biometric security to show user intent. For example, biometric input from the user may be used to turn on switch150. In a further example, the SIS device140or the integrated SIS device141includes a fingerprint scanner and requires a fingerprint match to activate the island158, thereby allowing access to the sensitive information. In other embodiments, the biometric security information can be inputted through the master controller, for example, via a fingerprint scanner on the master controller. Still referring toFIG.1, in a sample embodiment, the integrated SIS device141is or is part of a smartphone having a fingerprint scanner and the smartphone itself is requesting sensitive information from integrated SIS device141. The user may show intent to access the sensitive information by scanning his fingerprint on the smartphone to prompt the integrated SIS device141to accept the request and send the requested information to the smartphone. 
In another example, the online source is a smartphone with NFC capability and the user may show intent by turning on switch150and/or tapping the smartphone to the NFC-enabled integrated SIS device141to power up and prompt the integrated SIS device141to receive the sensitive information request from the smartphone and to send the requested sensitive information. Still referring toFIG.1, in some embodiments, the master controller105can share sensitive information with another online communication device, such as another master controller, so that an account and its corresponding account information may be shared among multiple SIS devices. The master controller that shares sensitive information is referred to as the “sharing controller” and the device that receives the sensitive information from the sharing controller is referred to as the “receiving controller” hereinafter. The receiving controller may have its own SIS device that is separate from the sharing controller's SIS device. The sharing controller can set restrictions in terms of what the receiving controller can do with the received sensitive information. This may be achieved by, for example, having flag sets for the shared information, so that the receiving controller can only use the information as the flags would permit. The receiving controller's use of the sensitive information includes, for example, logging into an online source with the sensitive information. Still referring toFIG.1, in one embodiment, the sharing controller may allow the receiving controller to use the received sensitive information only once. This may be applicable for automatic sign in, such as to a Wi-Fi network. The sharing controller sends the login credentials to the receiving controller and the receiving controller immediately applies the credentials without displaying or storing same. In this embodiment, the receiving controller needs to request the sensitive information from the sharing controller for every use. Still referring toFIG.1, the sharing controller may allow the receiving controller's SIS device to store the received sensitive information but the receiving controller's use of the sensitive information is monitored by the sharing controller. For example, a notification is sent to the sharing controller every time the receiving controller uses the sensitive information. If the receiving controller cannot notify the sharing controller, then the receiving controller cannot retrieve the sensitive information from the receiving controller's SIS device. In some embodiments, once the notification is sent to the sharing controller, the receiving controller can only retrieve the sensitive information from its SIS device after receiving a confirmation from the sharing controller. This prevents the receiving controller from taking control over the sensitive information and/or logging into an online source without the sharing controller's knowledge and/or permission. Still referring toFIG.1, in other embodiments, the receiving controller can use the shared sensitive information stored on its SIS device without restriction. While a notification may optionally be sent to the sharing controller, the receiving controller does not require the sharing controller's confirmation to use the shared sensitive information. 
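A flag-based restriction scheme like the one described in the preceding paragraphs could be modeled as follows. This Python sketch is only an illustration; the flag names USE_ONCE, NOTIFY_SHARER, and REQUIRE_CONFIRMATION are assumptions and not terms used elsewhere in this disclosure.

    from enum import Flag, auto

    class ShareFlags(Flag):
        NONE = 0
        USE_ONCE = auto()              # credentials are applied immediately and never stored by the receiver
        NOTIFY_SHARER = auto()         # every use sends a notification to the sharing controller
        REQUIRE_CONFIRMATION = auto()  # use is blocked until the sharing controller confirms

    def may_use_shared_credentials(flags, notified_ok, confirmed_ok, already_used):
        """Return True if the receiving controller may use the shared credentials right now."""
        if ShareFlags.USE_ONCE in flags and already_used:
            return False
        if ShareFlags.NOTIFY_SHARER in flags and not notified_ok:
            return False
        if ShareFlags.REQUIRE_CONFIRMATION in flags and not confirmed_ok:
            return False
        return True

    # Example: monitored use that also requires the sharer's confirmation.
    flags = ShareFlags.NOTIFY_SHARER | ShareFlags.REQUIRE_CONFIRMATION
    print(may_use_shared_credentials(flags, notified_ok=True, confirmed_ok=False, already_used=False))  # False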
Still referring toFIG.1, in some embodiments, if any changes are made to the shared sensitive information on the sharing controller's SIS device, the sharing controller can notify the receiving controller to update the information accordingly. Likewise, if the sensitive information is deleted on the sharing controller's SIS device, the sharing controller can notify the receiving controller to delete the information. In embodiments where the receiving controller has unrestricted use of the sensitive information, the receiving controller may modify and/or delete the sensitive information on its SIS device and then notify the sharing controller so that the sharing controller can update its SIS device accordingly. Still referring toFIG.1, the receiving controller may have its own backup SIS device that is separate from the sharing controller's backup SIS device. In some embodiments, the sharing controller controls whether the receiving controller can back up the shared sensitive information on the receiving controller's backup SIS device. Still referring toFIG.1, in an illustrative embodiment, a household may have multiple master controllers, each paired with its own SIS device. The users in the household may share one account for a particular online source, for example, a TV/movie streaming website or app, such as Netflix™. In this embodiment, if one user updates the account information of the shared account, that user's master controller will send a notification along with the updated information to the other master controllers whose paired SIS devices also have the shared account's information stored thereon, so that the other master controllers can update the shared account information on their SIS devices accordingly. Referring toFIG.2, in a sample system103, multiple master controllers105a,105b,105c, and105dcan access the same SIS device140a. For example, a family home may have multiple users that share a single SIS device140a(the "shared SIS device" hereinafter). Initially, only one master controller105ais recognized as an authorized master controller by the shared SIS device140a; however, master controller105acan grant other master controllers105b,105c,105dpermission to access the shared SIS device140a. Once permission is granted, the identification codes of the other master controllers105b,105c,105dare stored on shared SIS device140aand/or the passcode of the communication unit152of device140ais provided to the other master controllers105b,105c,105dto give the other master controllers subsequent access to the shared SIS device140awithout the need to seek permission from master controller105a. Still referring toFIG.2, in some embodiments, each master controller105a,105b,105c, and105dmay be granted different access levels to the shared SIS device140asuch that one or more of the master controllers may have more control and/or access to shared SIS device140athan the others. For example, one access level may allow a master controller to have unrestricted access and control over the shared SIS device140a, thereby allowing the master controller to add, retrieve, modify, delete, etc. sensitive information on the shared SIS device140a, to back up the shared SIS device, and to share the sensitive information. Another access level may only allow a master controller to retrieve the sensitive information from the shared SIS device140a.
Access levels may also be used to restrict access to one or more accounts such that the master controllers can each only access and retrieve information from certain accounts on the shared SIS device140a. Still referring toFIG.2, in one embodiment, if an account is updated on the shared SIS device140a, the master controllers105a,105b,105c, and105dare not notified. However, if an account is added or deleted, the master controllers105a,105b,105c, and105dare notified of this change so that the master controllers can each update its list of accounts accordingly. Still referring toFIG.2, while the illustrated embodiment shows one SIS device being shared by four master controllers, it can be appreciated that the system103may have more or fewer than four master controllers sharing the SIS device140a. Referring toFIG.3, in a sample system104, a master controller105a(the "shared master controller" hereafter) may be paired with multiple SIS devices140a,140b,140c,140d. For example, a user may have multiple SIS devices, each at a different location (e.g. work, home, worn on the body, etc.). The identification code of the shared master controller105ais stored on each SIS device140a,140b,140c,140dand/or the passcode of the communication unit152of each SIS device140a,140b,140c,140dis provided to the shared master controller105a, so that all the SIS devices140a,140b,140c,140drecognize the shared master controller105aas an authorized master controller. Still referring toFIG.3, in some embodiments, each SIS device140a,140b,140c,140dmay have different sensitive information stored thereon. For example, an SIS device that is carried around by the user may have less sensitive information stored thereon than an SIS device that is stored in a secured location at home since the chances of a portable SIS device being stolen and/or hacked are much greater. If an account is updated, added, or deleted by the user on one SIS device140a, the shared master controller105awill prompt the user to update the other SIS devices140b,140c,140dby establishing communication link170and demonstrating user intent. Still referring toFIG.3, while the illustrated embodiment shows one master controller being shared by four SIS devices, it can be appreciated that the system104may have more or fewer than four SIS devices sharing the master controller105a. Referring back toFIG.1, the following sample methods are described with respect to NFC-enabled devices, unless specified otherwise. Similar methods may be implemented for devices that communicate using communication protocols other than NFC, with modifications that would be apparent to those skilled in the art.
Sample Method
Referring toFIGS.4and5, in processes500and600, respectively, in one embodiment, a master controller has a software application122(also referred to as "app" hereinafter) installed thereon to manage and store sensitive information on a SIS device. The SIS device is offline unless the user shows intention to allow communication therewith and/or retrieval of information therefrom. Still referring toFIGS.4and5, when the user requests a task in the software application (step502), the master controller searches for a SIS device within the range of the master controller's NFC signal. Even if the user's SIS device is within range, the master controller cannot detect it because the SIS device is offline by default (e.g. the switch is in the off position).
In this embodiment, when the SIS device is in the default (off) position, the switch is off and the island is deactivated such that the SIS device's wireless communication unit152is disconnected from the communication controller158. When the user turns the switch on (step602), thereby activating the island, the SIS device's wireless communication unit is connected to the communication controller. Once the switch is turned on and the SIS device within range of the master controller, the SIS device is powered up by the NFC signal from the master controller and the microcontroller154sets the wireless communication unit152to “pass-thru” mode and sets the communication direction as “from MC” (as in “from master controller”) (step604), thereby enabling the SIS device's NFC function and rendering the SIS device detectable by the master controller (step504). If the master controller cannot detect the SIS device, the app may prompt the user to move the SIS device within range and/or turn on switch150(step505). Still referring toFIGS.4and5, with the SIS device powered up and the communication direction set as “from MC”, the app authenticates the master controller by providing the communication controller158with an encryption key, passcode, and/or the identification code of the master controller (step506and step606). If the encryption key, passcode, and/or identification code is incorrect (step608), the communication controller158will send a signal to the master controller indicating that access is denied (step610) and the app will display an error message (step512). Otherwise, the communication controller158will allow the master controller access (step612). Still referring toFIGS.4and5, once access is granted (step508), the master controller sends data to the SIS device (step514) and the data is written to the SRAM or volatile memory of the communication controller158(step614). The memory of the communication controller158acts as a pass-through system to the rest of the SIS device. Any data written to the communication controller is lost when the SIS device has no power. Still referring toFIGS.4and5, the microcontroller154monitors the register flag of the communication controller158to see if the data is ready to be read from the communication controller. If the register flag indicates that data has been written to the communication controller, the microcontroller154proceeds to read the data. Once the data has been read by the microcontroller154, the communication controller158sets the register flag to indicate that the data has been read. The app can check the register flag to determine whether the data written to the communication controller158has been read by the microcontroller154. Still referring toFIGS.4and5, if the app needs to send more data to the SIS device, it can do so after the initial data has been read. The microcontroller154knows to wait before processing the data if the initial data indicates that more data is on its way to the communication controller158. After the communication controller158receives all the expected data, the microcontroller154processes the data to determine the request being made by the app (step616). 
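From the app's side, the exchange described above, in which the communication controller's volatile memory acts as a pass-through mailbox governed by a register flag, might look roughly like the following. The Python sketch below is an illustration under assumed interface names (write_block, read_register_flag, read_block) and an assumed flag value; it is not an actual NFC driver.

    import time

    FLAG_REPLY_READY = 0x02        # assumed meaning: reply data has been written by the microcontroller

    def send_request_and_wait(nfc, request_bytes, timeout_s=5.0):
        """Write a request into the communication controller and poll the register flag for the reply."""
        nfc.write_block(request_bytes)                       # step 514: data lands in the controller's SRAM
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if nfc.read_register_flag() & FLAG_REPLY_READY:  # step 516: the microcontroller has written reply data
                return nfc.read_block()                      # step 518: the app reads the reply
            time.sleep(0.05)
        raise TimeoutError("no reply from SIS device")       # step 520: the app would prompt the user to retry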
Still referring toFIGS.4and5, the request made by the app may be one or more of (but not limited to) the following: Add user information; Retrieve user information; Modify user information; Delete user information; Update firmware; Retrieve all user information to be sent to a backup SIS device; Clean up memory storage; Disengage from master controller; Engage with new master controller; Retrieve contact information for engaging new master controller (when existing master controller is unable to disengage); Check user information for passwords that are the same or similar to flag same to user; and Reset to factory settings and delete all user information. Still referring toFIGS.4and5, for example, if the user wants to add a new account to the SIS device, the user starts the app and the app will prompt the user to enter the information for the new account, including for example online source name, source website (if applicable), username, and password. For each new account, the master controller generates an Account ID. If there is more than one account for a particular online source, the master controller also generates an alias for each account. The master controller only stores the online source name, source website, and alias (if applicable) in association with each Account ID. In this case, the request generated by the app is for adding a new account and the data that is sent to the communication controller158(step514) includes the type of request, the Account ID, username, password, and optionally the online source name. Still referring toFIGS.4and5, at step616, the microcontroller154processes the request for adding a new account by first scanning the memory160to determine if the Account ID already exists. If the same Account ID exists, the microcontroller154will overwrite the stored account information associated with the Account ID with the newly provided information because each Account ID is unique. If the Account ID does not exist in memory160, the microcontroller154will store the new account information in the next available memory location in memory160. Account information includes but is not limited to: Account ID, username, password, optionally the online source name, and placeholders for future flags and information. Once the new account information is stored, the microcontroller154updates the next available memory location. In one embodiment, the SIS device stores specific data that cannot be overwritten with account information and the data includes, for example, the next available memory location and placeholders for future flags and information, such as the device serial number, firmware version, memory size, etc. Still referring toFIGS.4and5and referring back toFIG.1, in another example, if the user wants to retrieve sensitive information from the SIS device, the user starts the app and the app will prompt the user to select an online source from a list of online sources. If there is more than one account associated with the selected online source, the app will also prompt the user to select an account from the online source. Once the user makes a selection, the master controller looks up the Account ID that is associated with the selected account. In this case, the request generated by the app is for retrieving sensitive information and the data that is sent to the communication controller158(step514) includes the type of request and the Account ID that is associated with the selected account.
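The account records and the "next available memory location" bookkeeping described above could be represented as in the following Python sketch. The field names, the fixed-slot layout, and the capacity are assumptions for illustration and do not reflect the actual layout of memory160.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class AccountRecord:
        account_id: str                      # unique Account ID generated by the master controller
        username: str
        password: str
        source_name: Optional[str] = None    # optionally stored online source name
        flags: int = 0                       # placeholder for future flags and information

    class AccountStore:
        """Fixed-size store with a next-available-location pointer, loosely modeling memory 160."""
        def __init__(self, capacity=64):
            self.slots: List[Optional[AccountRecord]] = [None] * capacity
            self.next_free = 0

        def add_or_overwrite(self, record: AccountRecord) -> None:
            for i, existing in enumerate(self.slots):
                if existing is not None and existing.account_id == record.account_id:
                    self.slots[i] = record        # same Account ID: overwrite the stored information
                    return
            if self.next_free >= len(self.slots):
                raise MemoryError("store is full")
            self.slots[self.next_free] = record   # otherwise store at the next available location
            self.next_free = next((i for i, s in enumerate(self.slots) if s is None), len(self.slots))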
Still referring toFIGS.4and5and referring back toFIG.1, at step616, the microcontroller154processes the request for retrieving sensitive information by first scanning the memory160to find a match on the Account ID that was written to the communication controller158. Once a match is found, the microcontroller reads the sensitive information associated with the Account ID stored in memory160. Still referring toFIGS.4and5and referring back toFIG.1, in another example, if the user wants to delete sensitive information from the SIS device, the user starts the app and the app will prompt the user to select an online source from a list of online sources. If there is more than one account associated with the selected online source, the app will also prompt the user to select an account from the online source. Once the user makes a selection, the master controller looks up the Account ID that is associated with the selected account. In this case, the request generated by the app is for deleting sensitive information and the data that is sent to the communication controller158(step514) includes the type of request and the Account ID that is associated with the selected account. Still referring toFIGS.4and5and referring back toFIG.1, at step616, the microcontroller154processes the request for deleting sensitive information by first scanning the memory160to find a match on the Account ID that was written to the communication controller158. Once a match is found, the microcontroller deletes all the account information that is associated with the Account ID from memory160. The microcontroller154may update the next available memory location to the location previously occupied by the deleted account information. Still referring toFIGS.4and5and referring back toFIG.1, regardless of the type of request, once the data is processed, the communication controller switches the communication direction to "to MC" (as in "to master controller") (step617) and the microcontroller154then sends reply data to the communication controller158. The register flag is then changed to indicate that the reply data has been written to the communication controller (step618). The app monitors the register flag to determine when reply data has been written to the communication controller158(step516). When the app sees the register flag indicating that reply data has been written, the app proceeds to read the reply data (step518). If, after a prescribed time, the app cannot see any reply data, the app will display an error message and prompt the user to retry the request (step520). In some embodiments, once the app has read the reply data, the communication controller158changes the register flag to indicate same to the microcontroller154and the microcontroller may write more reply data to the communication controller if necessary. Still referring toFIGS.4and5and referring back toFIG.1, the reply data may be a simple confirmation message as to whether the request has been fulfilled successfully, for example, where the request is for adding or deleting sensitive information. If the request is for retrieving information, the reply data will include the requested information. The reply data may also include information regarding the amount of data the master controller should expect so that the app may read multiple reply data writings before acting on the reply data. Once the app has received all the expected reply data, the app can follow through with the request.
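Building on the AccountStore sketch above, the retrieve and delete handling together with the reply step might be organized roughly as follows. This Python sketch is illustrative only; real firmware would be written for the specific microcontroller, and the comm object with set_direction and write_reply is an assumed stand-in for the communication controller158.

    def find_slot(store, account_id):
        """Scan the slots for a matching Account ID; return the slot index or None (step 616)."""
        for i, rec in enumerate(store.slots):
            if rec is not None and rec.account_id == account_id:
                return i
        return None

    def process_request(store, comm, request):
        """Handle one request read from the communication controller and write reply data (steps 616-618)."""
        slot = find_slot(store, request["account_id"])
        if request["type"] == "retrieve":
            reply = store.slots[slot] if slot is not None else {"error": "not found"}
        elif request["type"] == "delete":
            if slot is not None:
                store.slots[slot] = None
                store.next_free = min(store.next_free, slot)   # reuse the freed memory location
            reply = {"ok": slot is not None}
        else:
            reply = {"error": "unsupported request"}

        comm.set_direction("to MC")   # step 617: switch the communication direction toward the master controller
        comm.write_reply(reply)       # step 618: the register flag is changed so the app knows reply data is ready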
In some embodiments, the app may simultaneously act on the reply data while receiving the data. Still referring toFIGS.4and5and referring back toFIG.1, if the reply data indicates that the request has been fulfilled successfully (step522), the app may notify the user of same and will delete any sensitive information that was inputted for the request from the memory of the master controller and update the list of accounts accordingly (step524). For example, the master controller may update its internal database entries by storing or deleting the Account ID and the corresponding online source name and website (if applicable) and alias (if applicable) that are associated with the request. If the request is for retrieving sensitive information, the app may process the reply data by displaying the requested sensitive information (such as login credentials along with the corresponding online source name and website and alias (if applicable)) to the user via the output device130, allowing the user to use the information (if applicable), and deleting the retrieved information from the memory of the master controller when the user is done. If the reply data indicates that the request has not been fulfilled successfully, the app will display an error message and prompt the user to retry the request (step520). Still referring toFIGS.4and5and referring back toFIG.1, with respect to allowing the user to use the retrieved information, the master controller may provide the user various options including copying, sending, editing, and deleting the information, and closing the display window. Copying the information allows the user to subsequently paste the information into another app and/or program. Sending the information, for example via a connected communication protocol, allows the user to forward the information to another online communication device, which may include an online source. In one embodiment, the master controller sends the login credentials to the online communication device as keystrokes to log in automatically for the user. If there are multiple online communication devices paired via a communication protocol to the master controller, the master controller may provide a list of same to the user to allow the user to choose which online communication device to send the login credentials. Editing allows the user to modify the login credentials and then the app will send a request to the SIS device to update same. Closing the display window removes the login credentials from the master controller. Still referring toFIGS.4and5and referring back toFIG.1, once the app has read all the reply data, the communication controller158changes the register flag to indicate same. When the microcontroller sees the updated register flag (step620), the microcontroller154goes into low power sleep mode in an infinite loop (step624), thereby reverting the SIS device back to its default offline position. Since the microcontroller154goes in the low power sleep mode after each request is processed, the user needs to show user intent for each subsequent request, which may help prevent any malicious attempts to retrieve more information than what the user had intended. If the reply data has not been read by the app after a prescribed time (step620), the microcontroller154will try to clean up any loose ends and revert the SIS device back to its default offline position (step622) wherein the microcontroller154is in low power sleep mode in an infinite loop. 
The SIS device will also revert back to its default offline position if the switch150is turned off by the user and/or if there is no user interaction with the SIS device after a prescribed time. Once the SIS device is offline, any data written to the communication controller's SRAM or volatile memory is lost, while the data stored in the non-volatile memory160is maintained.
Logging in Using the Master Controller
Referring toFIG.6, this flowchart shows a sample process800from the perspective of the master controller, which may be, for example, a smartphone with a fingerprint reader, according to an embodiment of the present disclosure. When the user tries to log into an online source via an online communication device, such as a personal computer, the communication device sends a login request along with the identity of the online source to the master controller (step802). The online source may be, for example, a personal computer or a website displayed on a personal computer. After receiving the login request, the master controller notifies the user, for example with a notification via output device130, and requests confirmation from the user via input device132that the user is indeed trying to log into that specific online source (step804). If the master controller does not receive the user's confirmation within a prescribed time (step806), the master controller will send an error message to the communication device (step822). Once the master controller receives confirmation from the user (step806), the master controller requests the user to authenticate himself, for example via the fingerprint reader (step807). If the user cannot be authenticated by the master controller, then the request is cancelled (step814). Still referring toFIG.6, if the user is successfully authenticated, the master controller determines if the online source requesting login is recognized by the master controller (step808). If the online source is a recognizable one, then the master controller checks if there is more than one Account ID associated with the online source (step809). If there is only one Account ID associated with the online source, the master controller requests the login credentials associated with the Account ID from the SIS device in accordance with process500as described above with respect toFIG.4(step816). When the SIS device receives the request from the master controller, the SIS device proceeds in accordance with process600as described above with respect toFIG.5. Once the master controller receives the requested login credentials from the SIS device (step818), the master controller forwards same to the communication device to automatically log in to the online source (step820). Still referring toFIG.6, if there is more than one Account ID associated with the online source (step809), the master controller asks the user to select one of the accounts that are associated with the online source, for example, via the input and output devices132,130(step810). Once the user selects an account from the list of accounts associated with the online source, the master controller requests the login credentials associated with the Account ID of the selected account from the SIS device in accordance with process500as described above with respect toFIG.4(step816). When the SIS device receives the request from the master controller, the SIS device proceeds in accordance with process600as described above with respect toFIG.5.
Once the master controller receives the requested login credentials from the SIS device (step818), the master controller forwards same to the communication device to automatically log in to the online source (step820). Still referring toFIG.6, if the online source is not a recognizable one (step808), the master controller provides the user with the following options: (i) select an online source from the list of online sources, for example via the input and output devices132,130(step811); (ii) create a new account (step812); or (iii) cancel the request (step814). If the user selects an online source from the list of online sources (step811) and there is only one account associated with the selected online source (step809), the master controller requests the login credentials associated with the Account ID of the selected online source from the SIS device in accordance with process500as described above with respect toFIG.4(step816). If there is more than one Account ID associated with the selected online source (step809), the master controller asks the user to select one of the accounts that are associated with the online source, for example via the input and output devices132,130(step810). Once the account is selected, the master controller requests the login credentials associated with the Account ID of the selected account from the SIS device in accordance with process500as described above with respect toFIG.4(step816). When the SIS device receives the request from the master controller, the SIS device proceeds in accordance with process600as described above with respect toFIG.5. Once the master controller receives the requested login credentials from the SIS device (step818), the master controller forwards same to the communication device to automatically log in to the online source (step820). The master controller may optionally ask the user if the selected online source is to be associated with the online source from which the request was made for subsequent login requests from the same online source and then save the user's selection accordingly (step824). Still referring toFIG.6, if the user chooses to create a new account (step812), the master controller prompts the user to input the login credentials and other relevant information for the new account (step813), and then sends a request to the SIS device to add the new account in accordance with process500as described above with respect toFIG.4(step815). The master controller also saves the online source associated with the new account as a recognizable online source. When the SIS device receives the request from the master controller, the SIS device proceeds in accordance with process600as described above with respect toFIG.5. The master controller also sends the login credentials provided by the user to the communication device to automatically log in to the online source (step820). The master controller automatically associates the newly created account with the new online source for subsequent login requests. Still referring toFIG.6, if the user cancels the request, then the master controller will send a message to the communication device (step822).
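The branching just described (steps808-824) can be condensed into a short outline. The Python sketch below is only a summary of the decision flow; the helper names (recognizes, accounts_for, ask_user_to_pick, request_credentials, and so on) are placeholders, and the actual application would interact with the SIS device through processes500and600.

    def process_800(source, sis, ui):
        """Outline of the login flow from the master controller's perspective (FIG. 6)."""
        if not ui.confirm(f"Log into {source}?") or not ui.authenticate_user():
            return None                                           # steps 806/807/814: cancelled

        if sis.recognizes(source):                                # step 808
            accounts = sis.accounts_for(source)                   # step 809
            account = accounts[0] if len(accounts) == 1 else ui.ask_user_to_pick(accounts)   # step 810
            return sis.request_credentials(account)               # steps 816-818, via processes 500 and 600

        choice = ui.ask_unrecognized_options()                    # steps 811/812/814
        if choice == "select existing":
            account = ui.ask_user_to_pick(sis.all_accounts())
            credentials = sis.request_credentials(account)
            sis.remember_association(source, account)             # step 824, if the user agrees
            return credentials
        if choice == "create new":
            credentials = ui.ask_new_credentials()                # step 813
            sis.add_account(source, credentials)                  # step 815
            return credentials
        return None                                               # cancel: a message is sent back (step 822)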
Referring back toFIGS.1-6, in some embodiments, the memory120,160or additional data storage in the master controller or SIS device are examples of computer-readable media (also referred to as “computer accessible media”) for storing instructions that are executable by the microcontroller154or the control unit114to perform the various functions described above, including the methods described above and any variations thereof. Generally, any of the functions described with reference to the figures can be implemented using software, hardware (e.g., fixed logic circuitry) or a combination of these implementations. Program code may be stored in one or more computer-readable media or other computer-readable storage devices. Thus, the processes and components described herein may be implemented by a computer program product. As mentioned above, computer-readable media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. The terms “computer-readable medium” and “computer-readable media” refer to transitory and non-transitory storage devices and include, but are not limited to, RAM, ROM, EEPROM, FRAM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store information for access by a device, e.g., the master controller and SIS device. Any of such computer accessible media may be part of the master controller and/or SIS device. Still referring toFIGS.1-6, the operation of the SIS device does not require any involvement by or participation from service providers. Therefore, the systems and methods disclosed herein can be implemented without modifying existing networks or service provider infrastructure. In the Detailed Description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, those of ordinary skill in the art would appreciate that the present disclosure may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present disclosure. For each method described in the present disclosure, the order in which the method blocks is described is not intended to be construed as a limitation, and any number of the described method blocks may be combined in any order to implement the method, or alternate method. Additionally, individual blocks may be deleted from the method without departing from the spirit and scope of the subject matter described herein. Furthermore, the method may be implemented in any suitable hardware, software, firmware, or a combination thereof, without departing from the scope of the present disclosure. Accordingly, the present disclosure provides devices, systems, and methods for securely storing, managing, and/or retrieving data that may include sensitive information. 
According to a broad aspect of the present disclosure, there is provided a device comprising: a communication unit operable to communicate with a master controller via a communication link; a memory; and an island activatable by user interaction, wherein activation or deactivation of the island controls the master controller's access to data stored in the memory via the communication link. In one embodiment, the island is a non-software-based component of the device. In one embodiment, the communication link is a direct connection and/or a wireless communication protocol. In one embodiment, the communication link is NFC, Wi-Fi, Bluetooth, RFID, or a combination thereof. In one embodiment, both the device and the master controller are NFC-enabled and the user interaction comprises the device being brought within NFC range of the master controller or vice versa. In one embodiment, the device further comprises a switch having an on position and an off position, and the user interaction comprises the switch being placed in the on position; when the switch is in the on position, the island is activated; and when the switch is in the off position, the island is deactivated. In one embodiment, the communication unit comprises an antenna, and when the switch is off, the antenna is disconnected and when the switch is on, the antenna is connected. In one embodiment, the user interaction comprises user biometric input. In one embodiment, when the island is activated, the data is accessible by the master controller via the communication link, and the access comprises one or more of modification of the data; retrieval of the data; addition to the data; and deletion of some or all of the data. In one embodiment, when the island is activated, the island is in communication with the memory but not with the communication unit; and when the island is deactivated, the island is in communication with the communication unit but not with the memory. In one embodiment, the communication unit is operable to receive a request from the master controller via the communication link and/or the master controller is operable to receive the data from the communication unit. In one embodiment, when the island is activated, the island is operable to receive the data from the memory; and when the island is deactivated, the communication unit is operable to read from and/or write to the island. In one embodiment, the island is automatically deactivated in the absence of the user interaction. In one embodiment, the data comprises sensitive information. In a further embodiment, the sensitive information comprises one or more of login credentials, SIN number, SSN number, healthcare number, bank account number, lock combination, passport number, cryptocurrency, tokens, certificates, and a digital file. In one embodiment, the data comprises: one or more accounts or account IDs; and login credentials corresponding to the one or more accounts or account IDs. In one embodiment, the data comprises one or more memory locations. In one embodiment, the device further comprises a range indicator. In one embodiment, the device is sized to be portable or to be embeddable in a portable object. In one embodiment, the island is a pass-through system. In one embodiment, the device is powered by the communication link and the device is free of an internal power supply.
According to another broad aspect of the present disclosure, there is provided a system comprising: a master controller; an SIS device operable to communicate with the master controller via a communication link, the SIS device comprising: a memory; and an island activatable by user interaction, wherein activation or deactivation of the island controls the master controller's access to data stored in the memory via the communication link. In one embodiment, the island is a non-software-based component of the SIS device. In one embodiment, the communication link is a direct connection. In one embodiment, both the SIS device and the master controller are NFC-enabled and the user interaction comprises the SIS device being brought within NFC range of the master controller or vice versa. In one embodiment, the SIS device comprises a switch having an on position and an off position; the user interaction comprises the switch being placed in the on position; and when the switch is in the on position, the island is activated and when the switch is in the off position, the island is deactivated. In one embodiment, the user interaction comprises user biometric input and the master controller is configured to receive the user biometric input. In one embodiment, when the island is activated, the data is accessible by the master controller via the communication link; and the access comprises one or more of modification of the data; retrieval of the data; addition to the data; and deletion of some or all of the data. In one embodiment, when the island is activated, the island is in communication with the memory but not with the communication unit; and when the island is deactivated, the island is in communication with the communication unit but not with the memory. In one embodiment, the island is automatically deactivated in the absence of the user interaction. In one embodiment, the master controller contains a particular encryption key, a particular identification code, and/or a particular passcode; and the SIS device is configured to only communicate with the master controller having the particular encryption key, the particular identification code, and/or the particular passcode. In one embodiment, the SIS device has an identification number; and the master controller is configured to only communicate with the SIS device having the identification number. In one embodiment, the system further comprises a backup SIS device having stored thereon the particular encryption key, the particular identification code, and/or the particular passcode. In one embodiment, the system further comprises a new master controller operable to obtain the particular encryption key, the particular identification code, and/or the particular passcode from the backup SIS device. In one embodiment, the backup SIS device has backup data stored thereon and the SIS device is operable to compare the data with the backup data. In one embodiment, the backup SIS device has a different program structure than the SIS device. In one embodiment, the SIS device has one or more recovery contacts stored thereon and the system further comprises a new master controller associated with one of the one or more recovery contacts. In one embodiment, the master controller is operable to export the data in an export file. In one embodiment, the master controller and the SIS device are components of an integrated SIS device. 
In one embodiment, the user interaction comprises user biometric input and the integrated SIS device is configured to receive the user biometric input. In one embodiment, the system further comprises a second master controller, and the SIS device is connectable to the second master controller via a second communication link, and activation or deactivation of the island controls the second master controller's access to the data via the second communication link. In one embodiment, the second master controller is configured to send and/or receive shared data from the master controller. In one embodiment, the master controller and/or the second master controller are configured to set a restriction on the shared data; and the restriction prohibits one or more of: sharing the shared data more than once; storing the shared data in a second SIS device associated with the second master controller; and retrieving the shared data from the second SIS device without the master controller receiving a notification from the second master controller and/or the second master controller receiving a confirmation from the master controller. In one embodiment, the master controller is configured to monitor the second master controller's use of the shared data. In one embodiment, the second master controller is configured to notify the master controller of any change to or deletion of the shared data. In one embodiment, the system further comprises a backup SIS device associated with the second master controller. In one embodiment, a restriction is set on the second master controller's access to the data, and the restriction prohibits one or more of: modification of the data; retrieval of the data; addition to the data; deletion of the data; backing up of the data on a backup SIS device; sharing of the data; and access to some of the data. In one embodiment, the system further comprises a second SIS device connectable to the master controller via a second communication link, the second SIS device comprising: a second memory; and a second island activatable only by a second user interaction, and activation or deactivation of the second island controls the master controller's access to a second data stored in the second memory via the second communication link. According to another broad aspect of the present disclosure, there is provided a method of storing and managing sensitive information, the method comprising: upon detecting user interaction from a user, activating an island in an SIS device, the SIS device configured to communicate with a master controller via a communication link; and controlling access, by the master controller via the communication link, to data stored on the SIS device, based on the activation or deactivation of the island. In one embodiment, the island is a non-software-based component of the SIS device. In one embodiment, both the SIS device and the master controller are NFC-enabled and the user interaction comprises bringing the SIS device within NFC range of the master controller or vice versa. In one embodiment, the SIS device comprises a switch having an on position and an off position; the user interaction comprises placing the switch in the on position; and when the switch is in the on position, the island is activated and when the switch is in the off position, the island is deactivated. 
In one embodiment, controlling access comprises, when the island is activated, allowing access by the master controller to the data via the communication link, and the access comprises one or more of: modification of the data; retrieval of the data; addition to the data; and deletion of some or all of the data. In one embodiment, controlling access comprises when the island is activated, allowing access to the data by the island while restricting communication between the island and the master controller; and upon deactivation of the island, restricting access to the data by the island while allowing communication between the island and the master controller, and the access comprises one or more of: modification of the data; retrieval of the data; addition to the data; and deletion of some or all of the data. In one embodiment, the method further comprises deactivating the island in the absence of the user interaction. In one embodiment, when the island is deactivated, the island is free of any of the data. In one embodiment, the method further comprises authenticating the master controller or a new master controller; and/or authenticating the SIS device or a new SIS device. In one embodiment, the SIS device has a recovery contact stored thereon, and authenticating the new master controller comprises sending a message containing a passcode to the recovery contact. In one embodiment, the method further comprises retrieving, by the new master controller, an encryption key, an identification code, and/or a passcode from a backup SIS device. In one embodiment, the method further comprises retrieving, by the new master controller, backup data stored on the backup SIS device; and transmitting the backup data to the SIS device. In one embodiment, the method further comprises overwriting the data in the SIS device with the backup data. In one embodiment, the method further comprises comparing, by the SIS device or by the master controller, the backup data with the data. In one embodiment, the method further comprises replacing the SIS device with the backup SIS device. In one embodiment, the method further comprises, prior to activating the island, either: (i) generating a request by the master controller; or (ii) receiving the request from an online source and providing notification of the request on the master controller. In one embodiment, the method further comprises requesting confirmation from the user; and upon receiving the confirmation, authenticating the user. In one embodiment, the request is associated with sensitive information and the request is for one of: addition of the sensitive information to the data; modification of the sensitive information, if the sensitive information is part of the data; retrieval of the sensitive information, if the sensitive information is part of the data; and deletion of some or all of the sensitive information, if the sensitive information is part of the data, and the access comprises one or more of modification of the data; retrieval of the data; addition to the data; and deletion of some or all of the data, and the method further comprises transmitting the request to the SIS device. In one embodiment, the request is for the retrieval of the sensitive information, further comprising retrieving by the master controller the sensitive information associated with the request from the SIS device, upon detection of the user interaction. In one embodiment, the method further comprises transmitting the sensitive information to the online source. 
In one embodiment, the method further comprises, prior to or after transmitting the sensitive information, converting the sensitive information into keystrokes to allow automatic login at the online source. In one embodiment, the method further comprises deactivating the island and/or providing notification upon completion of the request or failure to carry out the request. In one embodiment, the method further comprises one or more of copying, sending, editing, and deleting the retrieved sensitive information. In one embodiment, the method further comprises: determining whether the online source is recognizable; if the online source is recognizable, determining a number of accounts associated with the online source; and if the number of accounts is more than one, providing a list of accounts and receiving a selection from the list of accounts. In one embodiment, the method further comprises: determining whether the online source is recognizable; if the online source is not recognizable, one of: providing a list of online sources and/or a list of accounts and receiving a selection from the list of online sources and/or the list of accounts; requesting information to create a new account and adding the new account to the SIS device; and cancelling the request. In one embodiment, the method further comprises associating the selection or the new account with the online source. In one embodiment, the method further comprises looking up an account ID or a memory location associated with the online source or the selection; and transmitting the account ID or the memory location along with the request. In one embodiment, the SIS device is configured to communicate with a second master controller via a second communication link, and further comprising controlling access, by the second master controller via the second communication link, to data stored on the SIS device, based on the activation or deactivation of the island. In one embodiment, the method further comprises setting a restriction on the second master controller's access to the data, and the restriction prohibits one or more of: modification of the data; retrieval of the data; addition to the data; deletion of some or all of the data; backing up of the data on a backup SIS device; sharing the data; and access to some of the data. In one embodiment, the method further comprises adding to the data or deleting some or all of the data; and notifying the master controller and the second master controller of same. In one embodiment, the method further comprises, upon detecting a second user interaction on a second SIS device, activating a second island in the second SIS device having a second data stored thereon, the second SIS device configured to communicate with the master controller via a second communication link; and controlling access to the second data by the master controller via the second communication link based on the activation or deactivation of the second island. In one embodiment, the method further comprises modifying, adding to, or deleting at least some of the data; and prompting a user to correspondingly modify, add to, or delete at least some of the second data on the second SIS device. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the embodiments of the present disclosure. 
Various modifications to those embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the subject matter of the present disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein, but is to be accorded the full scope consistent with the claims, wherein reference to an element in the singular, such as by use of the article “a” or “an” is not intended to mean “one and only one” unless specifically so stated, but rather “one or more”. All structural and functional equivalents to the elements of the various embodiments described throughout the disclosure that are known or later come to be known to those of ordinary skill in the art are intended to be encompassed by the elements of the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. | 100,150 |
11861029 | DETAILED DESCRIPTION Aspects of the present disclosure solve problems associated with content object workflows being limited only to those performed at or by locally integrated applications. These problems are unique to, and may have been created by, various computer-implemented methods for interoperation between Internet-enabled applications. Some embodiments are directed to approaches for accessing a dynamically extensible set of applications through a content management system to perform workflows over content objects managed by the content management system. The accompanying figures and discussions herein present example environments, systems, methods, and computer program products for accessing a dynamically extensible set of content object workflows. Overview Disclosed herein are techniques for accessing an extensible set of applications through a content management system to perform workflows over content objects that are managed by the content management system. In certain embodiments, the techniques are implemented in a computing environment comprising a content management system that facilitates interactions over a plurality of content objects that are created by, or modified by, or accessed by, a plurality of applications that implement workflows. The applications available in the computing environment can include native applications (e.g., web apps, mobile applications, etc.) that are provided by the content management system as well as third-party applications that are available in the overall computing environment. Such third-party applications are applications that are not provided and/or maintained by the provider of the content management system but, rather, are applications that are integrated with the content management system to facilitate certain interactions with at least some of the types of content objects managed at the content management system. When a user interacts (e.g., either via a native application or a third-party application) with a subject content object at a subject application that is integrated with the content management system, a set of workflows that are applicable to the subject content object are presented to the user. The presented workflows comprise an extensible set of workflows that correspond to a respective extensible set of remote applications that are integrated with the content management system. As used herein a workflow can be a single, atomic workflow that is carried out solely by a single application, or a workflow can be a compound workflow that is composed of a first portion of a workflow that is carried out by a first application and a second portion of the workflow that is carried out by a second application such that the performance of the compound workflow as a whole serves to accomplish a particular result. A workflow can be represented in a data structure or in computer-readable code. In some cases, a workflow might be represented as a “list” data structure comprising elements that are listed in some order. In some cases, a workflow data structure might codify a “tree”, where traversal down a particular branch implies tests that are performed to determine which subsequent actions are deemed to be ‘next’ in the sequence of operations of the workflow. Applications are deemed to be “remote applications” with respect to the subject application in that the remote applications are not integrated directly with the subject application. 
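To make the "list" and "tree" representations mentioned above concrete, the sketch below codifies a workflow both ways in Python. The step names, the test function, and the field names (name, test, branches) are hypothetical illustrations rather than the data structures of any particular embodiment.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

# A flat, atomic workflow: an ordered list of operations carried out by one application.
signature_workflow: List[str] = [
    "validate contract",
    "collect signer emails",
    "request signatures",
    "archive signed copy",
]

# A compound workflow codified as a tree: traversal down a branch depends on a test
# evaluated against the content object (here, a plain dict standing in for metadata).
@dataclass
class WorkflowNode:
    name: str
    test: Optional[Callable[[dict], str]] = None          # returns the branch label to follow
    branches: Dict[str, "WorkflowNode"] = field(default_factory=dict)

def next_steps(node: WorkflowNode, content_object: dict) -> List[str]:
    """Walk the tree from `node`, collecting the operations that apply to this object."""
    steps = [node.name]
    while node.test is not None:
        node = node.branches[node.test(content_object)]
        steps.append(node.name)
    return steps

contract_tree = WorkflowNode(
    name="review contract",
    test=lambda obj: "signed" if obj.get("signed") else "unsigned",
    branches={
        "unsigned": WorkflowNode(name="send for signature"),
        "signed": WorkflowNode(name="notify stakeholders"),
    },
)

print(next_steps(contract_tree, {"signed": False}))   # ['review contract', 'send for signature']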
Rather, the remote applications and/or the workflows of the remote applications are accessible only through the content management system. Specifically, via a user interface at a subject application, a user can merely identify a remote application's workflow. The content management system will in turn invoke the particular identified workflow to be performed by the corresponding remote application. In certain embodiments, during performance of a workflow of one or more remote applications, the content management system observes and records the interaction activity raised by the remote applications. In some embodiments, certain information associated with the interaction activity at a given remote application is published to the subject application. In certain embodiments, users interact with a user interface to view, identify, and/or invoke workflows associated with the remote applications. In certain embodiments, a subject application can receive a dynamically augmented set of workflows of remote applications. Definitions and Use of Figures Some of the terms used in this description are defined below for easy reference. The presented terms and their respective definitions are not rigidly restricted to these definitions—a term may be further defined by the term's use within this disclosure. The term “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the word exemplary is intended to present concepts in a concrete fashion. As used in this application and the appended claims, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or is clear from the context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A, X employs B, or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances. As used herein, at least one of A or B means at least one of A, or at least one of B, or at least one of both A and B. In other words, this phrase is disjunctive. The articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or is clear from the context to be directed to a singular form. Various embodiments are described herein with reference to the figures. It should be noted that the figures are not necessarily drawn to scale, and that elements of similar structures or functions are sometimes represented by like reference characters throughout the figures. It should also be noted that the figures are only intended to facilitate the description of the disclosed embodiments—they are not representative of an exhaustive treatment of all possible embodiments, and they are not intended to impute any limitation as to the scope of the claims. In addition, an illustrated embodiment need not portray all aspects or advantages of usage in any particular environment. An aspect or an advantage described in conjunction with a particular embodiment is not necessarily limited to that embodiment and can be practiced in any other embodiments even if not so illustrated. 
References throughout this specification to “some embodiments” or “other embodiments” refer to a particular feature, structure, material or characteristic described in connection with the embodiments as being included in at least one embodiment. Thus, the appearance of the phrases “in some embodiments” or “in other embodiments” in various places throughout this specification are not necessarily referring to the same embodiment or embodiments. The disclosed embodiments are not intended to be limiting of the claims. Descriptions of Example Embodiments FIG.1AandFIG.1Billustrate a computing environment100in which embodiments of the present disclosure can be implemented. As an option, one or more variations of computing environment100or any aspect thereof may be implemented in the context of the architecture and functionality of the embodiments described herein. FIG.1AandFIG.1Billustrate aspects pertaining to accessing a dynamically extensible set of applications through a content management system to perform workflows over content objects managed by the system. Specifically,FIG.1Apresents a logical depiction of how the herein disclosed techniques are used to interact with a plurality of remote applications integrated with the content management system to perform a selection of extensible workflows over content objects managed by the system.FIG.1Bpresents further details of how the herein disclosed techniques are used to capture the interaction activity at the remote applications (e.g., during performance of the workflows) and publish the activity to a subject application or applications. Representative sets of high order operations are also presented to illustrate how the herein disclosed techniques might be applied in computing environment100. The logical depiction ofFIG.1Adepicts a representative set of users102(e.g., user “u1”, user “u2”, user “u3”, and user “u4”) who desire to interact with various instances of content objects106(e.g., folder “fA”, folder “fB”, file “f1”, file “f2”, and file “f3”) managed at a content management system108. Users102may be users of content management system108, which facilitates interactions (e.g., authoring, editing, viewing, etc.) by the users over content objects106for sharing, collaboration, and/or other purposes. In some cases, such interactions are organized into workflows. Interactions and/or workflows over a particular content object may be performed by one or more of users102and/or even autonomously by one or more computing entities (e.g., processes, agents, applications, etc.). In computing environment100, the interactions and/or workflows executed over content objects106by users102are facilitated by various applications (e.g., application110F, application110D, etc.). These applications (e.g., web apps, mobile applications, etc.) can include native applications that are provided by content management system108as well as third-party applications that are available in the overall computing environment. Such third-party applications are applications that are not provided and/or maintained by the provider of content management system108. Certain applications often establish direct integration with one or more other applications to extend the number and types of workflows that can be performed over content objects106. Consider, for example, that application110Festablishes a direct integration with a second application. 
Such integration may include registration of the second application with application110F, establishment of APIs to facilitate communication between the applications, and other inter-application integration features. Once established, the integration might facilitate invoking, from application110F, a workflow that is performed in whole or in part by the second application. As earlier mentioned, however, the foregoing direct integration approach limits the workflows available to a particular application to merely those workflows native to that application and to those workflows facilitated by other applications locally integrated with that particular application. With this approach, the number of direct or local integrations required to achieve 100% workflow access over all applications in computing environment100grows exponentially with the number of applications. More specifically, if there are N applications in computing environment100, then application110Fand application110Dand all other applications would each need N-1 direct integrations (e.g., direct application integrations112F, direct application integrations112D, etc.) to achieve 100% workflow access. In this case, over all N applications, there will be N2−N total direct integrations in computing environment100. A more appropriate architecture, as disclosed herein, would scale to require on the order of only 2N integrations to achieve 100% workflow access over all applications. The herein disclosed techniques address such challenges pertaining to content object workflows being limited to those performed directly at integrated applications. Specifically, the herein disclosed techniques address such challenges at least in part by using the content management system108as a common repository of workflows and/or to relay workflow invocation requests. As used herein, a remote application is an application that has no direct or local integration with a subject application. With this approach, content management system108serves as an intermediary that facilitates a one-to-many relationship between a subject application (e.g., application110F) and multiple instances of remote applications114(e.g., application110Dand/or other applications). In the embodiment ofFIG.1A, a remote workflow manager120is implemented at content management system108to facilitate the foregoing intermediation capability and/or other capabilities. As such, rather than N applications in computing environment100having to establish N2−N direct integrations with one another to achieve 100% workflow access, full access can be achieved by N integrations between content management system108and each respective application (operation 1). The foregoing application integrations with content management system108enable the users102to interact with content objects106at any of the applications (operation 2). As facilitated by the herein disclosed techniques, the users can invoke, from any subject application, workflows to be performed over the content objects at any of the remote applications114(operation 3). Remote workflow manager120receives such workflow requests and forwards them to respective target remote applications to facilitate performance of the corresponding workflows over the content objects (operation 4). As merely one example, user “u1” might interact with a contract (e.g., file “f2”) at a SALESFORCE™ application (e.g., application110F). 
From the SALESFORCE™ application, user “u1” invokes a signature workflow over the contract to be performed at a DocuSign application (e.g., application110D). In this case, the signature workflow might request user “u3” to interact with the DocuSign application to sign the contract. In some cases, users who are interacting with a particular subject application desire to know the interaction activities and/or events that occur at remote applications114. Referring to the embodiment ofFIG.1B, a content management system interface116F1is presented at application110F. As can be observed, the interface displays a list of content objects from content objects106that are associated with “Account X”. Content management system interface116F1also shows a then-current stream of activity associated with the list of content objects. The activity stream indicates the application (e.g., application “S”, application “F”, application “G”, etc.) where the activity occurred, a summary of the activity that includes a user identifier, a content object identifier, an application identifier, and other information. As with the scenario ofFIG.1A, the applications of computing environment100are each integrated with content management system108(operation 1) to facilitate interactions with content objects106at the applications. In the scenario ofFIG.1B, a user “u1” interacts with content management system interface116F1at application110F(e.g., the subject application) to invoke a workflow (operation 3) at application110D(e.g., the remote application). In this case, when the workflow is performed (operation 4), the interaction activity over the content object is observed and recorded (operation 5). As an example, remote workflow manager120at content management system108might receive a workflow request from the subject application, invoke the workflow at a remote application, and monitor and record the interaction activity at the remote application. As shown, the interaction activity is published by remote workflow manager120to the application that originated the workflow request (operation 6). The interaction activity may be published to other applications as well. In some cases, such interaction activity is manifested in a set of activity updates1181that are displayed in content management system interface116F2. Specifically, a new update (e.g., “User u3signed file f2in DocuSign”) is published as an activity update. The user can then choose to interact with content objects of the content management system through the user interfaces of the application that originated the workflow request (operation 7). One embodiment of the foregoing techniques for extensible content object workflow access is disclosed in further detail as follows. FIG.2depicts an extensible content object workflow access technique200as implemented in systems that facilitate use of a dynamically extensible set of content object workflows. As an option, one or more variations of extensible content object workflow access technique200or any aspect thereof may be implemented in the context of the architecture and functionality of the embodiments described herein. The extensible content object workflow access technique200or any aspect thereof may be implemented in any environment.
As used herein, the terms “extensible workflow”, or “extensible workflow access”, or “extensible content object workflow” refer to mechanisms for integrating additional content object processing through the content management system in a manner that facilitates user invocation of a remote application from a subject application without requiring the subject application to have a direct integration with the remote application or its workflows. FIG.2illustrates aspects pertaining to accessing a dynamically extensible set of applications through a content management system to perform workflows over content objects managed by the system. Specifically, the figure is presented to illustrate one embodiment of certain steps and/or operations performed over a network of devices (e.g., user devices, computing systems, etc.) to invoke and execute workflows at an extensible set of remote applications, and to monitor, record, and publish the interaction activities at those remote applications. As can be observed, the steps and/or operations can be grouped into a set of setup operations210, a set of remote app access operations220, and a set of remote app monitoring operations230. Setup operations210of extensible content object workflow access technique200commence by identifying a content management system that facilitates interactions over a plurality of users and a plurality of content objects (step212). Such interactions can involve both user-to-user interactions and user-to-content interactions. One or more applications (e.g., apps) are integrated with the content management system to facilitate interactions over the users and/or content objects performed at the apps (step214). As an example, a sales contract document managed by the content management system might be shared using a first application (e.g., SALESFORCE™) to facilitate the development of the contract, after which the contract might be submitted to a second application (e.g., DocuSign) to facilitate execution (e.g., signing) of the contract. In this case, the SALESFORCE™ and DocuSign applications might be registered with the content management system to facilitate authorized access to the sales contract document managed (e.g., stored, updated, etc.) at the content management system. The herein disclosed techniques further facilitate invoking the workflow at the second application from the first application, as described in the following. Specifically, according to remote app access operations220, a content object that is associated with a first application integrated with the content management system is identified (step222). According to the foregoing example, the identified content object is the sales contract associated with the SALESFORCE™ application. A second application (e.g., remote application) that is integrated with the content management system is selected (step224). The selected application often has some association with the subject content object, such as being suitable for performing certain workflows over the content object. In the foregoing example, DocuSign is the second application. An indication (e.g., message or API call) from the first application to invoke a workflow at the second application is received (step226). For example, a message or API call received from the SALESFORCE™ application might be processed by the content management system to invoke a workflow at the DocuSign application.
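A minimal sketch of steps 222, 224, and 226 is shown below, assuming a hypothetical RemoteWorkflowManager class with an invented in-memory registry; the application names and the invoke callback are placeholders and are not the API of any actual content management system.

from typing import Callable, Dict, List

# Hypothetical in-memory registry: application id -> workflows it can perform, keyed by object type.
REGISTRY: Dict[str, Dict[str, List[str]]] = {
    "appF": {"contract": ["share", "update_opportunity"]},
    "appD": {"contract": ["signature"]},
}

class RemoteWorkflowManager:
    """Toy intermediary standing in for a remote workflow manager at the content management system."""

    def __init__(self, invoke: Callable[[str, str, str], None]):
        self._invoke = invoke   # callback that actually reaches the remote application

    # Step 224: select remote applications suited to the content object's type.
    def select_remote_apps(self, object_type: str, subject_app: str) -> List[str]:
        return [app for app, flows in REGISTRY.items()
                if app != subject_app and object_type in flows]

    # Step 226: receive an indication from the subject application and relay the request.
    def handle_workflow_request(self, remote_app: str, workflow_id: str, object_id: str) -> None:
        self._invoke(remote_app, workflow_id, object_id)


def fake_invoke(app: str, workflow_id: str, object_id: str) -> None:
    print(f"invoking workflow '{workflow_id}' over {object_id} at {app}")

manager = RemoteWorkflowManager(invoke=fake_invoke)
candidates = manager.select_remote_apps(object_type="contract", subject_app="appF")  # step 224
manager.handle_workflow_request(candidates[0], "signature", "f2")                    # step 226 onward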
According to remote app monitoring operations230, the interaction activity associated with the workflow being performed at the second application is recorded (step232). As merely one example, the interaction activity recorded might pertain to a signature event in the DocuSign application. The interaction activity is then published to the first application and/or any other selected applications (step234). In some cases, the interaction activity might be presented in a human-readable form in an application whereas, in other cases, the interaction activity might comprise information (e.g., status flags) that is collected in one or more native data structures of the application. One embodiment of a system, data flows and data structures for implementing the extensible content object workflow access technique200and/or other herein disclosed techniques is disclosed as follows. FIG.3is a block diagram of a system300that implements dynamically extensible content object workflows. As an option, one or more variations of system300or any aspect thereof may be implemented in the context of the architecture and functionality of the embodiments described herein. The system300or any aspect thereof may be implemented in any environment. FIG.3illustrates aspects pertaining to accessing a dynamically extensible set of applications through a content management system to perform workflows over content objects managed by the system. Specifically, the figure is being presented to show one embodiment of certain representative components and associated data structures and data flows implemented in a computing environment to facilitate the herein disclosed techniques. As shown, the components, data flows, and data structures are associated with a set of users that interact with each other (e.g., user1021, . . . , user102N) and a set of content objects106managed at a content management system108. The components, data flows, and data structures shown inFIG.3present one partitioning and associated data manipulation approach. The specific example shown is purely exemplary, and other subsystems, data structures, and/or partitionings are reasonable. As shown, system300comprises an instance of content management server310operating at content management system108. Content management server310comprises a message processor312and an instance of a remote workflow manager120, which comprises an app selection service314, a workflow controller316, an activity monitor318, and an activity publisher320. A plurality of instances of the foregoing components might operate at a plurality of instances of servers (e.g., content management server310) at content management system108and/or any portion of system300. Such instances can interact with a communications layer322to access each other and/or a set of storage devices330that store various information to support the operation of components of system300and/or any implementations of the herein disclosed techniques. For example, the servers and/or storage devices of content management system108might facilitate interactions over content objects106by the users (e.g., user1021, . . . , user102N) from a respective set of user devices (e.g., user device3021, . . . , user device302N). A content management system manages a plurality of content objects at least in part by maintaining (e.g., storing, updating, resolving interaction conflicts, etc.) 
the content objects subject to the various interactions performed over the content objects by users of the content objects at their respective user devices. The content objects (e.g., files, folders, etc.) in content objects106are characterized at least in part by a set of object attributes340(e.g., content object metadata) stored at storage devices330. Furthermore, the users are characterized at least in part by a set of user attributes342stored in a set of user profiles332at storage devices330. Further details regarding general approaches to handling object attributes including content object metadata are described in U.S. application Ser. No. 16/553,144 titled “EXTENSIBLE CONTENT OBJECT METADATA”, filed on Aug. 27, 2019, which is hereby incorporated by reference in its entirety. Further details regarding general approaches to handling remote workflows are described in U.S. patent application Ser. No. 16/726,093 titled “EXTENSIBLE WORKFLOW ACCESS” filed on Dec. 23, 2019, which is hereby incorporated by reference in its entirety. The users access instances of applications at their respective user devices to interact with content objects106managed by content management system108. As shown, the applications can comprise instances of native applications (e.g., native application3041, . . . , native application304N) or instances of third-party applications (e.g., application110F, which might be a SALESFORCE™ or “F” application; application110S, which might be an electronic Sign or “S” application, etc.). Various information pertaining to integration of such applications with content management system108are codified in an app registry336stored in storage devices330. At least some information of app registry336comprises instances of application-specific information346. In some cases, certain portions of the information in app registry336might be locally accessible at the user devices by the applications. For example, a first local app registry might be accessible by application110Fand/or native application3041and/or other applications at user device3021, and a second local app registry might be accessible by application110Sand/or native application304Nand/or other applications at user device302N. The instances of the applications operating at the user devices send or receive various instances of messages324that are received or sent by message processor312at content management server310. In some cases, messages324are sent to or received from content management server310without human interaction. One class of messages324corresponds to application-specific information received at content management system108in response to executing application integration operations. For example, instances of application-specific information346that correspond to a particular application might be issued by an enterprise and stored in app registry336when the application is registered with content management system108. According to the herein disclosed techniques, when users interact with content objects106at applications operating on the user device, application requests are issued as instances of messages324to content management system108. The application requests are issued to the system to select one or more applications (e.g., remote applications) that are associated in some way (e.g., according to object type) with the content objects. Message processor312receives the application requests and forwards them to app selection service314. 
App selection service314accesses the object attributes340, application-specific information346, and/or other information to select applications that are associated with the content objects corresponding to the respective application requests. Based at least in part on the selected applications presented to them, users submit workflow requests as instances of messages324to be received by workflow controller316. Such workflow requests might be issued in the form of API calls that indicate the target remote application, workflow type, subject content object, and/or other information. Workflow controller316processes such workflow requests to invoke and execute (e.g., control) workflows at the identified target applications. Another class of messages324corresponds to interaction events that occur during the course of workflow execution at the applications. Interaction events might be such activities as creating, viewing, modifying, or deleting content objects. When such interaction events occur, the mechanisms (e.g., API communications, etc.) established when integrating the applications with content management system108are accessed to issue interaction event messages to activity monitor318through message processor312. Sets of event attributes344that correspond to the interaction events are stored in a store of event records334at storage devices330. Activity publisher320accesses such event attributes in event records334to generate activity updates that are published to various applications. As merely one example, when a workflow invoked from application110Fto be performed at application110Sis completed, that event is recorded in event records334and an activity update is pushed to application110Findicating the workflow is complete. The foregoing discussions include techniques for integrating applications with a content management system108(e.g., step214ofFIG.2) and examples of related application-specific information346stored in an app registry336, which techniques are disclosed in further detail as follows. FIG.4illustrates an application integration technique400as implemented in systems that facilitate access to a dynamically extensible set of content object workflows. As an option, one or more variations of application integration technique400or any aspect thereof may be implemented in the context of the architecture and functionality of the embodiments described herein. The application integration technique400or any aspect thereof may be implemented in any environment. FIG.4illustrates aspects pertaining to accessing a dynamically extensible set of applications through a content management system to perform workflows over content objects managed by the system. Specifically, the figure presents certain specialized data structures for organizing and/or storing certain application-specific information associated with the integration of applications (e.g., third-party applications) with a content management system. The application-specific information associated with the integrations facilitate at least some embodiments of the herein disclosed techniques. Specifically, the specialized data structures associated with the application-specific information are configured to improve the way a computer stores and retrieves certain data in memory when performing the herein disclosed techniques. The application-specific information can be organized and/or stored in accordance with the data structures using various techniques. 
For example, the representative data structures associated with application-specific information346shown inFIG.4indicate that the constituent data of the data structures might be organized and/or stored in a tabular structure (e.g., relational database table) that has rows that relate various attributes with a particular data entity. As another example, the underlying data might be organized and/or stored in a programming code object that has instances corresponding to a particular data entity, and properties corresponding to the various attributes associated with the data entity. A representative instance of a select data structure relationship460between certain data entities contained in application-specific information346is shown inFIG.4. When certain instances of applications110are integrated (e.g., registered) with a content management system108, respective sets of application-specific information346are populated in an app registry336. In some cases, certain portions of the application-specific information346are populated in response to various inputs (e.g., selections, entered text, etc.) received from system administrators and/or application developers by interacting with a user interface (e.g., admin and/or developer console). For example, an application developer might first register an application, and a system administrator might later define certain workflows associated with the applications. As shown, some or all of the information from app registry336might be replicated to instances of local app registries436. For example, a local app registry might be stored as a set of metadata associated with a particular application operating at a user device that is remote to the content management system. The metadata of the local app registry can be accessed to facilitate certain herein disclosed techniques (e.g., issuing interaction event messages, etc.). As indicated in a set of select application attributes462in the application-specific information346, each of the applications110that are registered with the content management system is identified by an application identifier (e.g., stored in an “appID” field), an application name (e.g., stored in an “appName” field), an enterprise identifier (e.g., stored in an “enterpriseID” field), an endpoint URL (e.g., stored in an “endpoint” field), a set of OAuth2 credentials (e.g., stored in an “OAuth2 [ ]” object), and/or other attributes. As can be observed, the application identifier or “appID” is referenced by other data structures to associate the data underlying those structures with a particular application. Certain attributes (e.g., “enterpriseID”, “endpoint”, etc.) from select application attributes462might be included in interaction event messages from the applications to facilitate identification of the particular instances of the applications that issue the messages. Various workflows are also defined in the application-specific information346in accordance with a set of select workflow definition attributes464. Specifically, a particular workflow associated with an application identified in an “appID” field is defined by a workflow identifier (e.g., stored in a “workflowID” field), a workflow name (e.g., stored in a “name” field), a workflow description (e.g., stored in a “description” field), a set of operations associated with the workflow (e.g., stored in an “operations [ ]” object), and/or other attributes. 
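A sketch of how the registry entries described above might be represented is given below. The field names appID, appName, enterpriseID, endpoint, OAuth2, workflowID, name, description, and operations follow the attributes named in the description; the concrete values and the Python shape chosen here (plain dictionaries) are hypothetical.

# Hypothetical app registry entry mirroring the select application attributes
# (appID, appName, enterpriseID, endpoint, OAuth2) described above.
app_entry = {
    "appID": "appD",
    "appName": "eSignature App",
    "enterpriseID": "ent-042",
    "endpoint": "https://example.invalid/hooks",   # placeholder endpoint URL
    "OAuth2": {"client_id": "abc", "client_secret": "***"},
}

# Hypothetical workflow definition keyed to the application via "appID",
# mirroring the select workflow definition attributes (workflowID, name,
# description, operations).
workflow_definition = {
    "appID": "appD",
    "workflowID": "signature",
    "name": "Signature workflow",
    "description": "Collect electronic signatures over a contract",
    "operations": [],   # operation-level fields (index, state, parent) are described next in the text
}

# An app registry is then just a collection of such records, indexed by appID.
app_registry = {app_entry["appID"]: {"app": app_entry, "workflows": [workflow_definition]}}
print(sorted(app_registry["appD"]["workflows"][0]))   # field names of the workflow record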
As can be observed, each operation of the workflow is described by an operation sequence index (e.g., stored in an “index” field), an operation state description (e.g., stored in a “state” field), a parent operation associated with the operation (e.g., stored in a “parent” field), and/or other attributes. As depicted, the then-current values associated with the “index”, “state”, and “parent” fields constitute a then-current set of workflow traversal conditions470that determine certain actions to be performed in the execution of the workflow. For example, if a then-current instance of the workflow traversal conditions470indicates “index=8” and “status=complete”, then an action might be taken to move to an operation having a next higher index value (e.g., “index=9”). The foregoing discussions include techniques for selecting applications (e.g., remote applications) from the set of applications integrated with a content management system (e.g., step222and step224ofFIG.2), which techniques are disclosed in further detail as follows. FIG.5depicts a remote application identification technique500as implemented in systems that facilitate access to a dynamically extensible set of content object workflows. As an option, one or more variations of remote application identification technique500or any aspect thereof may be implemented in the context of the architecture and functionality of the embodiments described herein. The remote application identification technique500or any aspect thereof may be implemented in any environment. FIG.5illustrates aspects pertaining to accessing a dynamically extensible set of applications through a content management system to perform workflows over content objects managed by the system. Specifically, the figure is presented to illustrate one embodiment of certain steps and/or operations that facilitate selecting one or more remote applications that are associated with a respective set of content objects. As depicted in the figure, the steps and/or operations are associated with step222and step224ofFIG.2. A representative scenario is also shown inFIG.5to illustrate an example application of remote application identification technique500. The remote application identification technique500commences by presenting at a subject application a list of content objects managed by a content management system (step502). As shown in the accompanying scenario, a list comprising file “f1”, file “f2”, and file “f3” is presented to user “u1” in a content management system interface116F3of application110F. In certain embodiments, content management system interface116F3is a user interface embedded as an in-line frame element (e.g., iFrame) in a web page. An application request associated with the content objects is received from the subject application (step504). As merely one example, when the content management system interface116F3is presented, an application request522is issued to an instance of app selection service314. Application request522might comprise information describing the subject application (e.g., application110F), the content objects (e.g., file “f1”, file “f2”, and file “f3”), and/or other information. The application request is processed to determine the characteristics of the content objects associated with the request (step506). For example, app selection service314might use the object identifiers included in application request522to collect information (e.g., object type) about the content objects from the data stored in content objects106. 
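The advance-on-completion behavior of the workflow traversal conditions described above can be sketched as follows. This is a hypothetical illustration: the operation records reuse the index, state, and parent field names from the description, while the helper name advance_workflow and the rule of moving to the operation with the next higher index once the current operation is complete are assumptions made for the example.

from typing import Dict, List, Optional

# Operation records reusing the "index", "state", and "parent" fields described above.
operations: List[Dict[str, object]] = [
    {"index": 7, "state": "complete", "parent": None},
    {"index": 8, "state": "complete", "parent": 7},
    {"index": 9, "state": "pending",  "parent": 8},
]

def advance_workflow(ops: List[Dict[str, object]], current_index: int) -> Optional[int]:
    """Return the index of the next operation to run, or None if the current one is unfinished."""
    current = next(op for op in ops if op["index"] == current_index)
    if current["state"] != "complete":
        return None                       # traversal condition not met; stay on this operation
    later = [op["index"] for op in ops if op["index"] > current_index]
    return min(later) if later else None  # move to the operation with the next higher index

print(advance_workflow(operations, 8))    # 9, because the operation at index 8 is complete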
Based at least in part on the characteristics of the content objects associated with the application request, a set of one or more remote applications is selected (step512). To do so, app selection service314scans the app registry336to identify integrated (e.g., registered) applications that are associated with the content objects presented in content management system interface116F3. A set of selectable workflows524is included in a message to application110F. The set of selectable workflows may correspond to multiple workflows of a single remote application, or the set of selectable workflows may correspond to multiple workflows across many remote applications. Responsive to the set of selectable remote workflows, a user interface device is presented at the subject application. The user interface device in turn serves to permit the user to invoke a selected workflow of one of the remote applications (step514). As shown, the interface device might include an icon and hyperlink (e.g., pointing to an endpoint) for each of the selected applications that can be clicked to invoke one or more workflows at the applications. Such display elements might be presented (e.g., by right-clicking on a content object icon) in an extensible workflow selection modal526at content management system interface116F3. Techniques for invoking workflows from such modals or using other mechanisms are disclosed in further detail as follows. FIG.6presents a workflow initiation technique600as implemented in systems that facilitate access to a dynamically extensible set of content object workflows. As an option, one or more variations of workflow initiation technique600or any aspect thereof may be implemented in the context of the architecture and functionality of the embodiments described herein. The workflow initiation technique600or any aspect thereof may be implemented in any environment. FIG.6illustrates aspects pertaining to accessing a dynamically extensible set of applications through a content management system to perform workflows over content objects managed by the system. Specifically, the figure is presented to illustrate one embodiment of certain steps and/or operations that access a content management system to invoke and execute extensible workflows at various applications. As depicted in the figure, the steps and/or operations are associated with step226ofFIG.2. A representative scenario is also shown in the figure to illustrate an example application of workflow initiation technique600. The workflow initiation technique600commences by receiving from a subject application a workflow request associated with a content object managed by a content management system (step602). As illustrated in the representative scenario, the workflow request might be issued by user “u1” from a content management system interface116F4at application110F. More specifically, the workflow request is invoked by selecting application “D” from an extensible workflow selection modal526associated with file “f2” listed in the interface. As can be observed in a representative workflow request post624, application110Fposts a set of attributes to a “wf_request” API endpoint that describes a request to invoke workflow “signature” over content object (e.g., file) “f2” at remote application “appD”. The workflow request (e.g., API call) is received by message processor312and forwarded to workflow controller316to determine a remote application and extensible workflow from the workflow request (step604). 
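The selection flow just described (application request in, selectable workflows out) might look roughly like the sketch below. The registry layout, the object-type lookup, and the function name build_selectable_workflows are assumptions for illustration and do not reproduce the actual app selection service.

from typing import Dict, List

# Hypothetical registry: for each registered application, the object types it handles
# and the workflows it offers for them.
APP_REGISTRY = {
    "appD": {"object_types": ["contract"], "workflows": ["signature"]},
    "appS": {"object_types": ["contract", "invoice"], "workflows": ["countersign"]},
    "appG": {"object_types": ["image"], "workflows": ["annotate"]},
}

# Hypothetical content object metadata keyed by object identifier.
CONTENT_OBJECTS = {"f1": "image", "f2": "contract", "f3": "invoice"}

def build_selectable_workflows(object_ids: List[str], subject_app: str) -> List[Dict[str, str]]:
    """Steps 506 and 512: inspect the objects' types, scan the registry, and list invokable workflows."""
    selectable = []
    for obj_id in object_ids:
        obj_type = CONTENT_OBJECTS[obj_id]
        for app_id, entry in APP_REGISTRY.items():
            if app_id == subject_app or obj_type not in entry["object_types"]:
                continue
            for wf in entry["workflows"]:
                selectable.append({"objID": obj_id, "remoteAppID": app_id, "wfID": wf})
    return selectable

# Application request from the subject application listing the displayed content objects.
print(build_selectable_workflows(["f1", "f2", "f3"], subject_app="appF"))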
For example, the payload of representative workflow request post624is parsed by the workflow controller to determine the aforementioned parameters (e.g., “remoteAppID”=“appD”, “wfID”=“signature”) and/or other information. Using the parameters extracted from the workflow request, the extensible workflow is invoked at the remote application (step606). As shown, workflow controller316launches an extensible workflow626(e.g., “signature” workflow) over file “f2” at application110. An alert to user “u3” to interact with the workflow (e.g., as the first or only signatory) may also be issued by workflow controller316or by extensible workflow626. Further details regarding general approaches to automatically determining a remote application and/or its invokable workflows are described in U.S. application Ser. No. 16/553,161 titled “WORKFLOW SELECTION”, filed on Aug. 27, 2019, which is hereby incorporated by reference in its entirety. The foregoing discussions include techniques for monitoring and recording such interactions with extensible workflows (e.g., step232ofFIG.2), which techniques are disclosed in further detail as follows. FIG.7presents a workflow activity observation technique700as implemented in systems that facilitate access to a dynamically extensible set of content object workflows. As an option, one or more variations of workflow activity observation technique700or any aspect thereof may be implemented in the context of the architecture and functionality of the embodiments described herein. The workflow activity observation technique700or any aspect thereof may be implemented in any environment. FIG.7illustrates aspects pertaining to accessing a dynamically extensible set of applications through a content management system to perform workflows over content objects managed by the system. Specifically, the figure is presented to illustrate one embodiment of certain steps and/or operations that facilitate recording interaction events performed over content objects at various applications. As depicted in the figure, the steps and/or operations are associated with step232ofFIG.2. A representative scenario is also shown in the figure to illustrate an example application of workflow activity observation technique700. The workflow activity observation technique700commences by monitoring a plurality of applications for interaction events (step702). As illustrated, an instance of message processor312may continuously listen or poll for interaction events performed at a plurality of applications that include the application110Daccessed by user “u3”. As can be observed, user “u3” might be interacting with an extensible workflow626being executed over file “f2” at application110D. When interaction event messages are received (step704), the interaction event messages are parsed to retrieve respective sets of interaction event attributes from the messages (step706). As shown, message processor312receives an interaction event message722in response to user “u3” interacting with file “f3” at application110D. 
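A sketch of the request handling above is shown below. The payload keys remoteAppID and wfID and the "wf_request" endpoint name follow the representative workflow request post in the description; the dispatch table, the handler name handle_wf_request, and the printed alert are invented for the example and do not describe a real API.

import json
from typing import Callable, Dict

# Hypothetical dispatch table from application id to a callable that starts a workflow there.
REMOTE_LAUNCHERS: Dict[str, Callable[[str, str], None]] = {
    "appD": lambda wf, obj: print(f"launching '{wf}' over {obj} at appD; alerting user u3 to sign"),
}

def handle_wf_request(raw_payload: str) -> None:
    """Parse a POST to the hypothetical 'wf_request' endpoint and invoke the extensible workflow."""
    payload = json.loads(raw_payload)                      # step 602: receive the workflow request
    remote_app = payload["remoteAppID"]                    # step 604: determine the remote application
    workflow_id = payload["wfID"]                          #           ...and the extensible workflow
    object_id = payload["objID"]
    REMOTE_LAUNCHERS[remote_app](workflow_id, object_id)   # step 606: invoke it at the remote app

# Example payload comparable to the representative workflow request post.
handle_wf_request(json.dumps({"remoteAppID": "appD", "wfID": "signature", "objID": "f2"}))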
As indicated by a set of select interaction event attributes724, the interaction attributes associated with the interaction event messages include an application identifier (e.g., stored in an “appID” field), an interaction type description (e.g., stored in an “action” field), a timestamp (e.g., stored in a “time” field), a user identifier (e.g., stored in a “userID” field), an enterprise identifier (e.g., stored in an “entID” field), a link identifier (e.g., stored in a “linkID” field), a content object identifier (e.g., stored in an “objID” field), and/or other attributes. If other attributes are to be considered (“Yes” path of decision708), then various other attributes associated with the interaction event messages are retrieved (step710). In this case, message processor312might access the datastores of content objects106, user profiles332, app registry336, and/or other data sources to retrieve certain attributes associated with the interaction attributes of the interaction event messages. All retrieved attributes are then recorded as event attributes associated with the interaction event messages (step712). As stated, if other attributes are to be considered (“Yes” path of decision708), the event attributes comprise some or all of the retrieved interaction event attributes and the retrieved other attributes. If merely the interaction event attributes are considered (“No” path of decision708), the interaction event attributes comprise some or all of the retrieved interaction attributes. In the shown scenario, message processor312stores in event records334sets of event attributes that correspond to interaction event message722. The foregoing discussion includes techniques for publishing the interaction activity (e.g., interaction events) observed at extensible workflows being performed at various applications (e.g., step234ofFIG.2), which techniques are disclosed in further detail as follows. FIG.8depicts an activity publication technique800as implemented in systems that facilitate access to a dynamically extensible set of content object workflows. As an option, one or more variations of activity publication technique800or any aspect thereof may be implemented in the context of the architecture and functionality of the embodiments described herein. The activity publication technique800or any aspect thereof may be implemented in any environment. FIG.8illustrates aspects pertaining to accessing a dynamically extensible set of applications through a content management system to perform workflows over content objects managed by the system. Specifically, the figure is presented to illustrate one embodiment of certain steps and/or operations that facilitate publishing the interaction activity (e.g., interaction events) observed at extensible workflows being performed at various applications (e.g., remote applications). As depicted in the figure, the steps and/or operations are associated with step234ofFIG.2. A representative scenario is also shown in the figure to illustrate an example application of activity publication technique800. The activity publication technique800commences by accessing event attributes associated with at least one interaction event performed over a content object managed by a content management system (step802). As illustrated, an instance of workflow controller316might retrieve the event attributes from the event records334earlier described. Any applications associated with the content object are determined (step804). 
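The monitoring steps above can be sketched as a small parser that pulls the named attributes out of an interaction event message and appends them to an event store. The attribute keys (appID, action, time, userID, entID, linkID, objID) come from the description; the JSON message format, the record_interaction_event function, and the enrichment lookup are assumptions.

import json
from typing import Dict, List

EVENT_RECORDS: List[Dict[str, str]] = []     # stand-in for the store of event records

# Optional enrichment data (decision 708 "Yes" path): other attributes looked up elsewhere.
USER_PROFILES = {"u3": {"displayName": "User Three"}}

def record_interaction_event(raw_message: str, enrich: bool = True) -> Dict[str, str]:
    """Steps 704 through 712: parse an interaction event message and record its event attributes."""
    message = json.loads(raw_message)
    attrs = {key: message[key]
             for key in ("appID", "action", "time", "userID", "entID", "linkID", "objID")}
    if enrich:                                            # retrieve other associated attributes
        attrs.update(USER_PROFILES.get(message["userID"], {}))
    EVENT_RECORDS.append(attrs)                           # record the event attributes
    return attrs

event = {"appID": "appD", "action": "sign", "time": "2020-01-01T12:00:00Z",
         "userID": "u3", "entID": "ent-042", "linkID": "lnk-9", "objID": "f2"}
print(record_interaction_event(json.dumps(event)))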
Such determination might be performed by an instance of activity publisher320based at least in part on a set of event attributes received from workflow controller316. For example, activity publisher320might query the application-specific information346in app registry336using a content object identifier and/or a content object type and/or other information included in a set of event attributes to enumerate any applications associated with the content object. Workflow controller316may also provide information to activity publisher320that describes a subject application that initiated the workflow that pertains to the interaction event and content object. Strictly as one example, the workflow controller316might receive event records334from a workflow request arising from a first application. The workflow controller316might process the request to form one or more messages that are then forwarded to a second target remote application to invoke performance of further workflows. As merely one example, a first user might store a content object in the form of a contract. Then, the same or different user might interact with the contract using the aforementioned SALESFORCE™ application (e.g., application110F). From the SALESFORCE™ application, the user might decide to invoke a signature workflow over the contract so the contract can be signed via a DocuSign application workflow. In due course (e.g., when a signatory logs in), the workflow controller316invokes the signature workflow (e.g., from within the content management system interface116F3). If the signing event corresponds to the last signatory, then the signature workflow might send a message to raise an event signaling to the SALESFORCE™ application that the last signature has been collected. This chaining of events, possibly involving one or more round trips between applications, can be carried out between any of a plurality of applications. In the foregoing example, the event signaling that the last signature has been collected might invoke a workflow at the SALESFORCE™ application to indicate that the contract has been signed by all parties, which in turn might change the opportunity status corresponding to the contract to “Closed—Won”. In some situations, the prior invoked workflow (e.g., to change the opportunity status corresponding to the contract) might trigger invocations of still further workflows. As pertains to the foregoing example, the acts being carried out by and between the workflows of the applications might raise any number of activity update messages, which messages are constructed from the event attributes associated with the interaction event (step806). Activity updates corresponding to the interaction events are then published to the set of applications associated with the content object (step808). As can be observed, an activity update message might comprise human-readable graphical display elements that are presented in a user interface (e.g., as a “feed”). More specifically, activity publisher320might publish one or more instances of activity updates1182at a content management system interface116F3displayed in application110F(e.g., the subject application from which the workflow corresponding to the interaction activity was initiated). Activity publisher320might also populate a subject application native data structure822associated with application110Fwith data that corresponds to the activity updates. 
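Publication as described above might be sketched as follows: from a recorded event, determine which applications are associated with the content object, construct a human-readable update, push it to each of them, and set a status flag in the subject application's native data structure. The subscription map, the publish_activity_update function, and the status-flag key are hypothetical.

from typing import Dict, List

# Hypothetical association of content objects with the applications interested in them.
OBJECT_SUBSCRIBERS: Dict[str, List[str]] = {"f2": ["appF", "appS"]}

# Hypothetical per-application native data structures holding status flags (shadow copies).
NATIVE_STATUS: Dict[str, Dict[str, str]] = {"appF": {}, "appS": {}}

def publish_activity_update(event: Dict[str, str]) -> List[str]:
    """Steps 804 through 808: build an activity update from event attributes and publish it."""
    update = f"User {event['userID']} performed '{event['action']}' on {event['objID']} in {event['appID']}"
    published_to = OBJECT_SUBSCRIBERS.get(event["objID"], [])
    for app in published_to:
        print(f"feed[{app}]: {update}")                       # human-readable feed entry
        NATIVE_STATUS[app][event["objID"]] = event["action"]  # status flag the app can poll later
    return published_to

publish_activity_update(
    {"appID": "appD", "action": "signed", "userID": "u3", "objID": "f2"}
)
print(NATIVE_STATUS["appF"])   # {'f2': 'signed'}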
For example, such native data structures might be populated with a status flag that can be accessed by the subject application (e.g., application110F) to facilitate various operations performed at the subject application. These data structures can include status indications to facilitate synchronous or asynchronous chaining of acts to be carried out by and between the workflows of the various applications. Strictly as one example, the foregoing data structures might be populated with status flags that serve as shadow copies of statuses of the workflows of the remote applications. The status flags can be consulted periodically and/or upon certain events such that the acts to be carried out by and between the workflows of the various applications can be carried out asynchronously. Activity updates (e.g., activity updates1182) might include statuses that are then-current as of the time a user logs in to the content management system. Furthermore, in some cases, particular types of activity updates might be highlighted in the user interface so as to alert the user (e.g., the shown user “u1”) to take some particular action over a particular shared content object. As pertains to the foregoing example, the acts being carried out by and between the workflows of the applications and/or activities being carried out by workflows themselves might cause the state of the workflow to change. In particular, in situations where execution of a workflow as a whole involves interactions between a module of the content management system and an application or app of one or more third-party systems, it sometimes happens that a state of the workflow changes during processing of the workflow by the one or more third-party systems. Often, this brings forth the need to track the workflow state in the content management system, even as the workflow or portion thereof is being executed by a third party. What is needed are techniques that allow the state of a workflow to be monitored within the content management system, even in the case that the workflow or portion thereof is being executed outside of the content management system. One possible approach involving a workflow execution state variable is shown and discussed as pertains toFIG.9A. FIG.9Adepicts a system9A00that implements an extensible content object workflow having a dynamically-updated workflow execution state variable that is modified by operations of third-party systems. The figure is being presented to explain how a content object workflow can be steered by changes in a workflow execution state variable, and to illustrate how a workflow execution state variable can take on new values based on interactions with third-party systems that participate in a portion of the content object workflow. To explain, consider an example scenario where a content object workflow is defined to handle a contract from “cradle to grave”. Such a contract might originate as a content object of a content management system, which origination might cause invocation of a content object workflow to track and steer processing of the contract. At some point, such as when all signatories agree to the terms of the contract, negotiations over the terms of the contract might conclude. That event might cause the workflow to be steered to a next step where the fact that negotiations over the terms of the contract have concluded is registered with a third-party sales management facility (e.g., SALESFORCE™).
The third-party sales management facility (e.g., the shown first third-party application918), in response to an indication that negotiations over the terms of the contract have concluded, might gather biographical information on the signatories such that all signatories can be contacted to sign the contract. Completion of this biographical information gathering might be an event that causes the workflow to move to the next step, possibly to send the contract out to a third-party application (e.g., to the shown second third-party application920) of another third-party system for collecting electronic signatures from all signatories. Still further, and continuing this example scenario, when all signatories have provided their electronic signatures, then that event might cause the workflow to be steered to a next step where the status of the closed contract is communicated to all stakeholders (e.g., stakeholders beyond the signatories themselves). The foregoing scenario can be implemented in an architecture similar to that as shown inFIG.9A. Specifically, and as shown, content management system108interacts with a first third-party application and a second third-party application. During, and responsive to such interactions (e.g., interaction9101and interaction9102), workflow execution state variable904takes on different values. To begin with, upon invocation of a workflow, or in some earlier step, an operation of the CMS might initialize a state variable (operation908). As such, an initial value “V0” might be set based on some initialization activity or execution of a portion of workflow906. Then, upon carrying out an interaction/response protocol (e.g., involving interaction9101and response9111) between the content management system108and a first third-party application, the workflow execution state variable904takes on a different value “V1”. After further processing within the content management system, and upon carrying out a second interaction/response protocol (e.g., involving interaction9102and response9112) between the content management system and a second third-party application, the workflow execution state variable904takes on a different value “V2”. In this scenario, this particular content object workflow concludes by reporting the then-current workflow status. As shown, operation916serves to send out status messages. The workflow then concludes. FIG.9Bdepicts a workflow execution state variable modification technique9B00as implemented in systems that facilitate uses of extensible content object workflows. The figure is being presented to illustrate how an extensible content object workflow can be defined so as to respond to workflow execution state variable modifications that arise from interactions between a CMS and any number of third-party applications. More specifically, the figure is being presented to illustrate one embodiment involving an initialization phase930and a third-party interaction phase932, which in combination serve to handle workflow execution state variable modifications on an ongoing basis. During initialization phase930, a workflow having one or more workflow execution state variables is defined (step931). Such a workflow has at least one workflow execution state variable for which values are to be determined at the time of execution of the workflow. 
At the time that the workflow is defined, the various third-party systems that are involved in the progression of the workflow are considered for loading (step933) into a mapping data structure936that defines a correspondence (e.g., a mapping) between a particular value of a workflow execution state variable and one or more aspects of a third-party system. For example, a row of a tabularized data structure might have a first column that contains a particular value of a workflow execution state variable (e.g., "Negotiation Complete") as shown with respect to workflow906, and a second column that has an indication of how to interact with a third-party system once a particular value of the workflow execution state variable has been established (e.g., at T=T1). As another example, another row of the tabularized data structure might have a first column that contains a particular value of a workflow execution state variable (e.g., "Gathering Signatures") and a second column that has an indication of how to interact with a third-party system once that particular value of the workflow execution state variable has been established (e.g., at T=T2). Usage of such a data structure is depicted with respect to various steps that occur during the third-party interaction phase. Specifically, and as shown, upon an event (e.g., event9341), the third-party interaction phase is entered and the first third-party system is accessed. Operations of a first third-party application918are carried out possibly involving execution of a workflow of the first third-party system such that "Value1" (as shown) is assigned to the workflow execution state variable. The first third-party application notifies the CMS of a change in its workflow execution state. When the particular "Value1" of the workflow execution state variable is received (step937) at the CMS, then operations of the CMS (e.g., execution of the workflow at the CMS) are carried out to determine one or more next actions, based at least in part on the value of the workflow execution state variable received at the CMS. As used herein, the terms "workflow execution state variable" or "workflow state variable" refer to a mechanism for designation of a particular location in a sequence of operations that, when executed (in whole or in part), serve to accomplish a particular result. In some embodiments discussed herein, the mechanism for designation of a particular location in a sequence of operations is to assign a string or numeric value that refers to the particular location of the sequence of operations. In some embodiments, a string value that refers to a particular location in the sequence of operations may correlate to a numeric value that refers to the same particular location in the sequence. In some embodiments, the semantics of "workflow execution state variables" or "workflow state variables" are correlated as between a content management system and a third-party system such that the semantics are shared irrespective of any particular representation of any workflow execution state variable value. In some embodiments, a data structure is used to associate a given particular representation of a particular workflow execution state variable value with one or more next actions. In some cases, and as shown, such a data structure is consulted so as to determine one or more methods to initiate and/or carry out particular one or more next actions (step9391).
For example, and referring to a particular implementation of the foregoing tabularized data structure, if the next workflow state is "Gathering Signatures", then a row of the data structure that has "Gathering Signatures" in the first column is sought. Upon finding such a state variable value, the value in a "Next Action" column is processed. Strictly as one example, the value in the "Next Action" column might be a URL that refers to an endpoint921in an Internet domain. Accessing such a URL in turn may invoke further steps or a further workflow. In this example, and as depicted in Table 1, the next action indication (e.g., a URL) that corresponds to the workflow execution state variable value "Value1" is an endpoint in an Internet domain. More specifically, in this example, the next action indication that corresponds to the workflow execution state variable value "Value1" is an endpoint that invokes a workflow of a second third-party system. In some implementations, and as depicted in the "Actor" column of Table 1, an indication of a particular user or particular user type (e.g., "Admin_User", or "Manager_User") might be accessed to inform the selection of a next action. This is often useful since, at the time the workflow is created, the workflow author might not necessarily know who (e.g., which user or which type of user) will be performing any particular step; however, in some cases the user type (or particular user) who advances a workflow to a new state might provide important information. In some cases, a particular user or particular user type can be codified into a mapping data structure. This is shown in Table 1 by the example where the combination of "Value98" and "User1" invokes a next action "http://www.AnotherThirdParty/InitiateWorkflow1", whereas the combination of "Value98" and "User2" invokes a next action "http://www.AnotherThirdParty/InitiateWorkflow2".

TABLE 1
Mapping data structure example

Workflow Variable Value    Actor          Next Action Indication
"Gathering Signatures"     <any actor>    http://www.SecondThirdParty/InitiateSigningWorkflow
Value1                     Admin_User     http://www.SecondThirdParty/InitiateWorkflow
Value2                     Manager_User   http://www.CMS/InitiateWorkflowAtEntryPointN
Value98                    User1          http://www.AnotherThirdParty/InitiateWorkflow1
Value98                    User2          http://www.AnotherThirdParty/InitiateWorkflow2

Continuing the example where the next action of "http://www.SecondThirdParty/InitiateWorkflow" is raised, the CMS interacts with the second third-party system (step943) and, at some moment in time, the second third-party system notifies the CMS of a change in its workflow execution state and emits a new workflow execution state variable value (e.g., Value2). The CMS receives the new workflow execution state variable value (e.g., Value2) and a data structure is again consulted so as to determine one or more methods to carry out the determined next action (step9392). At step947, the determined next action is initiated. Referring to the particular implementation of the foregoing tabularized data structure, if the next workflow state value is "Value2", then a row of the data structure that has "Value2" in the first column is sought, and the contents of the corresponding row are interpreted to resolve to a next action. In some cases, and as described above, the next action might be an action that corresponds to a workflow of a third-party system. However, in some cases, the next action might be an action that corresponds to a particular designated entry point of a workflow of the CMS.
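The tabularized mapping lends itself to a simple keyed lookup. The sketch below mirrors the rows of Table 1 in a Python dictionary keyed by (state value, actor), with a wildcard entry standing in for "<any actor>"; the function name and the fallback rule are assumptions made for illustration, not a description of the actual implementation.

```python
# Sketch of a Table 1 style lookup (illustrative only): resolve a next-action
# indication from the workflow execution state variable value and the actor
# who advanced the workflow. A key with actor None stands in for "<any actor>".
NEXT_ACTIONS = {
    ("Gathering Signatures", None): "http://www.SecondThirdParty/InitiateSigningWorkflow",
    ("Value1", "Admin_User"):       "http://www.SecondThirdParty/InitiateWorkflow",
    ("Value2", "Manager_User"):     "http://www.CMS/InitiateWorkflowAtEntryPointN",
    ("Value98", "User1"):           "http://www.AnotherThirdParty/InitiateWorkflow1",
    ("Value98", "User2"):           "http://www.AnotherThirdParty/InitiateWorkflow2",
}

def resolve_next_action(state_value, actor):
    """Prefer an actor-specific row, then fall back to the wildcard row, if any."""
    return NEXT_ACTIONS.get((state_value, actor)) or NEXT_ACTIONS.get((state_value, None))

print(resolve_next_action("Value98", "User2"))            # actor-specific next action
print(resolve_next_action("Gathering Signatures", "u1"))  # wildcard next action
```

A resolved entry that points at a CMS URL (such as the "Value2" row above) would be handled by the CMS itself rather than by a third-party system, which is the case taken up next.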
In the case where the next action corresponds to a designated entry point of a workflow of the CMS, the contents of the corresponding row are interpreted to cause invocation of the designated workflow at the designated entry point. The foregoing workflow execution state variable modification technique and processing of a workflow based on received values from third-party applications can be deployed in many scenarios. Moreover, variations of the foregoing workflow having one or more workflow execution state variables can be configured to achieve some particular outcome. Strictly as one example, a workflow having one or more workflow execution state variables can be configured to respond to a contract "close" event (e.g., based on an occurrence of a corresponding event at a first third-party system such as SALESFORCE™). Upon receipt of a value (e.g., "contractClosed=TRUE"), the workflow conditionally proceeds (e.g., based on the received value from the first third-party system) and interacts with a second third-party system (e.g., an e-signature authority such as DocuSign). Upon some event in the second third-party system, and upon receipt of the value from the second third-party system, the workflow conditionally proceeds (e.g., based on the received value from the second third-party system). In this example involving signing a contract, the workflow of the CMS might proceed to send messages to all signatories of the contract so as to indicate that the contract has been closed and that all signatories have e-signed the contract. Details of how a CMS can interact with multiple third-party systems to complete a workflow and/or how multiple third-party systems can interact with a CMS to complete a workflow are shown and discussed as pertains to the contract closing/signing example ofFIG.9C.

FIG.9Cdepicts an example of multi-party assignments of a workflow execution state variable value as implemented in systems that facilitate uses of dynamically extensible workflows. Specifically, the figure presents an example workflow execution scenario9C00where events raised at a first system can trigger workflow actions at a second system, and where events raised at the second system can trigger workflow actions at a third system, and so on. The example pertains to a contract closing workflow where the shown contract closing workflow includes workflow progression from a deal closing event through to when the deal is fully executed by all signatories. In this particular example, when negotiations over terms and conditions of a deal are concluded, a user can close the deal (e.g., as shown by closed deal event946). That is, when the deal is deemed to be a "Closed Deal", the user, possibly via a user interface, can raise a closed deal event and cause a workflow state variable to have a value equal to "Closed Deal" (operation951). The occurrence of setting a workflow state variable and/or the raising of a closed deal event can cause other events, including triggers. In this case, and as shown, the occurrence of setting the workflow state variable to have a value equal to "Closed Deal" can raise a trigger event (e.g., trigger event9481) and/or file operation event961. The trigger event itself (e.g., as raised by the first third-party application918), and/or the file operation event itself (e.g., saving or updating a file of the content management system108) in turn causes the content management system to recognize the occurrence of one or both of the events (operation952).
Upon recognition of the occurrence, the content management system will determine the value of the then-current workflow state variable (e.g., the value "Closed Deal"). In some cases, the content management system determines the value of a particular then-current workflow state variable based on receipt of a message and/or data item from a third-party system. As heretofore discussed, the next actions of the workflow can be determined based on the contents of a data structure. As shown here, the content management system determines a next step, which in this case is to invoke a workflow of a second third-party (operation953). In this case, the workflow of a second third-party is a particular workflow that corresponds to collecting signatures (operation954). The workflow of the second third-party, specifically the workflow to collect signatures, can be invoked by any one or more of a variety of triggers. In the example shown, a sign request962raised by the content management system can serve as a triggering event. Additionally or alternatively, an explicit triggering event (e.g., trigger event9482) can be used to invoke the workflow of a second third-party. As depicted, the workflow of the second third-party system is particularly configured to be able to gather e-signatures for any number of signatories. When the e-signatures for all signatories (e.g., all signatories that correspond to the sign request) have been collected (e.g., when there are no more signatures to collect), then the second third-party system can set the workflow state variable to have a value equal to "Signed" (operation955). The workflow state variable value can be communicated to another system. In this case, the value is communicated to the content management system in a sign completion message963. Receipt by the content management system of the sign completion message serves as a triggering event such that a next state of the workflow is determined (operation956). Determination of the next state of the workflow might include execution of some portion of the workflow at the content management system, which in turn might cause the workflow execution state variable to have a value equal to "Executed" (operation959). This new value (e.g., "Executed") of the workflow state variable in turn causes further actions at the first third-party system. In this example case, and as shown, trigger event9483causes invocation of an operation to advise the deal execution team of details of the deal (operation960). When there are multiple parties or multiple vendors involved in a workflow, it can happen that any particular value of any given workflow execution state variable might be represented differently as between the multiple parties or multiple vendors. For example, a first vendor might represent one or more workflow execution state variables using string data types (e.g., "Closed", or "Executed"), whereas a second vendor might represent one or more workflow execution state variables as numeric data types (e.g., 1, or 2), even though the meaning of a workflow execution state variable having a value equal to "Closed" is the same as a workflow execution state variable having a value equal to 1, or even though the meaning of a workflow execution state variable having a value equal to "Executed" is the same as a workflow execution state variable having a value equal to 2. The foregoing example is merely one possible example of carrying out a workflow as between multiple actors (e.g., as between a CMS and multiple third-party systems).
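One way to picture the FIG. 9C progression is as a small table-driven state machine in which each received state value selects a handler that performs the next action, whether that action lives at the CMS or at a third-party system. The sketch below is only a rough approximation under assumed handler names; the real systems exchange these values asynchronously through triggers and messages rather than through direct function calls.

```python
# Hedged sketch of the FIG. 9C progression (handler names are placeholders):
# each received workflow state value selects the next action to perform.
def invoke_signature_workflow(ctx):
    print("CMS -> e-signature system: sign request for", ctx["contract_id"])
    return None  # "Signed" will arrive later, asynchronously

def mark_executed(ctx):
    print("CMS: recording contract", ctx["contract_id"], "as fully executed")
    return "Executed"

def advise_deal_team(ctx):
    print("first third-party system: advising the deal execution team")
    return None  # terminal for this example

TRANSITIONS = {
    "Closed Deal": invoke_signature_workflow,  # raised at the first third-party system
    "Signed":      mark_executed,              # raised by the e-signature system
    "Executed":    advise_deal_team,           # consumed back at the first third-party system
}

def on_state_value(state, ctx):
    """Run handlers until a state with no registered handler is reached."""
    handler = TRANSITIONS.get(state)
    while handler is not None:
        state = handler(ctx)
        handler = TRANSITIONS.get(state) if state else None

on_state_value("Closed Deal", {"contract_id": "c-42"})  # kicks off signature collection
on_state_value("Signed", {"contract_id": "c-42"})       # later: marks executed, advises the team
```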
To illustrate further, a first alternative example is an upload of a file to a particular folder of a CMS, which might cause one or more actions that, in turn, prompt one or more users to review (e.g., and approve) the uploaded file. In this case, completion of such a review of that file might trigger yet another workflow that moves the file and alters the permissions on that file (e.g., so as to prevent outside actors from taking further action). A second alternative example refers to an upload of a contract that might require a deal value be assigned. If the deal value is deemed to be over a particular threshold, then that determination and a corresponding change in the value of a workflow execution state variable might cause initiation of a subsequent "deal review" workflow that is configured to update customer data in a second system. That customer data update and the corresponding change in the value of a workflow execution state variable might cause initiation of yet another workflow that assigns the contract to an account manager. At least inasmuch as a given value of a workflow execution state variable is one of several possible predicates for determining a next action or next state, then some means (e.g., a data structure and lookup mechanism) for correlating the value of a workflow execution state variable to a particular meaning is needed. One embodiment involving a semantic mapping technique that correlates any representation of the value of a workflow variable to a respective particular meaning is shown and discussed as pertains toFIG.9D.

FIG.9Ddepicts a workflow variable semantic mapping technique9D00as implemented in systems that facilitate uses of extensible content object workflows. The figure is being presented to illustrate how a first workflow variable value (e.g., the shown application-dependent workflow variable971) can be correlated with (e.g., mapped-to) a second workflow variable value. In some cases, the mapping technique might change the type and/or representation of the value. The particular embodiment ofFIG.9Dimplements a semantic mapping module950. Such a semantic mapping module can be situated anywhere in any environment. For example, such a semantic mapping module can be situated fully or partially in a CMS domain949, or such a semantic mapping module can be situated fully or partially in a third-party domain945. Alternatively, such a semantic mapping module can be partially in a CMS domain and partially in a third-party domain. In some cases, a semantic mapping module can use a tabularized semantic equivalence table. Table 2 captures representative examples of semantic equivalence.

TABLE 2
Representative examples of semantic equivalence

CMS Workflow Execution State Variable Value    Third-party Workflow Execution State Variable Value
String="Closed"                                Integer=1
String="Executed"                              Integer=2
String="Closed"                                JavaScript Object Notation "1"
String="Executed"                              JavaScript Object Notation "2"

Irrespective of any particular form or formatting of semantic equivalence, and irrespective of the particular location of any components of the semantic mapping module, the module operates as follows: at step965, a workflow variable is received from a source (e.g., a third-party system, as shown) and, at step966, the type of representation of the value is determined.
At step967, a target workflow variable type and representation is determined and the value of the workflow variable received from the source (e.g., the mapped-from value969) is mapped into a target workflow variable value (e.g., the mapped-to value958). In this particular mapping example, the mapped-to value958is assigned as value V2, which value V2is used to replace the previous value of the workflow execution state variable, namely V1, as shown. In this particular workflow example, the workflow906concludes by reporting out the then-current status. In some embodiments, when the CMS receives a workflow state indication from a source, the CMS maps from one workflow execution state variable value representation type to another workflow execution state variable value representation type. Table 3 provides an example set of such workflow execution state variable value type conversions. In some cases, the mapped-to value that results from a conversion from one representation type to another representation type is used by the CMS to steer the workflow. In some cases, and as exemplified in Table 3, an indication of workflow state might be provided by execution of a callback. This is shown in Table 3, where a callback URL is called so as to indicate that a "Requested Workflow Operation Is Complete".

TABLE 3
Representative examples of value representation type conversion

Third-party Workflow Variable Value Representation    CMS Workflow Variable Value Representation
Integer=1                                             String="Closed"
Integer=2                                             String="Executed"
JavaScript Object Notation "1"                        String="Closed"
JavaScript Object Notation "2"                        String="Executed"
http://www.CallbackURL                                String="Requested Workflow Operation Is Complete"

In some situations, a workflow having an application-dependent workflow variable may operate differently based on the third-party application itself, and/or the computing equipment used by the third-party application, and/or the computing environment that supports or bounds the computing equipment used by the third-party application, and/or the user or user type that influences operation of the third-party application, etc. In some cases the third-party application may be integrated into a CMS; nevertheless, operation of the workflow may be altered by any aspects of any user and/or by any one or more computing system components that are involved in execution of the third-party application. In some cases, a single application may have application-dependent workflow variables, and the values of any individual application-dependent workflow variable can be represented differently—even within the same single application. For example, a first application-dependent workflow variable may be represented at a first time using a first variable type even though the same first application-dependent workflow variable might be represented at a second time using a second variable type that is different from the first variable type.
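A conversion along the lines of Table 2 and Table 3 can be sketched as a small normalization function that accepts whichever representation a third party emits (an integer, a JSON-encoded value, or a hit on a callback URL) and returns the CMS-side string representation. The mapping entries and payload shapes below are assumptions that mirror the tables above; they are not the actual conversion logic of any particular product.

```python
# Hedged sketch of a representation-type conversion in the spirit of
# Tables 2 and 3: normalize a third-party value to the CMS string form.
import json

THIRD_PARTY_TO_CMS = {1: "Closed", 2: "Executed"}

def to_cms_representation(value):
    # Integer representation used by one vendor
    if isinstance(value, int):
        return THIRD_PARTY_TO_CMS[value]
    # JSON-encoded representation used by another vendor, e.g. '"1"' or '{"state": 2}'
    if isinstance(value, str) and value.strip().startswith(('"', '{', '[')):
        decoded = json.loads(value)
        if isinstance(decoded, dict):
            decoded = decoded.get("state")
        return THIRD_PARTY_TO_CMS[int(decoded)]
    # A hit on a callback URL simply signals completion of the requested operation
    if isinstance(value, str) and value.startswith("http"):
        return "Requested Workflow Operation Is Complete"
    raise ValueError(f"unrecognized representation: {value!r}")

print(to_cms_representation(1))                          # -> "Closed"
print(to_cms_representation('"2"'))                      # -> "Executed"
print(to_cms_representation("http://www.CallbackURL"))   # -> completion indication
```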
Asynchronous Interactions

In some cases, such as heretofore discussed, one or more third-party systems might explicitly notify the CMS (e.g., via an asynchronously emitted message to the CMS) when there is a change in the workflow execution state variable value at the third-party system. However, in certain situations and/or in certain time periods, the third-party system might not have notified (or might not have been able to notify) the CMS when there is a change in the workflow execution state. Some alternative means for tracking changes happening by and between the CMS and third-party systems and/or for tracking workflow execution state variable modifications are needed. Further details regarding general approaches to handling multiple third-party workflows are described in U.S. patent application Ser. No. 16/948,829 titled "CROSS-ENTERPRISE WORKFLOW ADAPTATION" filed on Oct. 1, 2020, which is hereby incorporated by reference in its entirety.

FIG.9Edepicts an alternative workflow execution state variable modification technique9E00as implemented in systems that facilitate uses of extensible content object workflows. Such an alternative workflow execution state variable modification technique can be used at any time, but more particularly when a given third-party application is only loosely integrated with the CMS, or when a given third-party application is not integrated at all with the CMS except for publishing a third-party application's API endpoint or a web service that can be called by the CMS. In such cases, the third-party might need to be polled in order to determine when there has been or should be a change in the workflow execution state. Referring now to the example, even though a given third-party application is only loosely integrated with the CMS, it is possible for the CMS to continually interact with the third-party application. In one embodiment, a workflow906having one or more workflow execution state variables is defined (step932). In this embodiment, and to accommodate a third-party application that is only loosely integrated with the CMS, a data structure that defines a mapping between a first representation (e.g., a third-party's representation) of a workflow execution state variable and a second representation (e.g., a CMS representation) of the same workflow execution state variable is defined (step935). Such a mapping between a first representation of a workflow execution state variable and a second representation of a workflow execution state variable might be specifically defined in order to map a polling value (e.g., a workflow execution state variable value that is determined by polling an endpoint of a particular third-party system). At some later time, there may be ongoing activities or conditions at the CMS that cause a particular event over a content object (e.g., event9342) and/or there may be ongoing activities or conditions in the overall environment such that the CMS may need to take action based on expiration of a time period (e.g., timeout event9371). In response to the foregoing situations, the CMS might need to identify a then-current workflow execution state. To do so, the CMS—based on the conditions—may determine (step941) which third-party system, from among many possible third-party systems, is to be polled or queried, etc. The determined third-party system is then accessed (e.g., via interaction1) and a value (e.g., "Value1") is returned in response.
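For a loosely integrated third party that only publishes an endpoint, the polling step might look like the following sketch, triggered by a content object event or a timeout. The endpoint URL, the JSON payload shape, and the polled-value-to-CMS-state mapping are all assumptions made for illustration.

```python
# Minimal polling sketch (assumed endpoint and payload): on a content object
# event or a timeout, the CMS queries the third party's published endpoint and
# maps the returned value into the CMS's own representation.
import json
import urllib.request

POLL_VALUE_TO_CMS_STATE = {"Value1": "Gathering Signatures", "Value2": "Signed"}

def poll_third_party_state(endpoint_url, timeout_seconds=5.0):
    with urllib.request.urlopen(endpoint_url, timeout=timeout_seconds) as response:
        payload = json.load(response)          # e.g., {"workflow_state": "Value1"}
    raw_value = payload["workflow_state"]
    return POLL_VALUE_TO_CMS_STATE.get(raw_value, "Unknown")

# Example usage (requires a reachable endpoint, hence commented out):
# state = poll_third_party_state("https://esign.example.com/api/workflow/123/state")
# if state == "Gathering Signatures":
#     pass  # keep waiting, or cancel on timeout as discussed below
```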
The foregoing depicts merely a general situation where a certain set of conditions at the CMS causes the CMS to spontaneously query a particular third-party system to determine a then-current workflow state at the particular third-party system. However, there are many specific situations where a specific set of conditions at the CMS causes the CMS to spontaneously query a particular third-party system to determine a then-current workflow state at the particular third-party system. Strictly as one such example, it can happen that a workflow being executed at the CMS is interrupted or canceled by a user or administrator. In such a case the then-current workflow state at the particular third-party system might need to be known in order to proceed (e.g., to clean up computing data structures). To illustrate, consider the case where the negotiation of a bid/offer has been completed (e.g., the workflow execution state variable has surpassed "Negotiation Complete") and a third-party system is being engaged to collect e-signatures (e.g., workflow execution state variable value="Gathering Signatures"). It might happen that the bid/offer is withdrawn, thus eliminating the need for "Gathering Signatures". Accordingly, the CMS might (at step938) interact with the determined third-party application to cancel the collection of e-signatures. To further illustrate, consider the case where the third-party system being engaged to collect e-signatures is taking too long (e.g., longer than a pre-defined timeout period). In such a case, the CMS might respond to a timeout event by interacting with the determined third-party application to cancel the collection of e-signatures. In some cases, multiple third-party systems are engaged concurrently. As such, it can happen that the CMS might be interacting with a first third-party system at the same time that the CMS is interacting with a second third-party system. Accordingly, there may be multiple sets of conditions that would cause the CMS to respond to additional events (e.g., event9343and/or timeout event9372). To accommodate the multiple sets of conditions that might cause the CMS to respond to additional events, the CMS, based on a particular set of conditions, may identify (step940) an additional (e.g., second) third-party system. The determined additional third-party system is then accessed (step942) and a value (e.g., "Value2") is returned as a response to interaction2. The CMS might then correlate (e.g., join) the semantics of the workflow execution state variable when equal to "Value1" with the semantics of the workflow execution state variable when equal to "Value2". Then, based on the correlation, the CMS will proceed (step944) to next actions in the workflow.

FIG.9Fdepicts a system9F00as an arrangement of computing modules that are interconnected so as to operate cooperatively to implement certain of the herein-disclosed embodiments. The partitioning of system9F00is merely illustrative and other partitions are possible. As an option, the system9F00may be implemented in the context of the architecture and functionality of the embodiments described herein. Of course, however, the system9F00or any operation therein may be carried out in any desired environment. The system9F00comprises at least one processor and at least one memory, the memory serving to store program instructions corresponding to the operations of the system. As shown, an operation can be implemented in whole or in part using program instructions accessible by a module.
The modules are connected to a communication path9F05, and any operation can communicate with any other operations over communication path9F05. The modules of the system can, individually or in combination, perform method operations within system9F00. Any operations performed within system9F00may be performed in any order unless as may be specified in the claims. The shown embodiment implements a portion of a computer system, presented as system9F00, comprising one or more computer processors to execute a set of program code instructions (module9F10) and modules for accessing memory to hold program code instructions to perform: creating a workflow having an application-dependent workflow variable that is used when processing the workflow, wherein the application-dependent workflow variable has a variable type that is dependent upon a specific application that is operating with the workflow (module9F20); invoking a first application to process the workflow, wherein the application-dependent workflow variable that is used by the first application corresponds to a first variable type when used by the first application (module9F30); and invoking a second application to operate with the same workflow as the first application, wherein the application-dependent workflow variable that is used by the second application corresponds to a second variable type when used by the second application on the same workflow as the first application, and wherein the first variable type for the application-dependent workflow variable is different from the second variable type even when used in the same workflow (module9F40). Variations of the foregoing may include more or fewer of the shown modules. Certain variations may perform more or fewer (or different) steps and/or certain variations may use data elements in more, or in fewer, or in different operations. Still further, some embodiments include variations in the operations performed, and some embodiments include variations of aspects of the data elements used in the operations. FIG.9Gdepicts a system9G00as an arrangement of computing modules that are interconnected so as to operate cooperatively to implement certain of the herein-disclosed embodiments. The partitioning of system9G00is merely illustrative and other partitions are possible. The system9G00serves to convert, from one variable type of a workflow to another variable type of the same workflow based on different aspects of different applications that are processing the workflow. The underlying computing equipment and/or computing environment may change as between the different applications. Multiple different applications can be coordinated by a CMS as follows: At a first time, in a first environment, a CMS may initiate steps for invoking execution of a first portion of a compound workflow wherein at least a first portion of the compound workflow comprises one or more operations of a content management system that interacts with at least one third-party application in a second environment (module9G20). The third-party application in the second environment may assign a first value of at least one workflow execution state variable corresponding to the compound workflow, wherein the first value of the at least one workflow execution state variable is stored in a first value representation type (module9G30). 
Processing by the third-party application may cause invocation of a second portion of the compound workflow in a second environment wherein, in the second environment, a second value of the workflow execution state variable is stored in a second value representation type that is different from the first value representation type (module9G40). When the content management system receives the workflow execution state variable stored in the second value representation type, the content management system converts the second value from the second value representation type to the first value representation type—even when the two different value representation types are used in the same workflow (module9G50). Next steps in the workflow are determined based on the workflow execution state variable that has been converted into the first value representation type. Various mechanisms may be employed to convert from one application-dependent workflow variable representation type to another application-dependent workflow variable representation type. Strictly as examples, aspects of the computing environment, and/or aspects of the computing systems that implement the workflow, and/or aspects of the user and/or his/her user type, and/or aspects of the third-party itself (e.g., enterprise name or type) can be used to determine how to convert from one application-dependent workflow variable representation type or value to another application-dependent workflow variable representation type or value.

System Architecture Overview

Additional System Architecture Examples

FIG.10Adepicts a block diagram of an instance of a computer system10A00suitable for implementing embodiments of the present disclosure. Computer system10A00includes a bus1006or other communication mechanism for communicating information. The bus interconnects subsystems and devices such as a central processing unit (CPU), or a multi-core CPU (e.g., data processor1007), a system memory (e.g., main memory1008, or an area of random access memory (RAM)), a non-volatile storage device or non-volatile storage area (e.g., read-only memory1009), an internal storage device1010or external storage device1013(e.g., magnetic or optical), a data interface1033, a communications interface1014(e.g., PHY, MAC, Ethernet interface, modem, etc.). The aforementioned components are shown within processing element partition1001, however other partitions are possible. Computer system10A00further comprises a display1011(e.g., CRT or LCD), various input devices1012(e.g., keyboard, cursor control), and an external data repository1031. According to an embodiment of the disclosure, computer system10A00performs specific operations by data processor1007executing one or more sequences of one or more program instructions contained in a memory. Such instructions (e.g., program instructions10021, program instructions10022, program instructions10023, etc.) can be contained in or can be read into a storage location or memory from any computer readable/usable storage medium such as a static storage device or a disk drive. The sequences can be organized to be accessed by one or more processing entities configured to execute a single process or configured to execute multiple concurrent processes to perform work.
A processing entity can be hardware-based (e.g., involving one or more cores) or software-based, and/or can be formed using a combination of hardware and software that implements logic, and/or can carry out computations and/or processing steps using one or more processes and/or one or more tasks and/or one or more threads or any combination thereof. According to an embodiment of the disclosure, computer system10A00performs specific networking operations using one or more instances of communications interface1014. Instances of communications interface1014may comprise one or more networking ports that are configurable (e.g., pertaining to speed, protocol, physical layer characteristics, media access characteristics, etc.) and any particular instance of communications interface1014or port thereto can be configured differently from any other particular instance. Portions of a communication protocol can be carried out in whole or in part by any instance of communications interface1014, and data (e.g., packets, data structures, bit fields, etc.) can be positioned in storage locations within communications interface1014, or within system memory, and such data can be accessed (e.g., using random access addressing, or using direct memory access DMA, etc.) by devices such as data processor1007. Communications link1015can be configured to transmit (e.g., send, receive, signal, etc.) any types of communications packets (e.g., communication packet10381, communication packet1038N) comprising any organization of data items. The data items can comprise a payload data area1037, a destination address1036(e.g., a destination IP address), a source address1035(e.g., a source IP address), and can include various encodings or formatting of bit fields to populate packet characteristics1034. In some cases, the packet characteristics include a version identifier, a packet or payload length, a traffic class, a flow label, etc. In some cases, payload data area1037comprises a data structure that is encoded and/or formatted to fit into byte or word boundaries of the packet. In some embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement aspects of the disclosure. Thus, embodiments of the disclosure are not limited to any specific combination of hardware circuitry and/or software. In embodiments, the term “logic” shall mean any combination of software or hardware that is used to implement all or part of the disclosure. The term “computer readable medium” or “computer usable medium” as used herein refers to any medium that participates in providing instructions to data processor1007for execution. Such a medium may take many forms including, but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks such as disk drives or tape drives. Volatile media includes dynamic memory such as RAM. Common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, or any other magnetic medium; CD-ROM or any other optical medium; punch cards, paper tape, or any other physical medium with patterns of holes; RAM, PROM, EPROM, FLASH-EPROM, or any other memory chip or cartridge, or any other non-transitory computer readable medium. 
Such data can be stored, for example, in any form of external data repository1031, which in turn can be formatted into any one or more storage areas, and which can comprise parameterized storage1039accessible by a key (e.g., filename, table name, block address, offset address, etc.). Execution of the sequences of instructions to practice certain embodiments of the disclosure is performed by a single instance of a computer system10A00. According to certain embodiments of the disclosure, two or more instances of computer system10A00coupled by a communications link1015(e.g., LAN, public switched telephone network, or wireless network) may perform the sequence of instructions required to practice embodiments of the disclosure using two or more instances of components of computer system10A00. Computer system10A00may transmit and receive messages such as data and/or instructions organized into a data structure (e.g., communications packets). The data structure can include program instructions (e.g., application code1003), communicated through communications link1015and communications interface1014. Received program instructions may be executed by data processor1007as they are received and/or stored in the shown storage device or in or upon any other non-volatile storage for later execution. Computer system10A00may communicate through a data interface1033to a database1032on an external data repository1031. Data items in a database can be accessed using a primary key (e.g., a relational database primary key). Processing element partition1001is merely one sample partition. Other partitions can include multiple data processors, and/or multiple communications interfaces, and/or multiple storage devices, etc. within a partition. For example, a partition can bound a multi-core processor (e.g., possibly including embedded or co-located memory), or a partition can bound a computing cluster having a plurality of computing elements, any of which computing elements are connected directly or indirectly to a communications link. A first partition can be configured to communicate to a second partition. A particular first partition and particular second partition can be congruent (e.g., in a processing element array) or can be different (e.g., comprising disjoint sets of components). A module as used herein can be implemented using any mix of any portions of the system memory and any extent of hard-wired circuitry including hard-wired circuitry embodied as a data processor1007. Some embodiments include one or more special-purpose hardware components (e.g., power control, logic, sensors, transducers, etc.). Some embodiments of a module include instructions that are stored in a memory for execution so as to facilitate operational and/or performance characteristics pertaining to accessing a dynamically extensible set of content object workflows. A module may include one or more state machines and/or combinational logic used to implement or facilitate the operational and/or performance characteristics pertaining to accessing a dynamically extensible set of content object workflows. Various implementations of database1032comprise storage media organized to hold a series of records or files such that individual records or files are accessed using a name or key (e.g., a primary key or a combination of keys and/or query clauses).
Such files or records can be organized into one or more data structures (e.g., data structures used to implement or facilitate aspects of accessing a dynamically extensible set of content object workflows). Such files, records, or data structures can be brought into and/or stored in volatile or non-volatile memory. More specifically, the occurrence and organization of the foregoing files, records, and data structures improve the way that the computer stores and retrieves data in memory, for example, to improve the way data is accessed when the computer is performing operations that pertain to accessing a dynamically extensible set of content object workflows, and/or for improving the way data is manipulated when performing computerized operations pertaining to accessing a dynamically extensible set of applications through a content management system to perform workflows over content objects managed by the content management system. FIG.10Bdepicts a block diagram of an instance of a cloud-based environment10B00. Such a cloud-based environment supports access to workspaces through the execution of workspace access code (e.g., workspace access code10420, workspace access code10421, and workspace access code10422). Workspace access code can be executed on any of access devices1052(e.g., laptop device10524, workstation device10525, IP phone device10523, tablet device10522, smart phone device10521, etc.), and can be configured to access any type of object. Strictly as examples, such objects can be folders or directories or can be files of any filetype. A group of users can form a collaborator group1058, and a collaborator group can be composed of any types or roles of users. For example, and as shown, a collaborator group can comprise a user collaborator, an administrator collaborator, a creator collaborator, etc. Any user can use any one or more of the access devices, and such access devices can be operated concurrently to provide multiple concurrent sessions and/or other techniques to access workspaces through the workspace access code. A portion of workspace access code can reside in and be executed on any access device. Any portion of the workspace access code can reside in and be executed on any computing platform1051, including in a middleware setting. As shown, a portion of the workspace access code resides in and can be executed on one or more processing elements (e.g., processing element10051). The workspace access code can interface with storage devices such as networked storage1055. Storage of workspaces and/or any constituent files or objects, and/or any other code or scripts or data can be stored in any one or more storage partitions (e.g., storage partition10041). In some environments, a processing element includes forms of storage, such as RAM and/or ROM and/or FLASH, and/or other forms of volatile and non-volatile storage. A stored workspace can be populated via an upload (e.g., an upload from an access device to a processing element over an upload network path1057). A stored workspace can be delivered to a particular user and/or shared with other particular users via a download (e.g., a download from a processing element to an access device over a download network path1059). In the foregoing specification, the disclosure has been described with reference to specific embodiments thereof. It will however be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure. 
For example, the above-described process flows are described with reference to a particular ordering of process actions. However, the ordering of many of the described process actions may be changed without affecting the scope or operation of the disclosure. The specification and drawings are to be regarded in an illustrative sense rather than in a restrictive sense. | 102,868 |
11861030 | The figures are not exhaustive and do not limit the present disclosure to the precise form disclosed. DETAILED DESCRIPTION Families and other groups enjoy collecting photographs, videos, documents, and other memorabilia. The current trend is for these items to take the form of digital assets, which are far easier to copy and share than their physical counterparts. But current sharing techniques suffer several drawbacks. One way is to transmit the digital asset electronically from person to person. Another is to simply display the digital asset on an electronic device, and then share the device itself, for example by passing a smartphone from person to person. These methods are inefficient, and may fail to preserve access to the digital assets in the future. Newer sharing methods employ cloud accounts or social media accounts. But cloud accounts are generally password-protected. And while a social media user may wish to share some digital assets with some other users, the social media user may not wish to share all of the social media associated with the account. Furthermore, should the user become incapacitated, access to the digital assets may be lost. The disclosed technologies provide a technology platform that provides secure group-based access to sets of digital assets, which is referred to herein as a “secure access system.” The system may allow a user to upload digital assets to the system for secure access by other users. The system may also allow a user to remove digital assets from the system. The system may receive a request to provide, to a group of users, secure access to a set of digital assets. For example, a family member may request secure access be established for members of the family to access a set of digital photos, which may be referred to as a “family album.” Continuing this example, in response to the request, the system may generate a secure credential, associate that secure credential with the family album, and distribute the secure credential to the members of the family. Later, the system may receive a request to view the family album. The request may include the secure credential. Upon verifying the secure credential, the system may provide views of the digital assets to the requester. In some cases, it may become desirable to enable other individuals or groups to view the digital assets. For example, one of the family members may marry, and may wish the spouse's family to have access to the family album. With prior techniques it may be necessary to create new user accounts, share passwords, and employ similar methods to provide this access. Embodiments of the disclosed technology may provide this access in a simpler manner. The system may allow a member of the family to invite a new individual or group to be linked to the family. Upon acceptance of the invitation, the system may provide the secure credential to the identified individual or group automatically without user intervention. The individual or group members may now access the family album in the same way as the family members. In some embodiments, this capability is extended to the new members. That is the user or members of the user group that has been linked to the family may invite another new individual or group to be linked to the family. Some embodiments employ virtual reality technology to transform the family album into a family museum. 
In these embodiments, the digital assets are represented by virtual objects in a virtual structure in a virtual three-dimensional environment such as a virtual museum, which may be referred to herein as a “family museum.” For example, digital photos may be represented as framed pictures hanging on the walls of the interior of the museum. The system may provide virtual access to the family museum as before, in response to receiving a request and verifying the secure credential. In some embodiments, the virtual museum may have multiple wings, each with separate access control according to respective secure credentials. In these embodiments, a family may have access to the entire museum, while others may have access to only a single wing. Other arrangements are contemplated. FIG.1illustrates a system100for providing secure group-based access to sets of digital assets according to some embodiments of the disclosed technology. The system100may include a secure access system102, which may be implemented as one or more software packages executing on one or more server computers104. In some embodiments, the server104may implement a blockchain node108. In some embodiments, the system may access blockchain nodes implemented elsewhere. The system may include one or more databases106. The databases106may store digital assets, secure credentials, family museum layouts, user information, and similar data. Users112A-N may access the secure access system102with user electronic devices122A-N over a network130. Each client user electronic device122may be implemented as a desktop computer, laptop computer, smart phone, smart glasses, embedded computers and displays, and similar electronic devices. In some embodiments, the system may be operable to generate non-fungible tokens (NFTs) for the digital assets, and to record these NFTs on a blockchain. In some embodiments, the system may be operable to generate NFTs for the albums and museums, and to record these NFTs on a blockchain. In some embodiments, the digital assets may be stored in a decentralized manner that is managed by a blockchain. In some embodiments, the system may encrypt the digital assets for additional security. FIG.2is a flowchart illustrating a process200for providing secure group-based access to sets of digital assets, according to some embodiments of the disclosed technology. For example, the process200may be employed in the system100ofFIG.1. The elements of the process200are presented in one arrangement. However, it should be understood that one or more elements of the process may be performed in a different order, in parallel, omitted entirely, and the like. Furthermore, the process200may include other elements in addition to those presented. For example, the process200may include error-handling functions for exceptions. The process200may include providing a first user interface for display, at202. The first user interface may include a first active display element operable to create a user group of users. The first user interface may include a second active display element operable to select a set of digital assets. For example, this operation may enable a user to create a user group of family members, and to select a collection of photographs to share with the group as a family album. 
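The overall flow that process 200 walks through (generate a credential for a selected set of assets, hand it to the group, verify it on a view request, and automatically extend it when a new user is linked) can be pictured with the following minimal sketch. The class and method names and the in-memory storage are assumptions made for illustration; a real deployment would persist this state, for example in the databases 106.

```python
# Hedged sketch of the secure group-based access flow: generate a credential,
# distribute it to group members, verify it on view requests, and extend it
# automatically when a new user is linked. Not the actual system's design.
import hmac
import secrets

class SecureAccess:
    def __init__(self):
        self.credential_for_set = {}    # asset_set_id -> credential
        self.credentials_of_user = {}   # user_id -> set of held credentials

    def create_group_access(self, asset_set_id, user_ids):
        credential = secrets.token_urlsafe(32)
        self.credential_for_set[asset_set_id] = credential
        for user in user_ids:
            self.credentials_of_user.setdefault(user, set()).add(credential)
        return credential

    def verify(self, asset_set_id, presented_credential):
        expected = self.credential_for_set.get(asset_set_id, "")
        return hmac.compare_digest(expected, presented_credential)

    def link_user(self, asset_set_id, new_user_id):
        # Linking shares the credential automatically, without user intervention.
        credential = self.credential_for_set[asset_set_id]
        self.credentials_of_user.setdefault(new_user_id, set()).add(credential)

system = SecureAccess()
cred = system.create_group_access("family_album", ["alice", "bob"])
print(system.verify("family_album", cred))   # True -> provide the album view
system.link_user("family_album", "carol")    # e.g., a newly linked spouse's family member
```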
Referring again toFIG.2, the process200may include, responsive to operation of the first and second active display elements of the first user interface: generating a secure credential, associating the secure credential with the set of digital assets, and providing the secure credential to the users in the user group, at204. This operation may provide the family members with secure access to the family album. Referring toFIG.1, the secure access system102may store the set of digital assets in association with the secure credential in databases106. Referring again toFIG.2, the process200may include providing a second user interface for display, at206. The second user interface may include a third active display element operable to request to view the set of digital assets. The second user interface may include a fourth active display element operable to provide the secure credential. This operation may allow a family member to request access to the family album by providing the secure credential generated at204. The process200may include verifying the secure credential responsive to operation of the third and fourth active display elements, at208. For example, referring to FIG.1, the secure access system102may compare the secure credential provided by the requesting user to the secure credential stored in the databases106. Referring again toFIG.2, the process200may include providing a third user interface for display responsive to successfully verifying the secure credential, at210. The third user interface may include a view of the set of digital assets. For example, the user interface may include the set of family photographs. The process200may include providing a fourth user interface for display, at212. The fourth user interface may comprise a fifth active display element operable to link a new user to the user group of users. This operation may allow a family member to link another user or group of users with the family for the purpose of accessing the family photographs. For example, when a member of the family marries, that member may link the family to the spouse and the spouse's family. In some cases, the system may require a user to accept an invitation before allowing the user to be linked. The process200may include, responsive to operation of the fifth active display element, sending the secure credential to the new user automatically without user intervention, at214. This operation may automatically provide the user group's secure credential to the new user without user intervention responsive to the linking. Continuing the example, the system may provide the secure credential to the linked spouse and the spouse's family. In contrast to current systems, the secure credential may be provided without any further actions by the family members, the spouse, or the spouse's family. In some embodiments, the user interfaces may be two-dimensional.FIGS.3-11show example user interfaces according to these embodiments.FIG.3illustrates a "my museum" user interface300according to some embodiments of the disclosed technology. Each user may have a museum, which may have one or more sets of digital assets. Each set may be referred to as a "gallery" or a "wing" of a museum. The "my museum" user interface300includes multiple active display elements. The active display elements include display elements302for selecting the galleries. The active display elements include an active display element304operable to upload additional digital assets to the system.
For example, the active display element304may be operated to upload photos and videos. The display elements include a display element306for selecting museums of other family members. FIG.4illustrates a “gallery” user interface400according to some embodiments of the disclosed technology. The “gallery” user interface400includes multiple active display elements. In this example, the active display elements include active display elements402in the form of thumbnails of photos and videos that can be selected for viewing. The active display elements include an active display element404operable to upload additional digital assets to the gallery. The active display elements may include an active display element406operable to return to the “my museum” user interface. The active display elements may include an active display element408operable to change the grid layout for the thumbnails. FIG.5illustrates a “recent activity” user interface500according to some embodiments of the disclosed technology. The “recent activity” user interface500includes multiple active display elements. In this example, the active display elements include active display elements502that indicate recent activity including messages sent, new connections between users, the creation of new albums, and milestones. The active display elements may include an active display element504operable to upload additional digital assets to the system. In some embodiments, users may not be allowed to post to the “recent activity” user interface500of other users. Instead, the user interface500reflects the activity of users. A user may select or exclude groups of users from which activity should be posted to the user interface500. Users, including the owner, may be allowed to comment on activity posted to the user interface500. FIG.6illustrates a “comments” user interface600according to some embodiments of the disclosed technology. The “comments” user interface600includes multiple active display elements. In this example, the active display elements include a display area602for displaying the subject of the comments, a display area604for displaying the comments, and a keyboard606for entering new comments. FIG.7illustrates a “family tree” user interface700according to some embodiments of the disclosed technology. The “family tree” user interface700includes multiple active display elements. In this example, the active display elements include active display elements702representing individuals. Each of these active display elements may include a photograph, a name, and a relationship designator such as “brother” or “wife”. Each of these active display elements may be operable to open a “family tree popup” user interface for the respective user, or to add a new user or user group. Groups of users such as families may be indicated by large circles704encompassing two or more users. In some embodiments, each circle704may represent an immediate family, and visual features of the lines radiating from the central nodes in the circles may represent the relation of the connected persons within the immediate family. For example, thick lines may represent parents, and thin lines may represent children and siblings. Broken and colored lines may represent statuses such as divorce, adoption, and death. The lines may be implemented as active display elements operable by a user to change the relationship or status, or to remove or “prune” a user or an entire branch from the family tree. 
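The family-tree relationships, statuses, and “prune” operation described above lend themselves to a simple graph representation. The following minimal Python sketch is illustrative only; the class and field names are hypothetical and do not correspond to elements of the disclosed embodiments.

```python
from dataclasses import dataclass, field
from enum import Enum

class Relation(Enum):
    PARENT = "parent"
    CHILD = "child"
    SIBLING = "sibling"
    SPOUSE = "spouse"

class Status(Enum):
    ACTIVE = "active"
    DIVORCED = "divorced"
    ADOPTED = "adopted"
    DECEASED = "deceased"

@dataclass
class Person:
    user_id: str
    name: str
    photo_url: str = ""

@dataclass
class Edge:
    """A line in the family tree; its visual style (thick, thin, broken,
    colored) could be derived from the relation and status fields."""
    a: str              # user_id of one endpoint
    b: str              # user_id of the other endpoint
    relation: Relation
    status: Status = Status.ACTIVE

@dataclass
class FamilyTree:
    people: dict[str, Person] = field(default_factory=dict)
    edges: list[Edge] = field(default_factory=list)

    def add_person(self, person: Person) -> None:
        self.people[person.user_id] = person

    def link(self, a: str, b: str, relation: Relation,
             status: Status = Status.ACTIVE) -> None:
        self.edges.append(Edge(a, b, relation, status))

    def prune(self, user_id: str) -> None:
        """Remove a person and every edge touching them, as the 'prune' control might."""
        self.people.pop(user_id, None)
        self.edges = [e for e in self.edges if user_id not in (e.a, e.b)]
```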
In some embodiments, the system updates the “family tree” user interface700automatically upon the happening of a predetermined event. For example, when a new connected person is added, the system may automatically update the “family tree” user interface700to include that person. FIG.8illustrates a “family tree popup” user interface800according to some embodiments of the disclosed technology. The “family tree popup” user interface800relates to a particular user, and includes multiple active display elements. One active display element802is operable to view the user's museum. Another active display element804is operable to view the user's profile. Other active display elements806are operable to add or remove group members such as family members. FIG.9illustrates a floor plan900for a “family museum” according to some embodiments of the disclosed technology. The family museum may be implemented as a virtual structure for browsing through virtual reality technology. Virtual objects within the virtual structure may not be visible outside the virtual structure. Access to the family museum may be restricted at the main entrance. The family museum may include a lobby that is open to anyone with access to the family museum. The family museum may include one or more wings, each with entrances that further restrict access. For example, the owner of the family museum may reserve wing A for family only, wing B for friends only, and wing C for colleagues only, by associating different secure credentials with each wing. The lobby and wings may house virtual objects representing digital assets. For example, a family portrait902may hang on the wall of wing A, while a statue904may reside in the lobby. The owner of the family museum may reconfigure it at will, for example to add, reconfigure, or remove wings; to add, remove, and move virtual objects; and to change access permissions for the wings. In some embodiments, the floor plan900may be implemented as a user interface having active display elements operable to perform these functions. In some embodiments, the owner may assign a role and permissions for modifying the family museum to another user, also referred to herein as a “moderator”. In some embodiments, the owner or moderator may associate a permission with a virtual object or wing that identifies at least one user and an action the user is permitted to perform on the virtual object or wing. For example, a user may be permitted to crop a particular photograph. As another example, only family members may be permitted to download the digital assets represented by the virtual objects. Other permissions may include allowing screenshots of the virtual objects. In some embodiments, users may be allowed to submit a reaction to one of the virtual objects. The system may associate the reaction with the virtual object, and may allow users to view the reaction, either automatically or by operating an active display element of a user interface. In some embodiments, the system may include a feature to automatically remove duplicates of the digital assets. For example, a favorite wedding photo may be uploaded by multiple members of the family. The system may automatically remove all but one copy. Alternatively, the system may inform the owner of the museum of the duplicates, allowing the owner to invoke the process of removing the duplicates. In some embodiments, the system may automatically tag uploaded digital assets. 
For example, the system may employ facial recognition technology to identify individuals in a photo, and may tag those individuals. In some embodiments, the system may notify individuals who have been tagged. As another example, the system may identify objects in photos and tag the photos according to the objects. For example, the system may tag a photo including a bride and groom as a wedding photo. As noted above, users may visit a family museum using virtual reality technology to obtain a three-dimensional experience. While visiting the museum, a user may be represented in the museum by an avatar, and may view avatars of other visitors.FIG.10depicts a virtual reality view of an example family museum according to some embodiments of the disclosed technology. FIG.11depicts a block diagram of an example computer system1100in which embodiments described herein may be implemented. The computer system1100includes a bus1102or other communication mechanism for communicating information, one or more hardware processors1104coupled with bus1102for processing information. Hardware processor(s)1104may be, for example, one or more general purpose microprocessors. The computer system1100also includes a main memory1106, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to bus1102for storing information and instructions to be executed by processor1104. Main memory1106also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor1104. Such instructions, when stored in storage media accessible to processor1104, render computer system1100into a special-purpose machine that is customized to perform the operations specified in the instructions. The computer system1100further includes a read only memory (ROM)1108or other static storage device coupled to bus1102for storing static information and instructions for processor1104. A storage device1110, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus1102for storing information and instructions. The computer system1100may be coupled via bus1102to a display1112, such as a liquid crystal display (LCD) (or touch screen), for displaying information to a computer user. An input device1114, including alphanumeric and other keys, is coupled to bus1102for communicating information and command selections to processor1104. Another type of user input device is cursor control1116, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor1104and for controlling cursor movement on display1112. In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor. The computing system1100may include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the computing device(s). This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. 
In general, the word “component,” “engine,” “system,” “database,” “data store,” and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, C or C++. A software component may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts. Software components configured for execution on computing devices may be provided or encoded on a computer readable or machine readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware components may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors. The computer system1100may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system1100to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system1100in response to processor(s)1104executing one or more sequences of one or more instructions contained in main memory1106. Such instructions may be read into main memory1106from another storage medium, such as storage device1110. Execution of the sequences of instructions contained in main memory1106causes processor(s)1104to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “non-transitory media,” and similar terms, as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device1110. Volatile media includes dynamic memory, such as main memory1106. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same. Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. 
For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus1102. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. The computer system1100also includes a communication interface1118coupled to bus1102. Network interface1118provides a two-way data communication coupling to one or more network links that are connected to one or more local networks. For example, communication interface1118may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, network interface1118may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, network interface1118sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. A network link typically provides data communication through one or more networks to other data devices. For example, a network link may provide a connection through local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet.” Local network and Internet both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link and through communication interface1118, which carry the digital data to and from computer system1100, are example forms of transmission media. The computer system1100can send messages and receive data, including program code, through the network(s), network link and communication interface1118. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the communication interface1118. The received code may be executed by processor1104as it is received, and/or stored in storage device1110, or other non-volatile storage for later execution. Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware. For example, a method may be referred to as a “computer-implemented” method. The one or more computer systems or computer processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described above may be used independently of one another, or may be combined in various ways. Different combinations and sub-combinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations. 
The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate, or may be performed in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The performance of certain of the operations or processes may be distributed among computer systems or computer processors, not only residing within a single machine, but deployed across a number of machines. As used herein, a circuit might be implemented utilizing any form of hardware, or a combination of hardware and software. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a circuit. In implementation, the various circuits described herein might be implemented as discrete circuits or the functions and features described can be shared in part or in total among one or more circuits. Even though various features or elements of functionality may be individually described or claimed as separate circuits, these features and functionality can be shared among one or more common circuits, and such description shall not require or imply that separate circuits are required to implement such features or functionality. Where a circuit is implemented in whole or in part using software, such software can be implemented to operate with a computing or processing system capable of carrying out the functionality described with respect thereto, such as computer system1100. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, the description of resources, operations, or structures in the singular shall not be read to exclude the plural. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. Adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known,” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. The foregoing description of the present disclosure has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. The breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments. Many modifications and variations will be apparent to the practitioner skilled in the art. The modifications and variations include any relevant combination of the disclosed features. 
The embodiments were chosen and described in order to best explain the principles of the disclosure and its practical application, thereby enabling others skilled in the art to understand the disclosure for various embodiments and with various modifications that are suited to the particular use contemplated. It is intended that the scope of the disclosure be defined by the following claims and their equivalents.
11861031 | DETAILED DESCRIPTION In the following description of the various embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration, various embodiments of the disclosure that may be practiced. It is to be understood that other embodiments may be utilized. As will be appreciated by one of skill in the art upon reading the following disclosure, various aspects described herein may be embodied as a method, a computer system, or a computer program product. Accordingly, those aspects may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, such aspects may take the form of a computer program product stored by one or more computer-readable storage media having computer-readable program code, or instructions, embodied in or on the storage media. Any suitable computer readable storage media may be utilized, including hard disks, CD-ROMs, optical storage devices, magnetic storage devices, and/or any combination thereof. In addition, various signals representing data or events as described herein may be transferred between a source and a destination in the form of electromagnetic waves traveling through signal-conducting media such as metal wires, optical fibers, and/or wireless transmission media (e.g., air and/or space). As discussed, performing background verification for an incoming individual that seeks membership to an organization (e.g., an applicant for a job position, a candidate for hiring, etc.) can be a time-intensive and costly process. Furthermore, individuals may be burdened with maintaining documentation concerning the individuals' credentials and other relevant data throughout their academic, vocational, and/or employment history to be able to present such documents to each new organization. A distributed ledger interface server, which interacts with a distributed ledger that maintains and tracks such data throughout the individual's academic, vocational, and/or employment history can help to ease the burden faced by organizations and their individual members (e.g., employees, students, trainees, interns, externs, etc.). FIG.1is a block diagram of an example computing device101in an operating environment100in which one or more aspects described herein may be implemented. The computing device101or one or more components of computing device101may be utilized by any device, computing system, or server interacting with a distributed ledger, or with a distributed ledger interface system, to perform the functions described herein. For example, the computing device101may be utilized by an operator of one or more servers that facilitate the use of a distributed ledger to perform a background verification of an individual, e.g., upon or prior to hiring for employment. The one or more servers, which may be referred to as “distributed ledger interface system,” “distributed ledger interface server,” “interface system,” or “interface server” for simplicity, may be an interface for various computing systems and devices associated with organizations or individuals to interact with a distributed ledger (e.g., a blockchain) for one or more functions described herein. 
The one or more functions may include, for example, the entering of information as it pertains to an individual that may be a member of an organization (e.g., an employee) or a candidate for a position at an organization, the verification of the information, the encryption or decryption of the information, results of a secondary verification of the information (e.g., based on external sources), etc. The computing device101may have a processor103for controlling overall operation of the device101and its associated components, including RAM105, ROM107, input/output module109, and memory115. As explained above, the computing device101, along with one or more additional devices (e.g., terminals141,151,161, and171, and security and integration hardware180), may correspond to any of multiple systems or devices described herein, such as personal mobile or desktop devices associated with an individual (e.g., “user devices”); servers, computing systems, and devices associated with an organization (“organization computing device” or “organization computing system”); servers, computing systems, and devices associated with various institutions that are sources of individual-specific data that are pertinent to background verification for an individual (e.g., “data value source systems”); the distributed ledger interface system; internal data sources; external data sources; and other various devices used to facilitate background verification through a distributed ledger system. These various computing systems may be configured individually or in combination, as described herein, for receiving signals and/or transmissions from one or more computing devices, the signals or transmissions including data values for various aspects of an individual's background (e.g., “background aspects”), and metadata indicating the veracity or verification results for those data values. Input/Output (I/O)109may include a microphone, keypad, touch screen, and/or stylus through which a user of the computing device101may provide input, and may also include, or be communicatively coupled to, one or more of a speaker for providing audio output and a video display device (e.g., as in display108) for providing textual, audiovisual and/or graphical output. Also or alternatively, the display device may be separate from the input/output module109(e.g., as in display108). Furthermore, display108may be used by a user of the computing device101to view information stored in the distributed ledger125. The distributed ledger125may be a replicated version, an instance, or a view of a decentralized database platform storing immutable information (e.g., data values pertaining to background aspects) in blocks cryptographically linked to one another, e.g., via blockchain technology. Furthermore, the distributed ledger125may comprise various data structures pertaining to one or more individuals (e.g., individual-specific data structures) as will be described herein. The update interface123may comprise one or more database management tools, applications, plug-ins, and/or code used to update databases (e.g., by creating, replacing, adding, and/or deleting data). For example, the update interface123may be used to enter information into the distributed ledger125. Software may be stored within memory115and/or storage to provide instructions to processor103for enabling device101to perform various actions. 
For example, memory115may store software used by the device101, such as an operating system117, application programs119, and data pertaining to an individual member of an organization or an individual candidate applying for a position or role in an organization (e.g., member/candidate data120). The various hardware memory units in memory115may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Certain devices and systems within computing systems may have minimum hardware requirements in order to support sufficient storage capacity, processing capacity, analysis capacity, network communication, etc. For instance, in some embodiments, one or more nonvolatile hardware memory units having a minimum size (e.g., at least 1 gigabyte (GB), 2 GB, 5 GB, etc.), and/or one or more volatile hardware memory units having a minimum size (e.g., 256 megabytes (MB), 512 MB, 1 GB, etc.) may be used in a device101(e.g., a distributed ledger interface system, a user device, an organization computing device, a data value source system, etc.), in order to receive and analyze the signals, transmissions, etc., including receiving signals and/or transmissions from one or more computing devices. The signals or transmissions may include individual-specific data values for various aspects of an individual's background that could be used to assess the individual's candidacy for a role or position within an organization and which can be verified according to systems and methods presented herein. The signals or transmissions may further include ratings and other indicia to measure the veracity, reliability, or accuracy of specific data values for the background aspects, based on verification results. The signals or transmissions may further include queries or requests to data value source systems to verify one or more data values entered in the distributed ledger or self-reported by the individual, and may further include responses to the queries or requests. Furthermore, the signals and transmissions may include requests to participating devices at various nodes of a distributed ledger to approve data to be entered into the distributed ledger, responses to these requests, and encryption or decryption results. Memory115also may include one or more physical persistent memory devices and/or one or more non-persistent memory devices. Memory115may include, but is not limited to, random access memory (RAM)105, read only memory (ROM)107, electronically erasable programmable read only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by processor103. Processor103may include a single central processing unit (CPU), which may be a single-core or multi-core processor (e.g., dual-core, quad-core, etc.), or may include multiple CPUs. Processor(s)103may have various bit sizes (e.g., 16-bit, 32-bit, 64-bit, 96-bit, 128-bit, etc.) and various processor speeds (ranging from 100 MHz to 5 GHz or faster). 
Processor(s)103and its associated components may allow the computing device101to execute a series of computer-readable instructions, for example, receiving signals and/or transmissions described herein from one or more computing devices. The computing device101may operate in a networked environment100supporting connections to one or more remote computers, such as terminals141,151,161, and171. Such terminals may be user devices belonging to individual members of an organization or individual candidates who seek to join an organization and may undergo background verification (e.g., user device171). The user devices may include, for example, mobile communication devices, mobile phones, tablet computers, touch screen display devices, etc. The terminals may further include organization computing systems141, e.g., for their respective organizations. In some aspects, organization computing systems may be used by the organizations to review or verify an individual member's or an individual candidate's self-reported profile (e.g., resume, curriculum vitae, transcripts, legal information, online profile, etc.), or information obtained from data value source systems, and/or information recorded in the distributed ledger. Furthermore, the organization computing systems141may be used to record additional information pertaining to the individual into the distributed ledger, e.g., via the distributed ledger interface system, or to perform verification of information already recorded in the distributed ledger. The terminals may further include data value source systems161, such as computing systems or servers of educational institutions that issue transcripts and other academic assessments for individuals, governmental or municipal offices that hold legal information (e.g., citizenship information, civil and criminal records, etc.) pertaining to the individual, road safety offices, credit rating agencies, and the like. A data value source system161may act as a source for data values for various background aspects of an individual member of an organization or an individual candidate for a role or position in an organization. A terminal may include the distributed ledger interface server151, which may facilitate the use of a distributed ledger to perform background verification for an individual. The network connections depicted inFIG.1include a local area network (LAN)132, a wide area network (WAN)130, and a wireless telecommunications network133, but may also include other networks. When used in a LAN networking environment, the computing device101may be connected to the LAN132through a network interface or adapter129. When used in a WAN networking environment, the device101may include a modem127or other means for establishing communications over the WAN130, such as network131(e.g., the Internet). When used in a wireless telecommunications network133, the device101may include one or more transceivers, digital signal processors, and additional circuitry and software for communicating with terminals141,151,161, and171via one or more network devices135(e.g., base transceiver stations) in the wireless network133. Also illustrated inFIG.1is a security and integration layer180, through which communications are sent and managed between the device101and the remote devices (141,151,161, and171) and remote networks (130-133). 
The security and integration layer180may comprise one or more separate computing devices, such as web servers, authentication servers, and/or various networking components (e.g., firewalls, routers, gateways, load balancers, etc.), having some or all of the elements described above with respect to the computing device101. As an example, a security and integration layer180of a computing device101may comprise a set of web application servers configured to use secure protocols and to insulate the device101from external devices141,151,161, and171. In some cases, the security and integration layer180may correspond to a set of dedicated hardware and/or software operating at the same physical location and under the control of the same entities as device101. For example, layer180may correspond to one or more dedicated web servers and network hardware in a distributed ledger interface system, in an information datacenter, or in a cloud infrastructure supporting cloud-based functions for the distributed ledger. In other examples, the security and integration layer180may correspond to separate hardware and software components which may be operated at a separate physical location and/or by a separate entity. As discussed below, the data transferred to and from various devices in the operating environment100may include secure and sensitive data obtained with permission of a user, such as confidential individual-specific data values (e.g., credentials, remarks, legal records, etc.) for various background aspects (e.g., previous employment, academic and/or training aspects, legal information (e.g., criminal and civil legal records, etc.), online profile), confidential self-reported data from individual members of an organization or candidates for a position in an organization, and confidential data verifying and/or questioning the veracity of other confidential data. Therefore, it may be desirable to protect transmissions of such data by using secure network protocols and encryption, and also to protect the integrity of the data when stored on the various devices within a system, such as the computing devices in the system100, by using the security and integration layer180to authenticate users, organizations, and data value source systems and restrict access to unknown or unauthorized users. For example, the security and integration layer180may be used to restrict information to participant devices of nodes in which the distributed ledger is shared. In various implementations, security and integration layer180may provide, for example, a file-based integration scheme or a service-based integration scheme for transmitting data between the various devices in an electronic display system100. Data may be transmitted through the security and integration layer180, using various network communication protocols. Secure data transmission protocols and/or encryption may be used in file transfers to protect the integrity of the data, for example, File Transfer Protocol (FTP), Secure File Transfer Protocol (SFTP), and/or Pretty Good Privacy (PGP) encryption. In other examples, one or more web services may be implemented within the various devices101in the system100and/or the security and integration layer180. The web services may be accessed by authorized external devices and users to support input, extraction, and manipulation of the data (e.g., into distributed ledger125) by the various devices101in the system100. 
In still other examples, the security and integration layer180may include specialized hardware for providing secure web services. For example, secure network appliances in the security and integration layer180may include built-in features such as hardware-accelerated SSL and HTTPS, WS-Security, and firewalls. Such specialized hardware may be installed and configured in the security and integration layer180in front of the web servers, so that any external devices may communicate directly with the specialized hardware. Although not shown inFIG.1, various elements within memory115or other components in system100, may include one or more caches, for example, CPU caches used by the processing unit103, page caches used by the operating system117, disk caches of a hard drive, and/or database caches used to cache content from database121. For embodiments including a CPU cache, the CPU cache may be used by one or more processors in the processing unit103to reduce memory latency and access time. In such examples, a processor103may retrieve data from or write data to the CPU cache rather than reading/writing to memory115, which may improve the speed of these operations. In some examples, a database cache may be created in which certain data from a database121is cached in a separate smaller database on an application server separate from the database server. For instance, in a multi-tiered application, a database cache on an application server can reduce data retrieval and data manipulation time by not needing to communicate over a network with a back-end database server. It will be appreciated that the network connections shown are illustrative and other means of establishing a communications link between the computers may be used. The existence of any of various network protocols such as TCP/IP, Ethernet, FTP, HTTP and the like, and of various wireless communication technologies such as GSM, CDMA, WiFi, and WiMAX, is presumed, and the various computing devices in the system components described herein may be configured to communicate using any of these network protocols or technologies. FIG.2illustrates a schematic diagram showing an example network environment and computing systems (e.g., devices, servers, application program interfaces (APIs), etc.) that may be used to implement aspects of the disclosure. At a high level the network environment200may comprise a user device associated with an individual (e.g., user device202); a plurality of computing systems, servers, or devices corresponding to a plurality of organizations (e.g., organization computing system(s)220); one or more computing systems, servers or devices corresponding to institutions or sources that may function as the source for individual-specific data for background aspects of an individual (e.g., data value source system(s)240); and a server, device, or computing system for facilitating the interaction between the various computing systems and a distributed ledger for performing various functions described herein (e.g., distributed ledger interface server270). The above described computing systems of the network environment may be interconnected over a communications network268. 
The communication network268may comprise one or more information distribution networks of any type, such as, without limitation, a telephone network, a wireless network (e.g., an LTE network, a 5G network, a WiFi IEEE 802.11 network, a WiMAX network, a satellite network, and/or any other network for wireless communication), an optical fiber network, a coaxial cable network, and/or a hybrid fiber/coax distribution network. The communication network268may use a series of interconnected communication links (e.g., coaxial cables, optical fibers, wireless links, etc.) to facilitate communication between the distributed ledger interface server270, the user device202, the organization computing system(s)220, and the data value source system(s)240. Each of the above-described systems may function on one or more computing systems or devices. In some aspects, one or more of the above described systems and servers may be connected; be a part of another one of the above-described systems and servers; or be components, features, or functions of a larger computing system. The one or more computing systems or devices of the above-described systems (e.g., the distributed ledger interface server270, the user device202, the organization computing system(s)220, and the data value source system(s)240) may include, for example, desktops, servers, smart phones, tablets or laptop computers with wired or wireless transceivers, tablets or laptop computers communicatively coupled to other devices with wired or wireless transceivers, and/or any other type of device configured to perform the functions described herein and to communicate via a wired or wireless network. The computing systems or devices may include one or more of the components of computing device101, illustrated inFIG.1. For example, the computing systems or devices may include at least a processor and a communication interface (e.g., a network interface) to communicate with one another. The user device202may comprise a mobile communication device, a mobile phone, a tablet computer, or a touch screen display device, etc., and may be associated with an individual who may be a member (e.g., an employee, an intern, an extern, a trainee, a fellow, a student, etc.) of an organization (e.g., an employer, an educational or vocational institution, etc.) or an individual who may be a candidate for a role or a position within an organization. User device171, as shown inFIG.1may be an example of user device202. As described in relation to computing device101ofFIG.1, an example user device202may include volatile, non-volatile, and/or hard disk memory (e.g., memory115) for storing information. The memory may store applications, such as a background verification (BGV) application206for performing background verification using the distributed ledger interface system270. The BGV application206may be hosted, managed, and/or facilitated by the distributed ledger interface server270(e.g., via the background verification (BGV) application program interface (API)281), as will be described herein. Furthermore, the BGV application206may be the same as, or at least share one or more functionalities with, BGV application224of the organization computing system(s)220, or the BGV applications246and258of the data value source system(s)240, and may also be hosted by the BGV API281. The BGV application206may use or be implemented upon a user interface (UI)204of the user device202. 
The UI204may facilitate the exchange of information with the distributed ledger interface server270and the organization computing system(s)220to allow the organization computing system(s)220to perform the background verification of the individual associated with the user device202. The UI204may include a display (e.g., as in display108of computing device101inFIG.1). For example, the display may be used to present information stored in the distributed ledger290. Also or alternatively, the user device202may comprise or show a replication of the distributed ledger290as distributed ledger216. For example, the individual may be allowed to see certain aspects of the distributed ledger290(e.g., the individual-specific data structure291associated with the individual) and may be allowed to access a replication of the distributed ledger290, distributed ledger216, which has the individual-specific data structure. Furthermore, as a node participant of the distributed ledger, the individual may control the access to the distributed ledger for other devices and systems by using the digital key208. As will be discussed further herein, the digital key may be generated by the distributed ledger interface server270(e.g., via digital key generator280) and may be provided to the user device via the BGV application206. The digital key may be any form of digital signature employing cryptography to provide validation and security. For example, the individual may be a candidate for a role or a position at an organization. The individual may use the digital key to grant permission to an organization computing system220of that organization, so that the organization has access to the individual-specific data structure of the individual in order to perform background verification of the individual. The digital key may be based on the use of a public key (e.g., allowing encryption of a message) and a private key (e.g., allowing decryption of the message), or a combination thereof, as will be described herein. The public and private keys may utilize robust cryptographic algorithms to assure the confidentiality and authenticity of electronic communications and data storage. The individual may build a self-reported profile210, which may comprise self-reported information pertinent to the candidacy for a role or position within an organization. The self-reported profile may comprise, or may be based on, one or more of a resume, curriculum vitae, self-reported grades or performance evaluations, a cover letter, a work sample, a recommendation, an online profile, etc. The self-reported profile may be stored in the memory of the user device202. The user device202may further comprise an input/output module214to allow the user to input information (e.g., via a keyboard, touchscreen, microphone, etc.), or receive output information (e.g., via external speakers or other peripherals). Input/output module109of computing device101inFIG.1may be an example of input/output module214. Furthermore, a network interface212may facilitate communication with other computing devices and systems in environment200over communication network268. The organization computing system(s)220may include one or more computing systems, devices, or servers corresponding to one or more organizations (e.g., employers, educational institutions, etc.). 
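As one non-limiting illustration of the public/private key arrangement described above, the following Python sketch uses RSA with OAEP padding via the third-party cryptography package. The choice of algorithm, key size, and library is an assumption made only for illustration; the embodiments above do not prescribe a particular cipher.

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Key pair generation (e.g., something a digital key generator such as 280 might do).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone holding the public key can encrypt a message for the key holder...
message = b"grant read access to the individual-specific data structure"
ciphertext = public_key.encrypt(message, oaep)

# ...and only the holder of the private key can decrypt it.
assert private_key.decrypt(ciphertext, oaep) == message
```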
For example, in the context of systems and methods described herein, one organization computing system may be associated with an organization of which the individual may be a member, and another organization computing system may be associated with an organization in which the individual is seeking a role or position. Thus, an organization computing system220may keep track of the members of the organization associated with the organization computing system220, e.g., via a member profiles database226. Furthermore, an organization computing system220may keep track of candidates applying for roles or positions within the organization, e.g., via a candidate profiles database228. The organization computing system141, as shown inFIG.1, may be an example of an organization computing system220. As described in relation to computing device101ofFIG.1, an example organization computing system220may include volatile, non-volatile, and/or hard disk memory (e.g., memory115) for storing information. The memory may store applications, such as BGV application224for performing various methods described herein for performing background verification using the distributed ledger interface system270. The BGV application224may be the same as, or at least share one or more functionalities with, BGV application206of the user device202, and may also be hosted by the BGV API281. The BGV application224may use or be implemented upon a user interface (UI)222to facilitate the exchange of information with the distributed ledger interface server270, the user device202, other organization computing systems, and data value source systems240to allow the organization computing system to perform the background verification of the individual associated with the user device202. The UI222may include a display. For example, the display may be used to present information stored in the distributed ledger290. Also or alternatively, the organization computing system220may comprise or show a replication of the distributed ledger290as distributed ledger234, based on the organization being a participant node in one or more individual-specific data structures of the distributed ledger. In some aspects, the organization computing system220may be made a node participant of the individual-specific data structure, and thus be given access to a replica234of the distributed ledger290, after receiving permission (e.g., a public digital key) from a preexisting node participant (e.g., the user device of the individual associated with the individual-specific data structure). The organization computing system220may further comprise an input/output module232to allow the organization to input information (e.g., via a keyboard, touchscreen, microphone, etc.), or receive output information (e.g., via external speakers or other peripherals). Furthermore, a network interface230may facilitate communication with other computing devices and systems in environment200over communication network268. The data value source systems240may comprise computing systems, servers, and/or devices associated with organizations that provided or served as the original source for data values for background aspects of an individual in the individual-specific data structure. For example, a registrar computing system associated with a college can be the data value source system for a course grade recorded in the individual-specific data structure because the college registrar was the original source for generating the grade. 
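As a non-limiting illustration of such a source-system check, the following Python sketch compares a claimed grade against a hypothetical record store standing in for the transcripts database252. All names, records, and return values here are assumptions made only for illustration and are not part of the disclosed embodiments.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the transcripts database 252 of the educational
# institution computing system 242; keys are (individual, course) pairs.
TRANSCRIPTS = {("jane.doe", "CS101"): "A"}

@dataclass
class VerificationResult:
    verified: bool
    source: str

def verify_course_grade(individual_id: str, course: str,
                        claimed_grade: str) -> VerificationResult:
    """Secondary verification: compare a grade recorded in the individual-specific
    data structure against the original record held by the source system."""
    grade_on_file = TRANSCRIPTS.get((individual_id, course))
    return VerificationResult(
        verified=(grade_on_file is not None and grade_on_file == claimed_grade),
        source="registrar transcripts database",
    )

print(verify_course_grade("jane.doe", "CS101", "A").verified)  # True
print(verify_course_grade("jane.doe", "CS101", "B").verified)  # False
```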
Example data value source systems240may include the educational institution computing system242and the municipal office computing system256. More or fewer data value source systems may be used without departing from the invention. Each of these data value source systems240may include a UI (e.g., UI244and257), and applications (e.g., BGV apps246and258), a network interface (e.g., network interfaces248and260), and an input/output module (e.g., input/output modules250and262), performing functions similar to the analogous or the similar components in the user device202and organization computing systems220, as explained previously. The data value source systems240may be contacted to perform a secondary verification of data values stored in the individual-specific data structure associated with the individual. A query engine (e.g., query engines254and266) may assist in the secondary verification by searching for original records to verify a data value (e.g., a grade, a criminal conviction, etc.) recorded in the individual-specific data structure of the individual. The original records may include, for example, transcripts of the individual stored in a transcripts database252in the educational institution computing system242, or criminal records stored in a criminal records database264in the municipal computing system256. The distributed ledger interface server270may comprise one or more computing systems or servers managing the interactions between the above-described systems (e.g., the distributed ledger interface server270, the user device202, the organization computing system(s)220, and the data value source system(s)240) and the distributed ledger290to perform one or more functions for background verification described herein. The distributed ledger interface server270may be an example of computing device101shown inFIG.1. At a high level, the distributed ledger interface server270may comprise one or more databases (e.g., individuals database271, nodes database272, and/or organizations database276, etc.), a linking engine273, an update interface274, a background verification (BGV) application program interface (API)281to host or manage background verification applications206,224,246and258; a natural language processing (NLP) system278, a geolocation and/or geocoding APIs (“geo API”)277, an encryption/decryption system279, a digital key generator280, and a network interface275. The individuals database271may store identifiers and/or profiles for individuals using the distributed ledger interface server to facilitate background verification. The nodes database272may be used to identify node participants of a given individual-specific data structure within the distributed ledger290. The list of node participants may expand, contract, or be otherwise updated, based on which devices may be granted permission to access and/or otherwise contribute to the information storage of an individual-specific data structure. The node participants may include one or more computing systems and devices of environment200. The organizations database276may store a list of identifiers of organizations, e.g., by the identifiers of the corresponding organization computing systems220. The update interface274and linking engine273may form a database management application, software, or plug-in that may be used to perform create, read, update, or destroy (CRUD) functions with respect to data stored in the one or more databases. 
For example, the linking engine273may be used to form associations or link suitable data from different databases together, and/or to create new data based on associations or linkages. The update interface274may be used to update (e.g., by adding or deleting) data stored in the one or more databases based on instructions from other parts of the distributed ledger interface server270(e.g., computer readable instructions stored in memory of the BGV API281) or information received from one or more other systems and devices of environment200(e.g., user device202, organization computing systems220, data value source systems240, etc.). Furthermore, the update interface274may be used to enter information into the distributed ledger290. The distributed ledger290may be a database, repository, or ledger that may be updated by storing information using block chain technology and may comprise a plurality of individual-specific data structures291that may be replicated and available in a plurality of computing systems and devices (e.g., as in distributed ledger216, distributed ledger234, etc.). Each individual-specific data structure291may use a block chain approach (e.g., validation, cryptographic encryption, mining, etc.) to add individual-specific data pertaining to background aspects of an individual. For example, each individual-specific data structure may comprise one or more blocks (e.g., blocks292A-292C), and may be created or extended by blocks linked to one another. If participant nodes of the individual-specific data structure request data values to be entered, the requested data values may be hashed, e.g., based on a cryptographic algorithm associated with the distributed ledger290and/or the individual-specific data structure291. Before each data value can be entered, the data value may need to be verified by each of the participant nodes of the individual-specific data structure in which the data value is requested to be entered. After the verification, the data value may be entered into the individual-specific data structure as a block (e.g., block292C comprising data value294C) linked to a previously entered block (e.g., block292B comprising data value294B). Furthermore, each individual-specific data structure may have predetermined participant computing devices or systems (e.g., “nodes,” “node participants,” “participant nodes,” etc.) that may be able to access, validate, and/or view data entered in the individual-specific data structure. For example, an individual-specific data structure for an individual member of a first organization who desires to join a second organization may have, as its node participants, the user device associated with the individual, the organization computing system associated with the first organization, and the organization computing system associated with the second organization. Each node participant may be able to access, validate, and/or view data associated with the individual-specific data structure via a replicated distributed ledger on their respective device (e.g., as in distributed ledger216, distributed ledger234, etc.). Information entered into the distributed ledger290may be encrypted by the encryptor/decryptor279, and verified by one or more node participants of the distributed ledger prior to being recorded in the distributed ledger290. The node participants for any individual-specific data structure within the distributed ledger290may be specified and/or determined from the nodes database272. 
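The hash-linking and participant-approval behavior described above might be sketched, in greatly simplified form, as follows. This Python sketch is illustrative only: it omits consensus, digital signatures, encryption, and replication across nodes, and every function and field name in it is hypothetical.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class Block:
    index: int
    data_value: dict      # e.g., {"aspect": "degree", "value": "B.S. Computer Science"}
    previous_hash: str
    hash: str = ""

def compute_hash(index: int, data_value: dict, previous_hash: str) -> str:
    # Deterministic serialization so every node derives the same digest.
    payload = json.dumps({"i": index, "d": data_value, "p": previous_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list[Block], data_value: dict, approvals: dict[str, bool]) -> Block:
    """Add a data value only if every participant node approved it."""
    if not approvals or not all(approvals.values()):
        raise ValueError("data value not verified by all participant nodes")
    prev = chain[-1].hash if chain else "0" * 64
    block = Block(len(chain), data_value, prev)
    block.hash = compute_hash(block.index, block.data_value, block.previous_hash)
    chain.append(block)
    return block

def chain_is_intact(chain: list[Block]) -> bool:
    """Recompute hashes; tampering with any earlier block breaks the links."""
    for i, b in enumerate(chain):
        expected_prev = chain[i - 1].hash if i else "0" * 64
        if b.previous_hash != expected_prev or b.hash != compute_hash(
            b.index, b.data_value, b.previous_hash
        ):
            return False
    return True
```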
The distributed ledger290may include one or more individual-specific data structure(s)291for one or more individuals. An individual-specific data structure291may be used to store information related to an individual that is relevant to the individual's membership to an organization or candidacy for a position or a role in an organization. The information may be added into the individual-specific data structure as a data value294A-294C in blocks292A-292C. Prior to each data value being added into the distributed ledger, the data value may be verified and encrypted in accordance with methods presented herein. Each block, which stores a data value that has been entered, may be linked to a previously entered block of data value, so as to make the entered information immutable and indelible. Furthermore, since each data value may be verified by participant nodes of the individual-specific data structure291, the information entered may be transparent and undisputed among the participant nodes of a given individual-specific data structure291. The BGV API281of the distributed ledger interface server270may manage, host, and/or facilitate the applications206,224,246, and258running on user device202, organization computing systems220, educational computing system242, and municipal computing system256, respectively. The BGV API281, through applications206,224,246, and258, may help to facilitate background verification for an individual via the distributed ledger interface system. For example, indications sent by the user device202to the organization computing system220to indicate that an individual is resigning from an organization, or permissions granted by the user device202to an organization computing system220to access an individual-specific data structure292, may be sent via the BGV app206, and managed by the BGV API281. In such and like examples, the BGV API281may allow the distributed ledger interface server270to receive and/or relay the indications, permissions, and other messages between one device to another device in environment200. The BGV API281, may utilize one or more of the databases of distributed ledger interface server270, facilitate the exchange of information between the distributed ledger interface server270and one or more other computing systems and devices of environment200, and may manage one or more functions of the systems and components of distributed ledger interface server270. For example, the BGV API281may relay notifications of an individual ending membership to one organization; enabling the first organization to unlock access to the individual-specific data structure in the distributed ledger that is associated with the individual; generating and sending a digital key to the user device associated with the individual; allowing the individual to have another organization access the individual-specific data structure by way of the digital key; resetting and/or generating a new digital key; and facilitating secondary verification of data values for various background aspects associated with the individual by querying the data value source systems for information. Furthermore, the BGV API281may allow respective node participants of an individual-specific data structure of the distributed ledger to interact with the distributed ledger (e.g., by creating, validating, and/or accessing information recorded in the distributed ledger). 
The geolocation and/or geocoding APIs (“geo API”277) may be used to identify or estimate a real-world geographic location of a premise associated with an organization (e.g., an office, a factory, a farm, a warehouse, a school, etc.). The geo API277may be used with other applications (e.g., to initially identify a physical premise associated with an organization), or may be used in reverse (e.g., to determine the identity of a premise based on an inputted real-world geographic location). The NLP system278may include various processors, libraries, and AI-based systems (e.g., machine learning (ML) tools264) to analyze and convert natural language to one that could result in a computing system to perform substantive functions (e.g., to determine the job duty of an individual at an organization after parsing a resume). The NLP system278may be guided by a library and/or databases and AI-based tools for various uses in natural language processing, including the undergoing of supervised and unsupervised learning from language data. The NLP system278may support tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing, and coreference resolution. These tasks may be needed to build more advanced text processing services. The encryptor/decryptor279may be a plug-in, application, API, or program tool for encrypting or decrypting data values, e.g., for being recorded in the distributed ledger290. The digital key generator280may be a plug-in, application, API, or program tool for generating a digital key based on one or more cryptographic algorithms (e.g., symmetric key algorithms, public key algorithms, etc.). The digital key generator280may be used to generate a digital key to send to the user device. The individual may grant other devices and systems permission to access an individual-specific data structure associated with the individual by sending the digital key (e.g., a public digital key) to the intended recipient device or system (e.g., an organization computing system associated with an organization in which the individual likes to join). Furthermore, network interface275may facilitate communication with other computing devices and systems in environment200over communication network268. FIGS.3-5illustrate example flow diagrams that may be used to implement aspects of the disclosure. For examples, each ofFIGS.3-5may refer to various stages of an example method for performing a background verification using a distributed ledger interface system. Method300may be performed by the distributed ledger interface server270shown inFIG.2and/or computing device101shown inFIG.1. For simplicity, “distributed ledger interface server” may be used to refer to the performer of one or more steps of methods300-500. Other devices and systems described inFIG.2may perform one or more steps as specified. Further, one or more steps described with respect toFIGS.3-5may be performed in an order other than the order shown and/or may be omitted without departing from the invention. Referring now toFIG.3, method300may begin when the distributed ledger interface server270receives an indication that an individual who is a member of an organization (e.g., an employee of the organization) is ending membership to the organization. For example, the member may be an employee whose employment may have been terminated, or the employee may be voluntarily resigning. 
The member may indicate the ending of the membership through sending a notification via the BGV app206on the user device202. The indication may be relayed to the organization computing system associated with the organization from which the individual is ending membership (e.g., one of organization computing system(s)220). For simplicity, this organization may be referred to as the “first organization,” and the associated computing system may be referred to as “first organization computing system.” The distributed ledger interface server270, which may comprise the BGV API281hosting BGV app206may receive the indication, and may facilitate the sending of the indication to the first organization computing system. Thus, at step302, the distributed ledger interface server270may relay, to the first organization computing system, the indication that the individual who is a member of the first organization is ending membership. After receiving the indication, the first organization computing system may select to unlock access to the individual-specific data structure291associated with the individual in the distributed ledger290. Access to the individual-specific data structure291may be previously locked during the course of the individual's employment at the first organization for security, since the individual-specific data structure may store sensitive and/or confidential information about the individual. In some implementations, the first organization computing system may be instructed or otherwise be in an obligation (e.g., as per employment agreement) to unlock access to the individual-specific data structure291in the distributed ledger, e.g., to allow for job mobility, since other potential employers may need to see the information stored in the individual-specific data structure before hiring the individual. The first organization computing system may select to unlock access on BGV app224, hosted by BGV API281on the distributed ledger interface server270. The selection may be registered as an executable command. At step304, the distributed ledger interface server270may receive the command to unlock access to the individual-specific data structure291in the distributed ledger. Based on the command, the distributed ledger interface server270may unlock the individual-specific data structure (e.g., as in step306). At step308, the distributed ledger interface server270may send a digital key to the user device202associated with the individual. The digital key may be received via the BGV app206of the user device202. The digital key may be any form of digital signature employing cryptography to provide a layer of validation and security for one or more requests, such as the request by the organization computing system of another organization to access the individual-specific data structure291associated with the individual. For simplicity, an organization for which the individual is a candidate may be referred to as the “second organization” and its organization computing system may be referred to as the “second organization computing system.” In some aspects, the digital key may be available to the individual (e.g., stored on the user device) prior to the user device sending of the indication of ending membership. For example, the process of the second organization (e.g., an employer for which an organization has applied to for a role or a position) performing a background check for the individual may begin before the individual has given notice to the individual's current employer. 
Based on the results of the background check, the individual may appropriately decide whether or not to inform the individual's current employer of ending membership to the current employer. The digital key can be generated and sent to the user device before the individual has decided whether to end membership to the first organization. Also or alternatively, a new digital key may be generated and sent to the user device after the sending of the indication of ending membership. It is contemplated that the individual, e.g., in the hopes of securing new employment, may inform the second organization of background information recorded in the individual-specific data structure. Also or alternatively, the second organization may have been made aware of background information about the individual recorded in the individual-specific data structure. Thus, the second organization, via the second organization computing system, may request to access the individual-specific data structure, e.g., via BGV app224hosted by BGV API281of the distributed ledger interface server270. At step310, the distributed ledger interface server270may relay the request to access the individual-specific data structure to the user device associated with the individual. The digital key may be used to grant the second organization computing system access to the individual-specific data structure (e.g., as in step312). For example, a private key and public key may be generated by the digital key generator280of the distributed ledger interface server270, and may be sent to the user device202. In order to grant the second organization computing system the request to access the individual-specific data structure291, the individual may electronically sign a permission to access the individual-specific data structure291using the private key, and may send the signed permission along with a public key to the second organization computing system. The second organization computing system may use the public key to validate the signed permission to access the individual-specific data structure. In some aspects, the exchange of the one or more digital keys between the distributed ledger interface server270, the second organization computing system220, and the user device202, the request from the second organization computing system220, and the granting of the request by the user device202may be performed via apps224and246, hosted by BGV API281of the distributed ledger interface server270. In some aspects, the distributed ledger interface server270may monitor the digital key to determine whether it has been used, e.g., to access the individual-specific data structure (e.g., as in step314). If used, the distributed ledger interface server270may generate and send a new digital key to the user device202at step316. Generating a new digital key after use may further strengthen security and restrict access to the individual-specific data structure291. If the digital key has not been used yet, the distributed ledger interface server270may wait for the second organization computing system220to access the individual-specific data structure using the digital key (e.g., public key provided by the user device). After the second organization computing system has accessed the individual-specific data structure292associated with the individual, the distributed ledger interface server270may facilitate one or more functions for verifying various background information about the individual stored in the individual-specific data structure. 
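As a rough illustration of the digital-key exchange described above (e.g., steps308-316), the sketch below generates a key pair, signs a permission to access an individual-specific data structure with the private key, and validates the signature with the public key. It assumes the third-party Python cryptography package and an Ed25519 signature scheme purely for illustration; the disclosure does not specify a particular algorithm, and the grant_permission and validate_permission helpers are hypothetical.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Digital key generator (cf. element 280): produce a private/public key pair
# that is sent to the user device associated with the individual.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def grant_permission(individual_private_key, structure_id: str, grantee: str) -> tuple:
    """The individual signs a permission allowing a grantee to access a data structure."""
    message = f"permit:{grantee}:access:{structure_id}".encode()
    signature = individual_private_key.sign(message)
    return message, signature

def validate_permission(individual_public_key, message: bytes, signature: bytes) -> bool:
    """The second organization validates the signed permission with the public key."""
    try:
        individual_public_key.verify(signature, message)
        return True
    except InvalidSignature:
        return False

message, signature = grant_permission(private_key, "structure-291", "second-org-220")
assert validate_permission(public_key, message, signature)
```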
For example, the individual may have presented a self-reported profile of the individual to the second organization, e.g., in the hopes of securing employment. The self-reported profile may be based on, or may comprise, one or more of a cover letter, a resume, a curriculum vitae, an online profile, a work sample, a recommendation, a referral, a self-reported grades or performance evaluation, etc. In at least one aspect, the second organization may want to compare the self-reported profile of the individual to the data values stored in the individual-specific data structure. For example, the second organization may want to determine whether an applicant's resume matches up to actual information about the applicant recorded by previous employers and educational institutions on the distributed ledger. The second organization may, via the BGV app224of the organization computing system220, request to review the self-reported profile of the individual with the individual-specific data structure associated with the individual. Thus, at step318, the distributed ledger interface server270may determine whether it has received the request to review the individual's self-reported profile against individual-specific data structure. If there is such a request, the distributed ledger interface server270may parse the self-reported profile of the individual to identify data values corresponding to background aspects (e.g., as in step320). For example, the NLP system278may process the natural language text of various documents (e.g., resume, cover letter, etc.) submitted by an employee seeking a position or role in the second organization. The NLP system278may use learned data and machine learning algorithms to recognize terms and phrases that correspond to data values for one or more background aspects (e.g., “3.7” for “GPA” or “Berkeley” for “Educational Institution”). Each identified data value for a corresponding background aspect may be compared to the data value recorded in the individual-specific data structure (e.g., as in step322). For example, the data value “3.7” for a background aspect of “GPA” may be compared to a recorded value of “3.49” in the individual-specific data structure. The distributed ledger interface server270may determine whether there is a match in the data values (e.g., whether the data values satisfy a similarity threshold) at step324. If there is no match (e.g., the data values fail to satisfy the similarity threshold), the distributed ledger interface server270may indicate this to the second organization computing system at step326. For example, the second organization may receive a notification via BGV app224that the review of the self-reported profile of the individual failed at least on the background aspect corresponding to the data values that failed to match. If the identified data value corresponding to the background aspect does match the data value recorded in the individual-specific data structure, however, the distributed ledger interface server may continue comparing other identified data values (e.g., see steps334and332). If there are no more remaining data values identified from the parsing of the self-reported profile, the distributed ledger interface server270may notify the completion of the verification. e.g., on BGV app246of the second organization computing system220and/or BGV app206of the user device202at step336. 
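One way to picture the parse-and-compare flow of steps318-326is the sketch below, in which a regular expression stands in for the NLP system278's extraction of data values and a simple numeric tolerance or string-similarity ratio stands in for the similarity threshold. The extraction patterns, thresholds, and helper names are illustrative assumptions, not the claimed parsing or matching logic.

```python
import re
from difflib import SequenceMatcher

def parse_profile(text: str) -> dict:
    """Very rough stand-in for NLP extraction of data values from a self-reported profile."""
    values = {}
    gpa = re.search(r"GPA[:\s]+([0-4]\.\d+)", text, re.IGNORECASE)
    if gpa:
        values["GPA"] = gpa.group(1)
    school = re.search(r"(?:University of|College of)\s+([A-Z][A-Za-z ]+)", text)
    if school:
        values["Educational Institution"] = school.group(1).strip()
    return values

def satisfies_similarity_threshold(claimed: str, recorded: str) -> bool:
    """Compare a parsed data value to the value recorded in the individual-specific data structure."""
    try:
        # Numeric aspects (e.g., GPA): allow a small relative tolerance (assumed 2%).
        c, r = float(claimed), float(recorded)
        return abs(c - r) <= 0.02 * max(abs(r), 1e-9)
    except ValueError:
        # Textual aspects: require a high string-similarity ratio (assumed 0.9).
        return SequenceMatcher(None, claimed.lower(), recorded.lower()).ratio() >= 0.9

profile = parse_profile("GPA: 3.7, B.S., University of Berkeley")
recorded = {"GPA": "3.49", "Educational Institution": "Berkeley"}
for aspect, claimed in profile.items():
    ok = satisfies_similarity_threshold(claimed, recorded.get(aspect, ""))
    print(aspect, "match" if ok else "failed to match")  # GPA fails to match; the institution matches
```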
It may be possible that the individual-specific data structure has the wrong data value recorded and/or the self-reported profile has the correct data value. Thus, in some aspects, the second organization may seek to double-check whether information about the individual recorded in the individual-specific data structure of the distributed ledger is accurate (e.g., a “secondary verification”). The second organization may, via the BGV app224of the organization computing system220, request the secondary verification. Thus, at step328, the distributed ledger interface server270may determine whether it has received the request for the secondary verification, e.g., of a data value from the self-reported profile that has failed to satisfy the similarity threshold. If this request has been received, the individual may be contacted (e.g., via a notification sent to user device202) for consent to having the distributed ledger interface server270perform the secondary verification. Also or alternatively, the individual may have previously agreed to allow secondary verification, e.g., as part of the terms of using the distributed ledger interface server270. At step330, the distributed ledger interface server270may determine whether the individual consents to the secondary verification. If the individual consents to the secondary verification (e.g., by sending an electronic indication to the distributed ledger interface server270via user device202), the distributed ledger interface server270may determine a data value source system that may be appropriate for the secondary verification (e.g., as in step332). As discussed previously, data value source systems240may comprise computing systems, devices, and servers that may be the original source of the specific data value being verified. Also or alternatively, the data value source systems240may be associated with organizations or institutions that may be the authority in determining the veracity of the data value being verified. For example, computing systems associated with the individual's alma mater may be a data value source system for determining the veracity of the individual's grades. In some aspects, determining the data value source system may comprise determining, broadly, whether the data value comprises geographical information (e.g., identity and/or location of an employer, identity and/or location of an educational institution, etc.) or organizational information (e.g., job descriptions, remarks, performance evaluations, etc.). If the data value being verified is an organizational data value, the distributed ledger interface server270may proceed to performing one or more steps of method400described inFIG.4. If the data value being verified is a geographical data value, the distributed ledger interface server270may proceed to performing one or more steps of method500described inFIG.5. If the individual does not consent to the secondary verification, the distributed ledger interface server270may continue to review remaining data values identified from the self-reported profile against the individual-specific data structure, e.g., by determining whether there are any remaining data values in the self-reported profile in step334. If there are no more remaining data values identified from the parsing of the self-reported profile, the distributed ledger interface server270may provide a notification that the verification is complete, e.g., on BGV app224of the second organization computing system220and/or BGV app206of the user device202.
As discussed previously, the background information may be categorized and/or subcategorized into various background aspects. The background aspects may include, for example, aspects of previous employment, academic and/or training aspects, legal information (e.g., criminal and civil legal records), aspects about the individual's online profile, etc. As discussed previously, a data value may refer to individual-specific data describing the background aspects qualitatively and/or quantitatively. Furthermore, data values corresponding to a background aspect may be confidential or sensitive for the individual. Referring now toFIG.4, one or more steps of method400may be performed for secondary verification of a data value in the individual-specific data structure that comprises organizational information (e.g., job descriptions, remarks, performance evaluations, reasons for termination or hire, etc.). Thus, method400may be performed after the distributed ledger interface server270determines that the data value needing secondary verification is organization-specific (e.g., as in step332). At step402, the distributed ledger interface may determine and establish connections with the data value source system240. In some aspects, for example, where the data value source system is already a node participant in the individual-specific data structure of the distributed ledger, the data value source system240may already be communicatively linked to the distributed ledger interface server270. As discussed previously, a data value source system240may function as a source for data values for various background aspects of the individual. A data value source system may comprise, for example, the educational institution computing system242or the municipal computing system256, as they may be the original source of the organization-specific data value needing secondary verification (e.g., a GPA, a grade, a performance, a job description, a remark, a performance evaluation, a reason for termination or hire, etc.). In some aspects, each data value stored in the individual-specific data structure may include metadata identifying the source of the entry of the data value. The distributed ledger interface server270may use the metadata to identify and/or connect to the data value source system240that was the source of the data value. For example, a data value of “3.7” corresponding to the background aspect of “GPA” may include metadata identifying a computing system associated with an educational institution. If secondary verification for the GPA is requested, the distributed ledger interface server270may determine and establish connections with the computing system associated with the educational institution. The distributed ledger interface server270may query the determined data value source system240for records pertaining to the individual (e.g., as in step404). In some aspects metadata in the data value that identified the data value source system240may also include an identifier of the individual used by the data value source system240. The identifier may be presented in the query. The data value source system240may indicate, and the distributed ledger interface server270may receive the indication of, whether there are records pertaining to the individual in the data value source system (e.g., as in step406). The records may comprise any archival data (e.g., saved transcripts, employment records, etc.) 
about the individual that may assist in determining the veracity and/or accuracy of the data value undergoing secondary verification. If there are no records, the secondary verification of the data value may fail to proceed based on the lack of records in the data value source system240, and this failure may be indicated to the second organization computing system (e.g., as in step407). If records are found, the distributed ledger interface server270may indicate, to the data value source system240, the data value needing secondary verification (e.g., as in step408). For example, the data value needing secondary verification may include, but is not limited to, a date (e.g., as in step410), a job description (e.g., as in step416), or a grade (e.g., as in step422). If a data value is a date to be verified, the distributed ledger interface server270may retrieve (e.g., after sending a request to the data value source system240), dates associated with the individual (e.g., as in step412). For example, the data value source system may be a computing system associated with a school that the individual had attended and may have, within its archives, dates of attendance by the individual. The distributed ledger interface server, at step414may compare the dates retrieved to the dates stored in the individual-specific data structure291to see if they match (e.g., satisfy a similarity threshold). If the dates do match, the secondary verification of the data value may be deemed successful by the distributed ledger interface server270. If the dates do not match, the distributed ledger interface server270may indicate this (e.g., a failure to meet secondary verification), at step415. If the data value is a job description to be verified, the distributed ledger interface server270may retrieve (e.g., after sending a request to the data value source system240), records associated with the job duties of the individual (e.g., as in step417). For example, the data value source system may be a computing system associated with a previous employer of the individual and may have, within its archives, older resumes submitted by the individual or work product from the individual produced during the scope of employment. At step418, the distributed ledger interface server270may parse the job description stored as the data value in the individual-specific data structure and the records retrieved from the data value source system240for key terms associated with job roles and functions. For example, the NLP system278may process any natural language text of the job description and the records, and use learned data and machine learning algorithms to recognize terms and phrases that correspond to specific job roles (e.g., “finance,” “legal,” “manage,” “patent,” “drafting,” etc.). The identified terms from the job description stored in the individual-specific data structure may be compared to the identified terms from the records to determine whether the two sets of identified terms match (e.g., satisfy a similarity threshold) (e.g., as in step420). If there is a match, the secondary verification of the data value may be deemed successful by the distributed ledger interface server270. If there is not a match, the distributed ledger interface server270may indicate this (e.g., a failure to meet secondary verification) at step421. 
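The key-term comparison of steps418-420might be pictured as follows, with a fixed vocabulary of job-role terms standing in for the NLP system278's learned terms and a Jaccard overlap standing in for the similarity threshold. The vocabulary, the 0.5 threshold, and the function names are assumptions made for illustration.

```python
# Assumed vocabulary of job-role terms; a real NLP system would learn these from language data.
ROLE_TERMS = {"finance", "legal", "manage", "patent", "drafting", "audit", "litigation"}

def extract_role_terms(text: str) -> set:
    """Pull known job-role terms out of free text (stand-in for NLP term recognition)."""
    tokens = {token.strip(".,;:()").lower() for token in text.split()}
    return tokens & ROLE_TERMS

def terms_match(stored_description: str, retrieved_records: str, threshold: float = 0.5) -> bool:
    """Compare term sets from the stored job description and the retrieved records (Jaccard overlap)."""
    stored = extract_role_terms(stored_description)
    retrieved = extract_role_terms(retrieved_records)
    if not stored and not retrieved:
        return True
    overlap = len(stored & retrieved) / len(stored | retrieved)
    return overlap >= threshold

stored = "Responsible for patent drafting and managing legal review."
records = "Work product shows patent drafting, legal analysis, and litigation support."
print(terms_match(stored, records))  # True under the assumed 0.5 threshold
```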
If a data value is a grade to be verified, the distributed ledger interface server270may retrieve and parse (e.g., after sending a request to the data value source system240) transcripts associated with the individual (e.g., as in step424). For example, the data value source system may be a computing system associated with a school that the individual had attended and may have, within its archives, transcripts associated with the individual. The distributed ledger interface server270, at step426, may compare a grade in the retrieved transcript to the grade stored in the individual-specific data structure291to see if they match (e.g., satisfy a similarity threshold). If the grades do match, the secondary verification of the data value may be deemed successful by the distributed ledger interface server270. If the grades do not match, the distributed ledger interface server270may indicate this (e.g., a failure to meet secondary verification), at step427.

Referring now toFIG.5, one or more steps of method500may be performed for secondary verification of a data value in the individual-specific data structure that comprises geographic information (e.g., identity and/or location of an employer, identity and/or location of an educational institution, etc.). Thus, method500may be performed after the distributed ledger interface server270determines that the data value needing secondary verification is geography-specific (e.g., as in step332). The distributed ledger interface server270may begin secondary verification of a geography-specific data value stored in the individual-specific data structure by determining whether the data value is a physical address (e.g., as in step502). If so, the physical address may be inputted into the geocoding and/or geolocation APIs (“geo API”277) at step510. The geo API277may output what lies at the physical address, e.g., by providing a map with premises at the physical address, or identifying premises located at the physical address. The distributed ledger interface server270may thus use the geo API277to identify the premises at the physical address (e.g., as in step512). The individual-specific data structure may include other information about the individual that is associated with the physical address. For example, the individual-specific data structure may include data values identifying an organization that has physical locations, such as the name of a previous employer having an office at a physical address or the name of an educational institution previously attended having a physical address. These identifiers of organizations may be stored as other data values in addition to the data value of the physical address in the individual-specific data structure. Thus, the distributed ledger interface server270may determine whether there are other data values identifying organizations, which may be associated with the physical address in the individual-specific data structure (e.g., as in step513). The distributed ledger interface server270may determine whether the identified organizations match (e.g., satisfy a similarity threshold) the premises identified from the physical address. For example, the distributed ledger interface server270may compare names of multiple organizations to the identified premises to determine whether there is a match (e.g., as in step514).
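A simplified version of the address-to-premises check of steps510-514is sketched below. The geo_api_identify_premises function is a hypothetical stand-in for the geo API277(in practice this would call an external geocoding or places service), and the name-matching rule and threshold are assumptions.

```python
from difflib import SequenceMatcher

def geo_api_identify_premises(physical_address: str) -> list:
    """Hypothetical stand-in for geo API277: return names of premises at an address."""
    # In practice this would be a call to an external geocoding/places service.
    lookup = {
        "1 Main St, Springfield": ["First Organization Headquarters", "Springfield Cafe"],
    }
    return lookup.get(physical_address, [])

def organization_matches_premises(org_names: list, physical_address: str,
                                  threshold: float = 0.8) -> bool:
    """Step514: does any organization data value match a premise identified at the address?"""
    premises = geo_api_identify_premises(physical_address)
    for org in org_names:
        for premise in premises:
            ratio = SequenceMatcher(None, org.lower(), premise.lower()).ratio()
            if ratio >= threshold or org.lower() in premise.lower():
                return True
    return False

# Data values read from the individual-specific data structure (illustrative).
print(organization_matches_premises(["First Organization"], "1 Main St, Springfield"))  # True
```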
In some aspects, the list of names of organizations may be narrowed down or filtered based on additional data values (e.g., dates) that make certain organizations more likely to be associated with the physical address than others. If at least one of the names of organizations matches the identified premises, the secondary verification of the data value may be deemed successful by the distributed ledger interface server270. If there are no matches, the distributed ledger interface server270may indicate this (e.g., a failure to meet secondary verification), at step516. Also or alternatively, the identified data value needing secondary verification may comprise the name of an organization (e.g., as in step504). If so, the distributed ledger interface server270may determine possible physical address candidate(s) for the identified organization, e.g., using the geo API (e.g., as in step506). For example, the name of the organization may be inputted into the geo API277. The geo API277may output what possible physical address of the named organization, e.g., by providing a map locating the one or more physical addresses associated with the organization, or identifying a physical address of the organization. The distributed ledger interface server270may thus use the geo API277to determine the physical address candidate(s) of the named organization. The individual-specific structure may include one or more details regarding the physical address or location of the organization (e.g., city and state of a previous employer or an educational institution attended by the individual) (“location details”). The location details may be stored as another data value in addition to the data value of the name of the organization in the individual-specific data structure. Thus, the distributed ledger interface server270may determine whether there are other data values identifying location details of the named organization, in step508). At step509, the distributed ledger interface server270may determine whether the location details stored as data values in the individual-specific data structure match (e.g., satisfy a similarity threshold) with the physical address of the named organization identified using the geo API277. If there is a match, the secondary verification of the data value may be deemed successful by the distributed ledger interface server270. If there is no match, the distributed ledger interface server270may indicate this (e.g., a failure to meet secondary verification), at step516. If the identified data value is not a physical address and does not identify an organization, the distributed ledger interface server270may indicate, e.g., to the second organization computing system, that there is insufficient information for the secondary verification (e.g., as in step518). In some aspects, the user device202may be prompted to provide additional information, e.g., to be added to the candidate profile228of the second organization computing system. Although examples are described above, features and/or steps of those examples may be combined, divided, omitted, rearranged, revised, and/or augmented in any desired manner. Various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be part of this description, though not expressly stated herein, and are intended to be within the spirit and scope of the disclosure. Accordingly, the foregoing description is by way of example only, and is not limiting. 
| 71,964 |
11861032 | The figures depict embodiments of the invention for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein. DETAILED DESCRIPTION Reference will now be made in detail to several embodiments, examples of which are illustrated in the accompanying figures. It is noted that wherever practicable similar or like reference numbers may be used in the figures and may indicate similar or like functionality. System Overview FIG.1is a system100for receiving a query108for a database106and responding to the query108by executing the query in a differentially private (DP) manner, according to one embodiment. The system100includes a differentially private security system (DP system)102that receives an analytical query108from a client104and applies a DP version of the query114on the database106. Subsequently, the DP system102returns the response of the DP query114to the client104as the DP response112. The database106is one or more databases managed by one or more entities. The database106may be managed by the same entity that manages the DP system102or by a different entity. The database106stores at least some restricted data. The restricted data may be represented as rows of records, with each record having a set of columns holding values pertaining to the record. Restricted data is data to which access and/or usage is limited due to legal, contractual, and/or societal concerns. Examples of restricted data include health data of patients and financial records of people, businesses or other entities. Similarly, restricted data may include census data or other forms of demographic data describing people, businesses, or other entities within geographic areas. Restricted data also includes usage data describing how people interact with electronic devices and/or network-based services. For example, restricted data may include location data describing geographic movements of mobile devices, consumption history data describing how and when people consume network-based content, and the particular content consumed (e.g., music and/or video content), and messaging data describing when and to whom users send messages via mobile or other electronic devices. A client104is used to access the restricted data in the database106. A client104is an electronic device such as a desktop, laptop, or tablet computer or a smartphone used by a human user to access the database106. The client104and user may be, but are not necessarily, associated with the entities that manage the database106and/or DP system102. Users of the client104include administrators and analysts. Administrators use the clients104to access the DP system102and/or database106to perform administrative functions such as provisioning other users and/or clients104, and configuring, maintaining, and auditing usage of the system and/or database. The administrators may access the DP system102and database106directly via administrative interfaces that allow users with appropriate credentials and access rights to perform the administrative functions. Analysts use the clients104to apply analytical queries108to the restricted data in the database106. The clients104used by the analysts access the database106only through the DP system102. 
Depending upon the embodiment, the analyst and/or client104may have an account provisioned by an administrator which grants the analyst or client certain rights to access the restricted data in the database106. The rights to the restricted data may be specified in terms of a privacy budget. The privacy budget describes limits on how much of the restricted data can be released. In one embodiment, the privacy budget is a numerical value representative of a number and/or type of remaining queries108available, or a degree of information which can released about data, e.g., data in a database or accessible by the DP system102. The privacy budget may be specified in terms of a query, analyst, client104, entity, globally, and/or time period. For example, the privacy budget may specify limits for an individual query, with each query having a separate budget. The privacy budget may also specify limits for an analyst or client, in which case the budget is calculated cumulatively across multiple queries from a client or analyst. For a privacy budget specified for an entity, such as an organization having multiple clients104and users, the privacy budget is calculated cumulatively across the multiple queries from clients and users associated with the entity. A global privacy budget, in turn, is calculated across all queries to the database, regardless of the source of the query. The privacy budget may also specify an applicable time period. For example, the privacy budget may specify that queries from particular clients may not exceed a specified budget within a given time period, and the budget may reset upon expiration of the time period. Depending upon the embodiment, client, as used herein, may alternatively or additionally refer to a user using the client to access the DP system102, to a user account registered with the DP system102, to a group of users or to a group of clients104, and/or to another entity that is a source of queries. As discussed above, a client104sends an analytical query108to the DP system102and also receives a differentially private response112to the query from the system. The queries108submitted by the client104may be simple queries, such as count queries that request the number of entries in the databases106that satisfy a condition specified by the client104, or complicated queries, such as predictive analytics queries that request a data analytics model trained on the databases106. Specific types of queries are discussed in more detail below. Each query has an associated set of privacy parameters. The privacy parameters indicate the amount of restricted data to release from the database106to the client104in response to the query108. The privacy parameters likewise indicate a privacy spend, which is the amount of decrease in the relevant privacy budget (e.g., the budget for the client104or entity with which the client is associated) in response to performance of the query108. In one embodiment, the client104specifies a set of associated privacy parameters with each submitted query108. In other embodiments, the privacy parameters are specified in other ways. The DP system102may associate privacy parameters with received queries (rather than obtaining the parameters directly from the query). For example, the DP system102may apply a default set of privacy parameters to queries that do not specify the parameters. 
The values of the default privacy parameters may be determined based on the client104, analyst, query type, and/or other factors, such as a privacy budget of the client. The DP system102receives an analytical query108from the client104and returns a differentially private response112to the client. In one embodiment, the DP system102determines the privacy parameters associated with the query, and evaluates the parameters against the applicable privacy budget. Alternatively, the analytical query108may specify the one or more privacy parameters of the set of privacy parameters. If the analytical query108and associated privacy parameters exceeds the privacy budget, the DP system102may deny (i.e., not execute) the query. Alternatively, the DP system102may adjust the privacy parameters to fall within the privacy budget, and execute the query using the adjusted privacy parameters. If the privacy parameters do not exceed the privacy budget, the DP system102executes a DP version of the query114on the database106, such that it releases a degree of restricted data from the database106indicated by the privacy parameters specified by the client104, and also protects a degree of privacy of the restricted data specified by the privacy budget. For example, an administrator of the database106may set a privacy budget specifying a maximum threshold on the amount of restricted data released by given query108that the client104may not exceed. Thus, the DP system102balances privacy protection of the restricted data in the database106while releasing useful information on the database106to the client104. The DP query114applied to the database106by the DP system102is a differentially private version of the query108that satisfies a definition of differential privacy described in more detail with reference to the privacy system160inFIG.3. The DP system102may apply the DP query114to the database106by transforming the analytical query108into one or more queries derived from the analytical query that cause the database106to release differentially private results. The DP system102may then return these differentially private results to the client as the DP response112. The DP system102may also, or instead, apply the DP query114to the database106by transforming the analytical query into one or more derived queries that cause the database to release results that are not necessarily differentially private. The DP system102may then transform the released results in a way that enforces differential privacy to produce the DP response112returned to the client104. These transformations may involve perturbing the process by which the DP query114is produced from the analytical query108and/or perturbing the results released by the database106with noise that provides the differential privacy specified by the privacy parameters while enforcing the privacy budget. The DP system102allows an analyst to perform database queries on restricted data, and thereby perform analyses using the DP responses112returned by the queries, while maintaining adherence with privacy parameters and a privacy budget. In addition, the techniques used by the DP system102allow database queries to access restricted data in ways that do not compromise the analytical utility of the data. The DP system102supports a wide variety of analytical and database access techniques and provides fine-grained control of the privacy parameters and privacy budget when using such techniques. 
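The evaluate-then-deny, adjust, or execute flow described above can be pictured with a small budget tracker. This is a sketch under simplifying assumptions (a single cumulative ε budget per client, no δ, no time-period reset); the class and method names are illustrative rather than the DP system102's actual interfaces.

```python
class PrivacyBudgetTracker:
    """Tracks cumulative epsilon spend per client against a per-client budget."""

    def __init__(self, budgets: dict):
        self.budgets = dict(budgets)                     # client id -> maximum cumulative epsilon
        self.spent = {client: 0.0 for client in budgets}

    def remaining(self, client: str) -> float:
        return self.budgets[client] - self.spent[client]

    def authorize(self, client: str, epsilon: float) -> float:
        """Return the epsilon to execute with, or raise if the query must be denied.

        If the requested spend exceeds the remaining budget, the request is
        adjusted down to what remains (one of the behaviors described above).
        """
        remaining = self.remaining(client)
        if remaining <= 0:
            raise PermissionError("privacy budget exhausted; query denied")
        granted = min(epsilon, remaining)
        self.spent[client] += granted   # decrement the applicable privacy budget
        return granted

tracker = PrivacyBudgetTracker({"analyst-1": 3.0})
print(tracker.authorize("analyst-1", 1.0))   # 1.0
print(tracker.authorize("analyst-1", 2.5))   # adjusted to 2.0, the remaining budget
```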
The DP system102thus provides an improved database system having expanded and enhanced access to restricted data relative to other database systems. An analyst can use the DP system102for a variety of different purposes. In one embodiment, the restricted data in the database106includes training data describing features of entities relevant to a particular condition. The analyst uses the DP system102to build one or more differentially private machine-learned models, such as classifiers, from the training data. The analyst can apply data describing a new entity to the machine-learned models, and use the outputs of the models to classify the new entity as having, or not having the condition. However, an adversary cannot use the information in the machined-learned models to ascertain whether individual entities described by the training set have the condition due to the differentially private nature of the models. Such models may be retained and executed within the DP system102. For example, an analyst can issue an analytical query108that causes the DP system102to interact with the restricted data in the database106to build the machine-learned models. The DP system102can then store the models within the system or an associated system. The analyst can use a new analytical query108or another interface to the system102to apply the data describing the new entity to the models. The DP system102can execute the new data on the stored models and output the classification of the entity as a DP response112. Alternatively or in addition, the DP system102can output the trained models as a DP response112, and an analyst can store and apply data to the models using different systems in order to classify the entity. Examples of the types of classifications that may be performed using such models include determining whether a person (the entity) has a medical condition. In this example, the restricted training data include health data describing patients that are labeled as having or not having a given medical condition. The analyst applies health data for a new patient to the one or more differentially private machine-learned models generated from the restricted training data in order to diagnose whether the new patient has the medical condition. Another example classification that may be performed using such models involves identifying fraudulent or otherwise exceptional financial transactions. In this example, the restricted training data includes financial transaction data associated with one or more people or institutions, where the transactions are labeled as being exceptional or not exceptional. The analyst applies financial transaction data for a new transaction to the one or more differentially private machine-learned models generated from the restricted training data in order to determine whether the new transaction is exceptional. The analyst can block, flag, or otherwise report an exceptional transaction. As shown inFIG.1, the DP system102includes a user interface150, a library152, an account management system154, a query handling engine156, a data integration module158, a privacy system160, a count engine162, and an adaptive engine164. Some embodiments of the DP system102have different or additional modules than the ones described here. Similarly, the functions can be distributed among the modules in a different manner than is described here. Certain modules and functions can be incorporated into other modules of the DP system102. 
The user interface150generates a graphical user interface on a dedicated hardware device of the DP system102or the client104in which the client104can submit an analytical query108and the desired privacy parameters, view the DP response112in the form of numerical values or images, and/or perform other interactions with the system. The client104may also use the graphical user interface to inspect the database106schemata, view an associated privacy budget, cache the DP response112to view the response later, and/or perform administrative functions. The user interface150submits properly formatted query commands to other modules of the DP system102.

The library152contains software components that can be included in external programs and that allow the client104to submit the analytical query108, receive the DP response112, and perform other functions within a script or program. For example, the client104may use the software components of the library152to construct custom data analytic programs. Each of the software components in the library152submits properly formatted query commands to other modules of the DP system102.

The account management system154receives properly formatted query commands (herein “query commands” or “QC”), parses the received query commands, and verifies that the commands are syntactically correct. Examples of query commands accommodated by the DP system102, according to one embodiment, are listed below.
QC1. Count: 'SELECT COUNT (<column>) FROM <database.table> WHERE <where_clause> BUDGET <eps> <delta>'
QC2. Median: 'SELECT MEDIAN (<column>) FROM <database.table> WHERE <where_clause> BUDGET <eps> <delta>'
QC3. Mean: 'SELECT MEAN (<column>) FROM <database.table> WHERE <where_clause> BUDGET <eps> <delta>'
QC4. Variance: 'SELECT VARIANCE (<column>) FROM <database.table> WHERE <where_clause> BUDGET <eps> <delta>'
QC5. Inter-Quartile Range: 'SELECT IQR (<column>) FROM <database.table> WHERE <where_clause> BUDGET <eps> <delta>'
QC6. Batch Gradient Descent: 'SELECT <GLM> (<columns_x>,<column_y>,<params>) FROM <database.table> WHERE <where_clause> BUDGET <eps> <delta>'
QC7. Stochastic Gradient Descent: 'SELECT SGD <GLM> (<column>) FROM <database.table> WHERE <where_clause> BUDGET <eps> <delta>'
QC8. Random Forest: 'SELECT RANDOMFOREST (<columns_x>,<columns_y>) FROM <database.table> WHERE <where_clause> BUDGET <eps> <delta>'
QC9. Histogram: 'SELECT HISTOGRAM (<column>) FROM <database.table> WHERE <where_clause_i> BUDGET <eps> <delta>'

The query handling engine156transforms the received query commands into appropriate function calls and database access commands by parsing the query command string. The function calls are specific to the query108requested by the client104, and the access commands allow access to the required database106. Different databases106require different access commands. The access commands are provided to the database integrator158.

The database integrator158receives the access commands to one or more databases106, collects the required databases, and merges them into a single data object. The data object has a structure similar to that of a database structure described in reference toFIG.2. The data object is provided to the privacy system160.

The privacy system160receives the data object from the database integrator158, appropriate function calls from the query handling engine156indicating the type of query108submitted by the client104, and privacy parameters specified for the query108.
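To make the query-command format concrete, the sketch below splits the count command of QC1 into its parts with a regular expression. The pattern and field names are assumptions inferred from the examples above, not the query handling engine156's actual parser.

```python
import re

QC_PATTERN = re.compile(
    r"SELECT\s+(?P<op>\w+)\s*\((?P<column>[^)]*)\)\s+"
    r"FROM\s+(?P<table>\S+)\s+"
    r"WHERE\s+(?P<where>.+?)\s+"
    r"BUDGET\s+(?P<eps>\S+)\s+(?P<delta>\S+)\s*$",
    re.IGNORECASE,
)

def parse_query_command(command: str) -> dict:
    """Split a query command into operation, column, table, predicate, and privacy parameters."""
    match = QC_PATTERN.match(command.strip())
    if match is None:
        raise ValueError("query command is not syntactically correct")
    fields = match.groupdict()
    fields["eps"] = float(fields["eps"])
    fields["delta"] = float(fields["delta"])
    return fields

cmd = "SELECT COUNT (age) FROM medical.patients WHERE age > 30 BUDGET 0.1 0.00001"
print(parse_query_command(cmd))
# {'op': 'COUNT', 'column': 'age', 'table': 'medical.patients',
#  'where': 'age > 30', 'eps': 0.1, 'delta': 1e-05}
```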
The privacy system160evaluates the privacy parameters against the applicable privacy budget and either denies or allows the query. If the query is denied, the privacy system160outputs a response indicating that the query did not execute. If the query is allowed, the privacy system160executes the query and outputs a DP response112to a differentially private version of the query108with respect to the database106. The privacy system160also decrements the applicable privacy budget to account for the executed query. The privacy system160uses differential privacy engines in the DP system102, such as the count engine162and/or the adaptive engine164, to execute the query. In an embodiment, the count engine162and/or adaptive engine164are components of the privacy system160. The count engine162generates a differentially private result in response to a query to count a set of data in the database106, as described in greater detail below.

The adaptive engine164executes a query such that the DP system102pursues a target accuracy for results of the query. A target accuracy is specified in terms of a relative error. The target accuracy for a query is met if the differentially private result of the query has a relative error less than or equal to the target accuracy. Relative error is the discrepancy between an exact value and an approximation of the exact value, in terms of a percentage. Specifically, the relative error is:

$$\rho = \frac{\left| v_E - v_A \right|}{v_E} \times 100\%$$

where $\rho$ is the relative error, $v_E$ is the exact value, and $v_A$ is the approximation. For example, assume a database stores information about patients in a hospital. A count query executed on the database requests a count of all patients in the hospital named Charles. The actual number of patients named Charles may be 100, but the DP system102provides a differentially private result with a value of 90. Here, $v_E = 100$ and $v_A = 90$. As such, the relative error $\rho$ is 10%. This indicates that the differentially private result, 90, is 10% off from the exact value, 100.

A query executed by the adaptive engine164is an adaptive query that specifies a maximum privacy spend in terms of one or more privacy parameters, such as ε as described below, and a target accuracy in terms of a relative error percentage. For example, an adaptive query may specify a maximum privacy spend of ε=1 and a target accuracy of 10%. The adaptive query also specifies one or more operations to perform on data and one or more relations indicating the data on which the adaptive engine164is to perform the one or more operations. The adaptive engine164performs the operations and iteratively adjusts the noise added to the results, then checks whether the adjusted results of the operations satisfy the target accuracy. Each iteration uses a fraction of the maximum privacy spend. If the results of the operations at a given iteration do not satisfy the target accuracy, the adaptive engine164performs another iteration using a larger portion of the maximum privacy spend. The adaptive engine164ceases iterating when either the maximum privacy spend is spent or the target accuracy is achieved. For example, after a first iteration, 1/100 of the maximum privacy spend has been used and the results have a relative error of 20%, greater than a target accuracy of 10% relative error. As such, the adaptive engine164performs an additional iteration, spending 1/50 of the maximum privacy spend.
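A direct transcription of the relative-error definition above, checked against the Charles example, might look like the following (the function name is illustrative):

```python
def relative_error(exact: float, approximation: float) -> float:
    """Relative error as a percentage: |v_E - v_A| / v_E * 100%."""
    return abs(exact - approximation) / exact * 100.0

# The example above: exact count 100, differentially private result 90 -> 10% relative error.
assert relative_error(100, 90) == 10.0
```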
If the results of this second iteration have a relative error of 9%, the adaptive engine164ceases to iterate and provides the results of the second iteration to the client104, as their relative error is within the target accuracy of 10%. Using the techniques described herein, the DP system102can provide differentially private results that satisfy a target accuracy while minimizing the privacy spend. As such, the DP system102can avoid providing results that lack analytical utility due to a high amount of noise injected into the results. Simultaneously, the DP system102can avoid overspending privacy parameters to produce results for a query.

FIG.2illustrates an example database structure, according to one embodiment. The database200includes a data table, which may be referred to as a matrix, with a number of rows and columns. Each row is an entry of the database and each column is a feature of the database. Thus, each row contains a data entry characterized by a series of feature values for the data entry. For example, as shown inFIG.2, the example database200contains a data table with 8 entries and 11 features, and illustrates a list of patient profiles. Each patient is characterized by a series of feature values that contain information on the patient's height (Feature 1), country of residence (Feature 2), age (Feature 10), and whether the patient has contracted a disease (Feature 11). A row is also referred to as a “record” in the database106. The database106may include more than one data table. Henceforth a data table may be referred to as a “table.” The feature values in the database200may be numerical in nature, e.g., Features 1 and 10, or categorical in nature, e.g., Features 2 and 11. In the case of categorical feature values, each category may be denoted as an integer. For example, in Feature 11 ofFIG.2, “0” indicates that the patient has not contracted a disease, and “1” indicates that the patient has contracted a disease.

Definition of Differential Privacy

For a given query108, the privacy system160receives a data object X, function calls indicating the type of query108, privacy parameters specified by the client104, and outputs a DP response112to a differentially private version of the query108with respect to X. Each data object X is a collection of row vectors $x_i$, $i = 1, 2, \ldots, n$, in which each row vector $x_i$ has a series of $p$ elements $x_{ij}$, $j = 1, 2, \ldots, p$. A query M satisfies the definition of ε-differential privacy if, for all:

$$\forall X, X' \in \mathbb{D},\ \forall S \subseteq \mathrm{Range}(M):\quad \frac{\Pr[M(X) \in S]}{\Pr[M(X') \in S]} \leq e^{\varepsilon}$$

where $\mathbb{D}$ is the space of all possible data objects, S is an output space of query M, and neighboring databases are defined as two data objects X, X′ where one of X, X′ has all the same entries as the other, plus one additional entry. That is, given two neighboring data objects X, X′ in which one has an individual's data entry (the additional entry), and the other does not, there is no output of query M that an adversary can use to distinguish between X, X′. That is, an output of such a query M that is differentially private reveals little to no information about individual records in the data object X. The privacy parameter ε controls the amount of information that the query M reveals about any individual data entry in X, and represents the degree of information released about the entries in X. For example, in the definition given above, a small value of ε indicates that the probability an output of query M will disclose information on a specific data entry is small, while a large value of ε indicates the opposite.
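For intuition, the following sketch numerically checks that a Laplace-noised count satisfies the ratio bound above: for two neighboring data objects whose counts differ by one, the ratio of output densities never exceeds e^ε (which implies the set-based ratio). The specific counts, ε value, and grid are assumptions chosen for illustration.

```python
import numpy as np

epsilon = 0.5
b = 1.0 / epsilon   # Laplace scale for a count query (sensitivity 1), as in the count engine below

def laplace_pdf(x, loc, scale):
    # Density of a Laplace distribution centered at loc with scale parameter b.
    return np.exp(-np.abs(x - loc) / scale) / (2.0 * scale)

# Neighboring data objects: counts q = 100 and q' = 101 (one additional entry).
outputs = np.linspace(80.0, 120.0, 401)
ratio = laplace_pdf(outputs, 100.0, b) / laplace_pdf(outputs, 101.0, b)
print(bool(ratio.max() <= np.exp(epsilon) + 1e-12))   # True: the density ratio stays below e^epsilon
```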
As another definition of differential privacy, a query M is (ε,δ)-differentially private if, for all neighboring data objects X, X′∈𝔻 and all S⊆Range(M): Pr[M(X)∈S]/Pr[M(X′)∈S] ≤ e^ε + δ. The privacy parameter δ measures the improbability of the output of query M satisfying ε-differential privacy. As discussed in reference toFIG.1, the client104may specify the desired values for the privacy parameters (ε, δ) for a query108. There are three important definitions for discussing the privacy system160: global sensitivity, local sensitivity, and smooth sensitivity. Global sensitivity of a query M is defined as GSM(X) = max over X, X′ with d(X, X′)=1 of ‖M(X)−M(X′)‖, where X, X′ are any neighboring data objects, such that d(X, X′)=1. This states that the global sensitivity is the most the output of query M could change by computing M on X and X′. The local sensitivity of a query M on the data object X is given by: LSM(X) = max over X′ with d(X, X′)=1 of ‖M(X)−M(X′)‖, where the set {X′: d(X, X′)=1} denotes all data objects that have at most one entry that is different from X. That is, the local sensitivity LSM(X) is the sensitivity of the output of the query M on data objects X′ that have at most one different entry from X, measured by a norm function. Related to the local sensitivity LSM(X), the smooth sensitivity given a parameter β is given by: SM(X; β) = LSM(X)·e^(−β·d(X,X′)), where d(X, X′) denotes the number of entries that differ between X and X′. Notation for Random Variables The notation in this section is used for the remainder of the application to denote the following random variables. 1) G(σ²) denotes a zero-centered Gaussian random variable with the probability density function f(x|σ²) = (1/(σ√(2π)))·e^(−x²/(2σ²)). 2) L(b) denotes a zero-centered Laplacian random variable from a Laplace distribution with the probability density function f(x|b) = (1/(2b))·e^(−|x|/b). 3) C(γ) denotes a zero-centered Cauchy random variable with the probability density function f(x|γ) = 1/(πγ·(1+(x/γ)²)). Further, a vector populated with random variables R as its elements is denoted by v(R). A matrix populated with random variables R as its elements is denoted by M(R). Count Engine Turning back toFIG.1, the count engine162produces a DP response112responsive to the differentially private security system102receiving a query108for counting the number of entries in a column of the data object X that satisfy a condition specified by the client104, given privacy parameters ε and/or δ. An example query command for accessing the count engine162is given in QC1 above. For the example data object X shown inFIG.2, the client104may submit a query108requesting a DP response112indicating the number of patients that are above the age of 30. The count engine162retrieves the count q from X. If privacy parameter δ is equal to zero or is not used, the count engine162returns y ≈ q + L(c1·(1/ε)) as the DP response112for display by the user interface150, where c1is a constant. An example value for c1may be 1. If the privacy parameter δ is non-zero, the count engine162returns y ≈ q + G(c1·2·log(2/δ)·(1/ε²)) as the DP response112for display on the user interface150, where c1is a constant. An example value for c1may be 1. Adaptive Engine FIG.3illustrates an adaptive engine164, according to one embodiment. The adaptive engine164includes an error estimator310, an iterative noise calibrator320, a secondary noise generator330, and an accuracy manager340. The adaptive engine164receives an adaptive query specifying a target accuracy in terms of a relative error value and a maximum privacy spend in terms of an ε value.
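The two noise formulas for the count engine162can be read as in the following sketch; this is an illustrative rendering (using the example value c1=1 by default and treating G(σ²) as parameterized by its variance, per the notation above), not the engine's actual implementation:

```python
import numpy as np

def dp_count(q, epsilon, delta=0.0, c1=1.0, rng=None):
    """Return a differentially private count for an exact count q."""
    rng = rng or np.random.default_rng()
    if delta == 0.0:
        # y ≈ q + L(c1 * 1/epsilon), where L(b) is Laplace noise with scale b
        return q + rng.laplace(loc=0.0, scale=c1 / epsilon)
    # y ≈ q + G(c1 * 2*log(2/delta) / epsilon^2), where G(sigma^2) is Gaussian
    # noise specified by its variance, so the sampler takes the square root
    variance = c1 * 2.0 * np.log(2.0 / delta) / epsilon**2
    return q + rng.normal(loc=0.0, scale=np.sqrt(variance))
```

For the earlier hospital example, calling dp_count(100, epsilon=1.0) returns the exact count of 100 perturbed by Laplace noise of scale 1.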
The adaptive query also specifies a count operation to be performed on a set of data. Although described herein with reference to a count operation, the adaptive engine164can be used with alternative operations in alternative embodiments. Upon producing a differentially private result, the adaptive engine164sends the differentially private result to the client104. The adaptive engine164may also send a notification identifying the relative error of the differentially private result. The error estimator310approximates the relative error of a differentially private result. Depending upon the embodiment, the error estimator310can be a plug-in estimator or a Bayesian estimator. The error estimator310generates a temporary result by applying the noise used to produce the differentially private result into the differentially private result. The error estimator310then determines a relative error between the differentially private result and the temporary result. The adaptive engine164uses this relative error to approximate the relative error of the differentially private result as compared to the original result. The iterative noise calibrator320iteratively calibrates the noise of a differentially private result until the differentially private result has a relative error no greater than the target accuracy or the maximum privacy spend has been used, or both. Initially, the iterative noise calibrator320receives an initial differentially private result from a differentially private operation, such as a differentially private count performed by the count engine162. The received initial differentially private result is broken down into its original result and the noise value injected into the original result to provide differential privacy. The iterative noise calibrator320also receives an indicator of a fraction of the maximum privacy spend which was used to generate the initial differentially private result. For example, the fraction of the maximum privacy spend, the “fractional privacy spend,” may be 1/100 the maximum privacy spend S, i.e., S/100. For a given iteration, the iterative noise calibrator320generates a corresponding fractional privacy spend such that it is larger than any fractional privacy spends of preceding iterations. For example, if the iterative noise calibrator320receives an indication that the fractional privacy spend to produce the initial differentially private result was S/100, a fractional privacy spend for a first iteration may be S/50, a fractional privacy spend for a second iteration may be S/25, and so on. The fractional privacy spend of an iteration increments by a specified amount from one iteration to the next. The increment can be based on the amount of the fractional privacy spend of an immediately preceding iteration. For example, the amount by which the fractional privacy spend of one iteration increases from a previous fractional privacy spend can be a doubling of the previous fractional privacy spend. In an embodiment, the amount by which the fractional privacy spend of one iteration increases from a previous fractional privacy spend varies proportional to the difference between the target accuracy and a relative error of a differentially private result of a preceding iteration. The function by which the fractional privacy spend increases in proportion to the difference between the target accuracy and a relative error depends upon the embodiment. 
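As a rough, hypothetical sketch of this control flow only (it draws fresh Laplace noise at each step as a stand-in for the correlated resampling from the secondary noise generator330described below, and uses a simple doubling schedule with a plug-in error estimate), the iteration can be pictured as:

```python
import numpy as np

def relative_error(exact, approx):
    """rho = |v_E - v_A| / v_E * 100%."""
    return abs(exact - approx) / abs(exact) * 100.0

def adaptive_count(exact_count, max_spend, target_accuracy_pct,
                   initial_fraction=1 / 100, rng=None):
    rng = rng or np.random.default_rng()
    fraction = initial_fraction
    while True:
        # Stand-in for noise calibration: the real engine resamples the noise
        # from the secondary distribution rather than drawing it fresh.
        noise = rng.laplace(loc=0.0, scale=1.0 / (fraction * max_spend))
        dp_result = exact_count + noise
        # Plug-in estimate: re-inject the same noise into the DP result and
        # measure the relative error between the two values.
        estimated_error = relative_error(dp_result, dp_result + noise)
        if estimated_error <= target_accuracy_pct or fraction >= 1.0:
            return dp_result, estimated_error, fraction
        fraction = min(2 * fraction, 1.0)  # doubling schedule, capped at the maximum spend

result, err, fraction_spent = adaptive_count(100, max_spend=1.0, target_accuracy_pct=10.0)
```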
As an example of this variation, a first iteration produces a differentially private result with a relative error of 20%, where the target accuracy is 10%. As such, the fractional privacy spend may double. However, if after the first iteration the differentially private result has a relative error of 12%, then the second iteration may generate a fractional privacy spend that is only 20% larger than the fractional privacy spend used in the first iteration. In this second embodiment, the amount by which the fractional privacy spend can increase from one iteration to the next may be capped. For example, the fractional privacy spend may be capped to never more than double a preceding fractional privacy spend, e.g., S/50 will never be immediately followed by a larger fractional privacy spend than S/25, regardless of what the function outputs as the increment from the one fractional privacy spend to another. For the given iteration, the iterative noise calibrator320generates a new noise value by sampling the secondary noise generator330using the new fractional privacy spend and the fractional privacy spend of the immediately preceding iteration (or, in the case of the first iteration, the fractional privacy spend indicated as used by the operation specified in the query). This sampling is described in greater detail below with reference to the secondary noise generator330. The iterative noise calibrator320incorporates the new noise value into the differentially private result by injecting the new noise value into the original result from the operation specified in the query and updating the differentially private result to the resultant value. After incorporating the new noise into the differentially private result, the iterative noise calibrator320checks whether the differentially private result satisfies the target accuracy using the error estimator310. If the differentially private result satisfies the target accuracy, that is, its relative error is no greater than the target accuracy, the iterative noise calibrator320ceases to iterate and sends the differentially private result to the client104. If the differentially private result does not satisfy the target accuracy, the iterative noise calibrator320proceeds to another iteration. If an iteration cannot increase the fractional privacy spend, i.e., the fractional privacy spend equals the maximum privacy spend, the iterative noise calibrator320stops iterating. If so, the adaptive engine164may send the differentially private result to the client104with a notification that the target accuracy could not be reached. The notification may indicate the achieved accuracy, i.e., the relative error. The secondary noise generator330produces a secondary distribution different from the distribution used to produce the initial differentially private result. In an embodiment, the secondary distribution is a four-part mixture distribution. Specifically, the four-part mixture distribution may be one part Dirac delta function, two parts truncated exponential functions, and one part exponential function. In an embodiment, the distribution is as follows, where y is the new noise value, x is the previous noise value, a previous fractional privacy spend is ε1, and a new fractional privacy spend is ε2: (ε1/ε2)·e^(−(ε2−ε1)|x|)·δ(y−x) + ((ε2²−ε1²)/(2ε2))·e^(−ε1|y−x|−ε2|y|+ε1|x|). The iterative noise calibrator320samples the secondary noise generator330to generate the new noise value for injection into the result to provide differential privacy to the result.
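A hypothetical Python rendering of this resampling step is sketched below; it assumes ε2 > ε1, chooses a random sign when x = 0 (a detail the text leaves open), and follows the case-by-case sampling procedure spelled out in the next passage:

```python
import math
import random

def resample_noise(x, eps1, eps2, rng=random):
    """Redraw a noise value x, calibrated to fractional spend eps1, for a
    larger fractional spend eps2, using the four-part mixture above."""
    d = eps2 - eps1                     # assumes eps2 > eps1
    s = eps1 + eps2
    ax = abs(x)
    sign = math.copysign(1.0, x) if x != 0 else rng.choice((-1.0, 1.0))
    p_keep = (eps1 / eps2) * math.exp(-d * ax)            # Dirac delta part: keep x
    p_neg = d / (2 * eps2)                                # exponential tail below zero
    p_mid = (s / (2 * eps2)) * (1 - math.exp(-d * ax))    # truncated part on [0, |x|]
    u = rng.random()                                      # remaining mass: tail beyond |x|
    if u < p_keep:
        return x
    if u < p_keep + p_neg:
        z = math.log(1 - rng.random()) / s                # density ∝ e^((eps1+eps2)z), z <= 0
        return sign * z
    if u < p_keep + p_neg + p_mid:
        v = rng.random()                                  # inverse-CDF draw from the
        z = -math.log(1 - v * (1 - math.exp(-d * ax))) / d  # truncated exponential on [0, |x|]
        return sign * z
    z = ax - math.log(1 - rng.random()) / s               # density ∝ e^(-(eps1+eps2)z), z >= |x|
    return sign * z
```

A quick consistency check: the four branch probabilities sum to one, since the terms containing e^(−(ε2−ε1)|x|) cancel.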
In an embodiment, the secondary noise generator330is sampled as follows, where a previous noise value is x, a new noise sample is y, a previous fractional privacy spend is ε1, a new fractional privacy spend is ε2, and z is drawn from the secondary distribution:
switch randomly
case with probability (ε1/ε2)·e^(−(ε2−ε1)|x|): return y = x.
case with probability (ε2−ε1)/(2ε2): draw z ~ e^((ε1+ε2)z) for z ≤ 0 (0 otherwise), and return y = sgn(x)·z.
case with probability ((ε1+ε2)/(2ε2))·(1−e^(−(ε2−ε1)|x|)): draw z ~ e^(−(ε2−ε1)z) for 0 ≤ z ≤ |x| (0 otherwise), and return y = sgn(x)·z.
case with probability ((ε2−ε1)/(2ε2))·e^(−(ε2−ε1)|x|): draw z ~ e^(−(ε1+ε2)z) for z ≥ |x| (0 otherwise), and return y = sgn(x)·z.
end switch
Processes FIG.4illustrates a process for executing a query with adaptive differential privacy, according to one embodiment. The DP system102receives410, from the client104, a request to perform a query on a set of data. The query includes a target accuracy and a maximum privacy spend for the query. The DP system102performs420an operation to produce a result, such as a count operation, then injects the result with noise sampled from a Laplace distribution based on a fraction of the maximum privacy spend to produce a differentially private result. The DP system102iteratively calibrates430the noise value of the differentially private result using a secondary distribution different from the Laplace distribution and a new fractional privacy spend. The new fractional privacy spend is generated to be larger than any fractional privacy spends of preceding iterations. The DP system102generates a new noise value sampled from the secondary distribution and incorporates it into the differentially private result to calibrate the noise of the differentially private result. The DP system102determines whether the calibrated differentially private result satisfies the target accuracy by determining a relative error of the calibrated differentially private result using an error estimator and comparing the relative error to the target accuracy. If the relative error is at most the target accuracy, the differentially private result satisfies the target accuracy. The DP system102iterates until an iteration uses the maximum privacy spend or a relative error of the differentially private result is determined to satisfy the target accuracy, or both. The DP system102then sends440the differentially private result to the client104in response to the query. The DP system102may also send the relative error of the differentially private result to the client104. Computing Environment FIG.5is a block diagram illustrating components of an example machine able to read instructions from a machine-readable medium and execute them in a processor or controller, according to one embodiment. Specifically,FIG.5shows a diagrammatic representation of a machine in the example form of a computer system500. The computer system500can be used to execute instructions524(e.g., program code or software) for causing the machine to perform any one or more of the methodologies (or processes) described herein. In alternative embodiments, the machine operates as a standalone device or a connected (e.g., networked) device that connects to other machines.
In a networked deployment, the machine may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a server computer, a client computer, a personal computer (PC), a tablet PC, a set-top box (STB), a smartphone, an internet of things (IoT) appliance, a network router, switch or bridge, or any machine capable of executing instructions524(sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute instructions524to perform any one or more of the methodologies discussed herein. The example computer system500includes one or more processing units (generally processor502). The processor502is, for example, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a controller, a state machine, one or more application specific integrated circuits (ASICs), one or more radio-frequency integrated circuits (RFICs), or any combination of these. The computer system500also includes a main memory504. The computer system may include a storage unit516. The processor502, memory504and the storage unit516communicate via a bus508. In addition, the computer system500can include a static memory506, a display driver510(e.g., to drive a plasma display panel (PDP), a liquid crystal display (LCD), or a projector). The computer system500may also include alphanumeric input device512(e.g., a keyboard), a cursor control device514(e.g., a mouse, a trackball, a joystick, a motion sensor, or other pointing instrument), a signal generation device518(e.g., a speaker), and a network interface device520, which also are configured to communicate via the bus508. The storage unit516includes a machine-readable medium522on which is stored instructions524(e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions524may also reside, completely or at least partially, within the main memory504or within the processor502(e.g., within a processor's cache memory) during execution thereof by the computer system500, the main memory504and the processor502also constituting machine-readable media. The instructions524may be transmitted or received over a network526via the network interface device520. While machine-readable medium522is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store the instructions524. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing instructions524for execution by the machine and that cause the machine to perform any one or more of the methodologies disclosed herein. The term “machine-readable medium” includes, but is not limited to, data repositories in the form of solid-state memories, optical media, and magnetic media. | 41,382 |
11861033 | DETAILED DESCRIPTION Reference will now be made in detail to specific example embodiments for carrying out the inventive subject matter. Examples of these specific embodiments are illustrated in the accompanying drawings, and specific details are outlined in the following description to provide a thorough understanding of the subject matter. It will be understood that these examples are not intended to limit the scope of the claims to the illustrated embodiments. On the contrary, they are intended to cover such alternatives, modifications, and equivalents as may be included within the scope of the disclosure. In the present disclosure, physical units of data that are stored in a data platform—and that make up the content of, e.g., database tables in customer accounts—are referred to as micro-partitions. In different implementations, a data platform may store metadata in micro-partitions as well. The term “micro-partitions” is distinguished in this disclosure from the term “files,” which, as used herein, refers to data units such as image files (e.g., Joint Photographic Experts Group (JPEG) files, Portable Network Graphics (PNG) files, etc.), video files (e.g., Moving Picture Experts Group (MPEG) files, MPEG-4 (MP4) files, Advanced Video Coding High Definition (AVCHD) files, etc.), Portable Document Format (PDF) files, documents that are formatted to be compatible with one or more word-processing applications, documents that are formatted to be compatible with one or more spreadsheet applications, and/or the like. If stored internal to the data platform, a given file is referred to herein as an “internal file” and may be stored in (or at, or on, etc.) what is referred to herein as an “internal storage location.” If stored external to the data platform, a given file is referred to herein as an “external file” and is referred to as being stored in (or at, or on, etc.) what is referred to herein as an “external storage location.” These terms are further discussed below. Computer-readable files come in several varieties, including unstructured files, semi-structured files, and structured files. These terms may mean different things to different people. As used herein, examples of unstructured files include image files, video files, PDFs, audio files, and the like; examples of semi-structured files include JavaScript Object Notation (JSON) files, eXtensible Markup Language (XML) files, and the like; and examples of structured files include Variant Call Format (VCF) files, Keithley Data File (KDF) files, Hierarchical Data Format version 5 (HDF5) files, and the like. As known to those of skill in the relevant arts, VCF files are often used in the bioinformatics field for storing, e.g., gene-sequence variations, KDF files are often used in the semiconductor industry for storing, e.g., semiconductor-testing data, and HDF5 files are often used in industries such as the aeronautics industry, in that case for storing data such as aircraft-emissions data. Numerous other examples of unstructured-file types, semi-structured-file types, and structured-file types, as well as example uses thereof, could certainly be listed here as well and will be familiar to those of skill in the relevant arts. Different people of skill in the relevant arts may classify types of files differently among these categories and may use one or more different categories instead of or in addition to one or more of these. As used herein, the term “view” indicates a named SELECT statement, conceptually similar to a table. 
In some aspects, a view can be secure, which prevents queries from getting information on the underlying data obliquely. As used herein, the term “materialized view” indicates a view that is eagerly computed rather than lazily (e.g., as a standard view). In some aspects, the implementation of materialized views has overlapped with change tracking functionality. As used herein, the term “stream” refers to a table and a timestamp. In some aspects, a stream may be used to iterate over changes to a table. When a stream is read inside a Data Manipulation Language (DML) statement, its timestamp may be transactionally advanced to the greater timestamp of its time interval. As used herein, the term “identity resolution” refers to the process of matching fragments of personally identifiable information (PII) across devices and touchpoints to a single profile, often a person or a household. This profile aids in building a cohesive, multi-channel view of a consumer. An identity resolution process can generate a secure identifier (e.g., a secure key) of the person or household (e.g., a key or set of keys that represents a different component of an identity). As used herein, the term “data enrichment” refers to a process of obtaining additional data (e.g., demographic data) related to (and supplementing) an existing set of data (e.g., an existing set of PII). As used herein, the term “task” indicates an object (e.g., a data object) that can execute (e.g., user-managed or managed by a network-based database system) any one of the following types of SQL code: a single SQL statement, a call to a stored procedure, and procedural logic using scripting. In some aspects, the disclosed identity resolution and data enrichment functionalities can exist in a network-based database system (e.g., as illustrated inFIGS.1-3) or can be leveraged using an existing API (e.g., via one or more external functions). More specifically, the disclosed identity resolution and data enrichment techniques are built on top of a native applications framework, which allows a data provider to build an application that data consumers can “install” in their database system accounts to use. Example features of the network-based database system which can be used in connection with identity resolution and data enrichment include configuring and using secure functions, data sharing, data streams (also referred to as streams), and tasks. Such features can work in concert to automate one or more aspects of the identity resolution and data enrichment functionalities. The disclosed techniques can be used for configuring an identity resolution and enrichment (IRE) manager to perform identity resolution and data enrichment functionalities using an application framework. There are two parties in an identity resolution process—a data provider (also referred to as provider) and a data consumer (also referred to as consumer). The data consumer has a data set with PII which needs identity resolution. The data provider can provide proprietary functionality that accomplishes identity resolution for identity information (e.g., PII of a user) available at the data consumer. Both the data consumer and the data provider can be tenants (or subscribers) of services provided by the network-based database system (e.g., services that can include the disclosed identity resolution and data enrichment functionalities of the IRE manager). 
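As a toy illustration of what an identity resolution step produces (the actual matching logic of a provider is proprietary and far richer; the normalization rules, salt, and field names here are hypothetical), two differently formatted PII fragments can be normalized and hashed to the same opaque profile key:

```python
import hashlib

def normalize_email(email: str) -> str:
    return email.strip().lower()

def normalize_phone(phone: str) -> str:
    return "".join(ch for ch in phone if ch.isdigit())[-10:]

def resolve_identity(pii: dict, salt: str = "provider-secret") -> str:
    """Map PII fragments (e-mail, phone, ...) to a single opaque profile key."""
    parts = []
    if "email" in pii:
        parts.append("e:" + normalize_email(pii["email"]))
    if "phone" in pii:
        parts.append("p:" + normalize_phone(pii["phone"]))
    canonical = "|".join(sorted(parts))
    return hashlib.sha256((salt + canonical).encode()).hexdigest()

# Two touchpoints with differently formatted PII resolve to the same profile key.
a = resolve_identity({"email": "Jane.Doe@Example.com", "phone": "(555) 010-2000"})
b = resolve_identity({"email": " jane.doe@example.com", "phone": "555-010-2000"})
assert a == b
```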
In this regard, access to one or more of the disclosed identity resolution and data enrichment functionalities provided by an IRE manager can be configured (or enabled) in an account of the data provider or the data consumer at the network-based database system. In some aspects, deployment of the identity resolution framework associated with the IRE manager consists of creating secure objects and data shares in the accounts of the data consumer and data provider at the network-based database system. The framework can be flexible enough to incorporate additional identity resolution and data enrichment functionalities, as needed. In some aspects, the framework can be deployed across two accounts on the same cloud provider and region. In the event the parties are on different providers or regions, one of the parties can replicate their data/objects to the other party's provider or region. The disclosed identity resolution and data enrichment techniques can be used to replace slower and often less secure existing identify resolution and data enrichment methods, including compiling desired data, writing that data to a flat, delimited file, then uploading that file to a secure file transfer protocol (sFTP) site. Once received, the data provider copies the file, processes the data, and returns an output file to the sFTP location, for the requesting party to download. Once downloaded, the requesting party has to ingest the results into databases. The disclosed identity resolution and data enrichment techniques also replace the “embedded” solution where the provider's resolution/enrichment logic resides in the provider account in the database system, but involves using streams/tasks to automate the request/response processes. Some drawbacks of this approach include the following: the consumer has to share their data with the provider, and the provider incurs compute costs for each request from the consumer. Configuring the disclosed techniques using a native applications framework (e.g., an application (or app) of the data provider configured to execute at the account of the data consumer) resolves both of the above drawbacks since the consumer no longer has to share data with the provider, and the consumer incurs compute with each request. Some additional advantages of the disclosed techniques include the following: (a) the app can write to the database in the account of the data consumer; (b) the account of the data provider determines which objects in the database are visible to the app executing in the account of the data consumer; (c) the provider's data continues to be hidden from the consumer; (d) the provider's data is only shared with the app; (e) the consumer no longer has to share their PII data directly with the provider; (f) the provider no longer has to create streams on consumer data and tasks to process requests from the consumer (e.g., the consumer can make requests on demand via the app, and results are generated faster, with the absence of provider tasks); (g) the provider passes the compute costs to the consumer; (h) the disclosed techniques can be integrated with data clean rooms to offer identity resolution “on-the-fly”. The various embodiments that are described herein are described with reference where appropriate to one or more of the various figures. An example computing environment using an IRE manager for configuring identity resolution and data enrichment functionalities is discussed in connection withFIGS.1-3. 
Example stream-related configurations which can be used with the disclosed identity resolution and data enrichment functions are discussed in connection withFIG.4. Example identity resolution and data enrichment frameworks are discussed in connection withFIGS.5-7. A more detailed discussion of example computing devices that may be used in connection with the disclosed techniques is provided in connection withFIG.8. FIG.1illustrates an example computing environment100that includes a database system in the example form of a network-based database system102, in accordance with some embodiments of the present disclosure. To avoid obscuring the inventive subject matter with unnecessary detail, various functional components that are not germane to conveying an understanding of the inventive subject matter have been omitted fromFIG.1. However, a skilled artisan will readily recognize that various additional functional components may be included as part of the computing environment100to facilitate additional functionality that is not specifically described herein. In other embodiments, the computing environment may comprise another type of network-based database system or a cloud data platform. For example, in some aspects, the computing environment100may include a cloud computing platform101with the network-based database system102, and storage platform104(also referred to as cloud storage platforms). The cloud computing platform101provides computing resources and storage resources that may be acquired (purchased) or leased (e.g., by data providers and data consumers), and configured to execute applications and store data. The cloud computing platform101may host a cloud computing service103that facilitates storage of data on the cloud computing platform101(e.g., data management and access) and analysis functions (e.g. SQL queries, analysis), as well as other processing capabilities (e.g., performing identity resolution and data enrichment functions described herein). The cloud computing platform101may include a three-tier architecture: data storage (e.g., storage platforms104and122), an execution platform110(e.g., providing query processing), and a compute service manager108providing cloud services (e.g., identity resolution and data enrichment services provided by the IRE manager130). It is often the case that organizations that are customers of a given data platform also maintain data storage (e.g., a data lake) that is external to the data platform (i.e., one or more external storage locations). For example, a company could be a customer of a particular data platform and also separately maintain storage of any number of files—be they unstructured files, semi-structured files, structured files, and/or files of one or more other types—on, as examples, one or more of their servers and/or on one or more cloud-storage platforms such as AMAZON WEB SERVICES™ (AWS™), MICROSOFT® AZURE®, GOOGLE CLOUD PLATFORM™, and/or the like. The customer's servers and cloud-storage platforms are both examples of what a given customer could use as what is referred to herein as an external storage location. The cloud computing platform101could also use a cloud-storage platform as what is referred to herein as an internal storage location concerning the data platform. 
From the perspective of the network-based database system102of the cloud computing platform101, one or more files that are stored at one or more storage locations are referred to herein as being organized into one or more of what is referred to herein as either “internal stages” or “external stages.” Internal stages are stages that correspond to data storage at one or more internal storage locations, and where external stages are stages that correspond to data storage at one or more external storage locations. In this regard, external files can be stored in external stages at one or more external storage locations, and internal files can be stored in internal stages at one or more internal storage locations, which can include servers managed and controlled by the same organization (e.g., company) that manages and controls the data platform, and which can instead or in addition include data-storage resources operated by a storage provider (e.g., a cloud-storage platform) that is used by the data platform for its “internal” storage. The internal storage of a data platform is also referred to herein as the “storage platform” of the data platform. It is further noted that a given external file that a user stores at a given external storage location may or may not be stored in an external stage in the external storage location—i.e., in some data-platform implementations, it is a customer's choice whether to create one or more external stages (e.g., one or more external-stage objects) in the customer's data-platform account as an organizational and functional construct for conveniently interacting via the data platform with one or more external files. As shown, the network-based database system102of the cloud computing platform101is in communication with the cloud storage platforms104and122(e.g., AWS®, Microsoft Azure Blob Storage®, or Google Cloud Storage), client device114(e.g., a data provider), and data consumer115via network106. The network-based database system102is a network-based system used for reporting and analysis of integrated data from one or more disparate sources including one or more storage locations within the cloud storage platform104. The cloud storage platform104comprises a plurality of computing machines and provides on-demand computer system resources such as data storage and computing power to the network-based database system102. The network-based database system102comprises a compute service manager108, an execution platform110, and one or more metadata databases112. The network-based database system102hosts and provides data reporting and analysis services (as well as additional services such as the disclosed identity resolution and data enrichment functions) to multiple client accounts, including an account of the data provider associated with client device114and an account of the data consumer115. In some embodiments, the compute service manager108comprises the IRE manager130which can configure and provide the identity resolution and data enrichment functions to accounts of tenants of the network-based database system102(e.g., an account of the data provider associated with client device114and an account of the data consumer115). A more detailed description of the identity resolution and data enrichment functions provided by the IRE manager130is provided in connection withFIGS.4-7. The compute service manager108coordinates and manages operations of the network-based database system102. 
The compute service manager108also performs query optimization and compilation as well as managing clusters of computing services that provide compute resources (also referred to as “virtual warehouses”). The compute service manager108can support any number of client accounts such as end-users providing data storage and retrieval requests, accounts of data providers, accounts of data consumers, system administrators managing the systems and methods described herein, and other components/devices that interact with the compute service manager108. The compute service manager108is also in communication with a client device114. The client device114corresponds to a user of one of the multiple client accounts (e.g., a data provider) supported by the network-based database system102. The data provider may utilize application connector128at the client device114to submit data storage, retrieval, and analysis requests to the compute service manager108as well as to access other services provided by the compute service manager108(e.g., identity resolution and data enrichment functions). Client device114(also referred to as user device114) may include one or more of a laptop computer, a desktop computer, a mobile phone (e.g., a smartphone), a tablet computer, a cloud-hosted computer, cloud-hosted serverless processes, or other computing processes or devices may be used to access services provided by the cloud computing platform101(e.g., cloud computing service103) by way of a network106, such as the Internet or a private network. In the description below, actions are ascribed to users, particularly consumers and providers. Such actions shall be understood to be performed concerning client device (or devices)114operated by such users. For example, a notification to a user may be understood to be a notification transmitted to client device114, input or instruction from a user may be understood to be received by way of the client device114, and interaction with an interface by a user shall be understood to be interaction with the interface on the client device114. In addition, database operations (joining, aggregating, analysis, etc.) ascribed to a user (consumer or provider) shall be understood to include performing such actions by the cloud computing service103in response to an instruction from that user. In some aspects, a data consumer115can communicate with the client device114to access functions offered by the data provider. Additionally, the data consumer can access functions (e.g., identity resolution and data enrichment functions) offered by the network-based database system102via network106. The compute service manager108is also coupled to one or more metadata databases112that store metadata about various functions and aspects associated with the network-based database system102and its users. For example, a metadata database112may include a summary of data stored in remote data storage systems as well as data available from a local cache. Additionally, a metadata database112may include information regarding how data is organized in remote data storage systems (e.g., the cloud storage platform104) and the local caches. Information stored by a metadata database112allows systems and services to determine whether a piece of data needs to be accessed without loading or accessing the actual data from a storage device. 
The compute service manager108is further coupled to the execution platform110, which provides multiple computing resources (e.g., execution nodes) that execute, for example, various data storage, data retrieval, and data processing tasks. The execution platform110is coupled to storage platform104and cloud storage platforms122. The storage platform104comprises multiple data storage devices120-1to120-N. In some embodiments, the data storage devices120-1to120-N are cloud-based storage devices located in one or more geographic locations. For example, the data storage devices120-1to120-N may be part of a public cloud infrastructure or a private cloud infrastructure. The data storage devices120-1to120-N may be hard disk drives (HDDs), solid-state drives (SSDs), storage clusters, Amazon S3™ storage systems, or any other data-storage technology. Additionally, the cloud storage platform104may include distributed file systems (such as Hadoop Distributed File Systems (HDFS)), object storage systems, and the like. In some embodiments, at least one internal stage126may reside on one or more of the data storage devices120-1-120-N, and at least one external stage124may reside on one or more of the cloud storage platforms122. In some embodiments, communication links between elements of the computing environment100are implemented via one or more data communication networks, such as network106. These data communication networks may utilize any communication protocol and any type of communication medium. In some embodiments, the data communication networks are a combination of two or more data communication networks (or sub-Networks) coupled with one another. In alternate embodiments, these communication links are implemented using any type of communication medium and any communication protocol. The compute service manager108, metadata database(s)112, execution platform110, and storage platform104, are shown inFIG.1as individual discrete components. However, each of the compute service manager108, metadata database(s)112, execution platform110, and storage platforms104and122may be implemented as a distributed system (e.g., distributed across multiple systems/platforms at multiple geographic locations). Additionally, each of the compute service manager108, metadata database(s)112, execution platform110, and storage platforms104and122can be scaled up or down (independently of one another) depending on changes to the requests received and the changing needs of the network-based database system102. Thus, in the described embodiments, the network-based database system102is dynamic and supports regular changes to meet the current data processing needs. During typical operation, the network-based database system102processes multiple jobs determined by the compute service manager108. These jobs are scheduled and managed by the compute service manager108to determine when and how to execute the job. For example, the compute service manager108may divide the job into multiple discrete tasks and may determine what data is needed to execute each of the multiple discrete tasks. The compute service manager108may assign each of the multiple discrete tasks to one or more nodes of the execution platform110to process the task. The compute service manager108may determine what data is needed to process a task and further determine which nodes within the execution platform110are best suited to process the task. 
Some nodes may have already cached the data needed to process the task and, therefore, be a good candidate for processing the task. Metadata stored in a metadata database112assists the compute service manager108in determining which nodes in the execution platform110have already cached at least a portion of the data needed to process the task. One or more nodes in the execution platform110process the task using data cached by the nodes and, if necessary, data retrieved from the cloud storage platform104. It is desirable to retrieve as much data as possible from caches within the execution platform110because the retrieval speed is typically much faster than retrieving data from the cloud storage platform104. As shown inFIG.1, the cloud computing platform101of the computing environment100separates the execution platform110from the storage platform104. In this arrangement, the processing resources and cache resources in the execution platform110operate independently of the data storage devices120-1to120-N in the cloud storage platform104. Thus, the computing resources and cache resources are not restricted to specific data storage devices120-1to120-N. Instead, all computing resources and all cache resources may retrieve data from, and store data to, any of the data storage resources in the cloud storage platform104. FIG.2is a block diagram illustrating components of the compute service manager108, in accordance with some embodiments of the present disclosure. As shown inFIG.2, the compute service manager108includes an access manager202and a key manager204coupled to a data storage device206, which is an example of the metadata database(s)112. Access manager202handles authentication and authorization tasks for the systems described herein. The key manager204facilitates the use of remotely stored credentials (e.g., credentials stored in one of the remote credential stores) to access external resources such as data resources in a remote storage device. As used herein, the remote storage devices may also be referred to as “persistent storage devices” or “shared storage devices.” For example, the key manager204may create and maintain remote credential store definitions and credential objects (e.g., in the data storage device206). A remote credential store definition identifies a remote credential store (e.g., one or more of the remote credential stores) and includes access information to access security credentials from the remote credential store. A credential object identifies one or more security credentials using non-sensitive information (e.g., text strings) that are to be retrieved from a remote credential store for use in accessing an external resource. When a request invoking an external resource is received at run time, the key manager204and access manager202use information stored in the data storage device206(e.g., a credential object and a credential store definition) to retrieve security credentials used to access the external resource from a remote credential store. A request processing service208manages received data storage requests and data retrieval requests (e.g., jobs to be performed on database data). For example, the request processing service208may determine the data to process a received query (e.g., a data storage request or data retrieval request). The data may be stored in a cache within the execution platform110or in a data storage device in storage platform104. 
A management console service210supports access to various systems and processes by administrators and other system managers. Additionally, the management console service210may receive a request to execute a job and monitor the workload on the system. The compute service manager108also includes a job compiler212, a job optimizer214, and a job executor216. The job compiler212parses a job into multiple discrete tasks and generates the execution code for each of the multiple discrete tasks. The job optimizer214determines the best method to execute the multiple discrete tasks based on the data that needs to be processed. Job optimizer214also handles various data pruning operations and other data optimization techniques to improve the speed and efficiency of executing the job. The job executor216executes the execution code for jobs received from a queue or determined by the compute service manager108. A job scheduler and coordinator218sends received jobs to the appropriate services or systems for compilation, optimization, and dispatch to the execution platform110. For example, jobs may be prioritized and then processed in that prioritized order. In an embodiment, the job scheduler and coordinator218determines a priority for internal jobs that are scheduled by the compute service manager108with other “outside” jobs such as user queries that may be scheduled by other systems in the database but may utilize the same processing resources in the execution platform110. In some embodiments, the job scheduler and coordinator218identifies or assigns particular nodes in the execution platform110to process particular tasks. A virtual warehouse manager220manages the operation of multiple virtual warehouses implemented in the execution platform110. For example, the virtual warehouse manager220may generate query plans for executing received queries. Additionally, the compute service manager108includes configuration and metadata manager222, which manages the information related to the data stored in the remote data storage devices and the local buffers (e.g., the buffers in the execution platform110). Configuration and metadata manager222uses metadata to determine which data files need to be accessed to retrieve data for processing a particular task or job. A monitor and workload analyzer224oversees processes performed by the compute service manager108and manages the distribution of tasks (e.g., workload) across the virtual warehouses and execution nodes in the execution platform110. The monitor and workload analyzer224also redistributes tasks, as needed, based on changing workloads throughout the network-based database system102and may further redistribute tasks based on a user (e.g., “external”) query workload that may also be processed by the execution platform110. The configuration and metadata manager222and the monitor and workload analyzer224are coupled to a data storage device226. The data storage device226inFIG.2represents any data storage device within the network-based database system102. For example, data storage device226may represent buffers in execution platform110, storage devices in storage platform104, or any other storage device. As described in embodiments herein, the compute service manager108validates all communication from an execution platform (e.g., the execution platform110) to validate that the content and context of that communication are consistent with the task(s) known to be assigned to the execution platform. 
For example, an instance of the execution platform executing query A should not be allowed to request access to data source D (e.g., data storage device226) that is not relevant to query A. Similarly, a given execution node (e.g., execution node302-1may need to communicate with another execution node (e.g., execution node302-2), and should be disallowed from communicating with a third execution node (e.g., execution node312-1) and any such illicit communication can be recorded (e.g., in a log or other location). Also, the information stored on a given execution node is restricted to data relevant to the current query and any other data is unusable, rendered so by destruction or encryption where the key is unavailable. In some embodiments, the compute service manager108further includes the IRE manager130which can configure and provide the identity resolution and data enrichment functions to accounts of tenants of the network-based database system102(e.g., an account of the data provider associated with client device114and an account of the data consumer115). A more detailed description of the identity resolution and data enrichment functions provided by the IRE manager130is provided in connection withFIGS.4-7. FIG.3is a block diagram illustrating components of the execution platform110, in accordance with some embodiments of the present disclosure. As shown inFIG.3, the execution platform110includes multiple virtual warehouses, including virtual warehouse 1 (or301-1), virtual warehouse 2 (or301-2), and virtual warehouse N (or301-N). Each virtual warehouse includes multiple execution nodes that each include a data cache and a processor. The virtual warehouses can execute multiple tasks in parallel by using multiple execution nodes. As discussed herein, the execution platform110can add new virtual warehouses and drop existing virtual warehouses in real time based on the current processing needs of the systems and users. This flexibility allows the execution platform110to quickly deploy large amounts of computing resources when needed without being forced to continue paying for those computing resources when they are no longer needed. All virtual warehouses can access data from any data storage device (e.g., any storage device in the cloud storage platform104). Although each virtual warehouse shown inFIG.3includes three execution nodes, a particular virtual warehouse may include any number of execution nodes. Further, the number of execution nodes in a virtual warehouse is dynamic, such that new execution nodes are created when additional demand is present, and existing execution nodes are deleted when they are no longer necessary. Each virtual warehouse is capable of accessing any of the data storage devices120-1to120-N shown inFIG.1. Thus, the virtual warehouses are not necessarily assigned to a specific data storage device120-1to120-N and, instead, can access data from any of the data storage devices120-1to120-N within the cloud storage platform104. Similarly, each of the execution nodes shown inFIG.3can access data from any of the data storage devices120-1to120-N. In some embodiments, a particular virtual warehouse or a particular execution node may be temporarily assigned to a specific data storage device, but the virtual warehouse or execution node may later access data from any other data storage device. In the example ofFIG.3, virtual warehouse 1 includes three execution nodes302-1,302-2, and302-N. Execution node302-1includes a cache304-1and a processor306-1. 
Execution node302-2includes a cache304-2and a processor306-2. Execution node302-N includes a cache304-N and a processor306-N. Each execution node302-1,302-2, and302-N is associated with processing one or more data storage and/or data retrieval tasks. For example, a virtual warehouse may handle data storage and data retrieval tasks associated with an internal service, such as a clustering service, a materialized view refresh service, a file compaction service, a storage procedure service, or a file upgrade service. In other implementations, a particular virtual warehouse may handle data storage and data retrieval tasks associated with a particular data storage system or a particular category of data. Similar to virtual warehouse 1 discussed above, virtual warehouse 2 includes three execution nodes312-1,312-2, and312-N. Execution node312-1includes a cache314-1and a processor316-1. Execution node312-2includes a cache314-2and a processor316-2. Execution node312-N includes a cache314-N and a processor316-N. Additionally, virtual warehouse 3 includes three execution nodes322-1,322-2, and322-N. Execution node322-1includes a cache324-1and a processor326-1. Execution node322-2includes a cache324-2and a processor326-2. Execution node322-N includes a cache324-N and a processor326-N. In some embodiments, the execution nodes shown inFIG.3are stateless with respect to the data being cached by the execution nodes. For example, these execution nodes do not store or otherwise maintain state information about the execution node or the data being cached by a particular execution node. Thus, in the event of an execution node failure, the failed node can be transparently replaced by another node. Since there is no state information associated with the failed execution node, the new (replacement) execution node can easily replace the failed node without concern for recreating a particular state. Although each of the execution nodes shown inFIG.3includes one data cache and one processor, alternative embodiments may include execution nodes containing any number of processors and any number of caches. Additionally, the caches may vary in size among the different execution nodes. The caches shown inFIG.3store, in the local execution node, data that was retrieved from one or more data storage devices in the cloud storage platform104. Thus, the caches reduce or eliminate the bottleneck problems occurring in platforms that consistently retrieve data from remote storage systems. Instead of repeatedly accessing data from the remote storage devices, the systems and methods described herein access data from the caches in the execution nodes, which is significantly faster and avoids the bottleneck problem discussed above. In some embodiments, the caches are implemented using high-speed memory devices that provide fast access to the cached data. Each cache can store data from any of the storage devices in the cloud storage platform104. Further, the cache resources and computing resources may vary between different execution nodes. For example, one execution node may contain significant computing resources and minimal cache resources, making the execution node useful for tasks that require significant computing resources. Another execution node may contain significant cache resources and minimal computing resources, making this execution node useful for tasks that require caching of large amounts of data. 
Yet another execution node may contain cache resources providing faster input-output operations, useful for tasks that require fast scanning of large amounts of data. In some embodiments, the cache resources and computing resources associated with a particular execution node are determined when the execution node is created, based on the expected tasks to be performed by the execution node. Additionally, the cache resources and computing resources associated with a particular execution node may change over time based on changing tasks performed by the execution node. For example, an execution node may be assigned more processing resources if the tasks performed by the execution node become more processor-intensive. Similarly, an execution node may be assigned more cache resources if the tasks performed by the execution node require a larger cache capacity. Although virtual warehouses 1, 2, and N are associated with the same execution platform110, virtual warehouses 1, N may be implemented using multiple computing systems at multiple geographic locations. For example, virtual warehouse 1 can be implemented by a computing system at a first geographic location, while virtual warehouses 2 and n are implemented by another computing system at a second geographic location. In some embodiments, these different computing systems are cloud-based computing systems maintained by one or more different entities. Additionally, each virtual warehouse is shown inFIG.3as having multiple execution nodes. The multiple execution nodes associated with each virtual warehouse may be implemented using multiple computing systems at multiple geographic locations. For example, an instance of virtual warehouse 1 implements execution nodes302-1and302-2on one computing platform at a geographic location, and execution node302-N at a different computing platform at another geographic location. Selecting particular computing systems to implement an execution node may depend on various factors, such as the level of resources needed for a particular execution node (e.g., processing resource requirements and cache requirements), the resources available at particular computing systems, communication capabilities of networks within a geographic location or between geographic locations, and which computing systems are already implementing other execution nodes in the virtual warehouse. Execution platform110is also fault-tolerant. For example, if one virtual warehouse fails, that virtual warehouse is quickly replaced with a different virtual warehouse at a different geographic location. A particular execution platform110may include any number of virtual warehouses. Additionally, the number of virtual warehouses in a particular execution platform is dynamic, such that new virtual warehouses are created when additional processing and/or caching resources are needed. Similarly, existing virtual warehouses may be deleted when the resources associated with the virtual warehouse are no longer necessary. In some embodiments, the virtual warehouses may operate on the same data in the cloud storage platform104, but each virtual warehouse has its execution nodes with independent processing and caching resources. This configuration allows requests on different virtual warehouses to be processed independently and with no interference between the requests. 
This independent processing, combined with the ability to dynamically add and remove virtual warehouses, supports the addition of new processing capacity for new users without impacting the performance observed by the existing users. FIG.4is a diagram400of shared views, in accordance with some embodiments of the present disclosure. In some aspects, a shared view or a stream (e.g., a stream on a view or a stream on a table) can be used by the IRE manager130in connection with identity resolution and data enrichment functionalities performed in an account of a data consumer and an account of a data provider. The terms "stream" and "stream object" are used interchangeably. Referring toFIG.4, a data consumer402manages a source table404(e.g., a source table with PII). The data consumer402can apply different filters to source table404to generate views406and408. For example, data consumer402can apply different filters to source table404so that different PII from the table is shared with different data providers (e.g., data providers410and414) in connection with identity resolution or data enrichment, based on specific privacy requirements of each of the data providers. In this regard, view406is shared with data provider410, and view408is shared with data provider414. In some embodiments, IRE manager130configures streams412and416on corresponding views406and408for consumption by data providers410and414or use during identity resolution or data enrichment. (A simplified sketch of this view-and-stream sharing pattern is provided below.) The definition of a view can be complex, but observing the changes to such a view may be useful independently of its complexity. Manually constructing a query to compute those changes is possible, but doing so can be toilsome, error-prone, and subject to performance issues. In some aspects, a change query on a view may automatically rewrite the view query, relieving users of this burden. In some aspects, simple views containing only row-wise operators (e.g., select, project, union all) may be used. In some aspects, complex views that join fact tables with (potentially several) slowly-changing-dimension (DIM) tables may also be used. Other kinds of operators like aggregates, windowing functions, and recursion may also be used in connection with complex views. FIG.5is a block diagram illustrating identity resolution and data enrichment functions performed at account500of a data consumer (also referred to as consumer account500), in accordance with some embodiments of the present disclosure.FIG.6is a block diagram illustrating identity resolution and data enrichment functions performed at account600of a data provider (also referred to as provider account600), in accordance with some embodiments of the present disclosure. Deployment of the identity resolution framework ofFIGS.5-6consists of creating stored procedures, secure objects, and data shares in the consumer account500and the provider account600. The framework can be flexible enough to incorporate additional functionality, as required. The framework can be deployed across two accounts on the same cloud provider and region. In the event the data provider and the data consumer are on different providers or regions, one of the parties can replicate their data/objects to the other party's provider or region. Identity resolution and data enrichment functions performed at accounts500and600can be configured by the IRE manager130. In some aspects, the IRE manager130can configure an application of the provider (e.g., provider identity native application502) to execute at the consumer account500. 
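By way of illustration only, the view-and-stream sharing pattern ofFIG.4might be sketched as follows; the SQL text, object names, column lists, and filter predicates in this sketch are assumptions made for illustration and are not taken from the present disclosure.

def share_statements(source_table, shares):
    """Return illustrative SQL statements creating one filtered secure view and one
    change-tracking stream per data provider, so each provider sees only permitted PII."""
    statements = []
    for provider, share in shares.items():
        columns = ", ".join(share["columns"])
        statements.append(
            f"CREATE SECURE VIEW {share['view']} AS "
            f"SELECT {columns} FROM {source_table} WHERE {share['row_filter']};"
        )
        # A stream on the view lets the provider-facing pipeline observe changes
        # to the shared subset without re-reading the whole view.
        statements.append(f"CREATE STREAM {share['stream']} ON VIEW {share['view']};")
        statements.append(f"-- share {share['view']} and {share['stream']} with {provider}")
    return statements

# Example: one filtered view/stream pair per provider, mirroring views 406/408 and
# streams 412/416 of FIG. 4 (the columns and predicates are made up for the example).
print("\n".join(share_statements(
    "source_table_404",
    {
        "provider_410": {"view": "view_406", "stream": "stream_412",
                         "columns": ["email", "postal_code"], "row_filter": "consent_flag = TRUE"},
        "provider_414": {"view": "view_408", "stream": "stream_416",
                         "columns": ["hashed_email"], "row_filter": "region = 'EU'"},
    },
)))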
In some aspects, application502(or app502) is configured to enhance secure data sharing by allowing the provider to create local state objects (e.g., tables) and local compute objects (e.g., stored procedures, external functions, and tasks) in addition to sharing objects representing the application logic in the consumer account500. For example and as illustrated inFIG.5andFIG.6, the IRE manager130can configure the following stored procedures to execute at the provider account600: onboard_consumer stored procedure602, verify_logs stored procedure604, verify_jobs stored procedure606, consumer_log_share_check stored procedure608, and enable_consumer stored procedure610. The IRE manager130can also configure the provider account600with a sign_log function612, client metadata storage614, application logs storage616, failed logs storage618, application jobs storage620, and source data622. The IRE manager130can configure the following stored procedures to execute as part of app502within the consumer account500: enrich_records stored procedure504, log_share stored procedure506, installer stored procedure508, and app_log stored procedure510. The above-listed stored procedures configured at the consumer account500and the provider account600are explained in greater detail herein below. App502can be configured (e.g., by the IRE manager130) to allow data providers to control the accessibility of objects with their data consumers. App502can be installed in the consumer account500as a database, similar to secure data sharing. The provider can create an application that consists of stored procedures and/or external functions (e.g., as listed above) that resolve and/or enrich data in the consumer account500. For example, a consumer "installs" app502in the consumer account500as a database. Once installed, the consumer can call stored procedures in the application that provide the identity resolution and enrichment functionalities discussed herein to resolve/enrich their data, on-demand, and without having to share their data with the provider account600. In some aspects, the provider can make the app502available to consumer accounts (e.g., consumer account500) through either direct secure data sharing or through a private listing (e.g., a listing in a data exchange specifying particular consumer accounts allowed to download and use the app). Once added to the listing, the consumer executes an app script (e.g., a publisher's "helper" script) to create a role, warehouse, and stored procedures to streamline the app usage. Example processing operations for performing identity resolution and enrichment functions using the app502are referenced as operations 1-5 inFIGS.5-6. At operation 1, the provider will onboard the consumer for the usage of the provider app502by calling the onboard_consumer stored procedure602. The onboard_consumer stored procedure602adds consumer values (e.g., as metadata626), including record limit and interval, to a metadata table stored in the client metadata storage614. In addition, the procedure also creates a dedicated warehouse, schema, secure view to view jobs, and a task to call the consumer_log_share_check stored procedure608to check for a share (e.g., share514) from the consumer account500. In some embodiments, the onboard_consumer stored procedure602can be expanded to include additional metadata values. In some aspects, the onboard_consumer stored procedure602can be of JavaScript type. 
In some aspects, onboard_consumer stored procedure602can be configured to use the following parameters:
(a) account_locator (VARCHAR)—a locator for the consumer account500;
(b) consumer_name (VARCHAR)—the consumer's company name;
(c) request_record_limit (FLOAT)—the limit on the number of unique records the consumer can enrich within the given request_limit_interval; and
(d) request_limit_interval (VARCHAR)—the time interval in which requests are limited (i.e., "1 day").
During the onboarding process, the provider creates a share (e.g., share514in the provider account600) for the consumer. The consumer can create a database (e.g., application log database512) from the provider's share (created during initial setup). The consumer can add the provider account600to their outbound share (e.g., share514in the consumer account500). Once the consumer has added the provider account to their outbound share, the provider can automatically enable the consumer (e.g., enable access530to resolution/enrichment functions of the app502via the enable_consumer stored procedure610). The metadata table in the client metadata storage614can be used to check whether the enabled key has been switched to YES (or Y). In some aspects, the application502framework's enablement and usage limits can be enforced via the metadata table stored in the client metadata storage614. In some aspects, the table structure utilizes key/value pairs for each customer account, which allows the provider to create and manage consumer metadata keys, as desired. In some aspects, the following metadata keys can be stored for each consumer:
(a) consumer_name—the consumer's company name;
(b) enabled—a flag indicating whether the consumer has been enabled to use the identity resolution/enrichment service of the provider;
(c) requests_count—the number of resolution or enrichment requests made by the consumer;
(d) last_request_timestamp—timestamp of the last resolution or enrichment request made by the consumer;
(e) record_request_limit—the number of unique enriched/resolved records allowed to the consumer (over an interval specified below);
(f) request_limit_interval—the interval at which the consumer is allowed to make the specified volume of unique requests; and
(g) request_record_counter—the number of records the consumer has currently enriched/resolved during the allotted period (this counter can be reset to 0 at the allotted period, i.e., at 00:00:00 UTC daily).
In some aspects, the stored metadata can be extended (e.g., by the IRE manager130) to track additional details and enforce additional limits as needed. In the event additional metadata keys are needed, the following stored procedures are updated accordingly: the onboard_consumer stored procedure602(if metadata values should be added during onboarding), the enable_consumer stored procedure610(if metadata values should be added/modified during onboarding), the enrich_records stored procedure504(if metadata values relate to any request-related events), the verify_logs stored procedure604(if metadata values relate to any install/request related events), and the verify_jobs stored procedure606(if metadata values relate to any request-related events). At operation 2, the consumer can perform a native app installation516of app502in the consumer account500. 
For example, the installer stored procedure508is executed, which installs the app502in the consumer account500and also creates the local app log table and metadata view (e.g., application log database512) that displays the consumer's metadata, as stored in the provider account600. Similar to the shared procedures, this stored procedure is created to execute with owner rights. To make objects shared to the shared database role visible to the consumer, the shared database role is granted to the app's APP EXPORTER role. For any objects (i.e., source data) that should not be visible to the consumer, this step may be omitted. At operation 3, the consumer account500calls the provision_provider app helper stored procedure to provision the app502and create a log data share (e.g., share514) via the app's log_share stored procedure506to the provider to share the application logs (e.g., the logs stored in the application log database512). At operation 4, the provider account600uses the consumer_log_share_check stored procedure608to check for shared app logs via share514. Once a shared app log has been detected, the enable_consumer stored procedure610is activated to enable access by the consumer account500to resolution/enrichment functions of the app502. Metadata628(e.g., an enable flag) can be stored in the metadata table in the client metadata storage614after the access is enabled by the enable_consumer stored procedure610. In some aspects, the consumer account500can access the metadata table stored in the client metadata storage614to confirm the consumer account is enabled (e.g., via the "enabled" flag). The consumer_log_share_check stored procedure608checks for the log share from the consumer account500(e.g., via share514). If a database is not already created from the consumer's share, then this procedure calls the enable_consumer stored procedure610. The consumer_log_share_check stored procedure608uses the parameter account_locator (VARCHAR), which includes locator information for the consumer account500. The enable_consumer stored procedure610enables a consumer for identity resolution/enrichment, once the consumer has enabled sharing (e.g., sharing of application logs) to the provider account600. This procedure can add additional metadata values to the metadata table in the client metadata storage614. The enable_consumer stored procedure610uses the parameter account_locator (VARCHAR), which includes locator information for the consumer account500. At operation 5, once enabled, the consumer account500can resolve/enrich data by calling a generate_request helper stored procedure, which activates the enrich_records stored procedure504. The enrich_records stored procedure504is shared with the app502and allows the consumer to enrich records in the specified table with data from the provider account (e.g., source data622and external source data624) when matched on a specified join key. The enrich_records stored procedure504can be used to resolve or enrich data from input table522and generate results in output table528(also referred to as result table528). 
In some aspects, the enrich_records stored procedure504uses the following parameters:
(a) account_locator (VARCHAR)—locator information for the consumer account500;
(b) request_id (VARCHAR)—the request's unique identifier;
(c) input_table_name (VARCHAR)—the consumer's input table containing data to be enriched;
(d) match_key (VARCHAR)—the field to join the consumer's data to the provider's data;
(e) results_table_name (VARCHAR)—the results table to be created and shared to the consumer account or a result table stored in the consumer account (e.g., result table528); and
(f) template_name (VARCHAR)—the query template used to construct the approved query used to access the consumer's data.
To protect consumers, the application framework ofFIGS.5-6configured by the IRE manager130may not allow the app502to write data from the consumer account500back to the provider account600. As a result, local app logs cannot be written back to the provider account600. As a workaround, during the provisioning process, the consumer account creates a share514to the provider account600, along with a stream/tasks to write new log messages to a log table (e.g., in the application log database512) shared with the provider. During the consumer account enablement process, the provider creates a database from the consumer's log share, processes the consumer's logs, and inserts them into a master log table (e.g., stored in the application logs storage616). Each log/job table entry can be uniquely signed by provider-specific encryption (e.g., using the sign_log function612). This signature (e.g., hash) is verified by the provider, to ensure logs are not tampered with. If log tampering is detected, the consumer's access to app502is revoked. If at any time the consumer either drops the log share or removes the provider from the share, the provider automatically detects the lost share and disables the consumer. For example, app502can generate one or more logs, including log532(associated with the log_share stored procedure506), log534(associated with the installer stored procedure508), or any other logs generated by app502. The app_log stored procedure510can be configured with usage privileges520of the sign_log function612at the provider account600. The sign_log function612is used for generating a hash (or a secure key) of a log generated by app502. The app_log stored procedure510then updates (or revises) the log to include the generated hash, to obtain a signed log536. The signed log536is stored in the application log database512and is shared with the provider account600via share514. At the provider account600, the consumer_log_share_check stored procedure608detects the signed log536via the share514and calls the verify_logs stored procedure604. The verify_logs stored procedure604retrieves the hash from the signed log536, generates a new hash using the log data in the signed log, and compares the new hash to the original hash generated before sharing the log data. If the two hashes match, a determination is made that the log data has not been tampered with, and the log data is stored in the application logs storage616. If the two hashes do not match, a determination is made that the log data has been tampered with at the consumer account500. The tampered log data is stored in failed logs storage618. (A simplified sketch of this signing and verification flow is provided below.) In some aspects, a request for identity resolution or data enrichment may contain several levels of resolution or enrichment. In this regard, a request for identity resolution or data enrichment is also referred to as a "job". 
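The signing and verification flow described above can be sketched as follows. This is a simplified illustration only; the keyed-hash construction, key handling, and field names are assumptions and do not reproduce the actual sign_log function612or verify_logs stored procedure604.

import hmac
import hashlib
import json

def sign_log(provider_key, log_entry):
    """Return the log entry extended with a signature over its canonical JSON form
    (a keyed HMAC stands in for the provider-specific encryption of the disclosure)."""
    payload = json.dumps(log_entry, sort_keys=True).encode()
    signature = hmac.new(provider_key, payload, hashlib.sha256).hexdigest()
    return {**log_entry, "signature": signature}

def verify_log(provider_key, signed_entry):
    """Recompute the hash over the log data and compare it with the shared signature."""
    entry = {k: v for k, v in signed_entry.items() if k != "signature"}
    payload = json.dumps(entry, sort_keys=True).encode()
    expected = hmac.new(provider_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed_entry.get("signature", ""))

key = b"provider-secret"                       # assumed provider-held key material
signed = sign_log(key, {"event": "enrich_records", "request_id": "r-1"})
assert verify_log(key, signed)                 # an untampered log passes verification
signed["event"] = "tampered"
assert not verify_log(key, signed)             # tampering is detected; access can then be revoked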
Verified log data for a job can be stored in the application jobs storage620. In some aspects, a job can be associated with different processing stages (e.g., multiple enrichment stages for an enrichment request) which are bundled to form the job. In some aspects, app502is granted a temporary role that has read access privileges524(e.g., to retrieve input data from input table522), read access privileges518(e.g., to access source data622and external source data624for identity resolution and data enrichment functions), and write access privileges526to store resolved/enriched data in the result table528. During an example enrichment operation, a consumer can create a task that calls the enrich_records stored procedure504when there is a change in the input table522(e.g., new or revised PII associated with a user). The enrich_records stored procedure504orchestrates the processing of each PII record for identity resolution or data enrichment. More specifically, the enrich_records stored procedure504can perform identity resolution based on the updated PII and using source data622or external source data624. During identity resolution, the enrich_records stored procedure504can match the updated PII from the input table522with existing identity-related data using source data622or external source data624to determine a user identity (or identity associated with a household of the user). The source data622or external source data624can include identity-related data for users and user households, as well as opt-out information associated with such users or user households. During identity resolution, for each user/consumer PII record obtained from the input table522, one or more secure identifiers (e.g., keys) can be generated by the enrich_records stored procedure504. In some aspects, the enrich_records stored procedure504further encrypts the generated one or more secure identifiers using a user-specific encryption passphrase. The one or more secure identifiers associated with the user are stored in the result table528at the data consumer account500. In some embodiments, after identity resolution is performed, the enrich_records stored procedure504can further perform data enrichment to generate additional data (also referred to as enrichment data) for the user (or the user's household) associated with the one or more secure identifiers generated during the identity resolution. More specifically, the enrich_records stored procedure504can use the source data622(which can include enrichment data for users and user households) to obtain enrichment data for the user (or the user's household) associated with the one or more secure identifiers generated during the identity resolution. In some aspects, the enrich_records stored procedure504further uses additional enrichment data (or matching logic) to perform data matching and obtain additional enrichment data (e.g., using one or more databases of the data provider or one or more external databases that the data provider has access to). The determined enrichment data can be stored in the result table528(or an additional result table). In some aspects, the consumer account500can be configured with a merge/append function, which can be used to merge identity resolution data (as well as enrichment data if available) stored in the result table528with the PII stored in input table522. 
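As a rough illustration of the enrichment path described above, the following sketch matches input PII to provider source data on a join key, derives a secure identifier, and enforces the record limit held in the consumer metadata. The join logic, hashing scheme, field names, and limit policy are assumptions made for illustration and do not reproduce the actual enrich_records stored procedure504.

import hashlib

def enrich_records(input_rows, source_index, match_key, metadata):
    """Return result rows for matched input records, or None if the record limit is exceeded."""
    limit = metadata["record_request_limit"]
    counter = metadata["request_record_counter"]
    if counter + len(input_rows) > limit:
        return None                                   # request refused: over the allotted volume
    results = []
    for row in input_rows:
        match = source_index.get(row.get(match_key))  # identity resolution by join key
        if match is None:
            continue
        # Derive a secure identifier for the resolved identity (assumed scheme).
        secure_id = hashlib.sha256(match["identity"].encode()).hexdigest()
        results.append({"secure_id": secure_id, **match.get("enrichment", {})})
    metadata["request_record_counter"] = counter + len(input_rows)
    return results

# Example wiring with made-up data: one input record resolved against the provider index.
source_index = {"a@example.com": {"identity": "household-17", "enrichment": {"segment": "outdoor"}}}
metadata = {"record_request_limit": 100, "request_record_counter": 0}
print(enrich_records([{"email": "a@example.com"}], source_index, "email", metadata))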
FIG.7is a flow diagram illustrating the operations of a database system in performing method700for identity resolution, in accordance with some embodiments of the present disclosure. Method700may be embodied in computer-readable instructions for execution by one or more hardware components (e.g., one or more processors) such that the operations of method700may be performed by components of network-based database system102, such as components of the compute service manager108(e.g., the IRE manager130) and/or the execution platform110(e.g., which components may be implemented as machine800ofFIG.8). Accordingly, method700is described below, by way of example with reference thereto. However, it shall be appreciated that method700may be deployed on various other hardware configurations and is not intended to be limited to deployment within the network-based database system102. At operation702, a shared data object is detected at an account of a data provider. For example, the consumer_log_share_check stored procedure608is used to detect a shared signed log536stored in the application log database of the consumer account500. The signed log536is shared by the consumer account500with the provider account600via share514. At operation704, an application executing at the account of the data consumer is enabled for an identity resolution process based on the detecting of the shared data object. For example, after the consumer_log_share_check stored procedure608detects the signed log536shared by the consumer account500via share514, the enable_consumer stored procedure610is executed to enable the consumer account500to use app502for identity resolution and data enrichment functions. At operation706, a request for source data is detected at the provider account600. The request is received from app502. For example, the enrich_records stored procedure504uses the read access privileges518to request (or obtain) source data (e.g., source data622) managed by the provider account600. At operation708, the source data is communicated to the application executing at the account of the data consumer. For example, after the verify_logs stored procedure604successfully verifies the shared signed log536, the obtained source data is communicated back to the enrich_records stored procedure504(or the enrich_records stored procedure504is provided access to the requested source data). At operation710, the identity resolution process is performed at the account of the data consumer using the source data. For example, the enrich_records stored procedure504performs identity resolution or data enrichment using the obtained source data. The resolved or enriched data is stored in the result table528. A notification of the resolved or enriched data is also communicated by the account of the data consumer (e.g., to a user device of a user of the consumer account500). FIG.8illustrates a diagrammatic representation of a machine800in the form of a computer system within which a set of instructions may be executed for causing the machine800to perform any one or more of the methodologies discussed herein, according to an example embodiment. Specifically,FIG.8shows a diagrammatic representation of machine800in the example form of a computer system, within which instructions816(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine800to perform any one or more of the methodologies discussed herein may be executed. 
For example, instructions816may cause machine800to execute any one or more operations of method700(or any other technique discussed herein, for example in connection withFIG.4-FIG.7). As another example, instructions816may cause machine800to implement one or more portions of the functionalities discussed herein. In this way, instructions816may transform a general, non-programmed machine into a particular machine800(e.g., the compute service manager108or a node in the execution platform110) that is specially configured to carry out any one of the described and illustrated functions in the manner described herein. In yet another embodiment, instructions816may configure the compute service manager108and/or a node in the execution platform110to carry out any one of the described and illustrated functions in the manner described herein. In alternative embodiments, the machine800operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, machine800may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine800may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a smartphone, a mobile device, a network router, a network switch, a network bridge, or any machine capable of executing the instructions816, sequentially or otherwise, that specify actions to be taken by the machine800. Further, while only a single machine800is illustrated, the term “machine” shall also be taken to include a collection of machines800that individually or jointly execute the instructions816to perform any one or more of the methodologies discussed herein. Machine800includes processors810, memory830, and input/output (I/O) components850configured to communicate with each other such as via a bus802. In some example embodiments, the processors810(e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor812and a processor814that may execute the instructions816. The term “processor” is intended to include multi-core processors810that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions816contemporaneously. AlthoughFIG.8shows multiple processors810, machine800may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory830may include a main memory832, a static memory834, and a storage unit836, all accessible to the processors810such as via the bus802. The main memory832, the static memory834, and the storage unit836store the instructions816embodying any one or more of the methodologies or functions described herein. 
The instructions816may also reside, completely or partially, within the main memory832, within the static memory834, within machine storage medium838of the storage unit836, within at least one of the processors810(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine800. The I/O components850include components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components850that are included in a particular machine800will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components850may include many other components that are not shown inFIG.8. The I/O components850are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various example embodiments, the I/O components850may include output components852and input components854. The output components852may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), other signal generators, and so forth. The input components854may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. Communication may be implemented using a wide variety of technologies. The I/O components850may include communication components864operable to couple the machine800to a network880or device870via a coupling882and a coupling872, respectively. For example, the communication components864may include a network interface component or another suitable device to interface with the network880. In further examples, communication components864may include wired communication components, wireless communication components, cellular communication components, and other communication components to provide communication via other modalities. The device870may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a universal serial bus (USB)). For example, as noted above, machine800may correspond to any one of the compute service manager108or the execution platform110, and device870may include the client device114or any other computing device described herein as being in communication with the network-based database system102, the storage platform104, or the cloud storage platforms122. The various memories (e.g.,830,832,834, and/or memory of the processor(s)810and/or the storage unit836) may store one or more sets of instructions816and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. 
These instructions816, when executed by the processor(s)810, cause various operations to implement the disclosed embodiments. As used herein, the terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms refer to single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions and/or data. The terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media, and/or device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), field-programmable gate arrays (FPGAs), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium” discussed below. In various example embodiments, one or more portions of the network880may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local-area network (LAN), a wireless LAN (WLAN), a wide-area network (WAN), a wireless WAN (WWAN), a metropolitan-area network (MAN), the Internet, a portion of the Internet, a portion of the public switched telephone network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, network880or a portion of network880may include a wireless or cellular network, and coupling882may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling882may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth-generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High-Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. The instructions816may be transmitted or received over the network880using a transmission medium via a network interface device (e.g., a network interface component included in the communication components864) and utilizing any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, instructions816may be transmitted or received using a transmission medium via coupling872(e.g., a peer-to-peer coupling) to device870. 
The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure. The terms “transmission medium” and “signal medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions816for execution by the machine800, and include digital or analog communications signals or other intangible media to facilitate communication of such software. Hence, the terms “transmission medium” and “signal medium” shall be taken to include any form of a modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms “machine-readable medium,” “computer-readable medium,” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. The terms are defined to include both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Similarly, the methods described herein may be at least partially processor-implemented. For example, at least some of the operations of the disclosed methods may be performed by one or more processors. The performance of some of the operations may be distributed among the one or more processors, not only residing within a single machine but also deployed across several machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment, or a server farm), while in other embodiments the processors may be distributed across several locations. Described implementations of the subject matter can include one or more features, alone or in combination as illustrated below by way of examples. Example 1 is a system comprising: at least one hardware processor; and at least one memory storing instructions that cause the at least one hardware processor to perform operations comprising: detecting, at an account of a data provider, a shared data object that is shared by an account of a data consumer with the account of the data provider; enabling an application executing at the account of the data consumer for an identity resolution process based on the detecting of the shared data object; detecting at the account of the data provider, a request for source data received from the application, the source data being managed by the account of the data provider; communicating the source data to the application executing at the account of the data consumer based on a verification that the application is enabled for the identity resolution process; and performing the identity resolution process at the account of the data consumer using the source data. 
In Example 2, the subject matter of Example 1 includes subject matter where the performing of the identity resolution process comprises: granting a record enrichment stored procedure of the application, write access privileges to a result data table stored at the account of the data consumer; granting the record enrichment stored procedure, first read access privileges to an input data table stored at the account of the data consumer; and granting the record enrichment stored procedure, second read access privileges to the source data managed by the account of the data provider. In Example 3, the subject matter of Example 2 includes, the operations further comprising: retrieving by the record enrichment stored procedure, personally identifiable information (PII) from the input data table, using the first read access privileges. In Example 4, the subject matter of Example 3 includes, the operations further comprising: generating, by the record enrichment stored procedure, a secure identifier of a user associated with the PII based on the source data; and updating, by the record enrichment stored procedure, the result data table with the secure identifier using the write access privileges. In Example 5, the subject matter of Examples 1-4 includes, the operations further comprising: generating an application log of the application, the application log being based on one or more functions performed by the application during the identity resolution process. In Example 6, the subject matter of Example 5 includes, the operations further comprising: generating at the account of the data provider, a first hash of the application log using a hash function; and revising the application log with the first hash to generate a revised application log. In Example 7, the subject matter of Example 6 includes, the operations further comprising: sharing the revised application log with the account of the data provider using the shared data object. In Example 8, the subject matter of Example 7 includes, the operations further comprising: retrieving the application log at the account of the data provider using the revised application log; and generating at the account of the data provider, a second hash using the hash function and the application log. In Example 9, the subject matter of Example 8 includes, the operations further comprising: disabling the application executing at the account of the data consumer for the identity resolution process when the first hash is different from the second hash. In Example 10, the subject matter of Examples 8-9 includes, the operations further comprising: incrementing a counter stored in a metadata database of the account of the data provider when the first hash is different from the second hash, the counter indicating a number of records stored at the account of the data consumer on which the identity resolution process is performed; and disabling the application executing at the account of the data consumer for the identity resolution process when the number of records exceeds a threshold number of records stored in the metadata database. 
Example 11 is a method comprising: performing, by at least one hardware processor, operations comprising: detecting, at an account of a data provider, a shared data object that is shared by an account of a data consumer with the account of the data provider; enabling an application executing at the account of the data consumer for an identity resolution process based on the detecting of the shared data object; detecting, at the account of the data provider, a request for source data received from the application, the source data being managed by the account of the data provider; communicating the source data to the application executing at the account of the data consumer based on a verification that the application is enabled for the identity resolution process; and performing the identity resolution process at the account of the data consumer using the source data. In Example 12, the subject matter of Example 11 includes subject matter where the performing of the identity resolution process comprises: granting a record enrichment stored procedure of the application, write access privileges to a result data table stored at the account of the data consumer; granting the record enrichment stored procedure, first read access privileges to an input data table stored at the account of the data consumer; and granting the record enrichment stored procedure, second read access privileges to the source data managed by the account of the data provider. In Example 13, the subject matter of Example 12 includes, retrieving by the record enrichment stored procedure, personally identifiable information (PII) from the input data table, using the first read access privileges. In Example 14, the subject matter of Example 13 includes, generating, by the record enrichment stored procedure, a secure identifier of a user associated with the PII based on the source data; and updating, by the record enrichment stored procedure, the result data table with the secure identifier using the write access privileges. In Example 15, the subject matter of Examples 11-14 includes, generating an application log of the application, the application log being based on one or more functions performed by the application during the identity resolution process. In Example 16, the subject matter of Example 15 includes, generating at the account of the data provider, a first hash of the application log using a hash function; and revising the application log with the first hash to generate a revised application log. In Example 17, the subject matter of Example 16 includes, the operations further comprising: sharing the revised application log with the account of the data provider using the shared data object. In Example 18, the subject matter of Example 17 includes, retrieving the application log at the account of the data provider using the revised application log; and generating at the account of the data provider, a second hash using the hash function and the application log. In Example 19, the subject matter of Example 18 includes, disabling the application executing at the account of the data consumer for the identity resolution process when the first hash is different from the second hash. 
In Example 20, the subject matter of Examples 18-19 includes, incrementing a counter stored in a metadata database of the account of the data provider when the first hash is different from the second hash, the counter indicating a number of records stored at the account of the data consumer on which the identity resolution process is performed; and disabling the application executing at the account of the data consumer for the identity resolution process when the number of records exceeds a threshold number of records stored in the metadata database. Example 21 is a computer-storage medium comprising instructions that, when executed by one or more processors of a machine, configure the machine to perform operations comprising: detecting, at an account of a data provider, a shared data object that is shared by an account of a data consumer with the account of the data provider; enabling an application executing at the account of the data consumer for an identity resolution process based on the detecting of the shared data object; detecting, at the account of the data provider, a request for source data received from the application, the source data being managed by the account of the data provider; communicating the source data to the application executing at the account of the data consumer based on a verification that the application is enabled for the identity resolution process; and performing the identity resolution process at the account of the data consumer using the source data. In Example 22, the subject matter of Example 21 includes subject matter where the performing of the identity resolution process comprises: granting a record enrichment stored procedure of the application, write access privileges to a result data table stored at the account of the data consumer; granting the record enrichment stored procedure, first read access privileges to an input data table stored at the account of the data consumer; and granting the record enrichment stored procedure, second read access privileges to the source data managed by the account of the data provider. In Example 23, the subject matter of Example 22 includes, the operations further comprising: retrieving by the record enrichment stored procedure, personally identifiable information (PII) from the input data table, using the first read access privileges. In Example 24, the subject matter of Example 23 includes, the operations further comprising: generating, by the record enrichment stored procedure, a secure identifier of a user associated with the PII based on the source data; and updating, by the record enrichment stored procedure, the result data table with the secure identifier using the write access privileges. In Example 25, the subject matter of Examples 21-24 includes, the operations further comprising: generating an application log of the application, the application log being based on one or more functions performed by the application during the identity resolution process. In Example 26, the subject matter of Example 25 includes, the operations further comprising: generating at the account of the data provider, a first hash of the application log using a hash function; and revising the application log with the first hash to generate a revised application log. In Example 27, the subject matter of Example 26 includes, the operations further comprising: sharing the revised application log with the account of the data provider using the shared data object. 
In Example 28, the subject matter of Example 27 includes, the operations further comprising: retrieving the application log at the account of the data provider using the revised application log; and generating at the account of the data provider, a second hash using the hash function and the application log. In Example 29, the subject matter of Example 28 includes, the operations further comprising: disabling the application executing at the account of the data consumer for the identity resolution process when the first hash is different from the second hash. In Example 30, the subject matter of Examples 28-29 includes, the operations further comprising: incrementing a counter stored in a metadata database of the account of the data provider when the first hash is different from the second hash, the counter indicating a number of records stored at the account of the data consumer on which the identity resolution process is performed; and disabling the application executing at the account of the data consumer for the identity resolution process when the number of records exceeds a threshold number of records stored in the metadata database. Example 31 is at least one machine-readable medium including instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement any of Examples 1-30. Example 32 is an apparatus comprising means to implement any of Examples 1-30. Example 33 is a system to implement any of Examples 1-30. Example 34 is a method to implement any of Examples 1-30. Although the embodiments of the present disclosure have been described concerning specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader scope of the inventive subject matter. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof show, by way of illustration, and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. Such embodiments of the inventive subject matter may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is disclosed. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art, upon reviewing the above description. 
In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. | 92,627 |
11861034 | DETAILED DESCRIPTION The following modes, given by way of example only, are described in order to provide a more precise understanding of the subject matter of a preferred embodiment or embodiments. In the figures, incorporated to illustrate features of an example embodiment, like reference numerals are used to identify like parts throughout the figures. Particular embodiments of the present invention relate to minimizing a risk of malicious parties being able to obtain private or sensitive information which is input to or displayed by a processing system. In one embodiment, there is provided a wearable device for authenticating a user. The wearable device includes one or more sensors. The sensors detect movement of fingers of the user wearing the wearable device. The data corresponding to the movement of fingers is further processed using one or more classifiers to determine authentication data. In one example, the classifiers are trained in a training mode to interpret the sensor data and the trained classifiers are then used for interpreting. Examples of the classifiers include a number of finger taps represented by each finger tap segment. The authentication data is then used to access a service. For example, the authentication data can be transmitted to another entity for authentication and subsequent access to the corresponding service. In another embodiment, there is provided a point-of-sale (POS) system where a POS device and a physically separate wearable user input device interact in a way whereby authentication data from a user interacting with the user input device is wirelessly communicated with the POS device in order to process a transaction by the user. An example of the authentication data is a PIN number input via a PIN pad interface displayed at the wearable user input device. The PIN pad interface can be enabled by installation of an "app" on the user input device. In a further embodiment, there is a PIN entry device/interface including a plurality of buttons, each button having an electronic display for displaying digits. The digits in each button are presented in a random manner in accordance with an arrangement defined by a random digit layout generator, whereby input data by a user selecting one or more of the buttons is received as a PIN for the user. Corresponding methods relating to the aforementioned devices and systems are also disclosed. Further details of the various embodiments will be described in the following paragraphs. Particular embodiments of the present invention can be realised using a processing system, an example of which is shown inFIG.1. In particular, the processing system100generally includes at least one processor102, or processing unit or plurality of processors, memory104, at least one input device106and at least one output device108, coupled together via a bus or group of buses110. In certain embodiments, input device106and output device108could be the same device. An interface112also can be provided for coupling the processing system100to one or more peripheral devices, for example interface112could be a PCI card or PC card. At least one storage device114which houses at least one database116can also be provided. The memory104can be any form of memory device, for example, volatile or non-volatile memory, solid state storage devices, magnetic devices, etc. The processor102could include more than one distinct processing device, for example to handle different functions within the processing system100. 
Input device106receives input data118and can include, for example, a keyboard, a pointer device such as a pen-like device or a mouse, audio receiving device for voice controlled activation such as a microphone, data receiver or antenna such as a modem or wireless data adaptor, data acquisition card, etc. Input data118could come from different sources, for example keyboard instructions in conjunction with data received via a network. Output device108produces or generates output data120and can include, for example, a display device or monitor in which case output data120is visual, a printer in which case output data120is printed, a port for example a USB port, a peripheral component adaptor, a data transmitter or antenna such as a modem or wireless network adaptor, etc. Output data120could be distinct and derived from different output devices, for example a visual display on a monitor in conjunction with data transmitted to a network. A user could view data output, or an interpretation of the data output, on, for example, a monitor or using a printer. The storage device114can be any form of data or information storage means, for example, volatile or non-volatile memory, solid state storage devices, magnetic devices, etc. In use, the processing system100is adapted to allow data or information to be stored in and/or retrieved from, via wired or wireless communication means, the at least one database116and/or the memory104. The interface112may allow wired and/or wireless communication between the processing unit102and peripheral components that may serve a specialised purpose. The processor102receives instructions as input data118via input device106and can display processed results or other output to a user by utilising output device108. More than one input device106and/or output device108can be provided. It should be appreciated that the processing system100may be any form of terminal, server, specialised hardware, or the like. The processing device100may be a part of a networked communications system200, as shown inFIG.2. Processing device100could connect to network202, for example the Internet or a WAN. Input data118and output data120could be communicated to other devices via network202. Other terminals, for example, thin client204, further processing systems206and208, notebook computer210, mainframe computer212, PDA214, pen-based computer216, server218, etc., can be connected to network202. A large variety of other types of terminals or configurations could be utilised. The transfer of information and/or data over network202can be achieved using wired communications means220or wireless communications means222. Server218can facilitate the transfer of data between network202and one or more databases224. Server218and one or more databases224provide an example of an information source. Other networks may communicate with network202. For example, telecommunications network230could facilitate the transfer of data between network202and mobile or cellular telephone232or a PDA-type device234, by utilising wireless communication means236and receiving/transmitting station238. Satellite communications network240could communicate with satellite signal receiver242which receives data signals from satellite244which in turn is in remote communication with satellite signal transmitter246. Terminals, for example further processing system248, notebook computer250or satellite telephone252, can thereby communicate with network202. 
A local network260, which for example may be a private network, LAN, etc., may also be connected to network202. For example, network202could be connected with Ethernet262which connects terminals264, server266which controls the transfer of data to and/or from database268, and printer270. Various other types of networks could be utilised. The processing device100is adapted to communicate with other terminals, for example further processing systems206,208, by sending and receiving data,118,120, to and from the network202, thereby facilitating possible communication with other components of the networked communications system200. Thus, for example, the networks202,230,240may form part of, or be connected to, the Internet, in which case, the terminals206,212,218, for example, may be web servers, Internet terminals or the like. The networks202,230,240,260may be or form part of other communication networks, such as LAN, WAN, Ethernet, token ring, FDDI ring, star, etc., networks, or mobile telephone networks, such as GSM, CDMA or 3G, etc., networks, and may be wholly or partially wired, including for example optical fibre, or wireless networks, depending on a particular implementation Referring toFIGS.3A and3Bthere are shown schematic diagrams of examples of wearable devices310for authenticating a user. In particular, the wearable device310includes one or more sensors. The one or more sensors are configured to generate sensor signals representing signal data related to movement of fingers of the user wearing the wearable device to provide authentication data. Referring toFIG.3Athere is shown an example wearable device in the form of a glove. Referring toFIG.3Bthere is shown another example of a wearable device in the form of a wrist worn electronic device such as a smart watch. It will be appreciated that whilstFIG.3Billustrates the wearable device in the form of smart watch, other wrist worn electronic devices can also be used. In the case of the wearable device ofFIG.3B, the wrist band can include one or more sensors to sense the movement of tendons in the users wrist which are associated with movement of one or more fingers. The sensor data is received by one or more processors and processed for determining authentication data for accessing a service. The one or more processors may be part of the wearable device or may be a separate computer implemented device such as a processing system100, wherein the sensor data is transferred to the computer implemented device via a communication interface. Referring toFIG.3Cthere is shown a block diagram representing a system300for authenticating a user using an electronic wearable device. In particular, the electronic wearable device310includes one or more processors320, one or more sensors330, a communication interface340and a memory350coupled together via a data bus360. The memory has stored therein one or more classifiers355. The wearable device can be in wireless communication with another processing system facilitating access to the service upon successful authentication. Referring toFIG.4there is shown a flowchart representing a method400performed by the one or more processors320of the wearable device310ofFIG.3. In particular, at step410, the method400includes receiving the sensor data. At step420, the method400includes interpreting the sensor data using one or more classifiers355to determine the authentication data. At step430, the method400includes using the authentication data to access the service. 
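By way of non-limiting illustration only, the following Python sketch outlines the three steps of method400 (receiving the sensor data, interpreting it with trained classifiers, and using the resulting authentication data to access a service). The class and function names are assumptions introduced for the example and do not form part of the disclosure.

```python
# Minimal sketch of the three-step flow of method 400: receive sensor data,
# interpret it with trained classifiers, use the derived authentication data.
# SensorSample, classifier and service are illustrative assumptions.

from dataclasses import dataclass
from typing import Sequence


@dataclass
class SensorSample:
    timestamp: float      # seconds since the start of capture
    acceleration: tuple   # (x, y, z) accelerometer reading


def authenticate(samples: Sequence[SensorSample], classifier, service) -> bool:
    """Interpret wearable sensor data and use the resulting PIN to access a service."""
    # Step 420: the trained classifier maps raw sensor samples to authentication data.
    pin_digits = classifier.interpret(samples)
    # Step 430: the authentication data is transmitted to the entity providing the service.
    return service.request_access(pin="".join(str(d) for d in pin_digits))
```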
The one or more sensors330can include one or more accelerometers to determine movement of the fingers. Generally, the one or more accelerometers include one or more gyroscopes to determine the movement of the fingers. In other embodiments, the one or more sensors can additionally or alternatively include one or more location sensors (e.g., GPS), proximity sensors, biometric sensors, force sensors and/or the like. The wearable device310can be operated in a training mode and an operable mode. In the training mode, the one or more classifiers are trained to interpret the sensor data to determine the authentication data. In the operable mode, the wearable device310is configured to interpret the sensor data using the one or more classifiers, trained in the training mode, to determine the authentication data. Generally the wearable device310includes an input device to be able to switch the wearable device310between modes. The wearable device310can have particular benefits in relation authentication data such as a PIN. In one form, the one or more classifiers are trained to interpret the sensor data indicative of movement of the fingers according to a surface representing a PIN pad to determine a plurality of digits of the PIN. It will be appreciated that the surface does not necessarily bear indicia representing a PIN pad as it can simply be visualised by the user on the surface such that the user moves their fingers on the surface to indicate a selection of particular digits of the PIN by contacting or pressing the surface with one or more of their fingers. In one form, the one or more classifiers355can be trained specifically to interpret a series of finger taps represented by the sensor data as authentication data. In particular, the one or more processors320are configured to determine, using the one or more classifiers355, digit tap segments of the series of taps. For example, the one or more processors320may attempt to detect longer temporal pauses between taps to indicate a pause between different digits of the PIN. The one or more processors320are then configured to interpret, using the one or more classifiers355, each digit tap segment to determine a digit of the PIN. The one or more processors320then combine each digit of the PIN to obtain the PIN. In one particular form, the one or more processors320are configured to determine, using the one or more classifiers355, a number of finger taps represented by each finger tap segment, wherein the number of finger taps represents one of the respective digits of the PIN. For example, the user may tap their finger three times, then pause, tap their finger another seven times, then pause, then tap their finger twice, then pause, then tap their finger a further four times. Based on this example, the one or more processors320, using the classifiers355, can determine that the PIN is 3724. In another form, tapping may be replaced by flexing the fingers which can also be detected using the one or more fingers. As such, the sensor data can be segmented into digit flex segments, and then each digit represented by each digit flex segment is determined and then concatenated together to form the PIN. As shown inFIG.3, the wearable device310can include a wireless communication module340. In this regard, the wearable device310can be configured to transfer data indicative of the authentication data wirelessly, using the wireless communication module340, to another computer implemented device370in order to obtain access to the service. 
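By way of non-limiting illustration only, the following Python sketch shows one way the digit tap segmentation described above could be realised, assuming a tap detector has already reduced the sensor data to a list of tap timestamps; the pause threshold and function name are assumptions introduced for the example.

```python
# Minimal sketch: pauses longer than `digit_pause` seconds are taken to separate
# digit tap segments, and the number of taps in each segment is taken as the
# corresponding PIN digit, as in the 3-7-2-4 example above.

from typing import List


def taps_to_pin(tap_times: List[float], digit_pause: float = 1.0) -> str:
    """Convert a series of finger-tap timestamps into a PIN string."""
    if not tap_times:
        return ""
    segments = [[tap_times[0]]]
    for prev, curr in zip(tap_times, tap_times[1:]):
        if curr - prev > digit_pause:
            segments.append([])      # long pause: start a new digit tap segment
        segments[-1].append(curr)
    # Each digit is the number of taps in its segment.
    return "".join(str(len(segment)) for segment in segments)


# Example: three taps, pause, seven taps, pause, two taps, pause, four taps -> "3724"
```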
In one form, the other computer implemented device370may be a general processing system100, a POS device, an Automatic Teller Machine, or the like. Preferably, the one or more processors320encrypt the PIN upon determination and prior to transfer wirelessly to the other computer implemented device370. Referring toFIG.5there is shown a point-of-sale (POS) system500including a POS device520and a user input device510which is physically separate to the POS device520. The user input device510is in wireless communication530with the POS device520. Referring toFIG.6there is shown a method of using the POS system500disclosed inFIG.5. In particular, at step610the method600includes receiving, via the user input device510, authentication data from a user interacting with the user input device510. At step620, the method600includes the user input device510establishing a wireless connection with the POS device520. At step630, the method600includes the user input device510wirelessly transferring the authentication data to the point-of-sale device520for authentication in order to process a transaction by the user. In one form, the authentication data is a PIN. In one particular form, the user input device510may be a user's mobile communication device which has installed thereon an executable application. For example, the user's mobile communication device510may be a smart phone or tablet processing system which has installed thereon an “app”. When authentication of the user is required in order to authenticate a financial transaction being processed by the POS device520, the POS device520may communicate with the user's mobile communication device510to present a PIN pad interface515within the application512. The user can then interact with the PIN pad interface515presented via the display of the mobile communication device510, wherein data indicative of the authentication data is transferred to the POS device520. Communication between the user input device510and the POS device520can be wireless. In one form, the wireless communication530may be conducted using Bluetooth protocol. It is preferable the data indicative of the authentication data is encrypted using an encryption algorithm such as triple DES or the like. In an alternate form, the user input device510may be the wearable device310discussed in relation toFIGS.3and4. Referring toFIG.7there is shown a PIN entry device700including a plurality of buttons710. Each button710has an electronic display720. The PIN entry device700also includes or is coupled to one or more processors770electrically coupled to the plurality of buttons710. Furthermore, the one or more processors770are coupled to memory790including a random digit layout generator792and random digit mapping layout data795, and a communication interface780. Operation of the PIN entry device700will now be discussed in relation toFIG.8. In particular, at step810, the method800includes the one or more processors770determining random digit layout mapping data795. At step820, the method800includes the one or more processors770controlling presentation of a digit by the electronic display720of each button710according to the random digit layout mapping data795. At step830, the method800includes receiving input data by a user selecting one or more of the buttons710. At step840, the method includes the one or more processors770determining, based on the input data and the random digit layout mapping data, a PIN for the user. 
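By way of non-limiting illustration only, the following Python sketch outlines the message flow of method600 in which encrypted authentication data is transferred from the user input device to the POS device; the encrypt and transport callables are deliberately abstract assumptions standing in for, e.g., triple DES encryption over a Bluetooth link.

```python
# Minimal sketch of the user-input-device side of method 600. `encrypt` and
# `transport` are hypothetical callables; a real implementation might use
# triple DES under a shared key over a Bluetooth connection, as suggested above.

def submit_pin_to_pos(pin: str, key: bytes, encrypt, transport) -> bool:
    """Encrypt authentication data on the user input device and send it to the POS device."""
    ciphertext = encrypt(pin.encode("utf-8"), key)   # e.g. triple DES under a shared key
    response = transport.send({"type": "auth", "payload": ciphertext})
    return bool(response.get("authenticated", False))
```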
In one form, the one or more processors770are configured to determine the random digit layout mapping data for each transaction. For example, the one or more processors may execute a software module such as the random digit layout generator792to determine a random layout of the digits (0-9) for the PIN entry device. In one form, the one or more processors may be configured to generate the random digit layout which is not a traditional digit layout (i.e. first row from left to right being “1”, “2”, “3”, second row from left to right being “4”, “5”, “6”, third row from left to right “7”, “8” and “9” and fourth row “0”). As such, the random digit layout presented by the PIN entry device is a non-traditional digit layout. For example, the random digit layout mapping data may include for example a first row from left to right being “3”, “4”, “9”, second row from left to right being “1”, “7”, “8”, a third row from left to right “2”, “6” and “5”, and a fourth row of “0”. The electronic display720for each button710may include a segmented display such as a seven segmented display such that the one or more processors770are electrically connected thereto to control the presentation of the respective digit according to the random digit layout mapping data795. It will be appreciated that upon determining the PIN, the one or more processors770encrypts the PIN using an encryption algorithm such as triple DES or the like. It will also be appreciated that the random digit layout mapping data795may be stored in memory in an encrypted manner. It will be appreciated that the PIN entry device700can be part of a POS device. Alternatively, the PIN entry device may be part of an Automatic Teller Machine (ATM). It will be appreciated that the random digit layout mapping data792can be utilised with mobile processing devices510such as those discussed in relation toFIGS.5and6. In particular, the mobile communication device510determines random digit layout mapping data and then generates the PIN pad interface515in accordance with the random digit layout mapping data which is presented via the application512executed by the mobile communication device510. The user can then interact with the random digit layout of the PIN pad interface515presented by the display of the mobile communication device510in order to select the appropriate interface elements of the PIN pad interface512to input the authentication data in the form of the user's PIN. Data indicative of the PIN can then be encrypted as discussed above prior to being transferred to the POS device520for processing. Referring toFIG.9there is shown a schematic of a processing system900configured for detecting a security risk. In one form, the processing system900includes one or more processors910coupled to one or more sensors950, one or more output devices904in the form of a display and one or more input devices930. In one form, the one or more sensors950are part of the processing system900, however it is also possible that the one or more sensors950are not integrated with the processing system900. Referring toFIG.10there is shown a flowchart representing a method1000performed by the processing system900ofFIG.9for detecting a security risk. In particular, at step1010, the method1000includes the one or more processors910receiving sensor data. At step1020, the method1000includes the one or more processors910analysing the sensor data to detect whether there is a security risk of the sensitive data being vulnerable. 
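By way of non-limiting illustration only, the following Python sketch shows one possible realisation of the random digit layout generator and of recovering the PIN from button selections in method800; the button indexing and function names are assumptions introduced for the example.

```python
# Minimal sketch of random digit layout mapping data: button index -> displayed digit.
# Button indices are assumed to run 0..9 in a fixed physical order.

import secrets


def generate_layout() -> dict:
    """Return random digit layout mapping data for one transaction."""
    digits = list("0123456789")
    secrets.SystemRandom().shuffle(digits)   # cryptographically stronger shuffle
    # A real device could re-draw here if the shuffle coincided with the traditional layout.
    return {button: digit for button, digit in enumerate(digits)}


def decode_presses(button_presses, layout: dict) -> str:
    """Map the sequence of pressed buttons back to the digits of the PIN."""
    return "".join(layout[button] for button in button_presses)


layout = generate_layout()                  # determined per transaction
# Each button's electronic display would be driven to show layout[button].
pin = decode_presses([3, 7, 0, 9], layout)  # example button selections
```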
The sensitive data can be input by a user using the input device930or displayed by the display of the processing system900. At step1030, the method1000includes the one or more processors910disabling an application922open at the processing system900in response to detecting the security risk. In one form, the one or more processors910are configured to determine, based on the sensor data, a user position relative to the display940. The user position is then compared by the one or more processors910to user position criteria stored in memory920of the processing system900. The security risk can be detected in response to the user position failing to satisfy the user position criteria. The user position can be an angular user position relative to the display. In particular, in the event that the user is facing the display940of the processing system900substantially front-on then the application922is not disabled. However, in the event that the user's head is laterally moved relative to the display so that the user is no longer facing the display front-on or is turned such that the user is not facing the display within an angular user position range (e.g. +/−90 degrees), then the one or more processors910are configured to disable the application922. In this regard, the one or more sensors950may be a camera such as a web-cam or a thermographic camera. The one or more processing systems900may be configured to perform image processing upon one or more images to determine a user position relative to the display. In another form, the one or more processors910can be configured to detect the security risk based on the sensor data being indicative of a camera flash. In particular, the one or more sensors950may be a light sensor such as a photocell, photoresistor, photodiode or phototransistor, wherein the one or more processors910receive a signal indicative of light sensed. In the event that a flash of a camera has been captured by the light sensor950based on analysis of the received signal, the application922can be disabled. In other embodiments, a web-cam or camera can be used as the one or more sensors950, wherein a stream of images or video footage can be analysed by the one or more processors910to determine whether a flash has been detected. In response to the positive detection of a flash, the application922is disabled. In another form, the one or more processors910can be configured to detect, based on the sensor data, a number of users. The security risk is detected in the event that more than one user is detected adjacent the processing system900or zero users are detected adjacent the processing system900. In particular, in the event that the user walks away from the processing system900and sensitive data is left presented upon the display940, the one or more processors910can detect, based on analysis of the sensor data, the security risk and disable the application922. Alternatively, in the event that another person is “shoulder surfing”, the determination of two users through analysis performed by the one or more processors910can be detected as the security risk resulting in the disabling of the application922. In this embodiment, the one or more sensors950can be a camera such as a web-cam, an infra-red sensor or thermographic camera. For example, in relation to an infra-red sensor, in the event that no signal is received by the one or more processors910indicative of a user, the security risk is detected. 
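By way of non-limiting illustration only, the following Python sketch combines the three example checks described above (number of users, angular user position and camera flash) into a single analysis step of method1000; the detector callables and the 90 degree range are assumptions standing in for the image and light-sensor processing.

```python
# Minimal sketch of the analysis step of method 1000. `count_users`,
# `user_angle_deg` and `flash_detected` are hypothetical callables that would be
# backed by web-cam, thermographic-camera or light-sensor processing.

def security_risk_detected(frame, count_users, user_angle_deg, flash_detected,
                           max_angle_deg: float = 90.0) -> bool:
    """Return True if the sensor data indicates the sensitive data may be vulnerable."""
    users = count_users(frame)
    if users == 0 or users > 1:                  # user walked away, or shoulder surfing
        return True
    angle = user_angle_deg(frame)                # angular position relative to the display
    if angle is None or abs(angle) > max_angle_deg:
        return True                              # user not facing the display front-on
    return flash_detected(frame)                 # a photograph of the screen being taken


# When this returns True, the processing system would disable (e.g. minimise or lock)
# the open application until the user successfully re-authenticates.
```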
In relation to camera devices950such as web-cam or a thermographic camera, the one or more processors910may perform image analysis to determine the number of users captured in the image in order to determine whether a security risk has been detected. In the above embodiments, disabling the application922can include the application being minimized. Additionally, the application922may be locked or prevented from being opened without successful user authentication. For example, authentication data such as a valid password may be required to be entered using the input device930of the processing system900in order for the application922to be reopened. In another form, the disabling of the application922may include locking the operating system such that the application922is in turn disabled from being used. The application922can then be reused only upon the operating system being unlocked by successful user authentication which can include the entering of a password or the like. Generally, the processing system900has installed in memory a detection computer program925which configures the processing system900to operate as described above. It will be appreciated that there may be instances where the user wishes to present information to another user viewing the display940or wishes to input data presented on the display with another user. In this instance, the user can disable the detection computer program922executing in the processing system900in order to allow such actions to take place. Once the user wishes for the risk detection processes to recommence, the user can interact with the computer program922executing upon the processing system900to indicate the recommencement of the risk detection process. It will be appreciated that the computer program925executable by the processing system900may be re-enabled after a temporal threshold period of time. For example, the threshold may be 60 minutes, wherein after 60 minutes has elapsed since the detection computer program925was disabled, the computer program925of the processing system900is re-enabled. In another instance, the user can interact with the computer program925of the processing system900to reduce the security risks being detected for a period of time. For example, the user may be working with a colleague at the processing system900for the next hour and as such the user wishes to configure the processing system900such that detected or suspected shoulder surfing is not considered a security risk for this period of time. As such, the user has interact with the computer program925to restrict detections of multiple users viewing the display of the processing system900for the next hour. However, in the event that the users walk away from the processing system900during this hour period, the processing system900can detect this type of security risk and disable the application922. Referring toFIG.11there is shown a schematic of an example of a detection device1110which can be part of a detection system1100. The detection device1110includes one or more sensors1135, a communication interface1140for coupling the detection device1110to a processing system1150executing an application1162associated with sensitive data, a memory1125, and one or more processors1120coupled to the one or more sensors1135, the memory1125and the communication interface1140. Referring toFIG.12there is shown a flowchart representing an example method1200performed by the detection device1110. 
In particular, at step1210, the method1200includes the one or more processors1120obtaining sensor data from the one or more sensors1135. At step1220, the method1200includes the one or more processors1120analysing the sensor data to detect whether there is a security risk of the sensitive data being vulnerable, the sensitive data being input by a user using an input device1170or output by an output device1175of the processing system1150. At step1230, the method1200includes the one or more processors1120instructing the processing system1150, via the communication interface1140, to disable the application in response to detecting the security risk. In one form, the one or more processors1120are configured to determine, based on the sensor data, a user position relative to the output device1175such as the display of the processing system1150. The user position is then compared by the one or more processors1120to user position criteria stored in memory1125of the detection device1110. The security risk can be detected in response to the user position failing to satisfy the user position criteria. The user position can be an angular user position relative to the display1175. In particular, in the event that the user is facing the display1175of the processing system1150substantially front-on then the application1162is not disabled. However, in the event that the user's head is laterally moved relative to the display such that the user is not facing the display1175front-on or is turned such that the user is not facing the display within an angular user position range (e.g. +/−90 degrees), then the one or more processors1120are configured to disable the application1162. In this regard, the one or more sensors1135may be a camera such as a web-cam or a thermographic camera. The one or more processors1120may be configured to perform image processing upon one or more images to determine a user position relative to the display1175of the processing system1150. In another form, the one or more processors1120can be configured to detect the security risk based on the sensor data being indicative of a camera flash. In particular, the one or more sensors1135may be a light sensor such as a photocell, photoresistor, photodiode or phototransistor, wherein the one or more processors1120receive a signal indicative of light sensed. In the event that a flash has been captured by the light sensor1135based on analysis of the received signal, the instruction is transferred to disable the application1162. In other embodiments, a web-cam or camera can be used as the one or more sensors1135, wherein a stream of images or video footage can be analyzed by the one or more processors1120to determine whether a flash has been detected. In response to the positive detection of a flash, the instruction to disable the application1162is transferred. In another form, the one or more processors1120can be configured to detect, based on the sensor data, a number of users that are located adjacent the processing system1150. The security risk is detected in the event that more than one user is detected adjacent the processing system1150or zero users are detected adjacent the processing system1150. In particular, in the event that the user walks away from the processing system1150and sensitive data is presented by the output device1175, the one or more processors1155can detect, based on analysis of the sensor data, the security risk and instruct the processing system1150to disable the application1162. 
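By way of non-limiting illustration only, the following Python sketch shows how the detection device1110could instruct the processing system1150, via the communication interface1140, to disable the application once a security risk is detected; the message format and interface method names are assumptions introduced for the example, and the underlying link could be Bluetooth, WiFi, USB or serial as described in the following paragraphs.

```python
# Minimal sketch of step 1230: sending a disable instruction from the detection
# device to the processing system over an abstract communication interface.
# `send` and the "acknowledged" field are hypothetical.

import json


def instruct_disable(comm_interface, application_id: str) -> bool:
    """Send a disable instruction for the given application and report success."""
    message = json.dumps({"command": "disable_application", "application": application_id})
    reply = comm_interface.send(message.encode("utf-8"))
    return bool(reply and json.loads(reply).get("acknowledged"))
```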
Alternatively, in the event that another person is “shoulder surfing”, the determination of two users can be detected as the security risk resulting in the detection device1110transferring the instruction to the processing system1150to disable the application1162. In this embodiment, the one or more sensors1135can be a camera such as a web-cam, an infra-red sensor or thermographic camera. For example, in relation to an infra-red sensor, in the event that no signal is received by the one or more processors1120indicative of a user, the security risk is detected. In relation to camera devices such as web-cam or a thermographic camera, the one or more processors1120may perform image analysis to determine the number of users captured in the image in order to determine whether a security risk has been detected. In the above embodiments, disabling the application1162can include the application1162being minimized. Additionally, the application1162may be locked or prevented from being opened without successful user authentication. For example, authentication data such as a valid password may be required to be entered using the input device1170of the processing system1150in order for the application1162to be reopened. In another form, the disabling of the application1162may include locking the operating system such that the application1162is in turn disabled from being used. The application1162can then be reused only upon the operating system being unlocked by successful user authentication which can include the entering of a password or the like. The communication interface1140of the detection device1110can be a wireless communication interface such as Bluetooth, WiFi, or the like. Alternatively, a physical communication interface such as a USB cable, serial cable or the like can be used to communicate data between the detection device1110and the communication interface1185of the processing system1150. Generally, the processing system1150has installed in memory1160a detection computer program1165which configures the processing system1150to operate as described above, in that an instruction received from the detection device1110is used to disable the application1162in response to detecting the security risk. It will be appreciated that there may be instances where the user wishes to present information to another user viewing the display1170or wishes to input data with another user present and adjacent the processing system1150. In this instance, the user can disable the detection computer program1162executing in the processing system1150in order to allow such actions to take place. Additionally or alternatively, the processing system1150can transfer, in response to the user providing input to request disablement of the detection process, an instruction or command to the detection device1110to be disabled. As such, a computer program1130executed by the one or more processors1120of the detection device1110can be disabled in response to receiving the command or instruction from the processing system1150. Once the user wishes for the risk detection processes to recommence, the user can interact with the computer program1160of the processing system1150to indicate the recommencement of the risk detection process, and in response the processing system1150transfers to the detection device1110, via the communication interfaces1185,1140, the command or instruction to re-enable the computer program1130of the detection device1110such that security risks can again be detected. 
It will be appreciated that the computer program1162executable by the processing system1150may automatically request re-enablement of the software after a temporal threshold period of time. For example, the threshold may be 60 minutes, wherein after 60 minutes has elapsed since the detection device1110was disabled, the computer program1165of the processing system1150transfers a re-enablement command or instruction to the detection device1110. Alternatively, the computer program1130of the detection device1110may monitor the period of time disabled and then re-enable after a threshold period of time of disablement has elapsed. In another instance, the user can interact with the computer program1165of the processing system and/or the computer program1130of the detection device1110to reduce the number of types of security risks being detected for a period of time. For example, the user may be working with a colleague at the processing system1150for the next hour and as such the user interacts with the computer program1165of the processing system1150and/or the computer program1130of the detection device1110to reduce the security risks detected for shoulder surfing for the next hour. As such, the application1162is not disabled despite multiple users viewing the display of the processing system1150for the next hour. However, other types of security risks are still monitored during this period. Therefore, in the event that the users walk away from the processing system1150during this hour period, the detection device can detect this security risk and disable the application1162being executed by the processing system1150. It will be appreciated that in some instances the processing system1150may also include integrated sensors such as a webcam for a laptop processing system. As such, sensor data can be obtained by the one or more processors1155from the one or more sensors1190(shown in dotted line) and transferred, via the communication interfaces1185,1140to the detection device1110to be analyzed to determine if there is a security risk. Thus, one or more sensors1135of the detection device1110and one or more sensors1190of the processing system1150can be used to detect if there is a security risk. Throughout this specification and claims which follow, unless the context requires otherwise, the word “comprise”, and variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated integer or group of integers or steps but not the exclusion of any other integer or group of integers. Persons skilled in the art will appreciate that numerous variations and modifications will become apparent. All such variations and modifications which become apparent to persons skilled in the art should be considered to fall within the spirit and scope of the invention as broadly described hereinbefore. | 37,884 
11861035 | DETAILED DESCRIPTION It is to be understood that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein are not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service. Service Models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. 
The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes. Referring now toFIG.1, illustrative cloud computing environment50is depicted. As shown, cloud computing environment50includes one or more cloud computing nodes10with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone54A, desktop computer54B, laptop computer54C, and/or automobile computer system54N may communicate. Nodes10may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment50to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices54A-N shown inFIG.1are intended to be illustrative only and that computing nodes10and cloud computing environment50can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now toFIG.2, a set of functional abstraction layers provided by cloud computing environment50(FIG.1) is shown. It should be understood in advance that the components, layers, and functions shown inFIG.2are intended to be illustrative only and embodiments of the invention are not limited thereto. 
As depicted, the following layers and corresponding functions are provided: Hardware and software layer60includes hardware and software components. Examples of hardware components include: mainframes61; RISC (Reduced Instruction Set Computer) architecture based servers62; servers63; blade servers64; storage devices65; and networks and networking components66. In some embodiments, software components include network application server software67and database software68. Virtualization layer70provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers71; virtual storage72; virtual networks73, including virtual private networks; virtual applications and operating systems74; and virtual clients75. In one example, management layer80may provide the functions described below. Resource provisioning81provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing82provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal83provides access to the cloud computing environment for consumers and system administrators. Service level management84provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment85provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer90provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation91; software development and lifecycle management92; virtual classroom education delivery93; data analytics processing94; transaction processing95; and building an artificial intelligence model96. A public AI model is an AI model that is trained with public data only. A private AI model is a model trained with a mix of public and private data. A common technique for improving both public and private AI models is transfer learning which involves fine-tuning a pre-trained model through weighting updates. Common issues that face a user of private AI model is that a user's private AI model may not have enough samples in the private AI model and the turnaround time in the private AI model may be slow because the user does not have the computer power available to a public AI model which is accessed by many users. To overcome these issues a user may try to fine tune user's private AI model with someone else's public AI model using transfer-learning. In order to train the user's private AI model with the public AI model, the user may download the public AI model, keep only the feature extraction parts of the public, add the FC layers of the private AI model and tune the entire combined neural network for better accuracy. 
Problems with existing ways of fine-tuning public and private AI models with transfer learning include unverifiable original models, too many derived or transferred models with a resulting inference overhead, and no way to preserve privacy of the data in the user's private AI model. In conventional methods of transfer learning between a public AI model and a private AI model, a user's private AI model is not only fine-tuned by the public model but the public AI model is fine-tuned with the user's private AI model. However, while a private model fine-tuned with public data from a public AI remains private, a public AI model which is tuned with private data from a private AI model becomes a private AI model, due to the presence of the private data in the previously public AI model. In an embodiment of the present invention302depicted inFIG.3, in order to prevent the public AI model from becoming a private AI model during transfer learning, the public AI model is kept “fixed”, i.e., the public AI model does not change, when a user's private AI model is combined with a public AI model to form a combined AI model and the user's private AI model is fine-tuned with the public AI model. In the embodiment of the present invention depicted inFIG.3, a public NN312is accessible by many private NNs314, of which a private NN316for user [i] and a private NN318for user [j] are shown; the private NNs314interact with public NN312over a 5G wireless telecommunication network as shown by double-headed arrow320. Public NN312is a large, full-size, public NN for everyone and contains public knowledge. Private NNs316are small and contain private knowledge. Public NN312runs on a public environment (env) such as a public cloud and maximizes resource sharing, i.e., batch-up to cut down the cost of using public NN312. Each private NN of private NNs316runs on a private environment, such as a mobile phone, private cloud, etc., in which privacy is protected and which is fully tuned for personalization. In addition to the benefit of preventing a public AI model from becoming a private AI model, the embodiment of the present invention depicted inFIG.3has an additional benefit, as shown inFIG.4: the embodiment of the present invention depicted inFIG.3is more energy efficient compared to other systems and methods of transfer learning involving public AI models and private AI models. Switching model parameters for multiple users/applications, as is done in traditional transfer learning methods, is an energy consuming task. In contrast, fixed and sharable public NN parameters, such as provided in various embodiments of the present invention, for example the embodiment of the present invention shown inFIG.4, provide energy efficiency. FIG.5depicts a public database502, i.e., ImageNet, and a private database504, i.e., Food101, used to test the operation of an embodiment of the present invention. Resnet50, a convolutional neural network trained on more than a million images from ImageNet, is used as the public AI model. A private AI model is trained on Food101. Table602ofFIG.6shows the results of training tests. Column612lists the size in megabytes of each of the public AI models used in each test, column614lists the size in megabytes of each of the private AI models used in each test, column616lists the Top-1 accuracy for each test, and column618lists the training time in seconds per epoch for each test. Top-1 accuracy is the conventional accuracy: the model answer (the one with highest probability) must be exactly the expected answer. 
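By way of non-limiting illustration only, the following PyTorch (Python) sketch shows one way of keeping the public AI model fixed while only the private AI model is fine-tuned, in the spirit of the combined AI model ofFIG.3and Test626; the use of torchvision's ResNet50 weights and a single fully connected private head mirrors the test setup above but is an assumption rather than the exact architecture of the disclosure.

```python
# Minimal sketch: freeze a pre-trained public model and train only a small
# private head on top of its feature tensors. The head, optimizer and class
# count (101, e.g. Food101) are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import models

public_nn = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
public_nn.fc = nn.Identity()          # expose the 2048-dim feature extraction output
for param in public_nn.parameters():
    param.requires_grad = False       # public NN stays fixed: no private data leaks into it
public_nn.eval()

private_head = nn.Linear(2048, 101)   # private FC layer, e.g. 101 food classes
optimizer = torch.optim.SGD(private_head.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()


def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    with torch.no_grad():             # only feature tensors cross the public/private boundary
        features = public_nn(images)
    logits = private_head(features)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```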
For the public AI model Resnet50, trainings on Resnet50 alone are treated as if they are tests on a “private AI model” in Table602and Resnet50 is not a fixed public AI model. Test622involved a scratch training of Resnet50. Test624involved a full retraining of Resnet50. Test626involved a training, according to an embodiment of the present invention, of a combined AI model comprising Resnet50 and an FC layer of the private AI model trained on Food101, where Resnet50 was fixed during training of the combined AI model. Test628involved a training of a combined AI model comprising Resnet50 as a fixed public AI model and a private AI model consisting of a combination of res4, which is part of public model Resnet50, and an FC layer of a private AI model trained on Food101. Test630involved a training of a combined AI model comprising Resnet50 as a fixed public AI model and a private AI model consisting of 25% of Resnet50 and an FC layer of a private AI model trained on Food101. Test632involved a training of a combined AI model comprising Resnet50 as a fixed public AI model and a private AI model consisting of 12.5% of Resnet50 and an FC layer of a private AI model trained on Food101. Test634involved a scratch training of 25% of Resnet50. Test636involved a scratch training of 12.5% of Resnet50. As can be seen in Table602ofFIG.6, re-training an entire private AI model has an expensive training cost (145 sec/epoch in Test624). Also, it is expensive to retrain a private model entirely on private servers. Retraining an FC layer of a private model as shown in Test626has the advantage of only having to retrain a small private AI model, but the private AI model has poor accuracy. Test632shows several advantages of a combined AI model according to an embodiment of the present invention in which the private part (private AI model) of the combined AI model on private servers is linked to a fixed public part of the combined AI model, which is a large pre-trained public AI model on cheaper public services. Such a combined AI model provides efficient cost management and protects privacy while still maintaining high accuracy. FIG.7depicts how a public AI model and a private AI model are kept separate from each other in a combined AI model according to an embodiment of the present invention, in comparison to the mixing of the public AI model with the private AI model in conventional transfer learning techniques. An equation712for a function “f” is shown for a pre-trained NN (public AI model). Public knowledge from the public AI model is mixed with private knowledge from private data to produce equation714for a function “f” that includes private knowledge from a user i. In contrast, a combined AI model according to an embodiment of the present invention, in which the public AI model and the private AI model are kept separate, produces equation716including a fusion function “z” that includes a function “f” for the public AI model and a separate function “g” for the private AI model. FIG.8depicts a conventional transfer learning service802in which customers' private local databases812on a private side814, such as a private cloud, for each customer, are uploaded as private data816, as shown by each respective arrow818, to a public side820, such as a public cloud. Each customer picks a model template822from a public catalog824which is trained using the customer's private data816to produce a private AI model826for the customer that provides a prediction828for the customer. 
Transfer learning service802not only produces a private model832that can be downloaded by the customer to private side814but also produces a private model834that remains on public side820. From a customer standpoint, concerns about such a transfer learning service as depicted inFIG.8include: (1) concerns about trusting the service provider of the public AI model, because the customer's private AI model, prediction, labels, etc. remain on the public side, (2) concerns about the cost increasing because the service provider owns everything on the public side and (3) concerns that the service provider will know private information about the customer's private AI model, including the problem the customer was trying to solve, the number of classes in the customer's private AI model, etc. From a service provider standpoint, concerns about such a transfer learning service as depicted inFIG.8include: (1) earning the trust of customers given that the service provider must keep the customer's private information safe, protected, encrypted, etc., and (2) the cost of providing the service increasing over time due to safety/legal overhead, the need for more and bigger data/models and more resources needed to serve higher peak inference requests. FIG.9depicts a transfer learning service902, according to an embodiment of the present invention, in which customers' private local databases912are located on a private side914, such as a private cloud, for each customer. Each customer selects a model template922from a public catalog924on public side916, such as a public cloud, to use as a public AI model926. Public AI model926is “frozen”, i.e., fixed, and will not be updated during transfer learning and training of a customer's private AI model928, during which public AI model926is linked to the customer's private AI model928as a combined AI model. During training, private AI model928is trained using feature tensors932of public AI model926. Feature tensors932are features of public AI model926used to modify private AI model928. After being trained, private AI model928may be used to make a prediction940. Because public AI model926is kept fixed while being used to train private AI model928, public AI model926may also be used by itself or in combination with another public AI model, i.e., public AI model942from public catalog924, to train another private AI model944, which may be owned by the same customer as owns private AI model928or by a different customer. As shown inFIG.9, during training, private AI model944is trained using feature tensor946of public AI model926and feature tensor948of public AI model942, similarly to the way that private AI model928is trained using feature tensors932of public AI model926. Once private AI model944is trained, private AI model944may be used to make a prediction952. Each public AI model in the embodiment of the present invention depicted inFIG.9is fixed, big, complex and perfectly trained, i.e., each public AI model is trained with a large dataset. Private AI model928is smaller than public AI model926and private AI model934is smaller than public AI model942. In an embodiment of the present invention as shown inFIG.9, the public AI model/model template selected to train a particular private AI model is the “most transferable one”, i.e., the public AI model that is most relevant to the private AI model. 
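By way of non-limiting illustration only, and following on from the previous sketch, the following PyTorch (Python) sketch corresponds to theFIG.9case in which a second private AI model is trained from the feature tensors of two frozen public AI models; the choice of a second ResNet backbone, the layer dimensions and the output class count are assumptions introduced for the example.

```python
# Minimal sketch: two frozen, shareable public models supply feature tensors that
# are concatenated and consumed by a small private model; only the private model
# is ever updated. Backbones and dimensions are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import models

public_a = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
public_b = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for model in (public_a, public_b):
    model.fc = nn.Identity()
    for param in model.parameters():
        param.requires_grad = False   # both public models remain frozen and shareable
    model.eval()

# Private model: consumes concatenated feature tensors (2048 + 512 dims here).
private_model = nn.Sequential(nn.Linear(2048 + 512, 256), nn.ReLU(), nn.Linear(256, 10))


def private_forward(images: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():             # feature tensors are the only public-side contribution
        features = torch.cat([public_a(images), public_b(images)], dim=1)
    return private_model(features)    # the prediction is produced on the private side only
```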
Augmenting a customer's private AI model itself for knowledge transfer as shown inFIG.9has various advantages including: preventing the updating of weights (unlike in typical transfer learning), enabling inherent model parallelism, i.e., that a fixed public AI model may be used to simultaneously train two or more different private AI model, providing efficient computing at scale and more secure data/model protection. FIG.10depicts a usage scenario1002for an embodiment of the present invention. On a public side1012of usage scenario1002are a public AI model1014and a public AI model1016, each of which is fixed and perfectly trained. On a private side1022are a private AI model1024, a private AI model1026, a private AI model1028and a private AI model1030. Public AI model1014is trained with surveillance images from various sources, such as images from a surveillance camera1032at a highway and images from a surveillance camera1034at a downtown location. Public AI model1016is trained with images from various sources, such as images from surveillance camera1034at a downtown location and is trained with speech files from various data sets (not shown). Private AI model1024for traffic control is trained using a dataset1042of traffic images. Private AI model1024and dataset1042are stored on an electronic device1044. In order to improve private AI model1024, private AI model1024may be linked to public AI model1014to allow fine-tuning of private AI model1024with the early highway feature1046of public AI model1014. Early highway feature1046is based on early morning images from one or more highway surveillance cameras, such as surveillance camera1032. Private AI model1026for speeding, i.e., for determining which vehicles are speeding, is trained using dataset1042of traffic images. Private AI model1026and dataset1042are stored on electronic device1044. In order to improve private AI model1026, private AI model1026may be linked to public AI model1014to allow fine-tuning of private AI model1026with early highway feature1048of public AI model1014and with a late highway feature1050, of public AI model1014. Late highway feature1050is based on images from late in the day from one or more highway surveillance cameras, such as surveillance camera1032. Private AI model1028for street safety, i.e., for determining which streets are currently safe, is trained using a dataset1052of street images. Private AI model1028and dataset1052are stored on electronic device1054. Private AI model1028is also stored on a mobile device1062, such as a smartphone. In order to improve private AI model1028, private AI model1028may be linked to public AI model1014to allow fine-tuning of private AI model1028with late downtown feature1064of public AI model1014. Late downtown feature1064is based on late day images from one or more downtown surveillance cameras, such as surveillance camera1034. Private AI model1028is also linked to public AI model1016to fine-tune private AI model1028with early downtown feature1066of public AI model1016. Early downtown feature1066is based on speech files on messages about the condition of street safety in the early hours downtown. Private AI model1030is for private voice recognition, i.e., for voice recognition of communications received by mobile device1062and is stored on mobile device1062. In order to improve private AI model1030, private AI model1030may be linked to public AI model1016to allow fine-tuning of private AI model1030with late mobile feature1072of public AI model1016. 
Late mobile feature1072is combination of features to a mobile phone user. Although public AI models and private AI models trained with particular types of data are described above and shown in the drawings, various embodiments of the present invention may use public AI models and private AI models trained with other types of data including computer vision (image and video) data, language data, text data, speech data, audio data, etc. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. 
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. 
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. | 32,241 |
11861036 | DETAILED DESCRIPTION One or more specific embodiments of the present disclosure are described above. In an effort to provide a concise description of these embodiments, certain features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure. While only certain features of the disclosure have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. For example, while the embodiments described herein include a specific logic organization for private information protection services, substantially similar benefits provided by the present invention may be obtained by trivial structural alterations such as changes in name or in non-substantial changes to the mode of operation of the data structures. It is, thus, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention. Turning first to a discussion of an overall system for private data protection,FIG.1is a block diagram, illustrating a system100for private information management, in accordance with embodiments of the present disclosure. The system100includes a private information protection (PIP) service102that is communicatively coupled to a variety of devices and/or service providers (e.g., via the Internet104). For example, in the current embodiment, the PIP service102is communicatively coupled to one or more web servers106that serve websites that include private information surveys. The private information surveys include questions geared towards particular individuals to ascertain particular characteristics of the individual's and/or group's (such as a family's) private data handling practices. Additionally, the PIP service102, in the current embodiment, is communicatively coupled to “Internet of Things” (IOT) devices and/or IOT device service providers108. IOT devices are common objects, such as televisions, appliances, etc. that are embedded with computing devices for interconnection via the Internet104, enabling them to send and receive data. IOT device service providers provide electronic services that make use of data sent and/or received by the IOT devices. In many instances, IOT service providers act as a mediator between IOT devices, thus having access to many individual IOT device settings and data metrics. As used herein, IOT devices may also include items that facilitate Internet communications, such as network routers, switches, and/or other local area network (LAN) devices. The PIP service102may also be connected to social media services110. Social media services110include interactive computer-mediated technologies that facilitate the creation and sharing of information, ideas, career interests and other forms of expression via virtual communities. 
The PIP service102may also be communicatively coupled to a PIP service application112, which may execute on a remote electronic device to capture additional data. Each of the systems coupled to the PIP service102may provide data useful for an analysis of an individual's and/or a group of individuals' private data protection. The PIP service102may retrieve information from these systems, aggregate the data, and determine, via PIP data analysis, how protected the individual's and/or the group of individuals' data is. Based upon the PIP analysis, a PIP score114indicative of a measure of how protected private information is may be generated. For example, factors used in the PIP analysis may result in a determination of a level of private information disclosure which may be represented in a PIP score114. As mentioned above, the PIP score114may impact product/service offerings and/or costs associated with such products/services. For example, the PIP score114may be provided to a financial readiness scoring (FRS) service116. The FRS score118may indicate a level of financial aptitude of an individual and/or group of individuals. As may be appreciated, the individual's and/or group of individuals' private data protection or lack thereof may positively or negatively affect financial aptitude. For example, less protected private information may result in more financial fraud. Accordingly, the FRS score118may be reduced for lower PIP scores114. Conversely, more protected private information may result in less financial fraud. Accordingly, the FRS score118may be increased for higher PIP scores114. The FRS scores118may be provided to banking and/or finance electronic systems120, after modifying the FRS score118to account for the PIP score114. This enables the banking and/or financial electronic systems120to make educated product/service offerings and/or product/service price adjustments based upon a level of protection of private data. This provides a significant benefit over former techniques, which did not have access to such information for use in product/service offerings. In some embodiments, the PIP score114and/or the FRS score118may be provided to the individual and/or group of individuals via, for example, a client device122. For example, the PIP score114and/or the FRS score118may be provided as a numerical indication of where in a range of values the individual measures up for private information protection and/or financial readiness, respectively. By using numerical scores, individuals and/or groups of individuals may become quickly apprised of a standing amongst others with relatively little effort. Further, lower numerical scores may motivate positive change in private information protection and/or financial readiness actions, driving increased growth in these areas. Further, high numerical scores may motivate persistence in already positive private information protection and/or financial readiness actions.
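The relationship between the PIP score114and the FRS score118described above might be pictured with the following minimal Python sketch. The score range, midpoint, and sensitivity value are illustrative assumptions only; the disclosure does not specify particular scales or formulas.

    def adjust_frs_score(base_frs_score: float, pip_score: float,
                         pip_midpoint: float = 500.0, sensitivity: float = 0.05) -> float:
        """Raise or lower a baseline FRS score based on a PIP score.

        Assumed convention: both scores range from 0 to 1000, and a PIP score
        above the midpoint nudges the FRS score up while a score below it
        nudges the FRS score down.
        """
        adjustment = (pip_score - pip_midpoint) * sensitivity
        return max(0.0, min(1000.0, base_frs_score + adjustment))

    # Example: a weak privacy posture lowers the financial readiness score,
    # a strong posture raises it.
    print(adjust_frs_score(base_frs_score=700, pip_score=300))   # 690.0
    print(adjust_frs_score(base_frs_score=700, pip_score=900))   # 720.0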
In some embodiments, the PIP service102may be communicatively coupled to a remedial management service124. Using information acquired by the PIP service102, the remedial management service124may automatically institute remedial measures. For example, the PIP service102may, through PIP analysis, identify that private information is easily accessible by public users on the social media services110and, therefore, is unprotected. The PIP service102may provide information to the remedial management service124, which may communicate with the social media services110(e.g., via an application programming interface (API)), to control private information settings within the social media services110. While the remedial management service124is shown as a service separate from the PIP service102, in some embodiments, these services are combined as one service. In some embodiments, a machine-learning system126may be coupled with the PIP service102to derive additional information from data acquired by the PIP service102. For example, as will be discussed in more detail below, the machine learning system126may be used to identify relevant groups of individuals that may act in common ways with regard to protection of private data (or lack thereof). For example, in one embodiment, the machine learning system126may identify a previously unknown group, such as enlisted servicemen, and particular PIP activities associated with this group of individuals, such as an indication that they share deployment dates online publicly, which may result in a reduction of a PIP score114. By identifying new relevant groups and/or particular PIP activities associated with certain groups of individuals, tailored advice may be generated and presented for particular subsets of individuals. Tailored PIP content is discussed in more detail below with regard toFIG.8. Turning now to functionality of the PIP service102,FIG.2is a flowchart, illustrating a process200for managing private information, in accordance with an embodiment of the present disclosure. The process200begins by a user initiating and/or logging in to the private information scoring service (block202). For example, the individual and/or group of individuals may access, via a device204, a uniform resource locator (URL) of the PIP service102, where the PIP scoring may commence. The internal services206(e.g., the PIP service102)may determine whether the individual and/or group of individuals is a member of the internal services (decision block208). If the individual and/or group of individuals is a member, available information about the individual and/or group of individuals is gathered for use in the PIP analysis (block210). The privacy scoring survey (e.g., the PIP analysis data gathering from relevant sources) may begin, with a baseline set of data including the gathered data from block210(block212). Otherwise, when the individual and/or group of individuals is not a member and no additional details regarding the individual and/or group of individuals is known by the internal services206, the privacy score survey may begin without gathered baseline data (block212). The privacy score survey may include many different sections pertaining to different internal and/or external services. Data accumulated in each of the sections may be aggregated with data from the other sections to be used in determination of the overall private information protection score. In the current embodiment, a first section includes a social media section214for collecting/analyzing private information protection pertaining to social media services110. As a preliminary matter in the social media section214, the device may identify whether the individual and/or group of individuals has one or more social media profiles (decision block216). This may be done in a number of ways. 
For example, the device204may poll for installed applications on the device204and determine if any social media service110applications are installed. If there are social media service110applications installed, the device204may access the social media service110applications to identify whether the individual and/or group of individuals is logged into a social media profile. If so, the device204can determine that the individual and/or group of individuals does have a social media profile and can identify a unique identifier associated with the social media profile. In some embodiments, the device204may provide a graphical user interface (GUI) prompting the individual and/or group of individuals to indicate whether they have a social media profile. The response to this prompt may provide an indication to the device204as to whether the individual and/or group of individuals has a social media profile. When the individual and/or group of individuals has at least one social media profile, the user may be prompted to log in to the social media services110(which are external services218) or otherwise provide access to social media services110(block220). For example, open authorization (OAuth) access may be used between the social media services110and the PIP services102. Once provided access, the internal services206may retrieve from the social media services110, account information, profile activity, etc. that may be useful for the PIP analysis (block222). For example, as may be appreciated, information regarding posted data and who the data is posted to may be very useful for the PIP analysis, as sharing certain types of identifying information with unfamiliar people could result in less private information protection. Further additional profile information, which may be obtained from social media services110APIs, may include security/privacy settings, etc. When such information is available, the PIP service102may determine a level of protection regardless of historical posting of data. In some embodiments, it may be enough to provide a unique identifier for the social media profile without providing access to personal social media services110for the individual and/or group of individuals. For example, when looking for historical post data for public (e.g., non-specified) viewers of the social media services110, the PIP service102may simply reference the profile using the unique identifier as a public viewer. Once accessed, the PIP service102may crawl through posts and other data of the profile to ascertain information that is not protected from public view. FIGS.3A and3Bare schematic diagrams, illustrating performance of a private information analysis for social media services110, in accordance with embodiments of the present disclosure.FIG.3Aillustrates an embodiment that uses a public view300for gathering social media services110data andFIG.3Buses a limited view350to gather data. InFIG.3A, the individual provides a social media unique identifier302as “JOHN123” in progression304. Upon receiving the unique identifier associated with the social media profile, the PIP services102may access the social media page view306or underlying code308of the social media page view306that is associated with the unique identifier302. This is illustrated in progression310. The PIP service102may crawl either the social media page view306or the underlying code308for private information. As illustrated in progression310, two pieces of private information312(e.g., a full legal name, birthday, or other identifying information) are identified.
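A minimal Python sketch of this crawling step is shown below for illustration. The regular expressions, the set of known private values, and the example page body are assumptions of the sketch; an actual implementation would depend on the pages or APIs exposed by the social media services110.

    import re

    # Hypothetical patterns for information the PIP analysis treats as private.
    PRIVATE_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "birthday": re.compile(r"\b(?:0?[1-9]|1[0-2])/(?:0?[1-9]|[12]\d|3[01])/(?:19|20)\d{2}\b"),
        "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def crawl_profile_for_private_info(page_html: str, known_private_values=()):
        """Return the types of private information found in a publicly viewable page."""
        findings = []
        for info_type, pattern in PRIVATE_PATTERNS.items():
            if pattern.search(page_html):
                findings.append(info_type)
        # Also mine for values already known to be used as security answers,
        # such as a mother's maiden name, as described above.
        for value in known_private_values:
            if value.lower() in page_html.lower():
                findings.append(f"known_value:{value}")
        return findings

    # Usage with a fabricated page body:
    html = "<div>Call me at 555-123-4567. Born 04/02/1985.</div>"
    print(crawl_profile_for_private_info(html, known_private_values=["Smith"]))
    # ['birthday', 'phone']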
In progression314, the privacy information score may be adjusted based upon the type and/or amount of private information that is available in public view. For example, private information protection scores may be reduced with an increasing magnitude based upon weights associated with different types of data. For example, social security numbers, which may be more difficult to obtain and are oftentimes treated as protective identifiers, may be weighted heavier than birthdates. Further, full legal names may be weighted less than birthdays, as they are more easily attainable than birthdays and are not often relied upon for security measures. In some instances, the profile can be crawled for text associated with known private information. Further, in some embodiments, image recognition can be used to identify pictures or video with potential private information protection concerns. In some embodiments, to do this, the PIP service102may identify known private information and mine for posted content associated with the known private information. For example, if the PIP services102know that an individual has listed his mother's maiden name as an answer to a security question on a website, the PIP services102may crawl for disclosure of his mother's maiden name on the social media services110. The same technique may be used for other digital content. If, for example, the individual indicated that his favorite sport is tennis on a security question, the PIP services102could analyze posted images, video, and/or audio for content that discloses the individual's enjoyment of tennis (e.g., a tennis highlights video posted to social media, a picture of the individual playing tennis, and/or a posted podcast related to tennis, etc.). The private information protection score may be modified based upon the presence or lack of presence of such content. In some embodiments, the PIP service102may identify an accuracy of disclosed information and factor that into the private information protection score. For example, if a birthday is disclosed, but is not the individual's actual birthday, this may be treated as non-disclosure of the individual's birthday and/or may be treated as a protection precaution that actually increases the individual's private information protection score. InFIG.3B, the individual logs in to the social media profile using login information352, as illustrated in progression354. Upon logging in, the PIP service102may have access (e.g., via the social media services'110APIs) to limited views356, which can only be accessed by specific groups of people (e.g., social media “friends”). As may be appreciated, with limited access, certain information may become less sensitive, while certain other information remains sensitive. For example, disclosure of a birthday to “friends” may not be sensitive, but social security number disclosure may still be sensitive. Accordingly, the PIP service102may identify potentially sensitive data based upon a context of who has access to the disclosure. As discussed withFIG.3Aand illustrated in progression358, the PIP services102may crawl either a page view356or the underlying code360. Because the limited view356and underlying code360have a more limited audience than the view306and underlying code308inFIG.3A, some private data may not be marked as sensitive in progression358. For example, while two pieces of sensitive data were identified inFIG.3A, only one piece of sensitive data362is identified inFIG.3B. The private information protection score may be updated based upon the identified private information, as illustrated in progression364.
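The weighting of data types and the effect of the viewing audience described above might be sketched as follows. The specific weights, the audience multipliers, and the treatment of inaccurate disclosures are illustrative assumptions, not values taken from the disclosure.

    # Hypothetical weights: harder-to-obtain identifiers cost more when exposed.
    TYPE_WEIGHTS = {"ssn": 100, "birthday": 40, "full_legal_name": 15}

    # Hypothetical multipliers: disclosure to the public is worse than to "friends".
    AUDIENCE_MULTIPLIER = {"public": 1.0, "friends": 0.4}

    def score_penalty(findings, audience="public"):
        """Compute the total reduction to a PIP score for a set of disclosures."""
        multiplier = AUDIENCE_MULTIPLIER.get(audience, 1.0)
        penalty = 0.0
        for info_type, is_accurate in findings:
            if not is_accurate:
                # Inaccurate "disclosures" (e.g., a decoy birthday) are not penalized
                # and could even be treated as a protective precaution.
                continue
            weight = TYPE_WEIGHTS.get(info_type, 10)
            penalty += weight * multiplier
        return penalty

    # Public disclosure of a real name and birthday (as in FIG. 3A):
    print(score_penalty([("full_legal_name", True), ("birthday", True)], "public"))  # 55.0
    # The same birthday visible only to friends (as in FIG. 3B) costs less:
    print(score_penalty([("birthday", True)], "friends"))                            # 16.0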
In some embodiments, the PIP service application112may be used to obtain a view of the social media profile. For example, inFIG.4A, a device402may prompt404for a snapshot of a social media view (e.g., either public or limited). A camera of the device402may be positioned to take a snapshot of electronic display406, which is displaying the requested social media view408. Upon capturing the snapshot, the PIP service application112may provide the snapshot to the PIP service102, where the PIP service102can use optical character recognition (OCR) and/or image recognition techniques to analyze the view408for disclosed private information, as discussed above. Having discussed the social media section214, the discussion now turns to a physical document section224inFIG.2. The physical documents section relates to protection of data found on tangible documents, such as postal mail, etc. In some embodiments, data related to physical documents may be captured using the PIP service application112ofFIG.1. For example, physical document images may be captured in block226. FIG.4Bis a schematic diagram, illustrating usage of an image capture device (e.g., a camera) of an electronic device450to capture relevant data for a private information analysis, in accordance with an embodiment of the present disclosure. In FIG.4B, the PIP service application112, running on the electronic device450, provides a prompt452requesting a snapshot of physical mail454. Attributes of the physical mail454may be recognized (e.g., via optical character recognition (OCR)) to scan for pertinent private information analysis data. For example, in the current embodiment, the sender information456as well as the recipient information458is captured. The sender information456may be used to identify predatory solicitors and other bad actors who may be attempting to defraud an individual. For example, the sender information456may be compared against a database that includes a list of addresses, entities, etc. that are known to practice predatory business practices. Accordingly, when such a sender is detected in the sender information456, the individual can be notified of the bad actor status of the sender, in an effort to reduce fraudulent activity. Further, the recipient information may be useful to understand whether certain private information is already out in the public. For example, a generic recipient, such as “Head of Household,” may indicate that the sender does not have access to the individual's full legal name, whereas a specific recipient, such as “John A. Smith,” may indicate that the sender already has the individual's full legal name. This information may be used in the private information protection score and may result in a notification to the individual.
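For illustration, the sender and recipient analysis described above might look roughly like the following Python sketch. The predatory-sender list, the generic-recipient list, and the function name are hypothetical.

    # Hypothetical list of senders known to use predatory business practices.
    KNOWN_PREDATORY_SENDERS = {"QUICK CASH ADVANCE LLC", "LUCKY PRIZE CENTER"}

    GENERIC_RECIPIENTS = {"HEAD OF HOUSEHOLD", "CURRENT RESIDENT", "OUR NEIGHBOR"}

    def analyze_mail(sender: str, recipient: str, full_legal_name: str):
        """Flag predatory senders and infer how much the sender already knows."""
        return {
            "predatory_sender": sender.strip().upper() in KNOWN_PREDATORY_SENDERS,
            # An exact match suggests the full legal name is already disclosed;
            # a generic recipient suggests the sender lacks the individual's name.
            "name_already_disclosed": recipient.strip().upper() == full_legal_name.upper(),
            "generic_recipient": recipient.strip().upper() in GENERIC_RECIPIENTS,
        }

    print(analyze_mail("Quick Cash Advance LLC", "Head of Household", "John A. Smith"))
    # {'predatory_sender': True, 'name_already_disclosed': False, 'generic_recipient': True}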
Returning toFIG.2, additionally and/or alternatively, survey questions may be answered related to physical documents (block228).FIG.5is a schematic diagram, illustrating a GUI500that presents a survey for obtaining relevant information for a private information analysis, in accordance with an embodiment of the present disclosure. As illustrated inFIG.5, the survey may include questions to be answered via a list of options502, questions to be answered via drop down options504, questions to be answered with freeform text506, or any combination thereof. The questions for the survey in this section relate to physical document procedures followed by the individual or group of individuals. For example, questions may relate to the types of physical documents that are retained, the period of retention, how they are disposed of, etc. In this section, the survey questions may look to see that the individual or group of individuals is properly retaining certain documents with proper security (e.g., out of public access, such as in a lock box, etc.) and that disposed of documents are disposed of in a secure manner (e.g., shredding documents with sensitive data rather than merely throwing the documents in the trash). In some embodiments, it may be desirable to do a more piecemeal survey, not asking all questions at once, as answering a significant number of questions may seem like a daunting task or may at the least be an undesirable user experience.FIG.6is a flowchart, illustrating a process600for providing a piecemeal survey based upon usage of an electronic website, in accordance with an embodiment of the present disclosure. The process600begins by identifying the individual user (block602). For example, this can be done when the user logs into the PIP service102website. Next, a typical site usage pattern and/or entry method to the PIP service102website may be determined (block604). For example, the usage pattern might include looking at a number of times the individual has accessed the website in the past, a frequency that the user has accessed the website in the past, etc. Such a pattern may provide an indication of how likely the individual is to return in the future to answer additional questions, which may impact the number of questions currently offered. Further, the entry method to the website may also be used to decide how many questions to offer. For example, if the individual directly accessed the survey, this may indicate the individual is prepared to answer more questions in the survey. In contrast, if the survey questions are indirectly provided to the user (e.g., the individual is on a car-loan site and is presented survey options related to private information protection), this may indicate that the individual may be less likely to be prepared to answer a significant number of survey questions. Based upon the usage pattern and/or entry method, a number of survey questions to present may be determined (block608). For example, as mentioned above, high frequency of visits or extensive past usage may indicate that fewer questions can be asked up front, as the individual is likely to return to the website and can answer more questions the next time they log in. On the other hand, less frequent users may be given larger numbers of questions, as it may be unclear whether these users will return to the site to answer additional survey questions in the future. Likewise, if a direct entry method is used to access the survey questions, the individual may receive more survey questions than individuals that indirectly access survey questions (e.g., as a secondary topic of exploration on the website). Once the number of survey questions to present is determined, the number of survey questions are presented to the individual (block610). For example, this might result in the GUI500ofFIG.5, described above, being presented as determined by the process600ofFIG.6.
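A minimal Python sketch of the question-count decision of process600(blocks604-608), assuming illustrative visit thresholds and adjustments that are not specified in the disclosure:

    def determine_question_count(visits_last_90_days: int, direct_entry: bool,
                                 base_count: int = 5) -> int:
        """Decide how many survey questions to present (blocks 604-608).

        Frequent visitors can be asked fewer questions now because they are
        likely to return; direct entry to the survey suggests the individual
        is prepared to answer more questions in one sitting.
        """
        count = base_count
        if visits_last_90_days >= 6:        # frequent user, spread questions over visits
            count -= 2
        elif visits_last_90_days == 0:      # may never return, ask more now
            count += 3
        if direct_entry:                    # came straight to the survey
            count += 2
        return max(1, count)

    print(determine_question_count(visits_last_90_days=8, direct_entry=False))  # 3
    print(determine_question_count(visits_last_90_days=0, direct_entry=True))   # 10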
Returning toFIG.2, the private information protection analysis may continue with a physical security section230. Physical security, as used herein, refers to security of tangible items, such as cars, houses, etc. The physical security section230may include capturing data from individuals answering survey questions (block232), as discussed above. The questions for this section may include information regarding physical security, such as whether cars and houses associated with the individual have alarms, whether those alarms are actively used, and whether the alarms include a monitoring service. Many physical security services now include online services that may track security over the Internet. Accordingly, the process200may also include accessing security services and retrieving relevant data from these services (block234). For example, online camera monitoring services may confirm a number of cameras used at a particular location. This information may be accessed via APIs of the service providers. Once the physical security data gathering is complete, the private data protection score may be updated according to the data. For example, the use of physical security systems may increase the score, while lack of use of such systems may reduce the score. The process200may include a digital security section236. In the digital security section236, the service analyzes data flowing to and from digital devices, such as IOT devices. First, a determination is made as to whether the individual is using home automation or other IOT services (decision block238). If so, the service uses applicable APIs to retrieve security/profile information and settings for the relevant devices, which may be used in scoring (block240). For example, information that may be retrieved may include what services the IOT device data is shared with, the types of data that are captured, etc. After this information is collected (or the individual does not have home automation or other IOT services), survey questions pertaining to digital security may be asked and answered by the individual (block242). This may occur in a similar manner as discussed above with regard toFIGS.5and6. Once data capture has occurred for all relevant sections, the overall score calculation may occur (block244). As described above, upon completion of each section, an aggregated score may be updated based upon data of that section. Alternatively, a single score may be calculated at the end of all of the data capture based upon an aggregation of the collected data. Upon completion of the score calculation, the results section may be initiated (block246). The calculated score may be presented to the individual, as illustrated by the scoring bar702of GUI700ofFIG.7. Further, areas of improvement may be identified from the negative impacting data items that were captured during the analysis (block248). These are illustrated as negative points704inFIG.7. Further, hints, positive points706to continue doing, etc. may also be provided in the GUI700(block250). Remediation advice708and/or links710to trigger remediation or remediation instructions may also be provided. The links710may result in navigating the individual to an external page, making a phone call to a relevant remediation service, etc. (block252).
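The overall score calculation and results assembly of blocks244through252might be sketched as follows. The section names, weights, threshold, and remediation URLs are illustrative assumptions only.

    # Hypothetical per-section weights used when aggregating the overall PIP score.
    SECTION_WEIGHTS = {"social_media": 0.35, "physical_documents": 0.2,
                       "physical_security": 0.2, "digital_security": 0.25}

    def aggregate_pip_score(section_scores: dict) -> float:
        """Combine per-section scores (0-100) into an overall score (block 244)."""
        return sum(SECTION_WEIGHTS[name] * score for name, score in section_scores.items())

    def build_results(section_scores: dict, threshold: float = 60.0) -> dict:
        """Assemble the results view: score, negative points, and remediation links."""
        overall = aggregate_pip_score(section_scores)
        negatives = [name for name, score in section_scores.items() if score < threshold]
        positives = [name for name, score in section_scores.items() if score >= threshold]
        remediation = {name: f"https://example.com/remediate/{name}" for name in negatives}
        return {"score": overall, "improve": negatives, "keep_doing": positives,
                "remediation_links": remediation}

    scores = {"social_media": 45, "physical_documents": 80,
              "physical_security": 70, "digital_security": 55}
    print(build_results(scores)["score"])    # 59.5
    print(build_results(scores)["improve"])  # ['social_media', 'digital_security']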
In addition, analytics data may be sent for machine learning (block254). For example, as discussed above, groups of individuals with particular private information protection concerns may be identified by using machine learning. Thus, tailored information may be provided to particular groups of individuals, as illustrated inFIG.8. InFIG.8, two dialog boxes800and802are presented to particular relevant groups of individuals. In this example, the machine learning algorithms were used to identify that “A Group” members typically have protection issues centered around a list of factors1,2, and3, which is presented to members of the “A Group” when they seek out advice for private information protection. In contrast, the machine learning algorithms have detected that the list of factors4,5, and6are particularly relevant to “B Group” members. Accordingly, when a “B Group” member seeks out the same advice as the “A Group” member, the “B Group” member will receive dialog box802instead of dialog box800. As may be appreciated, this may result in more beneficial content provided to targeted groups of individuals rather than a significant amount of generic advice that an individual may gloss over. The systems and techniques provided herein provide significant value in a world where information is becoming increasingly valuable and easily attainable. By providing systems that proactively analyze, rate and provide remediation efforts for private information vulnerabilities, individuals or groups of individuals may be less susceptible to fraudulent activities, such as identity theft, etc. | 29,054 |
11861037 | DETAILED DESCRIPTION One goal of the unified data fabric is to protect and govern data associated with individuals to address privacy concerns, regulatory compliance, and ethical considerations. The basic functionality of the unified data fabric is to provide a standard repository for collecting data from a variety of trusted sources and to limit the use of collected data in accordance with various policies and preferences. The unified data fabric also correlates data from different sources to individuals related to the data through different relationships to provide a comprehensive view of information for a particular individual from the various trusted sources. There is a need to provide consumers with as much relevant information as possible to help them complete various tasks. Organizations may collect data from a variety of different sources, both internal sources and external sources. For example, a single corporation may be associated with a number of different brands or businesses under a large corporate umbrella. A healthcare corporation may own one or more health insurance brands, a pharmacy brand, and be associated with a wide array of healthcare providers, which may not be owned under the umbrella corporation. The corporation may receive data related to health care claims, insurance premium payments, prescription information, and medical records from the healthcare providers. The corporation may also partner with various external companies like fitness centers, grocery stores, or the like to enact incentive programs to encourage a healthy lifestyle for their customers. The corporation may also receive information about potential customers through various marketing programs or consumer outreach through various employers. All of the data that is collected through these various avenues is typically collected and compartmentalized within the immediate program that collects the data for the specific intended use of that data. However, much of that data can be beneficial to a customer to provide the customer with a holistic view of their interaction with the corporation's various business units and/or partners. For example, a customer that is viewing a medical record related to high cholesterol may be interested in incentive programs for healthy eating in their local market that would help them lower their cholesterol, including information on whether they are already enrolled in the program or eligible for the program. By presenting that information in a user interface where the medical record is displayed in a manner that makes it easier for the customer to take advantage of those programs, both the customer and the business can enjoy the benefits of improving the customer's health. However, it will be appreciated that, for example, it is typically difficult for a business to connect a customer that has a relationship with one business unit with that customer's relationship in another business unit or a program that is not even managed by the business. Sometimes, privacy or regulatory concerns require that such data should not be shared openly from one business unit to another. Ethical practices may encourage a business unit to ask a customer to opt-in or opt-out before data is shared between business units, and each business unit or affiliate may need to consider different concerns related to the sharing of information internally or externally. 
Of course, while the example above is provided as related to a healthcare context, the embodiments of the system are not limited to such a context. The management of data lifecycles and data flows within an organization is widely applicable to many types of businesses or applications. The unified data fabric described herein helps to address many of the concerns described above. Collection of data can be limited to trusted sources, which helps to ensure the collected data is accurate and reliable. The use of data control policies and access control policies automatically applies a regulatory and compliance framework to the data elements in the unified data fabric, and these policies can also help address each individual's privacy concerns by letting the consumer constituents control how their information is disseminated through preferences or permissions. The unified data fabric can also help address data protection concerns, through securing of data at a data element level within secure data stores and by limiting the access to data from certain client devices. FIG.1illustrates a system100for managing data lifecycles and data flows between trusted data sources and data clients, in accordance with some embodiments. As depicted inFIG.1, the system100includes a unified data fabric102that receives data elements from one or more data sources110and controls access to the data by one or more clients180. The unified data fabric102includes a data ingestion engine120, a data lifecycle engine140, and a data delivery engine150. The unified data fabric102can also include a number of data stores130for storing data elements within the unified data fabric102. In some embodiments, the data ingestion engine120is configured to ingest data elements received from one or more trusted data sources. In an embodiment, the trusted data sources are defined sources for particular items of data. For example, a manager of the unified data fabric102can specify a particular network asset (e.g., a database, application, etc.) as the trusted source for address data for individuals. The manager can select the network asset over other network assets based on, e.g., an audit of the various assets and the accuracy of the information contained therein, security protocols associated with the network asset, breadth of records maintained by the asset, or the like. Reasons for selecting a particular asset as a trusted data source can vary, but the general indication when designating a data source as a trusted data source is that there is some assurance that the information included in the data source is reliable and accurate and that the selected data source is the best source of data for that type of information. In some embodiments, the trusted data sources can be included in a whitelist, where the data ingestion engine120is restricted to only allow ingestion of data elements from data sources included in the whitelist. In some embodiments, a trusted data source is defined as a source that is determined to be the governing data authority for a specific piece of information that publishes the data through a managed interface and is charged with the responsibility for the accuracy of the data. The trusted data source can be a validated source of truth on behalf of a particular source of record, and can manage changes to the data over time. In an embodiment, the data ingestion engine120associates each ingested data element with a global identifier allocated to a constituent associated with the information included in the data element. 
A constituent can include an individual associated with demographic information or an entity associated with a group of one or more individuals. For example, a constituent can be a customer of a business, associated with demographic information such as a name, an address, a date of birth, an identifier (e.g., social security number, driver's license number, customer identifier, etc.), or the like. As another example, a constituent can be a partnership or a business having a number of partners or employees. Each unique constituent can be assigned a global identifier that uniquely identifies that constituent within the context of the unified data fabric102. The data ingestion engine120is configured to identify the global identifier associated with each data element received from a trusted data source and, subsequently, associate the data element with the global identifier. In an embodiment, the data ingestion engine120appends the global identifier to the data element. The data ingestion engine120can utilize ingestion interfaces configured for each trusted data source to ingest data elements into the unified data fabric102. Data elements are then stored in one or more data stores130. In some embodiments, each trusted data source corresponds to one or more data stores130. At least one data store130can be created for each trusted data source. Some additional data stores130can also be created to store data elements that integrate information from two or more trusted data sources, which can be referred to herein as integrated data elements. For example, data store1130-1corresponds to source1110-1, data store3130-3corresponds to source2110-2, and data store2130-2corresponds to source1110-1and source2110-2. As shown inFIG.1, the unified data fabric102includes M data stores (130-1,130-2, . . .130-M) and X data sources (110-1,110-2, . . .110-X). As depicted inFIG.1, in some embodiments, at least one data store, such as data store130-2, can be configured to store an integrated data element that includes information received from different trusted data sources. The data ingestion engine120can be configured to receive a first data element including a first source identifier from a first trusted data source, and receive a second data element including a second source identifier from a second trusted data source. The data ingestion engine120determines that the first source identifier and the second source identifier are mapped to a particular global identifier for a single constituent and generates an integrated data element that includes first information from the first data element and second information from the second data element. The integrated data element is associated with the particular global identifier and stored in a data store130. In some embodiments, the data ingestion engine120is configured to secure data elements and store the secured data elements in the one or more data stores130. A key of the secured data can be shared by the data ingestion engine120and the data delivery engine150such that the data delivery engine150can unlock the secured data elements prior to transmission to the client devices180. In some embodiments, the data elements can be re-secured by the data delivery engine150prior to transmission to the client devices180, using any technically feasible data security technique. In some embodiments, the data lifecycle engine140is configured to manage data control policies and access control policies for the data elements. 
Data control policies are policies for controlling the ingestion of information in the data elements. Access control policies are policies for controlling the dissemination of information of data elements in the unified data fabric102. Data control policies can specify what types of information can be ingested into the unified data fabric and access control policies can specify what information certain clients have permission to access. Access control policies can include, but are not limited to, privacy policies, compliance policies, permissions policies, and group policies, as will be discussed in more depth below. In an embodiment, the data ingestion engine120queries the data lifecycle engine140for any data control policies related to a particular data source110or a particular global identifier associated with the information in a data element. The data control policy specifies whether the information is permitted to be ingested into the unified data fabric102. The data delivery engine150queries the data lifecycle engine140for any access control policies related to a particular data client180or a particular global identifier associated with the information in a data element. The access control policy specifies whether the information in the data element is permitted to be accessed by a particular data client180. It will be appreciated that data control policies and access control policies can be related to a constituent (e.g., an individual) and enable that constituent to prevent certain information from being ingested into the unified data fabric102, via a data control policy, or being utilized by certain clients180, via an access control policy. However, data control policies and access control policies are not limited to preferences or permissions configured by a particular individual associated with the information, but can also incorporate legal and regulatory requirements, global privacy concerns, or preferences and permissions related to groups of individuals or groups of clients. In some embodiments, the data delivery engine150is configured to control access to the data elements based on the access control policies. The data delivery engine150receives requests from clients180to access information in the unified data fabric102. The request can include a read request that specifies a particular data element in the unified data fabric102. The data delivery engine150can determine whether one or more access control policies permit the client180to access the data and, if the client has the appropriate permissions based on the access control policies, then the data delivery engine150returns the data element to the client180in a data access response. In other embodiments, the request can specify a view of information included in one or more data elements. For example, the request can include a read request for information related to a particular global identifier, where the information can be included in one data element or from multiple data elements from one or more data sources. The data delivery engine150can compile the information from multiple data elements into a data access response and return a view of the information in the response. A view, as used herein, can refer to a particular structure or format that includes the relevant information, such as an extensible markup language (XML) document or JavaScript Object Notation (JSON) format document that includes elements that exposes various information from the one or more data elements. 
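A minimal Python sketch of this delivery step, assuming dictionary-based data stores and a simple (client, data type) policy table; the policy format and field names are illustrative and not taken from the disclosure:

    import json

    def deliver_view(global_id: str, client_id: str, data_stores, access_policies):
        """Compile a JSON view of a constituent's data elements that the client may see."""
        view = {"global_id": global_id, "elements": []}
        for store in data_stores:
            for element in store.get(global_id, []):
                policy = access_policies.get((client_id, element["type"]), "deny")
                if policy == "allow":
                    view["elements"].append(element)
        return json.dumps(view)

    # Illustrative stores keyed by global identifier and a per-client policy table.
    stores = [
        {"G-1001": [{"type": "address", "value": "123 Main St"}]},
        {"G-1001": [{"type": "claim", "value": "CLM-42"}]},
    ]
    policies = {("client-A", "address"): "allow", ("client-A", "claim"): "deny"}

    print(deliver_view("G-1001", "client-A", stores, policies))
    # {"global_id": "G-1001", "elements": [{"type": "address", "value": "123 Main St"}]}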
FIG.2is a flow diagram of a method200for managing data lifecycles and data flows between trusted data sources and data clients, in accordance with some embodiments. The method200can be performed by a program, custom circuitry, or by a combination of custom circuitry and a program. For example, the method200can be performed by one or more processors configured to execute instructions that cause the processor(s) to carry out the steps of the method200. Furthermore, persons of ordinary skill in the art will understand that any system that performs method200is within the scope and spirit of the embodiments described herein. At step202, a data ingestion engine120receives one or more data elements from one or more trusted data sources. In an embodiment, the data ingestion engine120receives the one or more data elements via ingestion interfaces corresponding to the trusted data sources. In other embodiments, the ingestion interfaces are included in the data ingestion engine120. At step204, for each data element, the data ingestion engine120associates the data element with a global identifier. In an embodiment, the data ingestion engine120reads a source identifier from the data element and queries a table to retrieve a global identifier corresponding to the source identifier. In some embodiments, the global identifier replaces the source identifier in the data element. In some embodiments, the table is included in a global identity and permissions database, application, or service that is external to the unified data fabric102. At step206, the data ingestion engine120stores the one or more data elements in one or more data stores130. In an embodiment, each trusted data source110can be associated with one or more data stores130, and data elements ingested from a particular trusted data source110are stored in a corresponding data store130. In other embodiments, each data store130is configured to be a repository for certain types of information, which can be received from any number of trusted data sources110. At step208, the data delivery engine150controls access to the data elements in the data stores130based on a set of access control policies. In an embodiment, the data delivery engine150applies access control policies to permit or deny access to the data elements by client devices. FIG.3Aillustrates a data flow for ingesting data elements302into the unified data fabric102, in accordance with some embodiments. As depicted inFIG.3A, a data element302is published by a trusted data source. As used herein, publishing a data element302can be defined as any technique that renders the data element available to the data ingestion engine120. For example, a data element can be added to a database, either local or distributed, that is accessible to the data ingestion engine120. As another example, the data element can be published in a resource available over a network (e.g., through a cloud-based service or through a document accessible over the Internet). In some embodiments, the data ingestion engine120is configured to poll the data sources110periodically to identify whether any data elements have been published. For example, the data ingestion engine120can transmit a request to each data source110periodically (e.g., hourly, daily, weekly, etc.) for a record that includes a list of identifiers for new data elements published in a time period since the last request. In other embodiments, the data ingestion engine120is configured to receive a notification of published data elements. 
For example, each data source110can be configured to transmit a notice that new data element(s) have been published whenever a new data element is published by the data source110. As described above, each data source110can be associated with one or more ingestion interfaces. Each ingestion interface is configured to parse one or more items of information from the data element302based on, e.g., an expected format of the data element302, a particular characteristic of the information (e.g., a known key or tag associated with a particular field in the data element302), or the like. The data ingestion engine120can be configured to select a particular ingestion interface from one or more ingestion interfaces associated with the data source110as the source interface310, which is used to ingest the data element302into the unified data fabric102. The source interface310can be implemented as a program or set of instructions that define how the data element302is processed. The source interface310generates a standard data element304based on the data element302and logic included in the source interface310. In an embodiment, the logic can include logic for parsing information in the data element to select a subset of information from the data element. In some embodiments, the logic can also include functions that generate new information based on the information in the data element302. For example, the logic can include a function that converts a date of birth for an individual included in the data element302into an age of the individual in the standard data element304, even though the age of the individual was not included explicitly in the data element302. The logic can also format the information to match an expected format for the information in the standard data element304. In some embodiments, the logic can also include logic that checks to make sure the information is consistent with a type of information expected. For example, if the source interface310expects to parse a date of birth for an individual, the logic can check that the retrieved date is for a year between Jan. 1, 1900 and the current date to ensure that the information is represents a likely date of birth for a living individual. Any dates outside of this range, or beyond the current date, for example, could be rejected as invalid information. The standard data element304is then processed by an ingestion module320. As used herein, the ingestion module320can include hardware or software, executed by one or more processors, that includes logic for processing the standard data element304. In some embodiments, the ingestion engine320includes conformance logic322and integration logic324. The conformance logic322is configured to match a source identifier provided by the data source110with a corresponding global identifier associated with the source identifier in the context of the unified data fabric102. It will be appreciated that a constituent (e.g., an individual) allocated a particular global identifier can be associated with different source identifiers in different data sources110. Consequently, one aspect of the conformance logic322is to match any source identifier included in the standard data element304to a global identifier. In an embodiment, the conformance logic322queries the global identity and permission database300based on the source identifier to return a corresponding global identifier. The conformance logic322can then replace the source identifier in the standard data element304with the global identifier. 
Alternatively, the global identifier can be appended to the standard data element304such that both the global identifier and the source identifier are included in the standard data element304. The standard data element304is also processed by integration logic324. In some embodiments, the integration logic324requests one or more data control policies associated with the standard data element304from the data lifecycle engine140. The request can include an identifier for the data source110, a global identifier for the constituent associated with the information in the standard data element304, or any other relevant information for determining whether the standard data element304is associated with any data control policies. The integration logic324can receive the one or more data control policies returned from the data lifecycle engine140and apply the data control policies, if any, to the standard data element304. For example, a data control policy can specify whether certain types of information from a particular data source110are permitted to be ingested into the unified data fabric102. Since the data source110is the owner of the information, the data source110may permit specific uses for certain information and may restrict uses of other information. A data control policy can be defined for a specific data source110that indicates which information can be ingested from that data source110. The integration logic324can read the data control policy and update the standard data element304based on the data control policy. For example, the integration logic324can remove certain information from the standard data element304entirely or modify other information (e.g., changing a social security number to only include the last four digits of a social security number). It will be appreciated that a data control policy can be defined for more than one data source110(e.g., the policy can apply to all data sources or a subset of data sources controlled by a particular organization). A data control policy can also apply to a specific type of information (e.g., financial information such as account numbers can be prohibited from being ingested to prevent instances of fraud or identity theft). Once the standard data element304is processed by the conformance logic322and the integration logic324, the standard data element304can be stored as a processed data element306in one or more of the data stores130, such as data store1304, where J is less than M. FIG.3Bis a flow diagram of a method350for ingesting a data element302into the unified data fabric102, in accordance with some embodiments. The method350can be performed by a program, custom circuitry, or by a combination of custom circuitry and a program. For example, the method350can be performed by one or more processors configured to execute instructions that cause the processor(s) to carry out the steps of the method350. Furthermore, persons of ordinary skill in the art will understand that any system that performs method350is within the scope and spirit of the embodiments described herein. At step352, a data element is processed by conformance logic322. In an embodiment, the conformance logic322associates a standard data element304produced by a source interface310with a global identifier. 
In an embodiment, the conformance logic322includes instructions that cause a processor to compare a particular source identifier to a table of source identifiers to identify a corresponding global identifier mapped to the particular source identifier, and associate the corresponding global identifier with the standard data element304. At step354, the data element is processed by integration logic324. In an embodiment, the integration logic324receives zero or more data control policies and applies the data control policies to the data element. Again, the data control policies can permit certain information to be ingested and block other information from being ingested. The data control policies can include preferences and/or permissions set by individuals associated with the global identifier. The data control policies can also include permissions set by the owners of the data sources that permit or disallow certain uses of the published information. In some embodiments, the integration logic324includes logic for generating multiple data elements from a single data element, where each data element in the multiple data elements contains a subset of information in the single data element being processed by the integration logic324. In other embodiments, the integration logic324combines multiple data elements into a single data element, where information from each of the multiple data elements is included in the single data element. The multiple data elements being combined can be ingested from one data source or multiple data sources. In some embodiments, multiple data elements can only be combined when they are associated with the same global identifier. At step356, validation data is recorded. In an embodiment, a log or other database is updated to indicate whether a data element was ingested into the unified data fabric102. The record for the data element can include information such as, but not limited to, a date and time of the attempt to ingest the data element, whether the ingestion process resulted in at least one valid processed data element306being stored in a data store130, a location of each processed data element306stored in a data store130, an identifier for each processed data element306, and the like. In some embodiments, the records included in the validation data can be checked to see whether a particular data element302has previously been ingested into the unified data fabric102prior to processing the data element302by the ingestion interface. FIG.3Cis a flow diagram of a method for processing a data element through conformance logic322, in accordance with some embodiments. The method can be performed as part of step352of method350. At step362, the conformance logic322compares a source identifier to a table of source identifiers to identify a corresponding global identifier mapped to the source identifier. In an embodiment, the conformance logic322transmits a request to a global identity and permissions database300. The request includes a source identifier from the standard data element304. The global identity and permissions database300returns a global identifier that is mapped to the source identifier in a table of the database. In another embodiment, the data ingestion engine120includes a table associating source identifiers to the global identifiers maintained in the records of the global identity and permissions database300. The table can be updated when new global identifiers are added to the global identity and permissions database300. 
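As an illustration of how a data control policy might be applied by the integration logic324during ingestion, the following Python sketch drops prohibited fields and masks a social security number down to its last four digits. The policy structure and helper names (apply_policy, mask_ssn) are hypothetical; actual policies are retrieved from the data lifecycle engine140and may take a different form.

```python
# Minimal sketch of integration logic applying a data control policy to a
# standard data element before it is stored as a processed data element.
def mask_ssn(ssn):
    """Keep only the last four digits of a social security number."""
    return "***-**-" + ssn[-4:]

EXAMPLE_POLICY = {
    "drop_fields": {"accountNumber"},        # never ingest these fields
    "transform_fields": {"ssn": mask_ssn},   # ingest these fields in modified form
}

def apply_policy(standard_element, policy):
    """Return a copy of the element with the data control policy applied."""
    result = {}
    for field, value in standard_element.items():
        if field in policy["drop_fields"]:
            continue
        transform = policy["transform_fields"].get(field)
        result[field] = transform(value) if transform else value
    return result

element = {"globalID": "GID-000042", "ssn": "123-45-6789", "accountNumber": "999"}
print(apply_policy(element, EXAMPLE_POLICY))
```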
It will be appreciated that although the global identity and permissions database300is described as a database, that can be queried, in other embodiments, the global identity and permissions database300can be implemented as a service or an application available over a network and may include tables or any other structure that enables the global identifiers to be mapped to one or more source identifiers from different data sources110. At step364, the conformance logic322associates the corresponding global identifier with the data element. In an embodiment, the global identifier replaces the source identifier in the data element. In another embodiment, the global identifier is appended to the data element, where the source identifier remains in the data element with the global identifier. In yet another embodiment, the global identifier is associated with the data element as metadata or as a key to the data element in a key-value data store such that the global identifier is not explicitly included in the data element data structure. FIG.4illustrates a flow of a data element through the ingestion process, in accordance with some embodiments. A data element302is published by a data source110. As depicted inFIG.4, the data element302can include information (e.g., Data_1, Data_2, etc.) and source identifiers (e.g., sourceID, etc.). The data element302can be in a variety of formats. As one example, each item of information can be paired with a source identifier that associates that piece of information with an account or individual corresponding to the source identifier. The data element302can be formatted as key-value pairs, where the keys are source identifiers. As another example, the data element302can be formatted as a markup language document (e.g., HTML, XML, etc.). As yet another example, the data element302can be formatted as a file having a particular file format (e.g., a portable document format (pdf) or a spreadsheet). It will be appreciated that the data elements302can be received in a variety of formats and that the ingestion interface can be configured to read and parse said format to extract the information and source identifiers. The ingestion interface processes the data element302to generate the standard data element304. The standard data element304can take a specific format expected by the data ingestion engine120. In an embodiment, the standard data element304is given a unique identifier that can be used to identify the standard data element in a data store130. In an embodiment, the standard data element304contains information for a single source identifier associated with the data source110. If the data element302includes more than one source identifier, then the ingestion interface can generate multiple standard data elements304, at least one standard data element for each unique source identifier included in the data element302. In an embodiment, the ingestion interface can also associate items of information with labels that can be used as keys to retrieve the items of information in the standard data element304. For example, a first item of information (e.g., Data_1) can be a first name of an individual and associated with the label firstName, and a second item of information (e.g., Data_2) can be a last name of the individual and associated with the label lastName. In an embodiment, the ingestion interface can also append metadata related to the data element302into the standard data element304. 
For example, as depicted inFIG.4, a name or identifier of the data source110that published the data element302can be included in the standard data element304with the label source. As another example, a timestamp that identifies a time and/or date that the data element was published or ingested into the unified data fabric102can be included in the standard data element304with the label date. It will be appreciated that the type of metadata appended to the information in the standard data element304described herein is not exclusive and that other types of metadata such as a size of the data element302, a location of the data element302(e.g., a URL that identifies a network resource for the data element302), a filename, a format type, or other metadata can be included in the standard data element304as well. The standard data element304is then processed by the ingestion module320, which generates one or more processed data elements306. In an embodiment, the source identifier is replaced with a global identifier. The information in the processed data element306can include a subset of the information in the standard data element304based on, e.g., the data control policies as well as other conformance logic322or integration logic324included in the ingestion module320. In some embodiments, a single standard data element304can result in one or more processed data elements306being generated by the ingestion module320. In some embodiments, multiple standard data elements304can be processed and combined into a processed data element306. It will be appreciated that the example shown inFIG.4is merely one potential flow of a data element302having a particular format and that other formats or flows are contemplated as being within the scope of various embodiments of the present disclosure. FIG.5illustrates a flow for management of data lifecycles, in accordance with some embodiments. As depicted inFIG.5, the data lifecycle engine140manages data control policies510and access control policies520associated with data lifecycles. In an embodiment, data control policies510are utilized during the data ingestion process to control whether certain information is added to the unified data fabric102. Such policies can prevent certain data (e.g., sensitive financial information, social security numbers, regulated information (e.g., health information subject to HIPAA), and the like) from being ingested into the unified data fabric102. The data control policies510can also reflect legal agreements between the organization maintaining the unified data fabric102and any third-party organizations that own one or more data sources110. The data control policies510can reflect agreements on use of certain data reflected in contractual obligations between the parties. These contractual obligations can permit copying and use of certain information contained in a data source110while disallowing copying or use of other information in the data source110. For example, the data control policies can reflect agreements memorialized in a terms of use (ToU) or terms of service (ToS) agreement associated with a data source110. In an embodiment, at least one data control policy510is configured to specify a data type permitted from a particular trusted data source110. In an embodiment, access control policies520are utilized during the data access process. Such policies can enable an organization to place restrictions on the type of information that can be accessed by various client devices180.
These restrictions can be based on policies developed by the organization on how the organization handles certain information, or the restrictions can be based on regulatory or compliance schemes encoded in law, regulations, or enforced by administrative agencies at the state or federal level. The access control policies520can also reflect preferences of users in how they have indicated they want their personal information to be handled. The access control policies520can also allow restrictions to be placed on the client devices180that permit or prevent certain client devices180or groups of client devices180from accessing certain types of information. In an embodiment, at least one access control policy520is configured to specify a consent preference for a constituent associated with a particular global identifier. The constituent can be an individual associated with demographic data or an entity that includes a group of individuals. The constituent can set a preference that indicates, for a particular subset of information (limited consent) or for all information (global consent), whether that constituent grants consent to access or use the information associated with the constituent (e.g., associated with the global identifier for the constituent). It will be appreciated that this allows for an organization with a relationship with a constituent in a specific context to control access to information about the individual collected from a completely separate system outside the specific context based on the preferences of the individual collected within the specific context. For example, an individual can specify, as part of their user preferences for a user account related to a health portal application, that the user expects their personal information to be kept private and opts out of sharing that information with other partners of the service provider for the health portal application. The unified data fabric102can then use this preference to create an access control policy520that restricts information related to that individual from being shared with external partners through the unified data fabric102, even if that information was collected from a source outside of the context of the health portal application. This type of policy can lead to better assurance provided to customers that their information is more secure when working with organizations that enact such policies, even if the policy is more restrictive than required by statutory, regulatory, or contractual obligations. In an embodiment, a set of access control policies520managed by the data lifecycle engine140can include at least one of a privacy policy, a compliance policy, a permissions policy, or a group policy. A privacy policy can refer to a policy related to how sensitive information is shared. For example, a privacy policy can limit personally identifying information to be accessed by client devices180identified as internal client devices. External client devices, such as those owned by affiliates or third-party end users, may be restricted from accessing any personally identifiable information that ties an individual's name, address, or other identifying information to data related to that individual. However, internal client devices may be permitted to access such information by the privacy policy. A compliance policy can refer to a policy related to regulatory or legal requirements.
For example, a compliance policy related to HIPAA may enforce rules related to the protection of health information tied to a specific individual. Compliance policies can be related to regulations rather than specific legal statutory requirements. Compliance policies can also incorporate specific allowed uses of data that are not tied to legal or regulatory frameworks enforced externally, but instead are tied to internal frameworks regarding the specific allowed uses of certain information. For example, a compliance policy may only permit access to billing information if an invoice number is provided that matches the customer for the global identifier associated with the billing information. By controlling access to the information based on the invoice number, only individuals with knowledge of the invoice number will be permitted to access the billing information for that invoice. A permissions policy can refer to a policy related to permissions or preferences provided by an individual. When a user sets up a user account with a service, the account can be associated with various preferences or permissions. However, the user account, if properly vetted, may also be associated with an individual that is assigned a global identifier that recognizes that individual as a unique person or entity within the context of the unified data fabric102. Such users can typically submit preferences through the user account for how they would like their data to be handled. These preferences can be associated with a global identifier in one or more tables in the global identity and permissions database300and used outside of the context of the user account. In some embodiments, the data lifecycle engine140can be configured to query the global identity and permission database300to retrieve preferences or permissions associated with a global identity and create, within the access control policies520, one or more access control policies related to those global identities that reflect the selections made by that individual or entity in relation to their user account. A group policy can refer to a policy that permits or denies access to information for a group of client devices180. The group policy can apply to sets of client devices180specified by client identifiers, specified by network locations or by subnet masks within a particular local area network (LAN), or specified by characteristics of the members in the group (e.g., all client devices that establish a connection through a specific mobile application). Group policies can enable administrators to restrict access to information to specific clients by listing clients individually or defining a group identifier for the group and associating client devices180with the group identifier is a separate table. In some embodiments, the access control policies520can be associated with different levels of access (LoA). LoA refers to different tiers of access, where higher LoA refers to the ability to access more sensitive information. For example, client devices associated with an accounting department might be issued a higher LoA to access secure financial records for customers stored in the unified data fabric102, but client devices associated with a marketing department, even within the same organization that maintains the unified data fabric102, may have a lower LoA that only permits access to less sensitive consolidated financial information for all customers. 
The lowest LoA can be reserved for client devices180connected from an external network, where a user account has not been vetted. Accounts that have been vetted to confirm that the account is tied to a specific individual may be granted a higher LoA that permits more access to certain information in the unified data fabric102. Client devices180associated with employees connected to an internal network may be granted a higher LoA, and so forth. Access control policies520can be associated with LoAs to help enforce security measures related to the access of the information in the unified data fabric102. As depicted inFIG.5, the data ingestion engine120can request data control policies from the data lifecycle engine140. The request can include a global identifier associated with a standard data element304, an identifier for the data source110, a type of information included in the standard data element304, or any other information required to select or query the applicable data control policies510from the set of data control policies510maintained by the data lifecycle engine140. Similarly, the data delivery engine150can request access control policies520from the data lifecycle engine140. The request can include a client device identifier, group identifier, application identifier, global identifier, an identifier for the data element being accessed, a type of information being accessed, or any other information required to select or query the applicable access control policies520from the set of access control policies520maintained by the data lifecycle engine140. FIG.6Aillustrates a flow for accessing data elements stored in the unified data fabric102, in accordance with some embodiments. As depicted inFIG.6A, the data delivery engine150includes a control policy module620. As used herein, the control policy module620can include hardware or software, executed by one or more processors, that includes logic for controlling access to the processed data elements306stored in the unified data fabric102. A flow of processed data elements306to a client device180can be described as follows. A client device180-L transmits a request636to access a processed data element306to the unified data fabric102. The request636can be generated by an application or a browser executed by the client device180-L. The request636is received by the control policy module620, which transmits a related request622for any applicable access control policies520to the data lifecycle engine140. In an embodiment, the request636can include an identifier of an processed data element306for which access is requested. In another embodiment, the request636can include parameters that are used by the data delivery engine150to query a data store130-J to identify the processed data element306being requested. For example, the request636can include a search query that, e.g., specifies a particular type of information related to a person having a specific name and/or associated with a particular address using known values for the name and address of the person. The data delivery engine150can query a data store130-J to identify one or more processed data elements306. Once the data delivery engine150receives the one or more processed data elements306based on the query, the control policy module620reads the global identity from the returned processed data element306and generates the request622for the set of applicable access control policies520. 
The data lifecycle engine140returns a response624that includes or provides a reference to the set of applicable access control policies520. The control policy module620can then apply the set of applicable access control policies520to determine whether the client180-L has permission to access the processed data element306. In an embodiment, when the client180-L is not permitted to access the information included in the processed data element306, then the response either does not include the data element or indicates, through a message to the client180-L, that the access is denied, depending on the data access policy. In some embodiments, no response is transmitted when access is denied, and the client180-L may infer access is denied through a timeout mechanism or after a failed number of retry attempts. However, when the client180-L is permitted to access the information included in the processed data element306, then the response can include the information in the processed data element306. In an embodiment, the response634includes the processed data element306directly. In another embodiment, the response634includes the information from the processed data element306by providing a view of the information in a different form or format. In some embodiments, the response634provides a view of information from multiple processed data elements306related to the request636. For example, where the query returns two or more processed data elements306from the data store(s)130, then the response634can provide a view that includes the information from at least two different processed data elements306retrieved from one or more data stores130. Such embodiments allow for the data delivery engine150to provide a comprehensive view of information stored in disparate data stores130in the unified data fabric102in a simple and easy to view interface on the client device180. In an embodiment, the data delivery engine150is configured to receive a first data element from a first data store and receive a second data element from the first data store or a second data store. The first data element and the second data element are associated with a first global identifier. The data delivery engine150is configured to generate an integrated view of multiple data elements that includes first information from the first data element and second information from the second data element and transmit the integrated view to a client. It will be appreciated that, when combined with the functionality of the data ingestion engine120, discussed above, information for related data elements can either be combined during the data ingestion process to generate integrated data elements in the data stores or the information can be kept separate in different data elements in the data stores and combined during the data delivery process to provide a comprehensive view of information from multiple data elements. FIG.6Bis a flow diagram of a method650for controlling access to data elements in the unified data fabric102, in accordance with some embodiments. The method650can be performed by a program, custom circuitry, or by a combination of custom circuitry and a program. For example, the method650can be performed by one or more processors configured to execute instructions that cause the processor(s) to carry out the steps of the method650. Furthermore, persons of ordinary skill in the art will understand that any system that performs method650is within the scope and spirit of the embodiments described herein. 
At step652, a request636is received from a client to access a first data element in a data store130. In some embodiments, the request636can identify the first data element directly. In other embodiments, the request636can include parameters, such as a search query, that enable the data delivery engine150to select the first data element from the data store130. At step654, credentials are received from the client. In some embodiments, the client device180can provide credentials, such as an authentication token, to the data delivery engine150along with the request636that authorizes the client device180to establish a connection with the data delivery engine150. In other embodiments, a user of the client device180can provide credentials, such as username/password associated with a user account, which are transmitted to the data delivery engine150to authenticate the client device180. At step656, the data delivery engine150determines whether the client is authorized to access information in the unified data fabric102. The determination can be made based on the credentials, such as verifying the authentication token or verifying that the username/password is valid. If the client device180is not authorized to establish a connection with the data delivery engine150, such as if the token is expired or the username/password does not match stored credentials for the user account, then the method650proceeds to step660, where the data delivery engine150either does not transmit the data element to the client180or sends a message to the client device180that indicates access is denied, depending on the data access policy. However, at step656, if the client device180is authorized to establish a connection with the data delivery engine150, then the method650proceeds to step658, where the data delivery engine150determines whether the client device180is permitted to access the first data element based on a set of access control policies520associated with the first data element. If the client device180is not permitted to access the first data element, then, at step660, the data delivery engine150either does not transmit the data element to the client180or sends a message to the client device180that indicates access is denied, depending on the data access policy. However, returning to step658, if the client device180is permitted to access the first data element, then, at step662, the data delivery engine150transmits the first data element to the client. Alternatively, at step662, the data delivery engine150transmits a view of information included in the first data element to the client device180. It will be appreciated that the various elements included in the unified data fabric102can be implemented by one or more processes or applications configured to be executed by a computer system that includes at least one processor. In some embodiments, each element of the unified data fabric102can be implemented on different nodes connected by a network. Furthermore, each element of the unified data fabric102can be implemented on multiple nodes to spread the load using various load balancing techniques. For example, the data ingestion engine120can be deployed on one or more network nodes in one data center and the data delivery engine150can be deployed on one or more additional network nodes in the same or a different data center. In some embodiments, each data store130can be implemented in one or more network nodes that include storage resources. 
Each network node can be configured to transfer data to the other network nodes via one or more communication protocols. In other embodiments, the various elements of the unified data fabric102can be implemented on the same network node. FIG.7illustrates a network topology700for implementing the unified data fabric, in accordance with some embodiments. A gateway740can be connected to a plurality of network nodes750, including R network nodes750-1,750-2, . . . ,750-R. A client node710and a source node720are connected, through a network, to the gateway740. The client node710is one of the client devices180and the source node720is a server that acts as one of the data sources110. In an embodiment, the data ingestion engine120, the data lifecycle engine140, and the data delivery engine150are implemented in the gateway740, and each data store130is implemented on one or more network nodes750, which include storage resources760-1,760-2,760-3, . . . ,760-S. In another embodiment, each of the data ingestion engine120, the data lifecycle engine140, and the data delivery engine150is implemented in a different gateway740, where the network topology700includes multiple instances of the gateway740. In yet other embodiments, the gateway740is a frontend for load-balancing and each of the data ingestion engine120, the data lifecycle engine140, and the data delivery engine150is implemented by one or more network nodes750. As shown inFIG.7, the dashed lines between nodes represent external connections established through a wide area network (WAN) such as the Internet and the solid lines between nodes represent internal connections established through a local area network (LAN). In some embodiments, the gateway740includes a firewall or other security systems for preventing unauthorized access to the data stored on the storage resources760. Although not explicitly shown, the nodes750can each communicate directly or indirectly with the other nodes750in the LAN. FIG.8illustrates an exemplary computer system800, in accordance with some embodiments. The computer system800includes a processor802, a volatile memory804, and a network interface controller (NIC)820. The processor802can execute instructions that cause the computer system800to implement the functionality of various elements of the unified data fabric described above. Each of the components802,804, and820can be interconnected, for example, using a system bus to enable communications between the components. The processor802is capable of processing instructions for execution within the system800. The processor802can be a single-threaded processor, a multi-threaded processor, a vector processor or parallel processor that implements a single-instruction, multiple data (SIMD) architecture, or the like. The processor802is capable of processing instructions stored in the volatile memory804. In some embodiments, the volatile memory804is a dynamic random access memory (DRAM). The instructions can be loaded into the volatile memory804from a non-volatile storage, such as a Hard Disk Drive (HDD) or a solid state drive (not explicitly shown), or received via the network. In an embodiment, the volatile memory804can include instructions for an operating system806as well as one or more applications808. It will be appreciated that the application(s)808can be configured to provide the functionality of one or more components of the unified data fabric102, as described above.
The NIC820enables the computer system800to communicate with other devices over a network, including a local area network (LAN) or a wide area network (WAN) such as the Internet. It will be appreciated that the computer system800is merely one exemplary computer architecture and that the processing devices implemented in the unified data fabric102can include various modifications such as additional components in lieu of or in addition to the components shown inFIG.8. For example, in some embodiments, the computer system800can be implemented as a system-on-chip (SoC) that includes a primary integrated circuit die containing one or more CPU cores, one or more GPU cores, a memory management unit, analog domain logic and the like coupled to a volatile memory such as one or more SDRAM integrated circuit dies stacked on top of the primary integrated circuit dies and connected via wire bonds, micro ball arrays, and the like in a single package (e.g., chip). In another embodiment, the computer system800can be implemented as a server device, which can, in some embodiments, execute a hypervisor and one or more virtual machines that share the hardware resources of the server device. Furthermore, in some embodiments, each of the network nodes depicted inFIG.7can be implemented as a different instance of the computer system800. Alternatively, each of the network nodes depicted inFIG.7can be implemented as a virtual machine on one or more computer systems800. Various computer system and network architectures for implementing the elements of the unified data fabric102are contemplated as being within the scope of the embodiments described herein and the various embodiments are not limited to the network topology700or the computer system800depicted inFIGS.7and8. It is noted that the techniques described herein may be embodied in executable instructions stored in a computer readable medium for use by or in connection with a processor-based instruction execution machine, system, apparatus, or device. It will be appreciated by those skilled in the art that, for some embodiments, various types of computer-readable media can be included for storing data. As used herein, a “computer-readable medium” includes one or more of any suitable media for storing the executable instructions of a computer program such that the instruction execution machine, system, apparatus, or device may read (or fetch) the instructions from the computer-readable medium and execute the instructions for carrying out the described embodiments. Suitable storage formats include one or more of an electronic, magnetic, optical, and electromagnetic format. A non-exhaustive list of conventional exemplary computer-readable medium includes: a portable computer diskette; a random-access memory (RAM); a read-only memory (ROM); an erasable programmable read only memory (EPROM); a flash memory device; and optical storage devices, including a portable compact disc (CD), a portable digital video disc (DVD), and the like. It should be understood that the arrangement of components illustrated in the attached Figures are for illustrative purposes and that other arrangements are possible. For example, one or more of the elements described herein may be realized, in whole or in part, as an electronic hardware component. Other elements may be implemented in software, hardware, or a combination of software and hardware. 
Moreover, some or all of these other elements may be combined, some may be omitted altogether, and additional components may be added while still achieving the functionality described herein. Thus, the subject matter described herein may be embodied in many different variations, and all such variations are contemplated to be within the scope of the claims. To facilitate an understanding of the subject matter described herein, many aspects are described in terms of sequences of actions. It will be recognized by those skilled in the art that the various actions may be performed by specialized circuits or circuitry, by program instructions being executed by one or more processors, or by a combination of both. The description herein of any sequence of actions is not intended to imply that the specific order described for performing that sequence must be followed. All methods described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of the terms “a” and “an” and “the” and similar references in the context of describing the subject matter (particularly in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation, as the scope of protection sought is defined by the claims as set forth hereinafter together with any equivalents thereof. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illustrate the subject matter and does not pose a limitation on the scope of the subject matter unless otherwise claimed. The use of the term “based on” and other like phrases indicating a condition for bringing about a result, both in the claims and in the written description, is not intended to foreclose any other conditions that bring about that result. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the embodiments as claimed. | 61,625 |
11861038 | DETAILED DESCRIPTION The description that follows discusses illustrative systems, methods, techniques, instruction sequences, and computing machine program products. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide an understanding of various example embodiments of the present subject matter. It will be evident, however, to those skilled in the art, that various example embodiments of the present subject matter may be practiced without these specific details. In an example embodiment, a differentially private function is computed via secure computation. Secure computation allows multiple parties to compute a function without learning details about the data. The differentially private function defines a probability distribution, which then permits computation of a result that is likely to be very close to the actual value without being so exact that it can be used to deduce the underlying data itself. FIG.1is a block diagram illustrating a system100for differentially private function computation in accordance with an example embodiment. System100may include, for example, three parties102A,102B,102C, although more parties are possible. Each party may maintain or otherwise control its own database104A,104B,104C. Each party may also maintain or otherwise control its own server106A,106B,106C. Each server106A,106B,106C may act to obtain local queries from users and query their own corresponding databases104A,104B,104C for results. In cases where the queries involve functions that require the analysis of data not stored in their own corresponding databases104A,104B,104C, a secure multiparty private function computation component108A,108B,108C located on each server106A,106B,106C may be utilized to evaluate the corresponding function(s) based on information stored on others of the databases104A,104B,104C. This may be accomplished via multi-party communication among the various secure multiparty private function computation components108A,108B,108C, as will be described in more detail below. In an example embodiment, the secure multiparty private function computation components108A,108B,108C may be in the form of software code that is made and distributed to the parties102A,102B,102C by a third-party software developer. In some example embodiments, this software code may be included with the distribution of another product, such as a database server application or a workbench component. As described briefly above, one technical problem that is encountered is how to protect the privacy of users' data while still gaining insights from this data, when the data is stored by multiple parties. In an example embodiment, both differential privacy and secure computation are used to accomplish this goal. It should be noted that while the techniques described in this disclosure can be applied to any differentially private functions performed on a data set, one example of such a function is the median function. For ease of discussion, the present disclosure may refer specifically to the median function when describing some or all of the functionality. This shall not, however, be taken to mean that the scope of the present disclosure is limited to median functions. Differential privacy is a privacy notion used when statistical properties of a dataset can be learned but information about any individual in the data set needs to be protected.
Specifically, databases D and D′ are called neighbors, and are neighboring when database D can be obtained from D′ by adding or removing one element. Informally, a differentially private algorithm limits the impact that the presence or absence of any individual's data in the input database can have on the distribution of outputs. Specifically, in differential privacy: DEFINITION 1 (DIFFERENTIAL PRIVACY). A mechanism M satisfies ε-differential privacy, where ε≥0, if for all neighboring databases D and D′, and all sets S⊆Range(M), Pr[M(D)∈S] ≤ exp(ε)·Pr[M(D′)∈S], where Range(M) denotes the set of all possible outputs of mechanism M. One way to accomplish differential privacy is to use what is called the Laplace mechanism, which acts to add Laplace-distributed noise to a function evaluated on the data. This has the effect, however, of making the underlying function less reliable (due to the introduction of noise into the data). Another technique to accomplish differential privacy is called the exponential mechanism, which is more complex but offers better accuracy. Another advantage of the exponential mechanism is that, as will be shown, it can be efficiently implemented as a scalable secure computation. Furthermore, the exponential mechanism can implement any differentially private algorithm. The exponential mechanism expands the application of differential privacy to functions with non-numerical output, or when the output is not robust to additive noise, such as when the function is a median function. The exponential mechanism selects a result from a fixed set of arbitrary outputs ℛ while satisfying differential privacy. This mechanism is exponentially more likely to select "good" results, where "good" is quantified by a utility function u(D,r), which takes as input a database D∈𝒟 (where 𝒟 denotes the set of all possible databases) and a potential output r∈ℛ. Informally, higher utility means the output is more desirable and its selection probability is increased accordingly. Thus: For any utility function u: (𝒟^n×ℛ)→ℝ and a privacy parameter ε, the exponential mechanism M_u^ε(D) outputs r∈ℛ with probability proportional to exp(ε·u(D,r)/(2Δu)), where Δu = max over r∈ℛ and neighboring D≃D′ of |u(D,r) − u(D′,r)| is the sensitivity of the utility function. That is, Pr[M_u^ε(D)=r] = exp(ε·u(D,r)/(2Δu)) / Σ_{r′∈ℛ} exp(ε·u(D,r′)/(2Δu)). There are two types of privacy techniques that are used in exponential mechanisms. The first is a trusted model, in which a central "curator" computes and anonymizes data centrally. While accuracy in the trusted model is high, privacy is not, and as described briefly above there are certain types of data that are unable to be shared with even a trusted curator for legal reasons. The second type is an untrusted model. Here, computations and anonymization are performed locally, which results in more privacy but less accuracy. Secure computation is used for input secrecy, namely the data itself is never shared between parties. In an example embodiment, secure computation is used to simulate a central (trusted) model in a local (untrusted) model. This is accomplished by the parties simulating trusted third-party techniques via cryptographic protocols.
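To make the exponential mechanism concrete, the following is a minimal, non-secure Python sketch of the centralized (trusted-curator) mechanism defined above: each candidate output r is weighted by exp(ε·u(D,r)/(2Δu)) and one output is sampled in proportion to its weight. The toy median-style utility, the example data, and all names are illustrative assumptions, not the multi-party protocol of the embodiments described below.

```python
import math
import random

def exponential_mechanism(db, outputs, utility, sensitivity, epsilon):
    """Centralized exponential mechanism: sample r with probability
    proportional to exp(epsilon * u(db, r) / (2 * sensitivity))."""
    weights = [math.exp(epsilon * utility(db, r) / (2.0 * sensitivity))
               for r in outputs]
    return random.choices(outputs, weights=weights, k=1)[0]

def toy_median_utility(db, r):
    """Toy utility: the closer r's rank is to n/2, the higher the score
    (assumed here to have sensitivity 1/2, as with the median utility)."""
    rank = sum(1 for v in db if v < r)
    return -abs(rank - len(db) / 2.0)

data = [3, 7, 8, 12, 15, 21]
candidates = list(range(0, 25))
print(exponential_mechanism(data, candidates, toy_median_utility,
                            sensitivity=0.5, epsilon=1.0))
```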
More particularly, in an example embodiment, a "restricted" exponential mechanism is computed for subranges of data. By restricting the exponential mechanism, efficiency in computing the function is increased. Specifically, the "restriction" involved is to only consider "composable functions". Composable functions are those that are easy to combine, to allow for efficient computation. Specifically, the parties first compute partial results locally, without interaction with others and without secure computation; the parties then combine the partial results into a global result, with interaction with others and with secure computation. By dividing a potentially very large data universe (e.g., billions of possible values) into subranges and computing selection probabilities for them, this solution is much faster (sublinear in universe size) than computing selection probabilities for each possible data element (linear in universe size). Assume n parties, each holding a single value d_i∈D (we can generalize to multiple values). To combine local utility scores per player into a global score for all, the utility functions may be composable: DEFINITION 3 (COMPOSABILITY). We call a utility function u: (𝒟^n×ℛ)→ℝ composable w.r.t. a function u′: (𝒟×ℛ)→ℝ if u(D,x) = Σ_{i=1}^{n} u′(d_i,x) for x∈ℛ and D={d_1, . . . , d_n}. We use composability to easily combine utility scores in Weights_ln(2)/2^d, and to avoid secure evaluation of the exponential function in Weights*. General secure exponentiation is complex (e.g., binary expansions, polynomial approximations) and we want to avoid its computation overhead. If u is composable, users can compute weights locally, and securely combine them via simple multiplications: Π_i exp(u′(D_i,x)·ε) = exp(Σ_i u′(D_i,x)·ε) = exp(u(D,x)·ε). There are a wide variety of selection problems that satisfy this composability definition. This includes rank-based statistics such as median, convex operations, unlimited supply auctions, and frequency-based statistics. For purposes of this discussion, a composable median utility function is described. Here, we quantify an element's utility via its rank relative to the median. The rank of an element x with respect to D, denoted rank_D(x), is the number of values in D smaller than x. We use the following composable median utility function: The median utility function u_med^c: (𝒟^n×ℛ)→ℤ gives a utility score for a range R=[r_l, r_u) of the data universe w.r.t. D∈𝒟^n as u_med^c(D,R) = rank_D(r_u) if rank_D(r_u) < n/2; u_med^c(D,R) = n − rank_D(r_l) if rank_D(r_l) > n/2; and u_med^c(D,R) = n/2 otherwise. The sensitivity of u_med^c is ½ since adding an element increases n/2 by ½ and the rank either increases by 1 or remains the same [25].
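The piecewise definition above can be illustrated with a short Python sketch that evaluates u_med^c directly on a combined dataset for a few candidate subranges. The data and subranges are hypothetical examples, and the code is purely illustrative; in the embodiments the ranks are never pooled in the clear.

```python
def rank(D, x):
    """rank_D(x): the number of values in D smaller than x."""
    return sum(1 for v in D if v < x)

def median_range_utility(D, r_l, r_u):
    """u_med^c(D, [r_l, r_u)) per the piecewise definition above."""
    n = len(D)
    if rank(D, r_u) < n / 2:      # subrange lies entirely below the median
        return rank(D, r_u)
    if rank(D, r_l) > n / 2:      # subrange lies entirely above the median
        return n - rank(D, r_l)
    return n / 2                  # subrange contains the median

D = [3, 7, 8, 12, 15, 21]         # n = 6, so the maximum utility is n/2 = 3
for r_l, r_u in [(0, 10), (10, 20), (20, 30)]:
    print((r_l, r_u), median_range_utility(D, r_l, r_u))
```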
FIG.2is a flow diagram illustrating a method200of computation of a secure multiparty differentially private function, in accordance with an example embodiment. At operation202, all elements in a data universe are divided into k subranges. All elements in a data universe may be defined as all the possible values output by the function. At operation204, selection probabilities are computed for each subrange. This will be described in more detail with respect toFIGS.4and5below. Generally, however, either method400ofFIG.4or method500ofFIG.5is used to compute the selection probabilities. The choice between using method400or method500is based on how important privacy and speed are to a user controlling the software. Method400is faster but has less privacy, while method500is slower but has more privacy. A user-defined privacy parameter ε can be used to represent the user's choice, and method400may be used if ε can be expressed as ε=ln(2)/2^d for some integer d. At operation206, the selection probabilities (which can also be called weights) are used to select one of the subranges (essentially, the subrange most likely to contain the actual function result, e.g., the subrange most likely to contain the median). This will be described in more detail with respect toFIG.3below. At operation208, it is determined if the selected subrange has a size of one element. If so, then at operation210the value of that one element is considered to be the "answer" to the function (e.g., the median). If not, then at operation212, the data universe is replaced with the selected subrange and the method200returns to operation202to subdivide the selected subrange. Thus, method200recursively evaluates smaller and smaller subranges until the specific output value is determined. Furthermore, by dividing a potentially large data universe into subranges, this process is much more efficient than computing selection probabilities for each possible value. It should be noted that the selection probabilities are non-normalized weights, rather than probabilities that have been normalized to sum to 1. FIG.3is a flow diagram illustrating a method206of using selection probabilities to select one of a plurality of subranges, in accordance with an example embodiment. The selection probabilities (weights) inform as to how likely each subrange is to be selected. In other words, they define a probability distribution over the subranges. At operation300, the sum of all weights is computed. This may be called s. Then, at operation302, a value t is selected uniformly at random between 2^−32 and 1. Then, at operation304, t is multiplied by s to get a random value r between nearly 0 and s. These are the initial selection probabilities. A loop is then begun where the selection probabilities (weights) are stepped through in order, stopping when a condition is met and choosing the subrange corresponding to the current selection probability in the loop. Specifically, beginning with the first subrange, at operation306, a sum of the selection probabilities of all subranges evaluated in the loop is determined. Thus, the first time in the loop this sum will simply be the selection probability for the first subrange, the second time in the loop this sum will be the sum of the selection probabilities for the first and second subranges, the third time in the loop this sum will be the sum of the selection probabilities for the first, second, and third subranges, and so on. At operation308, it is determined whether the sum from operation306is greater than r. If so, then at operation310the subrange currently being evaluated in this loop is selected and the method206ends. If not, then the method206loops back to operation306for the next subrange.
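The selection of method206can be illustrated with the following Python sketch, which mirrors operations300-310in the clear: the weights are summed, a random threshold r in roughly (0, s] is drawn, and the cumulative weight is walked until it exceeds r. The weights shown are hypothetical, and a real deployment would perform these steps with the secure floating-point protocols listed later (e.g., FLAdd, FLRand, FLMul).

```python
import random

def select_subrange(weights):
    """Sample a subrange index in proportion to non-normalized weights."""
    s = sum(weights)                      # operation 300: total weight
    t = random.uniform(2.0 ** -32, 1.0)   # operation 302: t roughly in (2^-32, 1]
    r = t * s                             # operation 304: threshold in (~0, s]
    running = 0.0
    for index, w in enumerate(weights):   # operations 306-310: cumulative walk
        running += w
        if running > r:
            return index
    return len(weights) - 1               # fallback, as in the INDEXSAMPLING pseudocode below

print(select_subrange([1.0, 8.0, 2.0, 1.0]))  # index 1 is selected most often
```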
FIG.4is a flow diagram illustrating a method400for computing selection probabilities for each subrange, in accordance with an example embodiment. Here, the selection probabilities use complicated computations (exponential functions of the product of the privacy parameter and the rank, where the rank is the position of an element in the sorted data), which are slow to evaluate with secure computation. This may be avoided by letting the parties compute the rank and by computing powers of two instead of a general exponential function. A loop is begun at the first of the k subranges. At operation402, the rank of the subrange for the current private data is computed. The current private data is the data available to the party executing the method400; thus, in the example ofFIG.1, party102A evaluates operation402with respect to data in database104A, party102B evaluates operation402with respect to data in database104B, and so on. At operation404, the rank determined at operation402is shared with the at least one other party. Thus, if method400is currently being executed by party102A, the calculated rank is shared with parties102B and102C. The result is that each of the parties' versions of method400will now have the ranks computed by each of the at least one other party, without having access to the underlying data of the at least one other party. At operation406, the ranks are combined into a global rank. This may be performed by, for example, adding the ranks together. At operation408, the utility of the subrange is determined using the global rank. This is determined by taking the smallest distance between a boundary of the subrange and the median rank in the subrange. Thus, for example, if the subrange considers elements 1 through 5, the utility is determined by computing the distance between the value for element 1 in the global rank and the median for the global rank, and computing the distance between the value for element 5 in the global rank and the median in the global rank, and then taking the lesser of these computed distances. The median in the global rank is computed as n/2, where n is the size of the combined datasets. Thus, if the combined dataset has 8 elements, the median is at rank 4. If element 1 in the subrange is at rank 2 in the combined rank, while element 5 in the subrange is at rank 7, then the closest of these two elements is element 1, which is at a distance of 2 from the median (2 spots away from rank 4). Thus, the utility of this subrange is calculated at 2. There are three options: (1) If the median is contained in the subrange, then the utility score for the subrange is the same as the utility for the median (n/2). (2) If the subrange does not contain the median and the subrange endpoints are smaller than the median (e.g., median=4 and subrange contains elements 1, 2, 3), then the utility is the same as for the subrange endpoint that is closer to the median (e.g., endpoints for the subrange containing elements 1, 2, 3 are 1 and 2, and the latter is closer to median 4 than 1, thus we use the utility for endpoint 2). Here the utility is n/2 minus the rank of the larger subrange endpoint. (3) Same as (2) but the endpoints are larger than the median. Here the utility is the rank of the smaller subrange endpoint minus n/2. Note that in case (2), one can always use the larger endpoint (denoted r_u in [0030]), and in case (3), one can always use the smaller one (r_l). At operation410, a selection probability (weight) for this subrange is computed using a fixed value and the utility. This fixed value may be designated as ε. In an example embodiment, ε is set to ln(2), and the selection probability (weight) for the subrange is computed as exp(utility*ε). The result is that in this example embodiment, the weight is equal to 2^utility, which can be more efficiently computed with secure computation than general exponentiation. At operation412, it is determined if this is the last subrange in the k subranges. If so, then the method400ends. If not, then the method400loops back to operation402for the next subrange.
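A minimal, non-secure Python sketch of the weight computation of method400follows, for the case ε=ln(2): each party computes local ranks of the subrange endpoints, the ranks are summed into global ranks, the utility is derived from the global ranks following the piecewise u_med^c definition given earlier, and the weight is 2^utility (cf. FLExp2 in the pseudocode below). The party data and helper names are hypothetical, and exchanging ranks in the clear is a simplification of the embodiment.

```python
def local_ranks(local_data, r_l, r_u):
    """Operation 402: a party counts its own values below each subrange endpoint."""
    below = lambda x: sum(1 for v in local_data if v < x)
    return below(r_l), below(r_u)

def subrange_weight(parties, r_l, r_u):
    """Operations 404-410 for one subrange with epsilon = ln(2), so the weight is 2**utility."""
    n = sum(len(p) for p in parties)
    shared = [local_ranks(p, r_l, r_u) for p in parties]  # operation 404: ranks exchanged
    rank_l = sum(r[0] for r in shared)                    # operation 406: global ranks
    rank_u = sum(r[1] for r in shared)
    if rank_u < n / 2:        # subrange entirely below the median
        utility = rank_u
    elif rank_l > n / 2:      # subrange entirely above the median
        utility = n - rank_l
    else:                     # subrange contains the median
        utility = n / 2
    return 2.0 ** utility                                 # operation 410

parties = [[3, 7, 8], [12, 15, 21]]
print(subrange_weight(parties, r_l=10, r_u=20))           # 2**3 = 8.0
```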
FIG.5is a flow diagram illustrating a method500for computing selection probabilities for each subrange, in accordance with another example embodiment. Here, rather than the parties computing the local ranks, they compute local (or partial) weights. The local weights are then combined into global weights. This results in a method that is slower than method400but offers better privacy. A loop is begun at the first of the k subranges. At operation502, the utility of the subrange for the current private data is computed. The current private data is the data available to the party executing the method500; thus, in the example ofFIG.1, party102A evaluates operation502with respect to data in database104A, party102B evaluates operation502with respect to data in database104B, and so on. The weight (i.e., unnormalized selection probability) of the subrange for the current private data may be computed by performing an exponent function on the rank of the subrange in the current private data. This involves bringing a constant to the power of a function applied on the rank. In an example embodiment, this constant is e. At operation504, the utility determined at operation502is used in a joint secure computation with the at least one other party. Thus, if method500is currently being executed by party102A, this joint secure computation is additionally performed with parties102B and102C. At operation506, the weights are combined into a global weight. This may be performed by, for example, multiplying the weights together. This global weight is the selection probability (weight) of this subrange. At operation508, it is determined if this is the last subrange in the k subranges. If so, then the method500ends. If not, then the method500loops back to operation502for the next subrange.
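The combining step of method500rests on the composability identity given earlier: multiplying locally computed weights exp(ε·u′) yields the global weight exp(ε·u). The following Python sketch shows that identity in the clear with hypothetical per-party utility scores; in the embodiment the multiplications are carried out inside a secure computation (cf. FLMul in the Weights* pseudocode below).

```python
import math

def local_weight(local_utility, epsilon):
    """Operation 502: a party's local weight exp(epsilon * local utility)."""
    return math.exp(epsilon * local_utility)

def global_weight(local_utilities, epsilon):
    """Operations 504-506: combine local weights by multiplication."""
    w = 1.0
    for u in local_utilities:
        w *= local_weight(u, epsilon)
    return w

epsilon = 0.5
local_utilities = [1.0, 2.0, 0.5]   # hypothetical per-party utility scores
combined = global_weight(local_utilities, epsilon)
assert abs(combined - math.exp(epsilon * sum(local_utilities))) < 1e-9
print(combined)
```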
Algorithm Weights*
Input: Range [rl, ru), subrange size r#, number k of subranges, data size n, and ε. Subrange weights e^ε(·) are input by each party p ∈ {1, . . . , q}.
Output: List of weights.
1: Define arrays Wl, Wu, W of size k, initialize Wl, Wu with ones
2: wmed ← exp((n/2) · ε)
3: for p ← 1 to q do // Get input from each party
4:   for j ← 0 to k − 1 do // Divide range into k subranges
5:     rl ← rl + j · r#
6:     ru ← ru if j = k − 1 else rl + (j + 1) · r#
7:     Wl[j] ← FLMul(Wl[j], )
8:     Wu[j] ← FLMul(Wu[j], )
9:   end for
10: end for
11: for j ← 0 to k − 1 do
12:   cu ← FLLET(Wu[j], wmed)
13:   cl ← FLLT(Wl[j], wmed)
14:   W[j] ← FLCondSel(Wu[j], Wl[j], wmed, cu, cl)
15: end for
16: return W

With the following secure computation protocols:
Add(a, b): a + b
LT(a, b): returns 1 if the first operand is less than the second and 0 otherwise
LET(a, b): 1 if a ≤ b else 0
EQZ(a): 1 if a = 0 else 0
EQ(a, b): 1 if a equals b else 0
Mod2m(a, b): a mod 2^b for public b
Trunc(a, b): ⌊a/2^b⌋ for public b
CondSel(x, y, z, c1, c2): x if c1 = 1, y if c2 = 1, and z if both c1 and c2 are zero (a new sharing is output)
FLMul(a, b): a · b
FLExp2(a): 2^a
FLLTZ(a): 1 if a is less than zero else 0
FLRand(b): a uniform random float in (2^−b, 1.0] for public b
FLSwap(a, b, c): a if c = 1 otherwise b (a new sharing is output)
FLAdd(a, b), FLLT(a, b), FLLET(a, b), and FLCondSel(x, y, z, c1, c2) are the float versions of the integer protocols with the same names.
Floating-point protocols are marked with the prefix FL. All inputs and outputs may be secret shared, denoted by angle brackets ⟨ ⟩; secret sharing is a cryptographic primitive for secure computation.
EXAMPLES
Example 1. A system comprising: at least one hardware processor; and a computer-readable medium storing instructions that, when executed by the at least one hardware processor, cause the at least one hardware processor to perform operations comprising, at a first party in a multiparty system having a plurality of parties with their own independent databases: identifying a function to be evaluated over a range of data contained in a database of the first party; dividing the range into a plurality of subranges of data; computing, for each of the plurality of subranges of data, a selection probability for the subrange, based on information received about each of the plurality of subranges of data from at least one other party in the multiparty system; selecting one of the plurality of subranges of data based on the selection probability; dividing the selected subrange into additional subranges; and recursively iterating the computing, the selecting, dividing, and repeating for the additional subranges, until the selected subrange has a size of one, each iteration being performed on subranges of the selected subrange from an immediately previous iteration. Example 2. The system of Example 1, wherein the selecting includes: for each of the plurality of subranges: computing the rank of the subrange for data contained in the database of the first party; jointly securely computing the rank of the subrange for data with at least one other party in the multiparty system; combining the rank of the subrange of data from the at least one other party and the rank of the subrange for data contained in the database of the first party into a global rank; determining a utility of the subrange using the global rank; and calculating a selection probability for the subrange using a fixed value and the utility. Example 3. The system of Example 2, wherein the combining includes adding the ranks. Example 4. The system of Examples 2 or 3, wherein the fixed value is ln(2). Example 5.
The system of any of Examples 1-4, wherein the selecting includes: for each of the plurality of subranges: computing the rank of the subrange for data contained in the database of the first party; calculating a utility for the subrange based on the rank; jointly securely computing the weight of the subrange for data with at least one other party in the multiparty system; and determining a selection probability for the subrange by combining the weights for the subrange. Example 6. The system of Example 5, wherein the combining includes multiplying the weights. Example 7. The system of Example 1, wherein a selection probability for a particular subrange is a likelihood that the particular subrange will be selected. Example 8. A method comprising: at a first party in a multiparty system having a plurality of parties with their own independent databases: identifying a function to be evaluated over a range of data contained in a database of the first party; dividing the range into a plurality of subranges of data; computing, for each of the plurality of subranges of data, a selection probability for the subrange, based on information received about each of the plurality of subranges of data from at least one other party in the multiparty system; selecting one of the plurality of subranges of data based on the selection probability; dividing the selected subrange into additional subranges; and recursively iterating the computing, the selecting, dividing, and repeating for the additional subranges, until the selected subrange has a size of one, each iteration being performed on subranges of the selected subrange from an immediately previous iteration. Example 9. The method of Example 8, wherein the selecting includes: for each of the plurality of subranges: computing the rank of the subrange for data contained in the database of the first party; jointly securely computing the rank of the subrange for data with at least one other party in the multiparty system; combining the rank of the subrange of data from the at least one other party and the rank of the subrange for data contained in the database of the first party into a global rank; determining a utility of the subrange using the global rank; and calculating a selection probability for the subrange using a fixed value and the utility. Example 10. The method of Example 9, wherein the combining includes adding the ranks. Example 11. The method of Example 9 or 10, wherein the fixed value is ln(2). Example 12. The method of any of Examples 8-11, wherein the selecting includes: for each of the plurality of subranges: computing the rank of the subrange for data contained in the database of the first party; calculating a utility for the subrange based on the rank; jointly securely computing the weight of the subrange for data with at least one other party in the multiparty system; and determining a selection probability for the subrange by combining the weights for the subrange. Example 13. The method of Example 12, wherein the combining includes multiplying the weights. Example 14. The method of Example 12, wherein the calculating a utility includes bringing e to the power of the rank. Example 15.
A non-transitory machine-readable medium storing instructions which, when executed by one or more processors, cause the one or more processors to perform operations comprising: at a first party in a multiparty system having a plurality of parties with their own independent databases: identifying a function to be evaluated over a range of data contained in a database of the first party; dividing the range into a plurality of subranges of data; computing, for each of the plurality of subranges of data, a selection probability for the subrange, based on information received about each of the plurality of subranges of data from at least one other party in the multiparty system; selecting one of the plurality of subranges of data based on the selection probability; dividing the selected subrange into additional subranges; and recursively iterating the computing, the selecting, dividing, and repeating for the additional subranges, until the selected subrange has a size of one, each iteration being performed on subranges of the selected subrange from an immediately previous iteration. Example 16. The non-transitory machine-readable medium of Example 15, wherein the selecting includes: for each of the plurality of subranges: computing the rank of the subrange for data contained in the database of the first party; jointly securely computing the rank of the subrange for data with at least one other party in the multiparty system; combining the rank of the subrange of data from the at least one other party and the rank of the subrange for data contained in the database of the first party into a global rank; determining a utility of the subrange using the global rank; and calculating a selection probability for the subrange using a fixed value and the utility. Example 17. The non-transitory machine-readable medium of Example 16, wherein the combining includes adding the ranks. Example 18. The non-transitory machine-readable medium of Example 16 or 17, wherein the fixed value is ln(2). Example 19. The non-transitory machine-readable medium of any of Examples 15-18, wherein the selecting includes: for each of the plurality of subranges: computing the rank of the subrange for data contained in the database of the first party; calculating a utility for the subrange based on the rank; jointly securely computing the weight of the subrange for data with at least one other party in the multiparty system; and determining a selection probability for the subrange by combining the weights for the subrange. Example 20. The non-transitory machine-readable medium of Example 19, wherein the combining includes multiplying the weights. FIG.6is a block diagram600illustrating a software architecture602, which can be installed on any one or more of the devices described above. FIG.6is merely a non-limiting example of a software architecture, and it will be appreciated that many other architectures can be implemented to facilitate the functionality described herein. In various embodiments, the software architecture602is implemented by hardware such as a machine700ofFIG.7that includes processors710, memory730, and input/output (I/O) components750. In this example architecture, the software architecture602can be conceptualized as a stack of layers where each layer may provide a particular functionality. For example, the software architecture602includes layers such as an operating system604, libraries606, frameworks608, and applications610.
Operationally, the applications610invoke API calls612through the software stack and receive messages614in response to the API calls612, consistent with some embodiments. In various implementations, the operating system604manages hardware resources and provides common services. The operating system604includes, for example, a kernel620, services622, and drivers624. The kernel620acts as an abstraction layer between the hardware and the other software layers, consistent with some embodiments. For example, the kernel620provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services622can provide other common services for the other software layers. The drivers624are responsible for controlling or interfacing with the underlying hardware, according to some embodiments. For instance, the drivers624can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low-Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth. In some embodiments, the libraries606provide a low-level common infrastructure utilized by the applications610. The libraries606can include system libraries630(e.g., C standard library) that can provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries606can include API libraries632such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in 2D and 3D in a graphic context on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries606can also include a wide variety of other libraries634to provide many other APIs to the applications610. The frameworks608provide a high-level common infrastructure that can be utilized by the applications610, according to some embodiments. For example, the frameworks608provide various graphical user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks608can provide a broad spectrum of other APIs that can be utilized by the applications610, some of which may be specific to a particular operating system604or platform. In an example embodiment, the applications610include a home application650, a contacts application652, a browser application654, a book reader application656, a location application658, a media application660, a messaging application662, a game application664, and a broad assortment of other applications, such as a third-party application666. According to some embodiments, the applications610are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications610, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). 
In a specific example, the third-party application666(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application666can invoke the API calls612provided by the operating system604to facilitate functionality described herein. FIG.7illustrates a diagrammatic representation of a machine700in the form of a computer system within which a set of instructions may be executed for causing the machine700to perform any one or more of the methodologies discussed herein, according to an example embodiment. Specifically,FIG.7shows a diagrammatic representation of the machine700in the example form of a computer system, within which instructions716(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine700to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions716may cause the machine700to execute the methods ofFIGS.2-5. Additionally, or alternatively, the instructions716may implementFIGS.1-5and so forth. The instructions716transform the general, non-programmed machine700into a particular machine700programmed to carry out the described and illustrated functions in the manner described. In alternative embodiments, the machine700operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine700may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine700may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions716, sequentially or otherwise, that specify actions to be taken by the machine700. Further, while only a single machine700is illustrated, the term “machine” shall also be taken to include a collection of machines700that individually or jointly execute the instructions716to perform any one or more of the methodologies discussed herein. The machine700may include processors710, memory730, and I/O components750, which may be configured to communicate with each other such as via a bus702. In an example embodiment, the processors710(e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor712and a processor714that may execute the instructions716. 
The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions716contemporaneously. AlthoughFIG.7shows multiple processors710, the machine700may include a single processor712with a single core, a single processor712with multiple cores (e.g., a multi-core processor712), multiple processors712,714with a single core, multiple processors712,714with multiple cores, or any combination thereof. The memory730may include a main memory732, a static memory734, and a storage unit736, each accessible to the processors710such as via the bus702. The main memory732, the static memory734, and the storage unit736store the instructions716embodying any one or more of the methodologies or functions described herein. The instructions716may also reside, completely or partially, within the main memory732, within the static memory734, within the storage unit736, within at least one of the processors710(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine700. The I/O components750may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components750that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components750may include many other components that are not shown inFIG.7. The I/O components750are grouped according to functionality merely for simplifying the following discussion, and the grouping is in no way limiting. In various example embodiments, the I/O components750may include output components752and input components754. The output components752may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components754may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further example embodiments, the I/O components750may include biometric components756, motion components758, environmental components760, or position components762, among a wide array of other components. 
For example, the biometric components756may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components758may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components760may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components762may include location sensor components (e.g., a Global Positioning System (GPS) receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components750may include communication components764operable to couple the machine700to a network780or devices770via a coupling782and a coupling772, respectively. For example, the communication components764may include a network interface component or another suitable device to interface with the network780. In further examples, the communication components764may include wired communication components, wireless communication components, cellular communication components, near field communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices770may be another machine or any of a wide variety of peripheral devices (e.g., coupled via a USB). Moreover, the communication components764may detect identifiers or include components operable to detect identifiers. For example, the communication components764may include radio-frequency identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as QR code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components764, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. 
The various memories (i.e.,730,732,734, and/or memory of the processor(s)710) and/or the storage unit736may store one or more sets of instructions716and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions716), when executed by the processor(s)710, cause various operations to implement the disclosed embodiments. As used herein, the terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably. The terms refer to a single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions and/or data. The terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media, and/or device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), field-programmable gate array (FPGA), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium” discussed below. In various example embodiments, one or more portions of the network780may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local-area network (LAN), a wireless LAN (WLAN), a wide-area network (WAN), a wireless WAN (WWAN), a metropolitan-area network (MAN), the Internet, a portion of the Internet, a portion of the public switched telephone network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network780or a portion of the network780may include a wireless or cellular network, and the coupling782may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling782may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High-Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long-Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. 
The instructions716may be transmitted or received over the network780using a transmission medium via a network interface device (e.g., a network interface component included in the communication components764) and utilizing any one of a number of well-known transfer protocols (e.g., Hypertext Transfer Protocol (HTTP)). Similarly, the instructions716may be transmitted or received using a transmission medium via the coupling772(e.g., a peer-to-peer coupling) to the devices770. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure. The terms “transmission medium” and “signal medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions716for execution by the machine700, and include digital or analog communications signals or other intangible media to facilitate communication of such software. Hence, the terms “transmission medium” and “signal medium” shall be taken to include any form of modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms “machine-readable medium,” “computer-readable medium,” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. The terms are defined to include both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals.
11861039
While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the embodiments are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). The words “include,” “including,” and “includes” indicate open-ended relationships and therefore mean including, but not limited to. Similarly, the words “have,” “having,” and “has” also indicate open-ended relationships, and thus mean having, but not limited to. The terms “first,” “second,” “third,” and so forth as used herein are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.) unless such an ordering is otherwise explicitly indicated. “Based On.” As used herein, this term is used to describe one or more factors that affect a determination. This term does not foreclose additional factors that may affect a determination. That is, a determination may be solely based on those factors or based, at least in part, on those factors. Consider the phrase “determine A based on B.” While B may be a factor that affects the determination of A, such a phrase does not foreclose the determination of A from also being based on C. In other instances, A may be determined based solely on B. The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of this application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims.
DETAILED DESCRIPTION
Various embodiments of a hierarchical system and method for identifying sensitive content in data are described herein. The hierarchical system and method can use a hierarchical detection/identification approach based on a pipeline of independently deployed models in order to identify and/or redact sensitive data at scale, according to some embodiments. The hierarchical system and method collocates a lightweight, stateless, resource-efficient pre-processing classification model along with data storage, in some embodiments. This model can be responsible for detecting the presence of sensitive data in the documents within a certain probability threshold, in some of these embodiments. This pre-processing classification model, which can be used by sensitive data classifiers, can in some embodiments use binary classification in its most efficient and simple form.
Documents can be flagged by this model, and then streamed for further analysis to a separately-deployed in-depth analysis model, capable of pinpointing the exact location and nature of sensitive data, making the sensitive data available for highlighting and/or redaction, in some embodiments. This in-depth analysis model can be used by a sensitive data discovery service, for example. The hierarchical system and method for identifying sensitive content in data can thereby produce a conceptual data funnel that minimizes the amount of data that needs to be transferred and analyzed in depth, in some embodiments. Many clients, owners, or users of data have large amounts of data to process, and these entities want an indication of what sensitive data items exist in their overall volume of data. Previously, identifying sensitive data in large quantities of data has been expensive, and has involved the transmission of large quantities of data across the boundaries of multiple services, with all the expense and complications involved in that transmission and subsequent analysis. Therefore, for a large data collection, it was traditionally burdensome to transfer the large quantities of data to an analysis service, when only a small amount of data was potentially sensitive. In traditional systems, users would have to transmit huge amounts of data over a network and analyze it, which is an inefficient way to monitor or analyze for sensitive data. Some embodiments of the disclosed hierarchical system and method for identifying sensitive content in data solve the problem of identifying sensitive information in large quantities of data. The sensitive information can be Personally Identifiable Information (“PII”), in some embodiments, but can also be extended to other categories of sensitive data as well. The hierarchical system and method for identifying sensitive content in data solves these and other problems with a multi-tiered solution, in some embodiments, where the data is processed in at least two phases. During the first phase of these embodiments, a decision is made whether there is a chance that sensitive data is present in the portion of data that is being analyzed. Then, based on this decision, the data that is preliminarily classified as potentially containing sensitive data is processed using a more expensive, time-consuming, and accurate method in a second phase. The disclosed hierarchical system and method for identifying sensitive content in data eliminates the need to process 100% of the data through the expensive second-phase model, in some embodiments. In many datasets, the density of sensitive data, such as PII, can be quite low compared to the actual total amount of data. There can be few occurrences of sensitive data, with a lot of data in between the sensitive data occurrences. The disclosed hierarchical system and method for identifying sensitive content in data, in some embodiments, can eliminate the need to process a large percentage of the non-sensitive data through a more expensive model, such as a model in the second phase. In addition, the disclosed hierarchical system and method for identifying sensitive content in data can, in some embodiments, eliminate the need to transmit large quantities of data across the boundary of services and entities, where that transmission of large quantities of data could even sometimes encompass unsecured public networks, by providing local analysis in a first phase that allows a high-level determination of the sensitive content in the data.
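As a concrete illustration of this two-phase data funnel, the following Python sketch shows one possible shape of the control flow: a cheap, collocated phase-one check flags candidate documents, and only the flagged documents are forwarded to a separately deployed phase-two analyzer. The function names, the keyword list, and the probability threshold are hypothetical placeholders, not the actual models or services described herein.

from typing import Callable, Iterable, List, Tuple

def phase_one_score(document: str) -> float:
    # Hypothetical stand-in for the lightweight pre-processing classification model:
    # returns an estimated probability that the document contains sensitive data.
    suspicious_tokens = ("ssn", "social security", "credit card", "passport")
    hits = sum(token in document.lower() for token in suspicious_tokens)
    return min(1.0, 0.4 * hits)

def funnel(documents: Iterable[Tuple[str, str]],
           deep_analyzer: Callable[[str], List[dict]],
           threshold: float = 0.5) -> dict:
    # Phase one runs next to the data store; only documents whose score crosses the
    # threshold are streamed to the (remote, more expensive) phase-two analyzer.
    findings = {}
    for doc_id, text in documents:
        if phase_one_score(text) >= threshold:
            findings[doc_id] = deep_analyzer(text)  # e.g., a call to a sequence-tagging service
    return findings

In such a sketch, deep_analyzer would be a client for the in-depth analysis model or sensitive data discovery service; everything scored below the threshold never leaves the storage side.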
Phase I Analysis
The first phase of the hierarchical system and method for identifying sensitive content in data, which can include the sensitive data classifiers, can be collocated with the location where the data is stored, in some embodiments, such that the first phase provides a local analysis. In some embodiments, the initial phase one classifier can be executed on the storage host for the storage system itself. In other embodiments, the phase one classifier can be hosted in close network proximity to the storage system so that the data stays within some kind of defined network security boundary, and so that data does not have to be sent over a long networking distance. In other embodiments, the phase one analysis might be provided by an event-driven compute service as the data is transferred to or stored in a data storage system. The first phase analysis might be part of a storage service where data is stored, or it could be a provided model that is deployed locally by a user at its own data site, for example. A user might deploy a first phase model locally to identify the data that needs to be sent to an external component or service for a more detailed analysis. For example, the local analysis of the first phase can comprise a client-side library that the data storage service uses in order to make the decision that some portions of the data might comprise sensitive data, and therefore might require further processing to determine if sensitive data is present in a portion of the data. That client-side library can be part of the data storage system, in some embodiments. In other embodiments, the client-side library can be a hosted service that is associated with the data storage service. The hosting of the first phase implementation can be organized so that data transfer across the boundaries of a service or data center is not required, in some embodiments. The processing can be lightweight and can either be embedded with a data storage system or service, or be located in close proximity with the data storage service such that the data transfer cost is eliminated, depending on the embodiment. Collocating the preliminary analysis with the data storage system or service, as provided by some embodiments of the first phase of the hierarchical system and method for identifying sensitive content in data, provides both increased efficiency of the compute services and increased security for the data itself. The increased security can be provided by eliminating one or more boundaries that the sensitive data must traverse, in some embodiments. The increased efficiency can be provided by eliminating the need to process a large percentage of the non-sensitive data through a more expensive model, as well as eliminating the need to transmit large quantities of data across networks, and/or across the boundary of services and entities. In a provider network environment, the data might be stored in a data storage service of the provider network.
The phase one analysis, such as the sensitive data classifiers, can be collocated with the provider network's data storage service, so that the entirety of the data does not have to leave the context of the data storage service, in some embodiments. However, in other embodiments, the data might be stored on the systems of a client of the provider network, such that the client performs a phase one analysis before sending identified data items (or identified portions of data items) to the provider network for more comprehensive second phase analysis. In some of these embodiments, the provider network might provide the phase one analysis model to the client, such as in an executable package, for the client to perform the phase one analysis, before sending the identified or classified data to the provider network for the phase two analysis. In other embodiments, phase one might be provided as a library or a container that can be, for example, dropped into the client's workflow. In some embodiments, the phase one analysis model, which can be embodied in a sensitive data classifier, might be a trained machine learning model. The machine learning model might be trained to identify whether data items (or portions of data items) contain sensitive data somewhere within the data item, within a certain probability threshold for example. For the embodiments in which a service or provider network provides the phase one to clients, all the clients might be provided the same model, in some embodiments, while in other embodiments at least some clients might be provided different models. In some embodiments, a client might be provided phase one software comprising a machine learning model trained specifically for that client, such as using data from the client in the training of the model. The machine learning model provided to a client can be customized for particular client cases, in some embodiments. The phase one analysis can employ a simpler, lightweight model, in some embodiments. The phase one model can be fairly lightweight so as to not impact the performance of the storage service, since, in some embodiments, phase one is collocated with the storage service itself. The phase one analysis, such as provided by the sensitive data classifiers, might simply output an indication whether the data item contains sensitive data somewhere within the data item, within a certain probability threshold. Phase one, at a high level, can be viewed as a classification task, in some embodiments, where for every data item (or portion of a data item) being analyzed, a decision is made whether sensitive data is, or might be, contained within that data item (or portion of a data item). Phase one might simply add an extra layer of lightweight computing at the storage service, so that large quantities of data are not transmitted outside the storage service (such as being transmitted to a larger network or provider network). Phase one can employ different types of models, depending on the embodiment. Some embodiments of phase one can use models that operate at the n-gram level. Using n-grams, the model can make a classification decision whether a data item (or portion of a data item) includes sensitive data. Some embodiments of phase one can use a binary classifier that will be implemented at the data item (or portion of a data item) level. In some embodiments, a linear regression classifier can be used for the purposes of collocating the preliminary analysis with data storage that contains the data to be analyzed. Any other technique that is more resource-efficient than the technique performed in the second phase can be used in the first phase as an alternative, depending on the embodiment.
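One way such a lightweight, n-gram-level binary classifier could look in practice is sketched below using scikit-learn's character n-gram vectorizer and a logistic-regression model. This is an assumed, illustrative stand-in for the pre-processing classifier (the description above mentions a linear regression classifier; logistic regression is substituted here only because it directly yields a probability), and the tiny training set is a clearly fabricated toy example.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, fabricated training snippets: 1 = contains sensitive data, 0 = does not.
texts = [
    "my ssn is 123-45-6789",
    "card number 4111 1111 1111 1111 exp 09/27",
    "the quarterly report is attached",
    "meeting moved to 3pm on thursday",
]
labels = [1, 1, 0, 0]

# Character n-grams keep the model small and tokenizer-free; logistic regression
# produces a cheap probability that can be compared against a threshold at the data store.
classifier = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(texts, labels)

chunk = "please wire to acct holder, ssn 987-65-4321"
probability = classifier.predict_proba([chunk])[0][1]
flag_for_phase_two = probability >= 0.5
print(probability, flag_for_phase_two)

Because a model of this size is stateless and cheap to evaluate, it is the kind of component that could plausibly run on, or next to, a storage host without materially affecting storage performance.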
The phase one model can also flag data at a larger granularity, in some embodiments. Phase one might provide an indication for data at a file, document, or bucket level, in these embodiments. However, a service can decide the size of the data portion to be analyzed in phase one, depending on the embodiment, and the disclosed hierarchical system and method for identifying sensitive content in data supports multiple different sizes of data items (or portions of data items) that can be analyzed. For example, the data to be analyzed can be a paragraph of text, a multi-page PDF document, a certain number of file bytes, either raw or encrypted, or any content or quantity of content that is needed. In some embodiments, the phase one model can operate at the chunk level. For example, the phase one model might operate on file chunks. All different sizes of data portions can be supported by the different embodiments of phase one. In some embodiments, phase one can identify, in an intelligent manner, using machine learning for example, the parts of files likely to contain certain types of information. Phase one can extract those parts of the files to be sent to the second phase. For example, phase one might determine that images or graphics in a file would not contain PII, and strip those portions from the file before sending it to the second phase. As another example, phase one might determine that only the ASCII text part of a file might contain PII, and only send the ASCII text part of a file to the second phase. As another example, phase one might determine that only certain rows or columns of a table might contain PII, and only send those particular rows or columns to the second phase. Therefore, instead of processing an entire large file, for example, phase one can be smart about what portions of the file should be analyzed and/or sent to the second phase. Phase one might employ the second phase for multiple different rounds of communication and analysis on the same or similar data item, in some embodiments. Phase one might send a portion of data to phase two and, only if sensitive data is found, send other portions of the data, in some embodiments. For example, phase one might come to the conclusion that if a file contains PII, then the metadata of the file would also contain PII. Phase one might only send the metadata of a file to phase two for analysis in a first round. If phase two returns a result that the metadata contains sensitive data, then phase one might send the remainder of the file to phase two for analysis in a second round. If phase two returns a result that the metadata does not contain PII, then phase one would indicate the file as not containing sensitive data, for example.
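The multi-round interaction just described, in which phase one first submits only a file's metadata and escalates to the full content only if the metadata comes back positive, could be sketched as follows. The analyzer interface and parameter names are hypothetical, and a real implementation would sit behind the secure transport discussed later in this description.

from typing import Callable, List, Optional

def analyze_file_in_rounds(metadata: str,
                           body: str,
                           phase_two: Callable[[str], List[dict]]) -> Optional[List[dict]]:
    # Round 1: send only the (small) metadata to the phase-two service.
    metadata_findings = phase_two(metadata)
    if not metadata_findings:
        # Under the assumption that a file containing PII would also show PII in its
        # metadata, the bulk of the file never crosses the service boundary.
        return None
    # Round 2: the metadata was positive, so escalate and send the remaining content.
    return metadata_findings + phase_two(body)

A similar escalation could send only the extracted ASCII text of a file, or only selected table rows or columns, in the first round, with the remainder sent only when needed.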
Phase II Analysis
The second phase analysis of the disclosed hierarchical system and method for identifying sensitive content in data can be a more comprehensive analysis, in some embodiments. The second phase analysis can be a strong tagger or sequence tagger, in some embodiments. The second phase analysis might examine every portion of text very carefully, and attempt to tag where the sensitive data or PII is located in the particular portion of data, in some of these embodiments. The second phase analysis is usually not performed by simple techniques. Rather, the second phase analysis requires a sophisticated classifier with fairly high accuracy, in some preferred embodiments. The second phase analysis can specifically identify the exact character strings in the data that are sensitive data, in some of these embodiments. The second phase analysis can be used to identify tokens of input text that belong to one of the sensitive data types, in some embodiments. These sensitive data types might be a name, social security number, credit card number, or other kinds of personally identifiable information, for example. The second phase analysis might use a transformer encoder, in some embodiments. A transformer encoder can be a stronger, more thorough computation that can identify sensitive data with much greater accuracy than the phase one analysis. The second phase analysis can be used to perform sequence tagging on the input data items. In some of these embodiments, a transformer-based sequence tagging model is used to identify tokens of the input text that belong to one of the sensitive data types. The model can be computationally heavy, and can require a dedicated fleet of compute servers (or accelerated instances) to be able to process large amounts of data, in some embodiments. Because the second phase analysis can require a fleet of compute instances or servers, in some embodiments, it is difficult or impossible to collocate the model with data storage. Therefore, the second phase analysis is separate from the data storage and the first phase analysis in most preferred embodiments. The second phase analysis can be a hosted dedicated service for doing a deeper type of analysis. The owner of the data might request to have the second phase analysis performed, in some embodiments. The owner might request a sensitive data discovery service of a provider network to perform the second phase analysis. The owner can send the data to the provider network securely. The owner needn't send every piece of data to the provider network, but only those data items (or portions of data items) classified as potentially containing sensitive data by the first phase analysis. This saves the cost of transferring large portions of data to the provider network. For example, within 100 GB of data, there might exist only 1 GB of data classified as potentially containing sensitive data by the first phase analysis, and so the disclosed system and method can save the owner of the data the transfer of the remaining 99 GB of data to the provider network. In other embodiments, the owner or user of the data might perform the second phase analysis in-house, without making a request to an external service. The owner of the data might have their own second phase analysis that can perform a more comprehensive analysis on the data items from the first phase, to identify locations of sensitive data. In these embodiments, the owner also needn't send every piece of data to the in-house service, but only those data items (or portions of data items) classified as potentially containing sensitive data by the first phase analysis. This saves the cost of transferring large portions of data to the in-house service. For example, within 100 GB of data, there might exist only 1 GB of data classified as potentially containing sensitive data by the first phase analysis, and so the disclosed system and method can save the owner of the data the transfer of the remaining 99 GB of data to the in-house service.
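For the in-depth model itself, a transformer-based sequence tagger can be exercised through an off-the-shelf token-classification pipeline. The sketch below uses the Hugging Face transformers library with a publicly available general-purpose NER checkpoint purely as an assumed stand-in for the sensitive-data tagging model described herein; the actual model, label set (e.g., NAME, SSN, CREDIT_CARD), and deployment on a dedicated accelerated fleet would differ.

from transformers import pipeline

# Illustrative, assumed checkpoint; a production sensitive-data tagger would be trained
# on sensitive-data labels rather than generic named-entity labels.
tagger = pipeline("token-classification",
                  model="dslim/bert-base-NER",
                  aggregation_strategy="simple")

text = "Contact Jane Roe at 555-0100 regarding invoice 42."
for span in tagger(text):
    # Each aggregated span carries character offsets and a predicted type, which is
    # exactly the information downstream highlighting or redaction needs.
    print(span["entity_group"], round(span["score"], 3),
          span["start"], span["end"], text[span["start"]:span["end"]])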
The output of the second phase of analysis (such as output by a sensitive data discovery component or service) of the disclosed hierarchical system and method for identifying sensitive content in data can take many forms, depending on the embodiment. The output can be a data location identifier of the sensitive data, in some embodiments. For example, the output might be a data item identifier (or portion of a data item identifier) and an offset within that data item (or portion of a data item) that contains sensitive data. With documents, for example, the second phase analysis will receive the documents and can provide the location of all the sensitive data items within those documents, in some embodiments. The location can comprise offsets, such as character or Unicode character offsets, in some embodiments. In addition to the location of sensitive data, the second phase analysis can also output the type of sensitive data that was located, in some embodiments. For example, the second phase might output the type of PII that was encountered, such as a name, social security number, credit card number, or other kinds of personally identifiable information. The output of the second phase analysis (such as output by a sensitive data discovery component or service) can be the original data with the sensitive data marked in some way, in some embodiments. This output can be in addition to, or instead of, the second phase analysis's output of a data location identifier of the sensitive data, depending on the embodiment. For example, the second phase analysis can receive data as an input, and output data as an output, where the output contains the sensitive data replaced with something else. This something else can be whatever the user or client requests, in some embodiments. For example, the sensitive data might be redacted, or tokenized, or highlighted, depending on the embodiment. The sensitive data might be replaced with a mask, such as a series of “*” characters, in some embodiments. The sensitive data might be replaced with a type of entity, in some embodiments. For example, a person's name in the data might be simply replaced with the characters “(PERSON)”, either with or without the parentheses. As another example, a credit card number might be replaced with the characters “CREDIT CARD NUMBER” in the data.
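Given output in the form of character offsets and types, the replacement step can be a simple, deterministic pass over the text. The sketch below is illustrative only; the choice between masking with “*” characters and substituting an entity-type label would follow whatever the user or client requests, and the example spans are fabricated for the illustration.

from typing import List, Tuple

def redact(text: str, spans: List[Tuple[int, int, str]], use_labels: bool = False) -> str:
    # spans: (start_offset, end_offset, entity_type), e.g., (11, 19, "NAME").
    # Apply replacements from right to left so earlier offsets stay valid.
    redacted = text
    for start, end, entity_type in sorted(spans, reverse=True):
        replacement = f"({entity_type})" if use_labels else "*" * (end - start)
        redacted = redacted[:start] + replacement + redacted[end:]
    return redacted

example = "Cardholder Jane Roe, card 4111 1111 1111 1111"
spans = [(11, 19, "NAME"), (26, 45, "CREDIT CARD NUMBER")]
print(redact(example, spans))                   # masks the spans with '*' characters
print(redact(example, spans, use_labels=True))  # substitutes the entity-type labels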
The phase one analysis and the phase two analysis can be used and/or operated as separate, independent analyses, in some embodiments. The phase two analysis just needs to receive the data, and doesn't depend on any information about the analysis of the first stage, in some of these embodiments. In other words, phase two just needs the data to analyze. Both phase one and phase two simply take the data as input, in these embodiments. In other words, phase two does not receive any output of any features from phase one. This functionality can further enhance the maintainability of the system. For example, if phase one is deployed to an external network, such as a client network, along with libraries for executing phase one, the lifecycle of phase one can be independent of the lifecycle of phase two, in these embodiments. Phase one and phase two might be hosted at different places and controlled by different entities. Maintenance for phase one and phase two can therefore occur independently so that any maintenance issues, or feature upgrades, can be accomplished for one phase without waiting for or having to coordinate with the other phase. Iterations on the different phases can occur completely independently, in these embodiments.
Use Cases
There can be a number of use cases for the presented hierarchical system and method for identifying sensitive content in data. One use case can be with a database. Either as data is being stored in the database, or for data in the database, a local phase one analysis can classify data items of the database that contain sensitive data, or might contain sensitive data, or that contain sensitive data within a probability threshold. These classified data items can be sent to a separate data discovery component for a more detailed analysis. Another use case can be with files in a file system, which operates in a similar manner. Another use case is with streaming data, where phase one can be implemented within the stream so that the classification is done as messages go through the stream, and the portions of the data from the stream that are classified as containing sensitive data (or that might contain sensitive data, or that contain sensitive data within a probability threshold) are sent to phase two, in some embodiments. The downstream consumers of the stream might get a redacted version of the data, in some of these embodiments. Other use cases can involve a log data service that stores log data, IoT devices and the data they store, or an IoT device service that operates on data from multiple IoT devices, depending on the embodiment. More generally, with any data store, the first phase analysis might be part of a storage service where data is stored. For example, the first phase analysis might be provided by a data storage service of a provider network, or might be a separate service that operates local to, or in close network proximity to, the data storage service of a provider network. The first phase analysis could also be a provided model that is deployed locally by a user at its own data site, for example. A user might deploy a first phase model locally to identify the data that needs to be sent to an external component or service, such as in a provider network, for a more detailed analysis. The first phase analysis might operate as data is ingested into the data store, or it might operate on data already resident at the data store, depending on the embodiment. In some embodiments, the sensitive data can be redacted from the analyzed data items by the sensitive data discovery component or service, or based on the location data provided by the sensitive data discovery component or service, depending on the embodiment. In different use cases, the sensitive data can be analyzed and/or redacted on the fly, or upon access, or on data write, depending on the application and use case. For example, the data might already be stored, and at access time the phase one and phase two analyses might be employed and the requestor presented with data where the sensitive data is redacted. Alternatively, the hierarchical system and method for identifying sensitive content in data might be employed as data is written to the data store, such that the data store itself contains data with sensitive data redacted, or the data store contains the unredacted data, along with metadata that contains the results of the phase two analysis (such as the location of the sensitive data in the stored data). In other embodiments, the first phase analysis might be employed on the data in the data store (either as the data is ingested, or for data already resident in the data store).
The data store can store the original data along with the results of the first phase analysis (such as whether a data item or portion of a data item has been classified as containing sensitive data by the first phase analysis). At a later time, the second phase analysis can be executed, such as when the data in the data store is being accessed, for example. In some use cases, data owners might have different requirements for redacting different subsets of sensitive data, depending on who is accessing that data. For example, the data owners might require that some users only access fully redacted data, while other users can access fully unredacted data, while other users can access data with certain types of sensitive data redacted, and certain types of sensitive data unredacted. In some of these embodiments, data owners must store the raw data in a certain storage location, and the data owners can employ the disclosed hierarchical system and method for identifying sensitive content in data at extraction time. In other embodiments, the data owners might have already executed either the first phase analysis, or both the first and second phase analysis, and have stored the results of either one or both of those analyses either with the data in the data store, or separately from the data. When the data is accessed, the phases that were run previously do not have to be run again, and the results of the phases can be used to present the data in its required format, with the required redactions. However, if only the first phase analysis was executed previously, then the second phase analysis would need to be executed at some point before the location of sensitive data could be known. In other embodiments, a data owner might have a separate data store with at least some of the sensitive data redacted. The disclosed hierarchical system and method for identifying sensitive content in data can be applicable to a wide range of use cases. Wherever a data flow is implemented (such as on a server or a system of servers), it can be expanded to use the disclosed hierarchical system and method for identifying sensitive content in data, in some embodiments. Therefore, the disclosed system does not have to be limited simply to data storage. The above disclosed use cases are not an exhaustive list of use cases, and there can be any number of other use cases for the presented hierarchical system and method for identifying sensitive content in data. For example, wherever this specification discloses redacting sensitive data, the sensitive data might, in a similar manner, be tokenized or highlighted instead of being redacted. The disclosed hierarchical system and method for identifying sensitive content in data can be embedded in any system or systems that deal with data. The use cases presented here constitute some preferred embodiments, but there are many other situations where the presented hierarchical system and method for identifying sensitive content in data can be used, and the use cases presented here should not be considered definitive or exhaustive. In certain embodiments, a client or a data owner might use the disclosed hierarchical system and method for identifying sensitive content in data across a wide variety of types of data and data systems. The disclosed system can be applied across multiple different data systems, in some embodiments.
For example, a client or data owner might have a large data stream, database data, file data, log data, IoT data, and/or data warehouse data, and the disclosed hierarchical system and method for identifying sensitive content in data can be used across some or all of the data, whether the data is on premises or located externally, such as in a provider network. The disclosed hierarchical system and method for identifying sensitive content in data does not have to operate on one collocated set of data. The disclosed hierarchical system and method for identifying sensitive content in data can operate within or upon multiple data sites and/or multiple data services. The disclosed system can operate on data in storage, or data in-flight, depending on the embodiment.

Data Security

The first phase identifies data items or portions of data items which require further analysis in the second phase. Those data items or portions of data items are sent over a wider-scale network, which might even include the public Internet, for a further analysis of the second phase. Maintaining security on the transfer is important, in some embodiments, since the data being transferred is data which has been classified as potentially containing PII (such as within a probability threshold). The data needs to be well protected, in these embodiments. Regardless of whether the data transfer occurs within a provider network entirely, or occurs in between a remote premise (such as a client's or data owner's location) and the provider network, or occurs in between two different networks, it is important to ensure that the networking is established so that unencrypted data is not transferred over a public network, such as the public Internet, in some embodiments. In other embodiments, networking is configured so that there is no transfer at all over a public network. Transferring this data in a provider network can use the functionality of a virtual private cloud ("VPC"), in some embodiments. A virtual private cloud can be a provisioned logically isolated section of the provider network, where a user or client can launch provider network resources in a virtual network that the user or client might define. Transferring data between the first phase and second phase can involve VPC peering, in some embodiments. In other embodiments, transferring the data in a provider network can more generally involve the use of private networks, or virtual private networks. A VPC or private network can encompass a data store and/or a data storage service, in some embodiments. In some of these embodiments, the VPC or private network can also encompass the phase one analysis components, such as the sensitive data classifiers, that perform the classification of the data items (or portion of data items). The phase two analysis component or service (such as the sensitive data discovery component or service) might be located in a separate VPC or private network. Between the two VPCs or between the two private networks, the system can utilize VPC peering, or private network peering, in some embodiments. This can allow the traffic between the two VPCs, or two private networks, to be routed within the provider network. A similar technique can be adapted between an external network, such as an on-premise installation, and a provider network. The external network can utilize a VPC or private network that can comprise a data store and/or a data storage service, in some embodiments.
In some of these embodiments, the VPC or private network can also encompass the phase one analysis components, such as the sensitive data classifiers, that perform the classification of the data items (or portion of data items). The phase two analysis component or service (such as the sensitive data discovery component or service) can be located in a separate VPC or private network in the provider network. Between the two VPCs or between the two private networks (such as the first VPC or private network being located in the external network, and the second VPC or private network located in the provider network), the system can utilize VPC peering, or private network peering, in some embodiments. In other embodiments, the system can utilize a TLS connection or virtual private network (“VPN”) type connection. This can allow the traffic between the two VPCs, or two private networks, to be routed securely from the external network to the provider network. Regardless of the networking configuration, the disclosed hierarchical system and method for identifying sensitive content in data can also ensure that data transferred between two entities is encrypted, in some embodiments. In some of these embodiments, the system might use TLS 1.2 encryption. For example, data transferred between the phase one components and the phase two components can be encrypted. This can be, for example, data transferred from a data storage service to the sensitive data discovery component or service, or data transferred between the sensitive data classifiers and the sensitive data discovery component or service. This might also include data transferred between a data storage service (or file storage service, or streaming service, etc.) and the phase one analysis component(s), such as the sensitive data classifiers. If the storage service and the phase one analysis component(s) are not located in the same secure network, for example, or to prevent an internal unauthorized eavesdropper or hacker from accessing the data as it is being transferred, then data between these two entities might be encrypted as well, in some embodiments. In some embodiments, therefore, raw unencrypted data does not transfer over a network that would allow unauthorized access to the data. Regardless of the encryption utilized, the disclosed hierarchical system and method for identifying sensitive content in data can also ensure that the receiving component or system is authorized to access the data that it is being provided, in some embodiments. Whatever entity is attempting to access, requesting to access, or being provided the data needs the right credentials to have permission to access or receive the data, in some embodiments. If the system is located entirely in the provider network, for example, an authorization service of the provider network can ensure that the classifiers, services and/or components of the system are each authorized to access the appropriate data. If the phase one components (such as the sensitive data classifiers) are located external to the provider network, for example, then the provider network might need to authorize the source of the phase 1 data and/or the external source of the phase 1 data might need to authorize the phase 2 service to receive the data, in some embodiments. Any destination of the data might also need to be authorized, in some embodiments. 
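Taken together, the encrypted transfer and the authorization check described above might look like the following minimal Python sketch, in which the first phase component posts classified items to the second phase service over HTTPS (and therefore TLS) with a credential attached. The endpoint URL, the token, and the JSON body shape are placeholders for illustration only, not an actual provider network API.

import json
import urllib.request

PHASE_TWO_ENDPOINT = "https://discovery.example.internal/v1/classified-items"  # hypothetical
ACCESS_TOKEN = "example-credential"  # in practice obtained from an authorization service

def send_classified_items(items):
    body = json.dumps({"items": items}).encode("utf-8")
    request = urllib.request.Request(
        PHASE_TWO_ENDPOINT,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {ACCESS_TOKEN}",  # receiver checks this credential
        },
    )
    # urllib uses TLS for https:// URLs, so the payload is encrypted in transit.
    with urllib.request.urlopen(request) as response:
        return response.status

# send_classified_items([{"item_id": "123", "text": "..."}])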
For example, any data redaction service, tokenization service, highlight service, tracking service, and/or user-specified destination of the data might need to be authorized by the provider of the data, such as the sensitive data discovery component of phase 2 of the system, to receive the data output by phase 2. There are many more contexts where authorization can be used in the disclosed hierarchical system and method for identifying sensitive content in data, and the foregoing are merely some examples.

Embodiments of a Hierarchical System and Method for Identifying Sensitive Content in Data

FIG.1is a logical block diagram illustrating a hierarchical system and method for identifying sensitive content in data comprising a storage system100with a sensitive data classifier130that classifies data items120containing sensitive data, and a sensitive data discovery component140that identifies a location of sensitive data within the received data items, according to some embodiments. In the embodiment ofFIG.1, the sensitive data discovery component140includes a data item receiver160that obtains the data items135classified by the sensitive data classifiers as containing the sensitive data. In the embodiment ofFIG.1, the sensitive data discovery component140also includes a sensitive data location analyzer170that performs a sensitive data location analysis on the data items135to identify a location of the sensitive data, distinct from non-sensitive data, within the data items135. In the embodiment ofFIG.1, the sensitive data discovery component140also includes a generator of location information for the sensitive data180that generates location information for the sensitive data within the data items135. In the embodiment ofFIG.1, the sensitive data discovery component140also includes a data item data store150that can temporarily store data items135, received from the sensitive data classifier130, that are classified as including sensitive data. In some embodiments, a sensitive data discovery component140, as well as any number of other possible services, operates as part of a service provider network (not shown inFIG.1). However, the sensitive data discovery component140does not necessarily need to operate within a provider network, and can operate in a non-provider network client-server situation, or as simply software or an application running on one or more computers of a user or client, or in various other configurations that do not include provider networks, in some embodiments. In the embodiments that include a provider network, the services of the provider network can comprise one or more software modules executed by one or more electronic devices at one or more data centers and geographic locations, in some embodiments. Client(s) and/or edge device owner(s) using one or more electronic device(s) (which may be part of or separate from the service provider network) can interact with the various services of the service provider network via one or more intermediate networks, such as the internet. In other examples, external clients or internal clients can interact with the various services programmatically and without human involvement.
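The division of responsibilities among the FIG.1 elements described above can be summarized with the following Python skeleton. The class and method names mirror the figure elements (receiver160, location analyzer170, generator of location information180, data item data store150, discovery component140), but the signatures and the span representation are illustrative assumptions rather than an actual implementation.

from typing import List, Tuple

Span = Tuple[int, int, str]            # (start offset, end offset, sensitive data type)

class DataItemReceiver:                # element 160
    def receive(self, classified_items: List[dict]) -> List[dict]:
        return classified_items        # obtain the items classified by the first phase

class SensitiveDataLocationAnalyzer:   # element 170
    def analyze(self, text: str) -> List[Span]:
        raise NotImplementedError      # the more comprehensive second phase model

class LocationInformationGenerator:    # element 180
    def generate(self, item_id: str, spans: List[Span]) -> dict:
        return {"item_id": item_id, "sensitive_spans": spans}

class DataItemDataStore:               # element 150 (temporary storage)
    def __init__(self):
        self._items = {}
    def put(self, item: dict):
        self._items[item["item_id"]] = item

class SensitiveDataDiscoveryComponent: # element 140
    def __init__(self, analyzer: SensitiveDataLocationAnalyzer):
        self.receiver = DataItemReceiver()
        self.analyzer = analyzer
        self.generator = LocationInformationGenerator()
        self.store = DataItemDataStore()

    def process(self, classified_items: List[dict]) -> List[dict]:
        results = []
        for item in self.receiver.receive(classified_items):
            self.store.put(item)                          # hold the item temporarily
            spans = self.analyzer.analyze(item["text"])   # locate the sensitive data
            results.append(self.generator.generate(item["item_id"], spans))
        return results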
A provider network provides clients with the ability to utilize one or more of a variety of types of computing-related resources such as compute resources (for example, executing virtual machine (VM) instances and/or containers, executing batch jobs, executing code without provisioning servers), data/storage resources (for example, object storage, block-level storage, data archival storage, databases and database tables, etc.), network-related resources (for example, configuring virtual networks including groups of compute resources, content delivery networks (CDNs), Domain Name Service (DNS)), application resources (for example, databases, application build/deployment services), access policies or roles, identity policies or roles, machine images, routers and other data processing resources, etc. These and other computing resources may be provided as services, such as a hardware virtualization service that can execute compute instances, a storage service that can store data objects, etc. The clients (or “customers”) of provider networks may utilize one or more user accounts that are associated with a client account, though these terms may be used somewhat interchangeably depending upon the context of use. Clients and/or edge device owners may interact with a provider network across one or more intermediate networks (for example, the internet) via one or more interface(s), such as through use of application programming interface (API) calls, via a console implemented as a website or application, etc. The interface(s) may be part of, or serve as a front-end to, a control plane of the provider network that includes “backend” services supporting and enabling the services that may be more directly offered to clients. To provide these and other computing resource services, provider networks often rely upon virtualization techniques. For example, virtualization technologies may be used to provide clients the ability to control or utilize compute instances (e.g., a VM using a guest operating system (O/S) that operates using a hypervisor that may or may not further operate on top of an underlying host O/S, a container that may or may not operate in a VM, an instance that can execute on “bare metal” hardware without an underlying hypervisor), where one or multiple compute instances can be implemented using a single electronic device. Thus, a client may directly utilize a compute instance (e.g., provided by a hardware virtualization service) hosted by the provider network to perform a variety of computing tasks. Additionally, or alternatively, a client may indirectly utilize a compute instance by submitting code to be executed by the provider network (e.g., via an on-demand code execution service), which in turn utilizes a compute instance to execute the code—typically without the client having any control of or knowledge of the underlying compute instance(s) involved. As indicated above, service provider networks have enabled developers and other users to more easily deploy, manage, and use a wide variety of computing resources, including databases. The use of a database service, for example, enables clients to offload many of the burdens of hardware provisioning, setup and configuration, replication, clustering scaling, and other tasks normally associated with database management. 
A database service further enables clients to scale up or scale down tables' throughput capacity with minimal downtime or performance degradation, and to monitor resource utilization and performance metrics, among other features. Clients can easily deploy databases for use in connection with a wide variety of applications such as, for example, online shopping carts, workflow engines, inventory tracking and fulfillment systems, and so forth.

Referring back toFIG.1, the sensitive data classifiers130can be collocated with where the data is stored in data collection(s)110, in some embodiments, such that the first phase provides a local analysis. In some embodiments, the initial phase one classifier130can be executed on the storage host for the storage system100itself. In other embodiments, the phase one classifier130can be hosted in close network proximity to the storage system100so that the data stays within some kind of defined network security boundary, and so that data does not have to be sent over a long distance. In other embodiments, the phase one analysis130might be provided by an event-driven compute service as the data is transferred to or stored in a data storage system. The hosting of the first phase implementation can be organized so that data transfer across the boundaries of a service or data center is not required, in some embodiments. The processing can be lightweight and can either be embedded with a data storage system100or service, or can be located in close proximity with the data storage service or system100such that the data transfer cost is eliminated, depending on the embodiment. Collocating the preliminary analysis130with the data storage system100or service, as shown inFIG.1, provides both increased efficiency of the compute services and increased security for the data itself. The increased security can be provided by eliminating one or more boundaries across which the sensitive data must traverse, in some embodiments. The increased efficiency can be provided by eliminating the need to process a large percentage of the non-sensitive data through a more expensive model170, as well as eliminating the need to transmit large quantities of data across networks, and/or across the boundary of services and entities. In a provider network environment, the data might be stored in a data storage service of the provider network. The phase one analysis, such as the sensitive data classifier130, can be collocated with the provider network's data storage system100, so that the entirety of the data does not have to leave the context of the data storage system100, in some embodiments. However, in other embodiments, the data might be stored on the systems of a client of the provider network, such that the client performs a phase one analysis before sending identified data items (or identified portions of data items) to the provider network for a more comprehensive second phase analysis. The phase one analysis can employ a simpler, lightweight type of model, in some embodiments. The phase one model can be fairly lightweight so as not to impact the performance of the storage system100, since, in some embodiments, phase one is collocated with the storage system100itself. The phase one analysis, such as provided by the sensitive data classifiers130, might simply output an indication of whether the data item contains sensitive data somewhere within the data item, within a certain probability threshold.
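A first phase model of the kind just described could be as simple as the following sketch, which scores a data item with a few inexpensive pattern checks and reports only whether the item might contain sensitive data within a probability threshold; it makes no attempt to locate the sensitive data. The particular patterns, weights, and threshold are illustrative assumptions, not values from any deployed or trained model.

import math
import re

FEATURES = {
    "digit_run":  (re.compile(r"\d{9,16}"), 2.0),            # long digit runs (SSN/credit card like)
    "ssn_format": (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), 3.0),
    "email":      (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), 1.5),
    "keyword":    (re.compile(r"\b(ssn|dob|passport|account)\b", re.I), 1.0),
}
BIAS = -3.0

def might_contain_sensitive_data(text: str, threshold: float = 0.5):
    score = BIAS + sum(weight for pattern, weight in FEATURES.values()
                       if pattern.search(text))
    probability = 1.0 / (1.0 + math.exp(-score))   # squash the score to [0, 1]
    return probability >= threshold, probability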
Phase one, at a high level, can be viewed as a classification task, in some embodiments, where for every data item (or portion of a data item) being analyzed, a decision is made whether sensitive data is, or might be, contained within that data item (or portion of a data item). Phase one might simply add an extra layer of lightweight computing at the storage service, so that large quantities of data are not transmitted outside the storage service (such as being transmitted to a larger network or provider network). The second phase analysis of the disclosed hierarchical system and method for identifying sensitive content in data, shown as the sensitive data discovery component140inFIG.1, can be a more comprehensive analysis, in some embodiments. The second phase analysis can be a strong tagger or sequence tagger, in some embodiments. The second phase analysis might examine every portion of text very carefully, and attempt to tag where the sensitive data or PII is located in the particular portion of data, in some of these embodiments. The second phase analysis is usually not performed by simple techniques. Rather, the second phase analysis requires a sophisticated classifier with a fairly high accuracy, in some preferred embodiments. The second phase analysis can specifically identify the exact character strings in the data that are sensitive data, in some of these embodiments. The second phase analysis utilizes a sensitive data location analyzer170to identify tokens of input text that belong to one of a set of sensitive data types, in some embodiments. These sensitive data types might be a name, social security number, credit card number, or other kinds of personally identifiable information, for example. The second phase analysis, shown as the sensitive data discovery component140, can be computationally heavy, and can require a dedicated fleet of compute servers (or accelerated instances) to be able to process large amounts of data, in some embodiments. Because the second phase analysis can require a fleet of compute instances or servers, in some embodiments, it can be difficult or impossible to collocate the model with the data storage system100. Therefore, the sensitive data discovery component140is separate from the data storage system100and the first phase analysis of the sensitive data classifier130in the embodiment shown inFIG.1.

FIG.2is a logical block diagram further depicting contents of a hierarchical system and method for identifying sensitive content in data comprising sensitive data classifiers230a-cthat operate on a large number of data items222,224,226, from their respective data sources212,214,216, and a sensitive data discovery component240that operates on a subset of data items233,235,237, classified as including sensitive data, where location information of sensitive data281,282,283,284,285can be provided to various different destinations, according to some embodiments. FIG.2depicts three different use cases for the presented hierarchical system and method for identifying sensitive content in data, according to some embodiments. One use case can be with a database system202. Either as data is being stored in a large dataset212, or for data already stored in the large dataset212, a local phase one analysis, such as the sensitive data classifier230a, can classify data items222of the large dataset212that contain sensitive data, or might contain sensitive data, or that contain sensitive data within a probability threshold.
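The form of output produced by such a second phase analysis can be illustrated with the following stand-in sketch. A production sensitive data location analyzer would typically be a trained sequence tagging model; regular expressions are used here only to show the shape of the result, namely character offsets and a sensitive data type for each identified span. The type names and patterns are assumptions for illustration.

import re
from typing import List, Tuple

Span = Tuple[int, int, str]

PATTERNS = {
    "US_SOCIAL_SECURITY_NUMBER": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD_NUMBER":        re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL_ADDRESS":             re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def locate_sensitive_data(text: str) -> List[Span]:
    spans: List[Span] = []
    for data_type, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            spans.append((match.start(), match.end(), data_type))
    return sorted(spans)

# Example: exact character positions of the sensitive strings are returned.
# locate_sensitive_data("SSN 123-45-6789, card 4111 1111 1111 1111")
# -> [(4, 15, 'US_SOCIAL_SECURITY_NUMBER'), (22, 41, 'CREDIT_CARD_NUMBER')]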
These classified data items233from the database system202can be sent to a separate sensitive data discovery component240for a more detailed analysis. Another use case can be with files216in a file storage system206that operates in a similar manner. Either as files are being stored in a file storage system206, or for files216already stored in the file storage system206, a local phase one analysis, such as the sensitive data classifier230c, can classify data items226of the files216that contain sensitive data, or might contain sensitive data, or that contain sensitive data within a probability threshold. These classified data items237can be sent to a separate sensitive data discovery component240for a more detailed analysis. Another use case is with streaming data, where phase one can be implemented within the stream214so that the classification is done as data224goes through the stream214. A local phase one analysis, such as the sensitive data classifier230b, can classify data items224of the large data stream214that contain sensitive data, or might contain sensitive data, or that contain sensitive data within a probability threshold. The portions of the data items235which are classified as containing sensitive data (or might contain sensitive data, or that contain sensitive data within a probability threshold) are sent to phase two, such as a separate sensitive data discovery component240for a more detailed analysis, in some embodiments. Other use cases not shown inFIG.2can be a log data service that stores log data, IoT devices and the data they store, or an IoT device service that operates on data from multiple IoT devices, depending on the embodiment. More generally, with any type of data, the first phase analysis, such as the sensitive data classifiers230a-c, might be part of the system that encompasses the data, such as the database system202, data streaming system204, and file storage system206. For example, sensitive data classifiers230a-cmight be provided by a data storage service202,206of a provider network, or might be a separate service that operates local to, or in close network proximity to, the data storage service204,206of a provider network. The output of the second phase analysis (such as output by a sensitive data discovery component240) can be the original data with the sensitive data marked in some way, in some embodiments. This output can be in addition to, or instead of, the second phase analysis's output of a data location identifier of the sensitive data281,282,283,284,285, depending on the embodiment. Alternatively, the location information of sensitive data in the further subset of the data items281,282,283,284,285can be output to separate services to perform specific functions. These services can be a data redaction service290, a tokenization service292, a highlight service294, a tracking service296, or a user-specified destination298. These services might be external to the sensitive data discovery component240, or one or more services might be internal to the sensitive data discovery component240, depending on the embodiment. The sensitive data discovery component240can receive data as an input233,235,237, and output data as an output, where the output contains the sensitive data replaced with something else. This something else can be whatever the user or client requests, in some embodiments.
For example, the sensitive data might be redacted by a data redaction service290, or tokenized by a tokenization service292, or highlighted by a highlight service294, or sent to a tracking service296, or sent to any other user-specified destination298, depending on the embodiment. The sensitive data might be replaced with a mask, such as a series of "*" characters, in some embodiments. The sensitive data might be replaced with a type of entity, in some embodiments. For example, a person's name in the data might be simply replaced with the characters "(PERSON)", either in parentheses or without the parentheses. As another example, a credit card number might be replaced with the characters "CREDIT CARD NUMBER" in the data.

FIG.3is a logical block diagram illustrating various types of data input and output by various components of a hierarchical system for identifying sensitive content in data, including a sensitive data classifier320, a storage system(s)330, and a sensitive data discovery component350, according to some embodiments. In the embodiments disclosed inFIG.3, the sensitive data classifier320can receive data inputs such as files312, documents316, text objects314, and/or chunks of data318within a file, document or text object. The sensitive data classifier320can analyze these data items, and classify at least some of the data items as containing sensitive data, based at least in part on the analysis. The sensitive data classifier320might then send all the data items322that it receives to a storage system(s)330. The storage system(s)330can store the files332, text objects334, documents336, and/or chunks of data338. In addition to this, the storage system(s)330can also store information regarding whether any data item (or portion of a data item) was classified with sensitive data by the sensitive data classifier320. For example, the storage system(s)330can store information regarding whether individual files were classified with sensitive data333along with the stored files332. The storage system(s)330might also store information regarding whether individual objects were classified with sensitive data335along with the text objects334. The storage system(s)330might also store information regarding whether individual documents were classified with sensitive data337along with the documents336. The storage system(s)330might also store information regarding whether individual chunks of data were classified with sensitive data339, along with the chunks of data338. In addition to, or instead of, sending all data items322to the storage system(s)330, the sensitive data classifier320can also provide the classified data item(s)340to the sensitive data discovery component350. The sensitive data discovery component350might also obtain the classified data item(s)345from the storage system(s)330in addition to, or instead of, obtaining the classified data item(s)340from the sensitive data classifier320. Regardless of how the sensitive data discovery component350obtains the classified data item(s)340,345, it can perform a sensitive data location analysis on the obtained data items to identify a location of the sensitive data distinct from non-sensitive data within the obtained data items. The sensitive data discovery component350can then generate location information380for the sensitive data within the data items using the generator of location information for sensitive data360. This location information can be sent to a specified destination.
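The mask and entity-type replacement described above can be illustrated with the following short sketch, which takes a data item and the location information produced for it (character offsets and a type for each sensitive span) and emits a redacted copy. The function name, the span format, and the example offsets are assumptions for illustration.

from typing import List, Tuple

Span = Tuple[int, int, str]   # (start, end, sensitive data type)

def generate_redacted_item(text: str, spans: List[Span], use_labels: bool = True) -> str:
    pieces, last = [], 0
    for start, end, data_type in sorted(spans):
        pieces.append(text[last:start])
        if use_labels:
            pieces.append(f"({data_type})")        # e.g. "(PERSON)" or "(CREDIT_CARD_NUMBER)"
        else:
            pieces.append("*" * (end - start))     # simple mask of "*" characters
        last = end
    pieces.append(text[last:])
    return "".join(pieces)

# generate_redacted_item("Jane's card is 4111111111111111",
#                        [(0, 4, "PERSON"), (15, 31, "CREDIT_CARD_NUMBER")])
# -> "(PERSON)'s card is (CREDIT_CARD_NUMBER)"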
As a further embodiment, the sensitive data discovery component350can send location information380for the sensitive data within the data items, as well as the data items themselves, to a generator of redacted data items370. The generator of redacted data items370can obtain the data items from the sensitive data discovery component350, or it can obtain the data items from another source, such as the storage system(s)330. The generator of redacted data items370can generate a redacted data item390where sensitive data, or PII, is redacted from the data item as redacted PII395. This redacted data item can be sent to a specified destination, such as being returned to the storage system(s)330or sent to another storage system, depending on the embodiment.

FIG.4is a logical block diagram illustrating a client system404requesting472a sensitive data classifier from a sensitive data discovery service440, receiving a response474including an indication of the sensitive data classifier, and the requested sensitive data classifier430classifying data items420in a storage system410, and transmitting those classified data items445to the sensitive data discovery service440to determine location information for the sensitive data480, according to some embodiments. The sensitive data classifier430might be part of a storage service where data is stored, or, as depicted inFIG.4, it could be a provided executable package that is provided in response474to a request472, and that is deployed locally by a client at its own client system404, for example. In some of these embodiments, the provider network might provide474the phase one analysis model to the client, such as in an executable package, for the client to perform the phase one analysis, before sending the identified or classified data445to the provider network for the phase two analysis of the sensitive data discovery service440. In other embodiments, phase one might be provided474as a library or a container that can be, for example, dropped into the client system's404workflow. In some embodiments, the phase one analysis model, which can be embodied in a sensitive data classifier430, might be a trained machine learning model. The machine learning model might be trained to identify whether data items (or portions of data items) contain sensitive data somewhere within the data item, within a certain probability threshold, for example. For the embodiments in which a service440or provider network provides474the phase one model to clients404, all the clients might be provided the same model, in some embodiments, while in other embodiments at least some clients might be provided different models. In some embodiments, a client404might be provided phase one software, such as a sensitive data classifier430, comprising a machine learning model trained specifically for that client404, such as using data412,414,416,418from the client404in the training of the model. The machine learning model provided to a client can be customized for particular client cases, in some embodiments. In some embodiments, a client might deploy the first phase model locally to identify the data that needs to be sent to an external component or service, such as the sensitive data discovery service440, for a more detailed analysis.
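A minimal sketch of that client-side arrangement is shown below: the provided first phase classifier is dropped into the client's own write path as a callable, and only the items it flags are handed to whatever routine forwards data (over an encrypted connection) to the remote sensitive data discovery service. Passing the classifier and the forwarding routine in as parameters keeps the sketch independent of any particular provided package; all names here are illustrative.

def write_record(storage, item, classifier, send_to_discovery_service):
    """storage: the client's own data store (anything with append()).
    classifier: the provided phase one model, a callable returning True when
        the item might contain sensitive data within a probability threshold.
    send_to_discovery_service: routine that forwards a flagged item to the
        remote sensitive data discovery service for the second phase analysis."""
    storage.append(item)                      # the client's normal write path
    if classifier(item["text"]):              # local, lightweight first phase
        send_to_discovery_service(item)       # only flagged items leave the client site

# Example wiring with stand-ins:
records = []
write_record(records,
             {"item_id": "42", "text": "SSN 123-45-6789"},
             classifier=lambda text: "-" in text,          # placeholder for the provided model
             send_to_discovery_service=lambda item: None)  # placeholder for the secure transfer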
For example, the sensitive data classifier430can comprise a client-side library that the data storage service uses in order to make the decision that some portions of the data might comprise sensitive data, and therefore might require further processing to determine if sensitive data is present in a portion of the data. That client-side library can be part of the data storage system410, in some embodiments. In other embodiments, the client-side library can be a hosted service that is associated with the data storage service. The hosting of the first phase implementation can be organized so that data transfer across the boundaries of a service or data center is not required, in some embodiments. The processing can be lightweight and can either be embedded with a data storage system410or service, or can be located in close proximity with the data storage service such that the data transfer cost is eliminated, depending on the embodiment.

The Hierarchical System and Method for Identifying Sensitive Content in Data in a Provider Network

FIG.5illustrates an example provider network500environment for the hierarchical system and method for identifying sensitive content in data, where the sensitive data classifiers530a-fare implemented as part of an IoT device service520, a log data service515, an object storage service502, a database service510, a data stream service505, and/or an external private network550, and where the sensitive data discovery service540is implemented by compute server instances545of the provider network500, according to some embodiments. A service provider network500may provide computing resources (545) via one or more computing services or event-driven computing services to client(s)560via a programmatic interface through an intermediate network590, in some embodiments. The service provider network500may be operated by an entity to provide one or more services, such as various types of cloud-based computing or storage services, accessible via the Internet and/or other networks to client(s). In some embodiments, the service provider network500may implement a web server, for example hosting an e-commerce website. Service provider network500may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like, needed to implement and distribute the infrastructure and services offered by the service provider network500. In some embodiments, the service provider network may employ computing resources (545) for its provided services. These computing resources may in some embodiments be offered to client(s) in units called "instances," such as virtual compute instances. A provider network500may provide resource virtualization to clients via one or more virtualization services that allow clients to access, purchase, rent, or otherwise obtain instances of virtualized resources, including but not limited to computation (545) and storage resources, implemented on devices within the provider network or networks in one or more data centers. The storage services can include an object storage service502that stores files504, a data stream service505that operates on data streams507, a database service510that stores database instances512, a log data service515that stores log data517, and/or an Internet of Things ("IoT") Service that collects and stores IoT data522from external IoT devices.
Each of these storage services can include one or more local sensitive data classifier(s)530a-fthat classify data items as containing sensitive data. These locally sensitive classifiers can provide the classified data items535a-fto a separate sensitive data discovery component540. In some embodiments, private IP addresses may be associated with the resource instances. The private IP addresses can be the internal network addresses of the resource instances on the provider network500. In some embodiments, the provider network500may also provide public IP addresses and/or public IP address ranges (e.g., Internet Protocol version 4 (IPv4) or Internet Protocol version 6 (IPv6) addresses) that clients may obtain from the provider501. Conventionally, the provider network500, via the virtualization services, may allow a client of the service provider to dynamically associate at least some public IP addresses assigned or allocated to the client with particular resource instances assigned to the client. The provider network500may also allow the client to remap a public IP address, previously mapped to one virtualized computing resource instance allocated to the client, to another virtualized computing resource instance that is also allocated to the client. Using the virtualized computing resource instances and public IP addresses provided by the service provider, a client of the service provider may, for example, implement client-specific applications and present the client's applications on an intermediate network, such as the Internet. Either the clients or other network entities on the intermediate network may then generate traffic to a destination domain name published by the clients. First, either the clients or the other network entities can make a request through a load balancer for a connection to a compute instance in the plurality of compute instances (545). A load balancer might responds with the identifying information which might include a public IP address of itself. Then the clients or other network entities on the intermediate network may then generate traffic to public IP address that was received by the router service. The traffic is routed to the service provider data center, and at the data center is routed, via a network substrate, to the private IP address of the network connection manager currently mapped to the destination public IP address. Similarly, response traffic from the network connection manager may be routed via the network substrate back onto the intermediate network to the source entity. Private IP addresses, as used herein, refer to the internal network addresses of resource instances in a provider network. Private IP addresses are only routable within the provider network. Network traffic originating outside the provider network is not directly routed to private IP addresses; instead, the traffic uses public IP addresses that are mapped to the resource instances. The provider network may include network devices or appliances that provide network address translation (NAT) or similar functionality to perform the mapping from public IP addresses to private IP addresses and vice versa. Public IP addresses, as used herein, are Internet routable network addresses that are assigned to resource instances, either by the service provider or by the client. Traffic routed to a public IP address is translated, for example via 1:1 network address translation (NAT), and forwarded to the respective private IP address of a resource instance. 
Some public IP addresses may be assigned by the provider network infrastructure to particular resource instances; these public IP addresses may be referred to as standard public IP addresses, or simply standard IP addresses. In at least some embodiments, the mapping of a standard IP address to a private IP address of a resource instance is the default launch configuration for all a resource instance types. At least some public IP addresses may be allocated to or obtained by clients560of the provider network500. A client560may then assign their allocated public IP addresses to particular resource instances allocated to the client. These public IP addresses may be referred to as client public IP addresses, or simply client IP addresses. Instead of being assigned by the provider network500to resource instances as in the case of standard IP addresses, client IP addresses may be assigned to resource instances by the clients, for example via an API provided by the service provider. Unlike standard IP addresses, client IP addresses are allocated to client accounts and can be remapped to other resource instances by the respective clients as necessary or desired. A client IP address is associated with a client's account, not a particular resource instance, and the client controls that IP address until the client chooses to release it. A client IP address can be an Elastic IP address. Unlike conventional static IP addresses, client IP addresses allow the client to mask resource instance or availability zone failures by remapping the client's public IP addresses to any resource instance associated with the client's account. The client IP addresses, for example, enable a client to engineer around problems with the client's resource instances or software by remapping client IP addresses to replacement resource instances. A provider network500may provide an object storage service502that stores files504, a data stream service505that operates on data streams507, a database service510that stores database instances512, a log data service515, that stores log data517, and/or an Internet of Things (“IoT”) Service that collects and stores IoT data522from external IoT devices. These services can be implemented by storage nodes and/or by physical server nodes. The sensitive data discovery service540also contains many other server instances (545) to perform a sensitive data location analysis on the obtained data items to identify a location of sensitive data text strings, distinct from the non-sensitive data text strings, within one or more of the data items. As another example, the provider network provides a virtualized data storage service or object storage service502which can include a plurality of data storage instances implemented by physical data storage nodes. The data storage service or object storage service502can store files504for the client, which are accessed through a file access by the appropriate server instance of the client. The provider network can also include multiple other client services that pertain to one or more clients or users. The provider network can implement a monitoring and observability service to, for example, monitor applications, respond to system-wide performance changes, optimize resource utilization, and/or get a unified view of operational health. As another example, the provider network500can include a data stream service505to clients or users. 
This data stream service505can include a data stream507that receives data from a client's data stream and delivers a stream of data to the data storage service, such as the object storage service502, or the database service510, for use by the hierarchical system for identifying sensitive content in data. The clients may access any one of the client services502,505,510,515,520, or540, for example, via a programmatic interface599, such as one or more APIs to the service, to obtain usage of resources (e.g., data storage instances, or files, or database instances, or server instances) implemented on multiple nodes for the service in a production network portion of the provider network500. In addition,FIG.5illustrates the private network550that can be owned or operated by a client560, for example. This private network550can include its own storage system(s)555and its own sensitive data classifier530f. The sensitive data classifier530fcan classify at least some data items of the storage system555as containing the sensitive data text strings within a probability threshold, and can provide those data items535fto the sensitive data discovery service540, to perform its sensitive data location analysis on the data items.

FIG.6is a block diagram illustrating an example networking configuration and communication of data between various components of a hierarchical system for identifying sensitive content in data within an example provider network600environment, including a data storage service620, sensitive data classifiers630, and a sensitive data discovery service640, according to some embodiments. The sensitive data classifiers630of the first phase can identify data items or portions of data items which require further analysis in the sensitive data discovery service640of a second phase, in some embodiments. Those data items or portions of data items are sent over a wider-scale network for a further analysis of the second phase. Maintaining security on the transfer is important, in some embodiments, since the data being transferred is data which has been classified as potentially containing PII (such as within a probability threshold). The data needs to be well protected, in these embodiments. It is important to ensure that the networking is established so that unencrypted data is not transferred over the wider provider network. Transferring this data in a provider network can use the functionality of a virtual private cloud ("VPC"), in some embodiments. A virtual private cloud can be a provisioned logically isolated section of the provider network, where a user or client can launch provider network resources in a virtual network that the user or client might define. Transferring data between the first phase and second phase can involve VPC peering, in some embodiments. In other embodiments, transferring the data in a provider network can more generally involve the use of private networks602,604, or virtual private networks. A VPC or private network602can encompass the data storage service620as well as the data store610, in some embodiments. In some of these embodiments, the private network602can also encompass the phase one analysis components, such as the sensitive data classifiers630, that perform the classification of the data items (or portion of data items). The phase two analysis component or service (such as the sensitive data discovery service640) can be located in a separate VPC or private network604.
Between the two private networks602and604, the system can utilize peering, in some embodiments. In other embodiments, the system can utilize a TLS connection or VPN type connection. This can allow the traffic between the two private networks to be routed within the provider network in a secure manner. While not shown inFIG.6, a similar technique can be adapted between an external network, such as an on-premise installation, and a provider network. The external network can utilize a VPC or private network that can comprise a data store and/or a data storage service, in some embodiments. In some of these embodiments, the VPC or private network can also encompass the phase one analysis components, such as the sensitive data classifiers, that perform the classification of the data items (or portion of data items). The phase two analysis component or service (such as the sensitive data discovery component or service) can be located in a separate VPC or private network in the provider network. Between the two VPCs or between the two private networks (the first VPC or private network being located in the external network, and the second VPC or private network located in the provider network), the system can utilize VPC peering, or private network peering, in some embodiments. In other embodiments, the system can utilize a TLS connection or VPN type connection. This can allow the traffic between the two VPCs, or two private networks, to be routed securely from the external network to the provider network. Regardless of the networking configuration, the disclosed hierarchical system and method for identifying sensitive content in data can also ensure that data transferred between two entities is encrypted, in some embodiments. In some of these embodiments, the system might use TLS 1.2 encryption. For example, data transferred between the phase one private network602and the phase two private network604can be encrypted. This can be, for example, data transferred from a data storage service620to the sensitive data discovery service640, or data transferred between the sensitive data classifiers630and the sensitive data discovery service640. This might also include data transferred between a data storage service620(or file storage service, or streaming service, etc.) and the phase one analysis component(s), such as the sensitive data classifiers630. If the storage service620and the classifier(s)630are not located in the same secure network, for example, or to prevent an internal unauthorized eavesdropper or hacker from accessing the data as it is being transferred, then data between these two entities might be encrypted as well, in some embodiments. In some embodiments, therefore, raw unencrypted data does not transfer over a network that would allow unauthorized access to the data. Regardless of the encryption utilized, the disclosed hierarchical system and method for identifying sensitive content in data can also ensure that the receiving component or system is authorized to access the data that it is being provided, in some embodiments. Whatever entity is attempting to access, requesting to access, or being provided the data needs the right credentials to have permission to access or receive the data, in some embodiments. If the system is located entirely in the provider network600, for example, an authorization service of the provider network can ensure that the classifiers630, services620,630and/or components610of the system are each authorized to access the appropriate data. 
If the phase one components (such as the sensitive data classifiers530f) are located external to the provider network500, for example, then the provider network might need to authorize the source of the phase 1 data (private network550, for example) and/or the external source of the phase 1 data might need to authorize the phase 2 service540to receive the data, in some embodiments. Any destination of the data might also need to be authorized, in some embodiments. For example, any data redaction service290, tokenization service292, highlight service294, tracking service296, and/or user-specified destination298of the data might need to be authorized by the provider of the data, such as the sensitive data discovery component240of phase 2 of the system, to receive the data output by phase 2. There are many more contexts where authorization can be used in the disclosed hierarchical system and method for identifying sensitive content in data, and the foregoing are merely some examples.

Illustrative Methods of Identifying Sensitive Content in Data Using a Hierarchy

FIG.7is a high-level flow chart illustrating methods and techniques for identifying sensitive content in data using a hierarchical system or method, comprising actions taken by sensitive data classifiers and a sensitive data discovery service, according to some embodiments. The flowchart begins at block710in which sensitive data classifiers, local to a data storage service, analyze data items containing text strings from data collections, and classify any data items containing sensitive data. The flowchart transitions to block720in which sensitive data classifiers send a subset of data items classified as containing sensitive data to a sensitive data discovery component. The flowchart then transitions to block730where a Sensitive Data Discovery Component obtains the subset of data items identified by the sensitive data classifiers as containing sensitive data. The remainder of the flowchart discloses the actions taken by the sensitive data discovery component. In block740, the Sensitive Data Discovery Component performs a sensitive data location analysis on the obtained subset of data items to identify a location of sensitive data text strings, distinct from non-sensitive data text strings, within the obtained subset of data items that contain sensitive data. The flowchart transitions to block750, where the Sensitive Data Discovery Component generates location information for the sensitive data text strings within individual data items of the obtained subset of data items that contain sensitive data. The flowchart finishes at block760where the Sensitive Data Discovery Component provides the location information for the sensitive data, within the individual data items that contain sensitive data, to a specified destination.

FIG.8is a high-level flow chart illustrating methods and techniques for identifying sensitive content in data using a hierarchical system or method, from the perspective of the sensitive data classifiers, which identify data items containing sensitive data, and provide the identified data items to a separate sensitive data discovery component, according to some embodiments. The flowchart begins at block810where a sensitive data classifier, local to a data storage system, analyzes a plurality of data items either stored in the storage system, or as they are being ingested into the storage system.
The sensitive data classifier then classifies a subset of data items as containing sensitive data within a probability threshold in block820. The flowchart transitions to block830where a sensitive data classifier records which data items were classified as containing sensitive data. The flowchart ends at block840where the subset of data items classified as containing sensitive data, but not the other data items not classified as containing sensitive data, are provided to a separate sensitive data discovery component to identify a location of sensitive data text strings within the data items. In some embodiments, only data items of a data collection identified by the one or more sensitive data classifiers as containing sensitive data are provided to the separate sensitive data discovery component, such that other data items of the data collection not identified as containing sensitive data are not transferred to the sensitive data discovery component.

FIG.9is a high-level flow chart illustrating methods and techniques for identifying sensitive content in data using a hierarchical system or method, from the perspective of the sensitive data discovery service, which provides the sensitive data classifiers to a remote destination, according to some embodiments. The flowchart begins at block910in which a sensitive data discovery service in a provider network obtains a request to provide a sensitive data classifier to a remote destination. Then the flowchart transitions to block920where an entity, such as the sensitive data discovery service, or a component of the sensitive data discovery service, provides a sensitive data classifier to a remote destination in accordance with the request, where the sensitive data classifier can analyze data items, and can identify which data items contain sensitive data text strings within a probability threshold. Some time later, the flowchart executes block930, which obtains the data items identified by the sensitive data classifier as containing sensitive data. Then, in block940, the method performs a sensitive data location analysis on the obtained data items to identify a location of sensitive data within the obtained data items that contain sensitive data. The flowchart transitions to block950, which generates location information for the sensitive data text strings within individual data items of the obtained subset of data items that contain sensitive data. In block960, the flowchart determines a specified destination for the location information for the sensitive data, possibly based on user input. The flowchart finishes in block970, which provides the location information for the sensitive data to the specified destination.

Illustrative System

FIG.10is a block diagram illustrating an example computer system that may be used for a sensitive data classifier and/or a sensitive data discovery component, according to some embodiments. In at least some embodiments, a computer that implements a portion or all of a hierarchical system and method for identifying sensitive content in data as described herein may include a general-purpose computer system or computing device that includes or is configured to access one or more computer-accessible media, such as computer system1000illustrated inFIG.10. FIG.10is a block diagram illustrating an example computer system that may be used in some embodiments.
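The overall flow of FIGS. 7 through 9 can be condensed into the short sketch below: the local classifiers select a subset of data items, and the sensitive data discovery component locates the sensitive text strings, generates location information, and provides it to a specified destination. The function parameters (classify_item, locate_sensitive_data, deliver) are stand-ins for whichever concrete first phase model, second phase analyzer, and destination a given embodiment uses.

def run_pipeline(data_items, classify_item, locate_sensitive_data, deliver):
    # Blocks 710-720: local first phase classification and selection of a subset.
    subset = [item for item in data_items if classify_item(item["text"])]

    # Blocks 730-750: second phase location analysis and location information.
    location_info = [
        {"item_id": item["item_id"],
         "sensitive_spans": locate_sensitive_data(item["text"])}
        for item in subset
    ]

    # Block 760: provide the location information to the specified destination.
    deliver(location_info)
    return location_info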
The computer system1000ofFIG.10can be used as a sensitive data classifier130and/or a sensitive data discovery component140, or as a backend resource host which executes one or more backend resource instances or one or more of the plurality of compute instances (545) in the sensitive data discovery service540. In the illustrated embodiment, computer system1000includes one or more processors1010coupled to a system memory1020via an input/output (I/O) interface1030. Computer system1000further includes a network interface1040coupled to I/O interface1030. In various embodiments, computer system1000may be a uniprocessor system including one processor1010, or a multiprocessor system including several processors1010(e.g., two, four, eight, or another suitable number). Processors1010may be any suitable processors capable of executing instructions. For example, in various embodiments, processors1010may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors1010may commonly, but not necessarily, implement the same ISA. System memory1020may be configured to store instructions and data accessible by processor(s)1010. In various embodiments, system memory1020may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as those methods, techniques, and data described above for a hierarchical system and method for identifying sensitive content in data, are shown stored within system memory1020as the code and data for a sensitive data classifier and/or a sensitive data discovery component. In one embodiment, I/O interface1030may be configured to coordinate I/O traffic between processor1010, system memory1020, and any peripheral devices in the device, including network interface1040or other peripheral interfaces. In some embodiments, I/O interface1030may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory1020) into a format suitable for use by another component (e.g., processor1010). In some embodiments, I/O interface1030may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface1030may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface1030, such as an interface to system memory1020, may be incorporated directly into processor1010. Network interface1040may be configured to allow data to be exchanged between computer system1000and other devices1060attached to a network or networks1070, such as other computer systems or devices as illustrated inFIGS.1-6, for example. In various embodiments, network interface1040may support communication via any suitable wired or wireless general data networks, such as types of Ethernet network, for example.
Additionally, network interface1040may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol. In some embodiments, system memory1020may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above forFIGS.1through9for implementing a hierarchical system and method for identifying sensitive content in data. However, in other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media. Generally speaking, a computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD coupled to computer system1000via I/O interface1030. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media such as RAM (e.g., SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computer system1000as system memory1020or another type of memory. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface1040. Any of various computer systems may be configured to implement processes associated with the provider network, a hierarchical system and method for identifying sensitive content in data, a sensitive data classifier and/or a sensitive data discovery component, or any other component of the above figures. In various embodiments, the provider network, the hierarchical system and method for identifying sensitive content in data, or any other component of any ofFIGS.1-9may each include one or more computer systems1000such as that illustrated inFIG.10. In embodiments, the provider network, the hierarchical system and method for identifying sensitive content in data, the sensitive data classifier and/or the sensitive data discovery component, or any other component may include one or more components of the computer system1000that function in a same or similar way as described for the computer system1000. CONCLUSION Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g., SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. The various methods as illustrated in the Figures and described herein represent exemplary embodiments of methods. The methods may be implemented in software, hardware, or a combination thereof. The order of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure.
It is intended to embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense. | 92,971 |
11861040 | Like reference numbers and designations in the various drawings indicate like elements. DETAILED DESCRIPTION In general, systems and techniques described herein provide an end-to-end user consent framework that systematically collects, propagates, and enforces user consents across the online ecosystem (e.g., across completely separate domains). Many different companies and other organizations collect, share, and rely on user data for various purposes, such as customizing content for the users. One way to manage user consent is for each organization to obtain each of its users' consent, e.g., by requesting the user select preferences when they visit a website or download an application. However, this can be frustrating for the users, may require entry of duplicate data, and does not ensure that the user's data is being collected and/or used in accordance with those preferences. Accordingly, the disclosed subject matter is concerned with solving a technical problem of providing a simpler and more efficient approach to managing user consent data. One or more technical solutions to this technical problem involve the disclosed user consent frameworks, which may be implemented as systems, methods, apparatuses, computer-readable media, and computer programs. The user consent frameworks described in this document enable users to select, from multiple consent management platforms, a consent management platform to manage their user consent settings. The user consent settings for a user define, for example, what user data can be collected, who can receive the data, and how the data can be used by each recipient. In this way, a user can centrally manage their privacy across the entire online ecosystem using a single platform. In other words, by using the consent management platform, the user can submit their consent settings once, and those settings can be enforced as the user accesses multiple different domains (e.g., websites) and applications (e.g., mobile apps) without requiring the user to re-submit their consent settings. The consent management platforms can provide a consent management module, e.g., a plug-in for an operating system, to the client device of a user, which enables the user to specify the user consent settings. The consent management module can provide one or more interactive user interfaces that enable the user to specify the user consent settings. When the client device is going to transmit a request that will include user data, the platform of the client device can query the current user consent settings to determine what, if any, user data can be included in the request and what limitations on the data should be included in the request. The client device can then generate the request according to the current user consent settings and transmit the request to its recipient. To ensure compliance with the user consent settings, requests sent from the client device can include digitally signed user consent settings that the recipient can store. In this way, an auditor can verify the user consent settings received by the recipient without the recipient being able to alter or falsify user consent settings. The consent management module can also recommend, to a user, user consent settings to make it easier for a user to specify the user consent settings.
The consent management module can recommend user consent settings based on a variety of factors, including, for example, a current geographic location of the user's device, the contribution of recipients to digital components presented at the user's device, and/or user activity on the device. FIG.1is a block diagram of an environment100that provides a framework for managing user consent to data collection and usage. The example environment100includes a data communication network105, such as a local area network (LAN), a wide area network (WAN), the Internet, a mobile network, or a combination thereof. The network105connects client devices110, publishers130, websites140, a digital component distribution system150, and consent management provider systems170. The example environment100may include many different client devices110, publishers130, websites140, and consent management provider systems170. A website140is one or more resources145associated with a domain name and hosted by one or more servers. An example website is a collection of web pages formatted in HTML that can contain text, images, multimedia content, and programming elements, such as scripts. Each website140is maintained by a publisher130, which is an entity that controls, manages and/or owns one or more websites, including the website140. A domain can be a domain host, which can be a computer, e.g., a remote server, hosting a corresponding domain name. A resource145is any data that can be provided over the network105. A resource145is identified by a resource address, e.g., a Universal Resource Locator (URL), that is associated with the resource145. Resources include HTML pages, word processing documents, and portable document format (PDF) documents, images, video, and feed sources, to name only a few. The resources can include content, such as words, phrases, images and sounds, that may include embedded information (such as meta-information in hyperlinks) and/or embedded instructions (such as scripts). A client device110is an electronic device that is capable of communicating over the network105. Example client devices110include personal computers, mobile communication devices, e.g., smart phones, and other devices that can send and receive data over the network105. A client device110has a device platform113, which is an environment in which software applications execute. The device platform113can include the hardware of the client device110and/or the operating system of the client device110. A client device110typically includes applications112, such as web browsers and/or native applications, that run in the device platform113and that facilitate the sending and receiving of data over the network105. A native application is an application developed for a particular platform or a particular device. Publishers130can develop and provide, e.g., make available for download, native applications to the client devices110. In some implementations, the client device110is a digital media device, e.g., a streaming device that plugs into a television or other display to stream videos to the television. The digital media device can also include a web browser and/or other applications that stream video and/or present resources. A web browser can request a resource145from a web server that hosts a website140of a publisher130, e.g., in response to the user of the client device110entering the resource address for the resource145in an address bar of the web browser or selecting a link that references the resource address.
Similarly, a native application can request application content from a remote server of a publisher130. Some resources145, application pages, or other application content can include digital component slots for presenting digital components with the resources145or application pages. As used throughout this document, the phrase “digital component” refers to a discrete unit of digital content or digital information (e.g., a video clip, audio clip, multimedia clip, image, text, or another unit of content). A digital component can electronically be stored in a physical memory device as a single file or in a collection of files, and digital components can take the form of video files, audio files, multimedia files, image files, or text files and include advertising information, such that an advertisement is a type of digital component. For example, the digital component may be content that is intended to supplement content of a web page or other resource presented by the application112. More specifically, the digital component may include digital content that is relevant to the resource content (e.g., the digital component may relate to the same topic as the web page content, or to a related topic). The provision of digital components by the digital component distribution system150can thus supplement, and generally enhance, the web page or application content. When the application112loads a resource145(or application content) that includes one or more digital component slots, the application112can send a request120(which can include an attestation token122as described below) for a digital component for each slot from the digital component distribution system150. The digital component distribution system150can, in turn request digital components from digital component providers160. The digital component providers160are entities that provide digital components for presentation with resources145. In some cases, the digital component distribution system150can also request digital components from one or more digital component partners157. A digital component partner157is an entity that selects digital components129on behalf of digital component providers160in response to digital component requests. The digital component distribution system150can select a digital component129for each digital component slot based on various criteria. For example, the digital component distribution system150can select, from the digital components received from the digital component providers160and/or the digital component partners157, a digital component based on relatedness or relevance to the resource145(or application content), performance of the digital component (e.g., a rate at which users interact with the digital component), etc. The digital component distribution system150can then provide the selected digital component(s)129to the client device110for presentation with the resource145or other application content. A client device110can also include a consent management module114that enables a user of the client device110to manage user consent settings that define whether and/or how the user's data is collected and used. The consent management module114can be implemented as a plug-in to the device platform113, e.g., as a plug-in to the operating system of the client device110. A plug-in is a software component that provides additional features to an application. In some implementations, the consent management module114can be implemented as a plug-in to a web browser or native application. 
The consent management module114can run in a tightly controlled environment that isolates the consent management module114from other application and/or resources of the client device110. For example, the consent management module114can run in a sandbox of the device platform113. In this way, the consent management module114cannot communicate outside of the device platform113or interfere with the execution of other applications112on the same device. The consent management module114enables the user to specify how user data, such as the user's activity on the client device, web browsing history, native applications downloaded or accessed, demographic information, location information, interests, and/or other personal data, is collected and used. In some implementations, the consent management module114enables the user to specify, for all recipients and/or each recipient individually, whether the recipient can store and/or access information on the client device110, use user data to select digital components, use user data to create one or more user profiles, use user data to select personalized digital components (e.g., using the profile(s)), measure the performance of digital components or other content (e.g., based on whether the user interacts with the digital components or other content), and/or to generate audience insights. The consent management module114can provide one or more consent management user interfaces116that enable the user to specify user consent settings. For example, a user interface can present, for each setting, a check box control that allows the user to consent to the setting or decline the setting. In a particular example, a setting may be to enable any user data to be transmitted from the client device110. In this example, the user can select the check box for the setting (e.g., checked) or not select the check box (e.g., unchecked) to decline the setting. In another example, the user interface116can enable the user to select from multiple options for a setting. For example, the user interface116can present, for each of a set of domain names (which can include websites of publishers, digital component providers160, digital component distribution systems150, and/or digital component partners157) and/or native applications, multiple buttons that each define types of data that can be sent to the domain by the application. The user can consent to the type of data by selecting the button and rescind consent by deselecting the button. The consent management module114can enable the user to specify user consent settings that define what data is transmitted from the client device110, how that data can be used (e.g., to customize content of a web page or application, to select digital components, in encrypted or non-encrypted forms, over secure channels only), to what recipients the data can be sent, whether and for how long the user data can be stored, and/or other appropriate consents to the use of user data. The consent management module114can enable the user to specify settings for all recipients, e.g., overall settings, or per recipient. In this way, users have fine-tuned control over how their data is collected and used. The consent management module114can store the user consent settings specified by the user in a consent storage unit117. The consent storage unit117can be isolated and/or encrypted to prevent access or modification by other devices or applications. 
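One way to picture the per-recipient settings and the consent storage unit117described above is the following Python sketch. The field names, the recipient identifiers, and the fallback to global settings are illustrative assumptions made for this sketch, not a schema taken from the embodiments.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ConsentSettings:
    store_on_device: bool = False       # may the recipient store/access information on the device
    receive_user_data: bool = False     # may user data be transmitted to the recipient
    select_personalized: bool = False   # may the recipient select personalized digital components
    measure_performance: bool = False   # may the recipient measure digital component performance

@dataclass
class ConsentStore:
    global_settings: ConsentSettings = field(default_factory=ConsentSettings)
    per_recipient: Dict[str, ConsentSettings] = field(default_factory=dict)

    def settings_for(self, recipient: str) -> ConsentSettings:
        # Per-recipient settings override the global settings when the user has specified them.
        return self.per_recipient.get(recipient, self.global_settings)

store = ConsentStore()
store.per_recipient["distribution.example"] = ConsentSettings(receive_user_data=True)
print(store.settings_for("distribution.example").receive_user_data)  # True
print(store.settings_for("unknown.example").receive_user_data)       # False

In the described system, a structure of this kind would live inside the isolated consent storage unit117, so that only the consent management module114can read or modify it.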
The consent management module114can be used to manage the collection and use of user data by each web browser and native application on the client device110. When the client device110is going to send a request120that includes user data, e.g., on behalf of a web browser or native application, the device platform113can query the consent management module114for the current user consent settings. The device platform113can then generate a request that only includes user data to the extent consented to by the user and defined by the current user consent settings. In this way, a single consent module114can prevent the transmission, from multiple applications, of user data to which the user has not consented. As such, each client device110may only have one consent management module114installed on the client device110and/or active at a given time on the client device110, in some implementations. In some cases, there may be multiple consent management providers that operate consent management provider systems170for managing user data in accordance with user consent settings. Each consent management provider can make a consent management module114available to users. In this example, each user can download or otherwise install their consent management module114from the consent management provider system170of their preferred consent management provider. In some implementations, the consent management module114can enable the user to specify whether audio, video, and/or image data is collected, transmitted to, and/or used by others. For example, the consent management module114can enable the user to specify whether the client device110or another device, e.g., an assistant device (e.g., a smart speaker), another mobile device, etc. can collect, receive or use audio, video, or image data. In some implementations, the consent management module114can enable the user to specify whether sensor information (e.g., from a smart thermostat or Internet of Things (IoT) device) can be collected, transmitted, or used by others. In such examples, these devices can query the consent management module114to determine whether the data can be sent to another device in a similar manner as the device platform113. The consent management module114can also include standard settings, e.g., that are based on laws, regulations, or best practices that define whether user data can be collected and/or how the user data can be used. These standard settings can include whether the device platform113should send user data or requests to a recipient (e.g., a particular network domain), whether requests to a recipient should contain any user identifiers, whether a recipient could provide personalized content to the user, and/or other appropriate settings. The consent management module114can, e.g., periodically, send queries171to the consent management provider system170for updates to the standard settings, logic used to implement the consent management module114, and/or updates to a recommendation engine115(described below). In response, the consent management provider system170can provide updates173requested by the queries171. In this way, the consent management module114on each client device110can be updated, in response to changes in user privacy laws, regulations, or best practices. The recommendation engine115can recommend, to the user, user consent settings in the user interface(s)116. 
The recommendation engine115can recommend user consent settings based on a variety of factors, including, for example, a current geographic location of the client device110, the contribution of recipients to digital components presented at the client device110, and/or user activity on the client device110. This user activity can include, for example, web browsing history, location history, applications installed on the client device110, and/or applications accessed by the user, e.g., during a given time period. For example, the recommendation engine115can recommend user consent settings that conform to local laws, regulations, or best practices based on the user's current geographic location as defined by a Global Positioning System (GPS) receiver of the client device110, or based on the user's current geographic location inferred from the device's Internet Protocol (IP) address. In this way, a user that travels internationally can be provided recommended user consent settings appropriate for the current location. As mentioned above, the recommendation engine115can use contributions of recipients to digital components presented at the client device110. The consent management module114or another application (e.g., a web browser or native application) can determine a level of contribution of multiple domains to the presentation of digital components at the client device110over a given time period. For example, digital components can include metadata that indicates one or more domains that contributed to the delivery of the digital component. In a particular example, the metadata can indicate that a first domain contributed certain graphics in the digital component and a second domain contributed text in the digital component. The consent management module114or application can determine a level of contribution for each domain that contributed to at least one digital component being presented at the client device110. The level of contribution of a domain can be determined in various ways. For example, the level of contribution of a domain can be based on a quantity of digital components to which the domain contributed to being presented at the client device110, a percentage of digital components that were interacted with on the client device110and to which the domain contributed, the types or sizes of digital components to which the domain contributed to being presented at the client device110, and/or other appropriate factors. The recommendation engine115can use the levels of contribution to recommend user consent settings to the user. For example, if a domain stores data on the client device110and/or receives user data from the client device110but does not contribute to digital components being presented at the client device110, the recommendation engine115can recommend that the user block (e.g., not consent to) the domain storing data on the client device110or receiving user data from the client device110, as it may not be known why the domain is collecting the user data. The recommendation engine115can compare the level of contribution for a domain to a threshold. If the level of contribution does not satisfy the threshold (e.g., is less than the threshold), the recommendation engine115can recommend that the user not consent to the domain storing data on the client device110or receiving user data from the client device110.
If the level of contribution satisfies the threshold (e.g., meets or exceeds the threshold), the recommendation engine115can recommend that the user consent to the domain storing data on the client device110and/or receiving user data from the client device110. The recommendation engine115can perform this recommendation process for each domain that contributed to at least one digital component being presented at the client device110. The user can view recommended user consent settings in the user interface(s)116and either confirm or reject the recommended user consent settings. For example, the user interface116can present a set of recommended user consent settings that cover multiple domains and/or multiple types of consents (e.g., storing data, transmitting data, etc.) and the user can simply accept or decline the recommended user consent settings. This can make it easier and more efficient for a user to specify user consent settings relative to customizing each type of setting and/or for each domain. In some implementations, the device platform113sends user consent settings with requests120that include user data. Each recipient of the user data can be required to store the user consent settings, e.g., for auditing purposes. In this way, an auditor can audit the user data stored by a recipient and the user consent settings to ensure that the recipient is storing and using each user's data in accordance with the users' consent settings. To prevent fraud by a recipient, the device platform113(or web browser or native application sending the request) can digitally sign at least the user consent settings using a private key maintained confidentially by the device platform113(or web browser or native application). An auditor can use a public key that corresponds to (e.g., that is mathematically linked to) the private key and the stored user consent settings to verify the signature. If the signature cannot be verified using the public key and the stored user consent settings, then the auditor can determine that the user consent settings have been altered. In some implementations, the device platform113generates an attestation token122that is included in a request120or that implements the request120. The attestation token is a token that can include the consent settings and a digital signature of the consent settings (using the private key) and other data such that any modification to the user consent settings after creation can be detected. For example, the attestation token can be a complex message that includes the consent settings and other data. The signed data can include a unique identifier for the user so that recipients of the attestation token can verify that the attestation token was sent from the user. The attestation token can also include an integrity token, e.g., a device integrity token and/or a browser integrity token, so that recipients can verify that the attestation token was received from a trusted device or trusted web browser. The attestation token122can include data specifying the purpose or operation of the request (e.g., to change user consent settings or request a digital component), a user identifier that uniquely identifies the user (e.g., a public key of the client device110), an attestation token creation time that indicates a time at which the attestation token122was created, an integrity token (e.g., a device integrity token and/or a browser integrity token), and a digital signature of at least a portion of the other data of the attestation token122.
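The signing and verification of the consent settings can be sketched as follows. This is only an illustration: the embodiments do not prescribe Ed25519, JSON serialization, or this particular field layout, and the integrity token is treated here as an opaque string (it is described in more detail below). The sketch uses the third-party Python cryptography package for the key operations.

import json
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_private_key = Ed25519PrivateKey.generate()    # maintained confidentially by the device platform
device_public_key = device_private_key.public_key()  # made available to auditors/recipients

def build_attestation_token(consent_settings: dict, user_id: str, integrity_token: str) -> dict:
    body = {
        "purpose": "digital_component_request",
        "user_id": user_id,                 # e.g., a public key of the client device
        "created_at": int(time.time()),     # attestation token creation time
        "consent_settings": consent_settings,
        "integrity_token": integrity_token, # device or browser integrity token, opaque here
    }
    payload = json.dumps(body, sort_keys=True).encode()
    return {"body": body, "signature": device_private_key.sign(payload).hex()}

def verify_attestation_token(token: dict, public_key) -> bool:
    payload = json.dumps(token["body"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(token["signature"]), payload)
        return True
    except InvalidSignature:
        return False   # the consent settings (or other signed fields) were altered after signing

token = build_attestation_token({"distribution.example": {"receive_user_data": True}},
                                user_id="device-pubkey-abc", integrity_token="opaque-token")
print(verify_attestation_token(token, device_public_key))   # True
token["body"]["consent_settings"]["distribution.example"]["receive_user_data"] = False
print(verify_attestation_token(token, device_public_key))   # False: tampering is detected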
The integrity token can be a device integrity token that enables an entity to determine whether a request120was sent by a trusted client device110. For example, the device integrity token can be issued by a third-party device integrity system that evaluates fraud signals of client devices and assigns a level of trustworthiness to the client devices based on the evaluation. The device integrity token for a client device110can include a verdict that indicates the level of trustworthiness (or integrity) of the client device110at the time that the device integrity token was generated, a device integrity token creation time that indicates a time at which the device integrity token was generated, and a unique identifier for the client device110(e.g., the device public key1136of the client device or its derivative). The device integrity token can also include a digital signature of the data in the device integrity token using a private key of the device integrity system. For example, the device integrity system can sign the data using its private key, which the system maintains confidentially. The entities that receive the attestation token122can use a public key of the device integrity system to verify the signature of the device integrity token. As the integrity of a client device110can change over time, each client device110can request a new device integrity token periodically. The entities that receive the attestation token122can check the creation time of the device integrity token to identify stale device integrity tokens. For requests sent on behalf of web browsers, the integrity token can be a browser integrity token that indicates the integrity of the web browser, or whether the user's interactions with websites are genuine. Examples of non-genuine user interactions include interactions initiated by bots, etc. rather than the user. A browser integrity token can be issued by a third-party browser integrity system based on fraud detection signals sent to the third-party browser integrity system. The fraud signals can include, for example, mouse movement speed, direction, intermission and other patterns, click patterns, etc. Similar to the device integrity token, the browser integrity token for a web browser can include a verdict that indicates the level of trustworthiness (or integrity) of the web browser, or the level of genuineness of user interactions with websites, at the time that the browser integrity token was generated, a browser integrity token creation time that indicates a time at which the browser integrity token was generated, and a unique identifier for the client device110(e.g., the public key of the client device or web browser). The browser integrity token can also include a digital signature of the data in the browser integrity token using a private key of the browser integrity system. For example, the browser integrity system can digitally sign the data using its private key, which the system maintains confidentially. The entities that receive the attestation token122can use a public key of the browser integrity system to verify the signature of the browser integrity token. The client device110can store integrity tokens (e.g., a device integrity token and/or a browser integrity token) for inclusion in attestation tokens122. As described above, the client device110can request digital components from the digital component distribution system150.
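On the receiving side, an entity that has already verified the signatures (as in the previous sketch) might additionally check that the enclosed device or browser integrity token is neither stale nor below some trust level. The age limit and verdict threshold below are arbitrary illustrative values, and the verdict is shown as a number only for the sake of the sketch; the embodiments do not specify these details.

import time

MAX_TOKEN_AGE_SECONDS = 24 * 60 * 60    # illustrative staleness cutoff, not from the embodiments
MIN_TRUST_VERDICT = 0.5                 # illustrative trustworthiness threshold

def integrity_token_acceptable(token, now=None):
    # `token` is assumed to have had its digital signature verified against the
    # integrity system's public key already, as in the previous sketch.
    now = time.time() if now is None else now
    fresh = (now - token["created_at"]) <= MAX_TOKEN_AGE_SECONDS
    trusted = token["verdict"] >= MIN_TRUST_VERDICT
    return fresh and trusted

print(integrity_token_acceptable({"created_at": time.time() - 3600, "verdict": 0.9}))    # True
print(integrity_token_acceptable({"created_at": time.time() - 172800, "verdict": 0.9}))  # False: stale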
Prior to an application (e.g., a web browser or native application) presenting a digital component, the application can ensure that the user has consented to the digital component being presented. A digital component can include data, e.g., metadata, that specifies the provider (e.g., the digital component distribution system150, the digital component partner157, and/or the digital component provider160) and whether the digital component is a personalized digital component that is selected and/or customized based on the user's data (e.g., based on a user profile generated for the user). Prior to rendering a digital component, the application (e.g., a web browser or native application) can query the consent management module114whether the provider has proper user consent to show personalized content to the user. For example, the application can provide, to the consent management module114with the query, the digital component or metadata fetched previously that specifies the provider and whether the digital component is personalized. The consent management module114can determine, based on the current user consent settings and the received digital component or metadata, whether the user has consented to the digital component being presented. The consent management module114can then respond to the application with data specifying that the digital component can be presented or not presented. The application can then either present the digital component or block the digital component based on the response from the consent management module114. In some implementations, the consent management module114can also enable the user to view which domains have the user's data and what data each domain has. The consent management module114can also enable the user to request that a domain delete the user's data, to not transfer the user's data to another entity, to correct the user's data, and/or to export the user's data, e.g., to the client device110. FIG.2is a flow diagram that illustrates an example process200for installing a user-selected consent management module on a client device. The process200can be implemented, for example, by a client device110. Operations of the process200can also be implemented as instructions stored on non-transitory computer readable media, and execution of the instructions by one or more data processing apparatus can cause the one or more data processing apparatus to perform the operations of the process200. A selection of a consent management platform is received (202). An application (e.g., web browser or native application) or a device platform of a client device can present, to the user, a user interface that enables the user to select from multiple consent management platforms. Each consent management platform can provide ways for the user to manage user consent settings that control how the user's data is collected and used. For example, each consent management platform can provide a consent management module that enables the user to specify user consent settings at the client device and that manages the collection and use of the user's data at the client device and at other locations, e.g., at remote servers or other entities. The user interface can be presented in response to a determination that an application is attempting to send user data from the client device to another entity and that a consent management module is not currently installed or active on the client device.
The user can select a consent management platform from the user interface, or decline to select any consent management platform. A consent management module is obtained from the selected consent management platform (204), or obtained from another location such as an application store where users can download applications and add-ons for applications. For example, an application store can ensure that the consent management module satisfies some minimum quality standards and is compliant with some policy. In response to a selection of the consent management platform, the client device can send a request to a consent management provider system of the selected consent management platform. In response, the system can send the consent management module (or an executable file for installing the consent management module) to the client device. The consent management module is installed on the client device (206). As described above, the consent management module can be implemented in the form of a plug-in for an operating system. In this example, the operating system installs the plug-in. Installation of the consent management module can also include configuring applications to interact with the consent management module when sending requests and presenting digital components. FIG.3is a flow diagram that illustrates an example process300for enabling a user to specify user consent settings and storing the user consent settings. The process300can be implemented, for example, by a consent management module of a client device. Operations of the process300can also be implemented as instructions stored on non-transitory computer readable media, and execution of the instructions by one or more data processing apparatus can cause the one or more data processing apparatus to perform the operations of the process300. An interactive interface is presented (302). The interactive interface can enable a user to specify user consent settings that define, for example, what user data can be collected, who can receive the data, the data retention policy (e.g., auto-deletion after 30 days or another appropriate time period), and how the data can be used by each recipient. In some implementations, the interactive interface can include a set of user consent settings and, for each user consent setting, a user interface control that enables the user to consent to or decline consent. For example, the interactive interface can include a setting that globally controls whether any user data can be transmitted from the user device to any domain. The interactive interface can also include a check box control (or other type of control) that enables the user to consent to user data being transmitted or decline consent, which would prevent any user data from being transmitted from the client device. The interactive interface can present similar user consent settings for other types of user consents, for each domain, and/or for each application installed on the client device. As described above, the interactive interface can also present recommended settings. A consent management module executing on the client device can generate recommended settings as well as present the interactive interface. Data specifying the user consent settings is received (304). The user interface can pass the user consent settings specified by the user to the consent management module. The user consent settings are stored on the client device (306).
The consent management module can store the user consent settings in secured storage, e.g., within a sandbox of the device platform of the client device to prevent access from outside of the sandbox. In some implementations, with proper user consent, the consent management module can store the user consent settings in secured storage on the Internet managed by the consent management platform. Such Internet storage may be beneficial for backup/restore purposes, and for consistent user experience across multiple devices, if the user signs in from multiple devices. FIG.4is a flow diagram that illustrates an example process400for transmitting requests according to user consent settings. The process400can be implemented, for example, by a consent management module of a client device. Operations of the process400can also be implemented as instructions stored on non-transitory computer readable media, and execution of the instructions by one or more data processing apparatus can cause the one or more data processing apparatus to perform the operations of the process400. A determination is made that a request will include user data (402). For example, a device platform of a client device can transmit requests on behalf of applications, such as web browsers and native applications. The applications can provide the data of the requests to the device platform and data indicating whether the requests include user data. In another example, the device platform can evaluate the data received from the applications and determine that the request will include user data. In another example, the browser or native applications will query the user consent settings prior to generating and sending a request. For example, when a browser sends a HyperText Transfer Protocol (HTTP) request to a domain, if the domain has a cookie in the browser cookie jar, and the cookie value has sufficient entropy to identify a user (e.g., beyond a simple boolean value), the browser will query the plug-in whether the domain has user consent to collect user data. If and only if the answer is “yes”, the browser will insert the cookie into the HTTP header. In another example, if the domain to which the browser will send the request is known to use passive fingerprinting (i.e., depending on IP address and browser user agent and other signals in the HTTP request) to track users, the browser will route the HTTP request through to the network and withhold the browser user agent, if the plug-in replied that the domain has no user consent to collect user data. A request is made for current user consent settings (404). The device platform can submit a query to a consent management module for the user consent settings. The query can ask for, e.g., request, specific user consent settings, e.g., for a domain to which the request will be sent, or all of the user consent settings. The current user consent settings are received (406). The consent management module can provide, in response to the query, the current user consent settings. These current user consent settings can include the user consent settings specified by the user and/or standard/default user consent settings of the consent management module. For example, the standard/default user consent settings can be settings that block certain user data from being transmitted based on the current location of the user indicating that the user is in a country that has regulations that do not allow that type of user data to be collected.
If the query asks for specific user consent settings, the consent management module may only provide those user consent settings. Request data is generated according to the current user consent settings (408). The device platform can use the user consent settings to identify portions of the user data that can be included in the request, if any, and portions of the user data that cannot be included in the request, if any. For example, the device platform can evaluate the user consent settings to determine if there are settings for the recipient of the request. If so, the device platform can use those user consent settings to identify the portions of user data that can be included in the request. If not, the consent management module can use the general user consent settings to identify the portions of user data that can be included in the request. In a particular example, a user may consent to location data being sent to a particular digital component distribution system, but not web browsing history. In this example, the device platform can determine whether the request includes location data or web browsing history data. If the request includes web browsing history data, the device platform can remove the web browsing history data from the request. The device platform can include, in the request data that will be transmitted from the client device, the location data consented to by the user. The request data can also include user consent settings. The request can include only the user consent settings that apply to the request, e.g., the user consent setting for the recipient of the request and/or any global user consent settings used to allow user data to be included in the request. In another example, the request can include user consent settings for the recipient and any user consent settings that apply to all recipients. As described above, a digital signature of at least the user consent settings can be generated and included in the request so that the user consent settings can be verified later, e.g., in an audit. The request data is transmitted (410). The device platform can transmit the request data to the recipient of the request, e.g., to a digital component distribution system. As described above, the request can include or be in the form of an attestation token. Further to the descriptions above, a user may be provided with controls allowing the user to make an election as to both if and when systems, programs, or features described herein may enable collection of user information (e.g., information about a user's social network, social actions, or activities, profession, a user's preferences, or a user's current location), and if the user is sent personalized content or communications from a server. In addition, certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over what information is collected about the user, how that information is used, the information retention policy, and what information is provided to the user. FIG.5is a block diagram of an example computer system500that can be used to perform operations described above.
The system500includes a processor510, a memory520, a storage device530, and an input/output device540. Each of the components510,520,530, and540can be interconnected, for example, using a system bus550. The processor510is capable of processing instructions for execution within the system500. In some implementations, the processor510is a single-threaded processor. In another implementation, the processor510is a multi-threaded processor. The processor510is capable of processing instructions stored in the memory520or on the storage device530. The memory520stores information within the system500. In one implementation, the memory520is a computer-readable medium. In some implementations, the memory520is a volatile memory unit. In another implementation, the memory520is a non-volatile memory unit. The storage device530is capable of providing mass storage for the system500. In some implementations, the storage device530is a computer-readable medium. In various different implementations, the storage device530can include, for example, a hard disk device, an optical disk device, a storage device that is shared over a network by multiple computing devices (e.g., a cloud storage device), or some other large capacity storage device. The input/output device540provides input/output operations for the system500. In some implementations, the input/output device540can include one or more of a network interface device, e.g., an Ethernet card, a serial communication device, e.g., an RS-232 port, and/or a wireless interface device, e.g., an 802.11 card. In another implementation, the input/output device can include driver devices configured to receive input data and send output data to other input/output devices, e.g., keyboard, printer and display devices560. Other implementations, however, can also be used, such as mobile computing devices, mobile communication devices, set-top box television client devices, etc. Although an example processing system has been described inFIG.5, implementations of the subject matter and the functional operations described in this specification can be implemented in other types of digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage media (or medium) for execution by, or to control the operation of, data processing apparatus. Alternatively, or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them.
Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices). The operations described in this specification can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources. The term “data processing apparatus” encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures. A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. 
Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser. Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks). The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device). 
Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server. While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any inventions or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous. | 54,299 |
11861041 | DETAILED DESCRIPTION OF EMBODIMENTS The terminology used below may be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain specific examples of the present disclosure. Indeed, certain terms may even be emphasized below; however, any terminology intended to be interpreted in any restricted manner will be overtly and specifically defined as such in this Detailed Description section. Both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the features, as claimed. In this disclosure, the term “based on” means “based at least in part on.” The singular forms “a,” “an,” and “the” include plural referents unless the context dictates otherwise. The term “exemplary” is used in the sense of “example” rather than “ideal.” The terms “comprises,” “comprising,” “includes,” “including,” or other variations thereof, are intended to cover a non-exclusive inclusion such that a process, method, or product that comprises a list of elements does not necessarily include only those elements, but may include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Relative terms, such as, “substantially” and “generally,” are used to indicate a possible variation of ±10% of a stated or understood value. The terms “user,” “individual,” “customer,” or the like generally encompass a person using a shared computer in a public space, such as a public library, school, café, bank, or the like. The term “user session” generally encompasses a temporary interaction between a user and a publically shared computer. The user session begins when the user connects to a particular network or system and ends when the user disconnects from the network or system. The user session may temporarily store information related to the user's activities while connected to the network or system. The user session may temporarily store sensitive private information, such as account login information (e.g., username and password) associated with various service providers. In the following description, embodiments will be described with reference to the accompanying drawings. As will be discussed in more detail below, in various embodiments, systems and methods for facilitating the automatic preservation of user sessions on publically shared computers are described. In an exemplary use case, an individual may visit a café that provides computer stations or desktops for public use. The computer stations and/or the café itself may be equipped with one or more cameras. When an individual user approaches or sits down at a computer station, the computer station and/or café may collect image data of the user using a camera. The user's facial and appearance features may be extracted from the collected image data to create a session key. The user is provided with a new fresh user session (e.g., a predetermined, default, or otherwise generic user session) to work with on the computer station. While at the computer station, the user may be monitored in real time. The computer station's camera(s) and/or the café's surveillance camera(s) may be used to collect sets of image data of the user to detect when the user leaves the computer station.
The user's image data may be analyzed to detect a sequence of actions (e.g., standing up, gathering items near the computer station, putting on a jacket) and to predict a type of movement (e.g., moving away from the computer station). When the user moves away from the computer station, the user's user session is automatically encrypted (e.g., without requiring further user input) to prevent others from viewing the user's activity and private information on the computer session. The user session may be saved by comparing the user's specific user session with a fresh (default) user session. Differences between the two sessions may be recorded and stored in index-difference vectors. For example, the index-difference vectors may store information regarding temporary file locations and paths, open applications and website browsers, open browser tabs, and usage logs. The index-difference vectors and session key may be transferred to a cloud server. When other customers of the café come to the same computer station, they are met with a new fresh session instead of the previous user's user session. When the previous user returns to the same computer station, they may request to access their user session. The computer station may collect image data of the user and analyze the collected image data to extract the user's facial and appearance features. These features may be used to retrieve the user's session key and activate the user's user session on the computer station. In another exemplary use case, an individual may visit a public library that provides desktops for public use. The desktops may be equipped with one or more cameras. Image data of a user at a desktop may be collected using the one or more cameras. The one or more cameras may be used to monitor the user in real time to detect when the user moves away from the desktop and returns to the desktop. In yet another exemplary use case, an individual may visit a financial institution that provides kiosks. Each kiosk may be equipped with one or more cameras that may be used to collect image data of each kiosk user. The image data may be used to generate a session key associated with a specific kiosk user. When a kiosk user leaves a kiosk and returns at a later time, image data of the returned kiosk user may be collected. This data may be used to retrieve the session key associated with the returned kiosk user and to subsequently activate the user's user session. FIG.1is a diagram depicting an example of a system environment100according to one or more embodiments of the present disclosure. The system environment100may include one or more public access user computing devices110. Each public access user computing device110may include one or more cameras115to capture image data of one or more users120at a respective public access user computing device110. The system environment100may also include an electronic network125, one or more surveillance cameras130, and a computer system135. The computer system135may have one or more processors configured to perform methods described in this disclosure. The computer system135may include one or more modules, models, or engines. The one or more modules, models, or engines may include a machine learning model140, a session key module145, a vector module150, and/or an identification module155, which may each be software components stored in/by the computer system135. 
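For illustration only, the following sketch shows one way the index-difference idea described above could be realized: a fresh (default) session snapshot is compared with the user's current session, and only the items added during the session (open applications, browser tabs, temporary files, usage log entries) are recorded. The snapshot fields, function names, and sample values are assumptions made for the sketch rather than elements of this disclosure.

```python
# Hypothetical sketch: record the differences between a fresh (default)
# session and the user's current session as "index-difference vectors".
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SessionSnapshot:
    open_applications: List[str] = field(default_factory=list)
    browser_tabs: List[str] = field(default_factory=list)   # open URLs
    temp_files: List[str] = field(default_factory=list)     # file paths
    usage_log: List[str] = field(default_factory=list)


def index_difference_vectors(fresh: SessionSnapshot,
                             current: SessionSnapshot) -> Dict[str, List[str]]:
    """Return, per tracked index, the items present in the current session
    but absent from the fresh default session."""
    diffs: Dict[str, List[str]] = {}
    for index_name in ("open_applications", "browser_tabs",
                       "temp_files", "usage_log"):
        fresh_items = getattr(fresh, index_name)
        added = [item for item in getattr(current, index_name)
                 if item not in fresh_items]
        if added:
            diffs[index_name] = added
    return diffs


if __name__ == "__main__":
    fresh = SessionSnapshot(open_applications=["file_manager"])
    current = SessionSnapshot(
        open_applications=["file_manager", "web_browser"],
        browser_tabs=["https://example.com/webmail"],
        temp_files=["/tmp/session/draft-0001.tmp"],
        usage_log=["10:02 opened web_browser"],
    )
    print(index_difference_vectors(fresh, current))
```

Recording only the differences keeps the saved record small and makes it straightforward to re-apply the changes on top of a fresh session later.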
The computer system135may be configured to utilize one or more modules, models, or engines when performing various methods described in this disclosure. In some examples, the computer system135may have a cloud computing platform with scalable resources for computation and/or data storage, and may run one or more applications on the cloud computing platform to perform various computer-implemented methods described in this disclosure. In some embodiments, some of the one or more modules, models, or engines may be combined to form fewer modules, models, or engines. In some embodiments, some of the one or more modules, models, or engines may be separated into separate, more numerous modules, models, or engines. In some embodiments, some of the one or more modules, models, or engines may be removed while others may be added. The system environment100may also include one or more databases160for collecting and storing data including session keys165and index-difference vectors170generated by computer system135. These components may be connected to (e.g., in communication with) one another via the network125. As used herein, a “machine learning model” is a model configured to receive input, and apply one or more of a weight, bias, classification, or analysis on the input to generate an output. The output may include, for example, a classification of the input, an analysis based on the input, a design, process, prediction, or recommendation associated with the input, or any other suitable type of output. A machine learning model is generally trained using training data, e.g., experiential data and/or samples of input data, which are fed into the model in order to establish, tune, or modify one or more aspects of the model, e.g., the weights, biases, criteria for forming classifications or clusters, or the like. Aspects of a machine learning model may operate on an input linearly, in parallel, via a network (e.g., a neural network), or via any suitable configuration. The execution of the machine learning model may include deployment of one or more machine learning techniques, such as linear regression, logistical regression, random forest, gradient boosted machine (GBM), deep learning, and/or a deep neural network. Supervised and/or unsupervised training may be employed. For example, supervised learning may include providing training data and labels corresponding to the training data. Unsupervised approaches may include clustering, classification or the like. K-means clustering or K-Nearest Neighbors may also be used, which may be supervised or unsupervised. Combinations of K-Nearest Neighbors and an unsupervised cluster technique may also be used. Any suitable type of training may be used, e.g., stochastic, gradient boosted, random seeded, recursive, epoch or batch-based, etc. The machine learning model140may be trained to detect when users120move away from public access user computing devices110. In some embodiments, the machine learning model140may be trained to (i) extract facial and/or appearance features of a specific user120from image data received from public access user computing device110and/or surveillance cameras130, and/or (ii) analyze a sequence of actions (movements) of user120to predict a type of movement. The facial and/or appearance features extracted may include, but are not limited to, a user's120eye shape, eye color, relative dimensions between eyes, nose, and/or mouth, hair color, hair style, articles of clothing, and/or color or pattern of specific articles of clothing. 
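As a concrete, non-limiting illustration of the sequence-classification idea above, the sketch below compares an observed sequence of pose features against labeled reference sequences using dynamic time warping as the distance measure, and takes the label of the nearest reference (a 1-nearest-neighbor vote by default) as the predicted movement type. The pose features, labels, and reference data are invented for the example.

```python
# Minimal kNN-over-DTW sketch for movement prediction (illustrative only).
from typing import List, Tuple

import numpy as np


def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic-time-warping distance between two sequences of feature
    vectors, each of shape [time, features]."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])


def predict_movement(observed: np.ndarray,
                     references: List[Tuple[np.ndarray, str]],
                     k: int = 1) -> str:
    """k-nearest-neighbour vote over DTW distances."""
    ranked = sorted(references, key=lambda ref: dtw_distance(observed, ref[0]))
    labels = [label for _, label in ranked[:k]]
    return max(set(labels), key=labels.count)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seated = rng.normal(0.0, 0.1, size=(20, 4))                      # little motion
    leaving = np.cumsum(rng.normal(0.2, 0.1, size=(20, 4)), axis=0)  # steady drift
    references = [(seated, "seated"), (leaving, "moving away")]
    new_sequence = np.cumsum(rng.normal(0.18, 0.1, size=(18, 4)), axis=0)
    print(predict_movement(new_sequence, references))
```

A production system would more likely rely on the neural-network variants discussed in the following paragraphs, but the nearest-neighbor formulation keeps the idea easy to inspect.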
Types of movements predicted by the machine learning model140may include, but are not limited to, reaching for items, standing up, sitting down, moving away from and/or moving towards the public access user computing device110. In some embodiments, the machine learning model140may be trained to detect when an individual user120gathers their belongings and/or puts on specific articles of clothing, such as, for example, a jacket, sweater, coat, and/or scarf. Additionally or alternatively, the machine learning model140may be trained to detect when user120puts on a hat, backpack, and/or purse. The machine learning model140may be a trained machine learning model, such as, for example, a k-nearest neighbor (kNN) and dynamic time warping (DTW) model, or a trained neural network model. The machine learning model140may be trained on a dataset of sequences of actions collected from the actions of previous users120. A neural network may be software representing the human neural system (e.g., cognitive system). A neural network may include a series of layers termed “neurons” or “nodes.” A neural network may comprise an input layer, to which data is presented; one or more internal layers; and an output layer. The number of neurons in each layer may be related to the complexity of a problem to be solved. Input neurons may receive data being presented and then transmit the data to the first internal layer through connections' weight. A neural network may include, for example, a convolutional neural network (CNN), a deep neural network, or a recurrent neural network (RNN), such as a long short-term memory (LSTM) recurrent neural network. Any suitable type of neural network may be used. In some embodiments, a combination of neural network models may be used to detect when user120moves away from public access user computing device110. For example, a CNN model may be used to extract image features (e.g., facial and appearance features) from the image data, and an LSTM recurrent neural network model may be used to predict a specific type of movement based on the sequence of movements captured in the image data. In some embodiments, an LSTM recurrent neural network model may be used to extract image features. In other embodiments, a combination of a CNN model and an LSTM recurrent neural network model may be used to extract image features from the image data. The session key module145may be configured to generate a session key for a specific user120. The session key is associated with a specific user session, and is based upon the user's120appearance and facial features. The session key may be a hash based upon the extracted facial and/or appearance features of user120from the captured image data. The session key may be generated when user120begins a new session. One or more algorithms may be used to process or convert the extracted image data to create a session key. The generated session keys may be stored in databases160(shown inFIG.1as session keys165). The vector module150may be configured to create one or more index-difference vectors associated with a user session. The index-difference vectors may be stored in databases160(shown inFIG.1as index-difference vectors170). An index-difference vector may be created for each application opened during an active user session. For example, if three website tabs are opened in a web browser, the vector module150may be utilized to create three separate index-difference vectors for each website tab. 
The associated Uniform Resource Locator (URL) links may also be saved in the respective index-difference vectors. The identification module155may be configured to locate and retrieve a session key associated with a specific user session when a requestor, such as user120, returns to public access user computing device110and requests access to their user session. The identification module155may be utilized to analyze image data of the requestor to find, within databases160, a session key associated with the specific facial and/or appearance features of the requestor. In one implementation, the image data of the requestor may be compared with stored image data of users120to determine whether the requestor is one of the users120for whom an active session key has been generated. In this implementation, upon determining that the image data of the requestor matches a specific user120, computer system135may retrieve the session key associated with the specific user120from databases160. The retrieved session key may be used to decrypt the first user's user session and activate the user session at the public access user computing device110. Computer system135may be configured to receive data from other components (e.g., public access user computing devices110, surveillance cameras130or databases160) of the system environment100via network125. Computer system135may further be configured to utilize the received data by inputting the received data into the machine learning model140, the session key module145, vector module150, or the identification module155to produce a result (e.g., session keys, index-difference vectors, etc.). Network125may be any suitable network or combination of networks, and may support any appropriate protocol suitable for communication of data to and from the computer system135and between various other components in the system environment100. Network125may include a public network (e.g., the Internet), a private network (e.g., a network within an organization), or a combination of public and/or private networks. Network125may be configured to provide communication between various components depicted inFIG.1. Network125may comprise one or more networks that connect devices and/or components of environment100to allow communication between the devices and/or components. For example, the network125may be implemented as the Internet, a wireless network, a wired network (e.g., Ethernet), a local area network (LAN), a Wide Area Network (WAN), Bluetooth®, Near Field Communication (NFC), or any other type of network that provides communications between one or more components of environment100. In some embodiments, network125may be implemented using cell and/or pager networks, satellite, licensed radio, or a combination of licensed and unlicensed radio. Network125may be associated with a cloud platform that stores data and information related to methods disclosed herein. Public access user computing device110may operate a client program used to communicate with the computer system135. The public access user computing device110may be used by a user or any individual (e.g., an employee) employed by, or otherwise associated with computer system135, or an entity105. Public access user computing devices110, surveillance cameras130, computer system135, and/or databases160may be part of an entity105, which may be any type of company, organization, institution, enterprise, or the like.
In some examples, entity105may be a financial services provider, a provider of public access user computing devices110, a provider of a platform such as an electronic application accessible over network125, or the like. In such examples, the computer system135may have access to data pertaining to a communication through a private network within the entity105. In some cases, the computer system135may have access to data collected by public access user computing devices110and/or surveillance cameras130. Such a user or individual may access public access user computing device110when visiting a public facility such as, a public library, an airport, public lobbies, public transportation (e.g., subways), and/or a café or other facility associated with entity105. The client application may be used to provide information (e.g., real time image data of user120) to the computer system135. Public access user computing device110may be a smartphone, tablet, a computer (e.g., laptop computer, desktop computer, server), or a kiosk associated with a public facility and/or entity105. Public access user computing device110may be any electronic device capable of capturing a user's120biometric data. Public access user computing device110may optionally be portable and/or handheld. Public access user computing device110may be a network device capable of connecting to a network, such as network125, or other networks such as a local area network (LAN), wide area network (WAN) such as the Internet, a telecommunications network, a data network, or any other type of network. Databases160may store any data associated with any entity105, including, but not limited to, financial services providers, or other entities. An entity105may include one or more databases160to store any information related to a user session associated with the user120. In some embodiments, the entity105may provide the public access user computing device110. In other embodiments, the entity105may provide a platform (e.g., an electronic application on the public access user computing device110) with which a user120or an operator can interact. Such interactions may provide data with respect to the user's120user session, which may be analyzed or used in the methods disclosed herein. FIG.2is a flowchart illustrating a method200of automatically preserving a user session when a user moves away from public access user computing device110, according to one or more embodiments of the present disclosure. The method may be performed by computer system135. Step205may include receiving, via one or more processors, from at least one camera, image data associated with a first user at public access user computing device110. In some embodiments, image data may be frames of a camera or video feed. In other embodiments, image data may include a live stream of data from the camera or video feed. Image data may be received from one or more cameras115associated with the public access user computing device110. Alternatively or additionally, the image data may include video or camera data from surveillance camera130configured to monitor the area of the public access user computing device110. For example, public access user computing devices110may be computer stations provided at an internet café or a coffee shop. In another example, public access user computing devices110may be kiosks and/or computer stations provided at a bank. Image data may be received in near real time or real time. 
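Before turning to the flowchart of FIG.2, the sketch below illustrates, under stated assumptions, how the session key module 145 and identification module 155 described above might cooperate: the session key is a hash of the user's canonical feature bytes, and a later lookup compares a requestor's face embedding against stored embeddings to find the matching key. The in-memory store, the cosine-similarity comparison, and the threshold value are illustrative choices rather than requirements of this disclosure.

```python
# Illustrative sketch: session-key generation as a hash of canonical feature
# bytes, plus retrieval by nearest-match over stored face embeddings.
import hashlib
from typing import List, Optional, Tuple

import numpy as np

# (stored embedding, session key) pairs; a stand-in for the databases of FIG.1.
SESSION_KEYS: List[Tuple[np.ndarray, str]] = []


def register_session(canonical_features: bytes, embedding: np.ndarray) -> str:
    """Derive a session key from the user's canonical feature bytes and
    remember the embedding so the key can be found again later."""
    session_key = hashlib.sha256(canonical_features).hexdigest()
    SESSION_KEYS.append((embedding / np.linalg.norm(embedding), session_key))
    return session_key


def find_session_key(requestor_embedding: np.ndarray,
                     threshold: float = 0.9) -> Optional[str]:
    """Return the stored key whose embedding best matches the requestor's,
    or None if nothing clears the similarity threshold."""
    probe = requestor_embedding / np.linalg.norm(requestor_embedding)
    best_key, best_score = None, threshold
    for stored_embedding, key in SESSION_KEYS:
        score = float(np.dot(stored_embedding, probe))  # cosine similarity
        if score > best_score:
            best_key, best_score = key, score
    return best_key
```

In practice the features captured when the user returns will not be bit-identical to those captured at registration, which is why the lookup compares embeddings by similarity instead of recomputing the hash.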
In some embodiments, image data may be received on a periodic basis (e.g., frames captured over an interval of time). Step210may include analyzing, via the one or more processors, the received image data to extract facial and appearance features associated with the first user at public access user computing device110. Facial and appearance features may be detected and extracted from the received image data using machine learning algorithms. For example, machine learning model140may be employed to detect and extract specific facial and/or appearance features of the first user from the received image data. The extracted facial and appearance features may include face shape, eye color, size and distance between eyes, nose shape, mouth shape, distance between nose and mouth, eyebrow shape, distance between eyebrows and eyes, width of forehead, hair color, hair style, type of clothing, clothing color, and/or type of accessories (e.g., a bag, hat, scarf, earrings, necklace, bracelet, and/or shoes). Step215may include generating, based on the analyzed image data, a session key associated with the user session of the first user. In the exemplary embodiment, the first user activates a fresh new user session when first using the public access user computing device110. The session key is specific to the first user's user session. The session key may be based upon the extracted facial and appearance features of the first user. The session key may be generated any time after extraction of the first user's features from the image data. The generated session key may be stored in databases160. Step220may include detecting, via the one or more processors, based on the received image data, that the first user has moved away from the public access user computing device110by employing machine learning model140. The machine learning model140may be trained using a dataset of actions collected from a plurality of previous users. The computer system135may receive sets of image data of the first user and the public access user computing device110on a continuous basis over a period of time. The computer system135may analyze the sets of image data in near real time to detect a sequence of actions and to predict a type of body movement and/or behavior. For example, by tracking the first user's eye movement, the computer system135may predict that the first user is reading or viewing information on the public access user computing device110. In another example, the computer system135may predict that the first user is moving away from the public access user computing device110by detecting a series of movements, such as standing up, backing away from the camera115, and/or moving outside the range of the camera115. Step225may include automatically encrypting, via the one or more processors, based upon the detection, the user session associated with the first user. The user session is automatically encrypted to prevent a subsequent user from accessing the first user's private information. The encrypted user session may be configured to be subsequently activated by the first user when the first user returns to the public access user computing device110. Encrypting the user session includes comparing the differences (changes) between the first user's current session and a fresh (default) session to create one or more index-difference vectors. The index-difference vectors store elements representing these differences to enable data associated with the first user's current session to be saved. 
This data may include, but is not limited to, temporary file locations/paths, applications being used by the first user, usage logs associated with these applications, and web browsers and tabs that are being used by the first user. The index-difference vectors and any accompanying files may be linked to the first user's session key and stored in databases160for subsequent retrieval. Step230may include initiating, via the one or more processors, a new generic user session on the public access user computing device110for a second user. In the exemplary embodiment, after encrypting the first user's session, the computer system135provides a new fresh (e.g., default, standard, etc.) session to others who come to the same public access user computing device110used by the first user. Thus, instead of being met with the first user's current session, a new second user is able to start their own session on the same computing device110. FIG.3is a flowchart illustrating a method300of activating an encrypted user session when a user, such as a first user, returns to the public access user computing device110, according to one or more embodiments of the present disclosure. The method may be performed by computer system135. Step305may include causing display, via the one or more processors, on the public access user computing device110, of an option to activate an encrypted user session associated with the first user. In various embodiments, when a requesting user (requestor), such as the first user or a new, subsequent user (e.g., a second user), accesses the public access user computing device110, the requestor may be prompted with a first option to activate the encrypted user session of a previous user and a second option to initiate a fresh user session (separate from the encrypted user session). In these embodiments, if the requestor is the first user, the first user would select the first option when returning to the public access user computing device110. In some embodiments, the option to activate the encrypted user session may not be available to the requestor (e.g., may not be displayed on the public access user computing device110) until after the computer system135determines that the requestor is a returning user. Step310may include receiving, via the one or more processors, a selection of the option to activate the encrypted user session from the requestor at the public access user computing device110. In some embodiments, in response to selecting the option to activate the encrypted user session, the requestor may be prompted to stand in front of and/or otherwise face the camera115. Step315may include in response to receiving the selection, receiving, via the one or more processors, from at least one camera (such as camera115and/or surveillance camera130), image data of the requestor at the public access user computing device110. Step320may include detecting, via the one or more processors, based on the image data, that the requestor is the first user. In some embodiments, the computer system135may compare the received image data of the requestor to stored image data of the first user to determine whether the requestor is the first user. Step325may include retrieving, via the one or more processors, based on the detection, the session key associated with the first user. Step330may include decrypting, via the one or more processors, the user session associated with the first user. 
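As one possible, non-limiting realization of the automatic encryption described above for step 225, the sketch below serializes the index-difference vectors and encrypts them with a symmetric cipher (Fernet, from the third-party cryptography package) under a key held by computer system 135, storing the result under the user's session key. The cipher choice, record layout, and sample values are assumptions made for the sketch.

```python
# Illustrative sketch: encrypt the serialized index-difference vectors and
# file them under the user's session key for later retrieval.
import json
from typing import Dict

from cryptography.fernet import Fernet

ENCRYPTED_SESSIONS: Dict[str, bytes] = {}   # session key -> encrypted blob


def encrypt_and_store_session(session_key: str,
                              index_difference_vectors: Dict[str, list],
                              fernet_key: bytes) -> None:
    blob = json.dumps(index_difference_vectors, sort_keys=True).encode("utf-8")
    ENCRYPTED_SESSIONS[session_key] = Fernet(fernet_key).encrypt(blob)


if __name__ == "__main__":
    fernet_key = Fernet.generate_key()   # would be held by the computer system
    diffs = {"browser_tabs": ["https://example.com/webmail"],
             "open_applications": ["web_browser"]}
    encrypt_and_store_session("demo-session-key", diffs, fernet_key)
    print(list(ENCRYPTED_SESSIONS))
```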
Step335may include activating, via the one or more processors, the user session associated with the first user on the public access user computing device110. The computer system135may use the appearance and facial features of the requestor to retrieve the session key from databases160. If the session key is found, the first user's user session may be returned and decrypted. The computer system135may retrieve, from the databases160, the one or more index-difference vectors and associated files linked with the session key. The computer system135may subsequently calculate the first user's session from the retrieved index-difference vectors and activate the user session at the public access user computing device110. In some embodiments, the first user may return to a different public access user computing device110than the one used by the first user prior to stepping away. For example, the public access user computing devices110may be part of a network of computing devices and/or may be provided by a single entity. In an exemplary use case, the public access user computing devices110may be provided in a public library. The first user may step away from their desk and return to find someone else at their desk using the public access user computing device110. The first user may find an available public access user computing device110, at a different desk, and subsequently request that their encrypted user session be activated on that computing device instead. In these embodiments, the first user may select the option to activate an encrypted user session at the available public access user computing device110. The computer system135may receive image data of the first user from the camera115of the available public access user computing device110, and extract facial and/or appearance features of the first user from the received image data. The computer system135may subsequently use the extracted facial and/or appearance features to locate the first user's session key from the databases160. In various embodiments, after the first user is done using the public access user computing device110, the computer system135deletes the first user's data before the first user leaves. In one example, the computer system135may cause display of an “end session” option or the like on the public access user computing device110. When the first user selects this option, the computer system135may remove data associated with the first user's user session from databases160. For example, the computer system135may delete the index-difference vectors, session key, and/or image data associated with the first user. In some embodiments, the computer system135may cause display of a pop-up notification or a message on the public access user computing device110informing the first user that their data has been (or will be) deleted. Unlike personal computers and portable devices, publically shared computers are shared among members of the public, and thus may be more susceptible to security risks, as users of publically shared computers are generally restricted from accessing and changing computer control settings. For example, users of publically shared computers are generally unable to download security protection measures onto publically shared computers to protect sensitive personal data, as these are borrowed devices (as opposed to privately owned devices).
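Putting the retrieval steps of FIG.3 (steps 315 through 335) together, the following sketch decrypts the stored index-difference vectors and re-applies them to a fresh default session to reconstruct the first user's session. It assumes the record shapes used in the earlier sketches; an actual implementation could organize the data differently.

```python
# Illustrative sketch: rebuild the first user's session by decrypting the
# stored index-difference vectors and applying them to a default session.
import json
from typing import Dict, List, Optional

from cryptography.fernet import Fernet


def restore_session(session_key: Optional[str],
                    encrypted_sessions: Dict[str, bytes],
                    fernet_key: bytes,
                    default_session: Dict[str, List[str]]
                    ) -> Optional[Dict[str, List[str]]]:
    """Return the reconstructed session, or None if the requestor could not
    be matched to a stored session key."""
    if session_key is None or session_key not in encrypted_sessions:
        return None
    blob = Fernet(fernet_key).decrypt(encrypted_sessions[session_key])
    diffs = json.loads(blob.decode("utf-8"))
    restored = {name: list(items) for name, items in default_session.items()}
    for index_name, added_items in diffs.items():
        restored.setdefault(index_name, []).extend(added_items)
    return restored
```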
The disclosed embodiments herein provide techniques that improve upon the data security and privacy features of publically shared computers, thereby facilitating the protection of a user's sensitive online work (e.g., sensitive data associated with a user's work or workplace) and/or personal data (e.g., social security information, credit card information, account passwords and logins). Instead of (i) ending (and subsequently requesting) a user session on a publically shared computer each time a user has to leave the publically shared computer, and (ii) remembering to take safety measures, such as, for example, manually closing each browser tab, saving documents, and/or erasing one's web history prior to ending the user session, the disclosed embodiments herein automatically encrypt and securely store a user's user session data for subsequent access by the user when the user moves away from the publically shared computer. This automatic process improves upon at least some known data security measures associated with publically shared computers, and further reduces the number of access attempts associated with a single user. For example, if a user steps away from a publically shared computer multiple times in a short span of time to take a phone call, use the restroom, and/or retrieve an item from their vehicle, the user may end a user session each time they leave and request a new user session each time they return to the publically shared computer, thereby consuming network bandwidth and resources. Thus, by generating a session key associated with the facial and appearance features of a user, and by automatically encrypting the user's user session for subsequent retrieval, the embodiments described herein reduce the consumption of network bandwidth and resources. Additionally, protecting a user's sensitive personal data in the manner described herein prevents bad actors from performing unauthorized transactions and activities with compromised personal data, thereby further saving computational and human resources to resolve such issues in the future. Further aspects of the disclosure are discussed in the additional embodiments below. It should be understood that embodiments in this disclosure are exemplary only, and that other embodiments may include various combinations of features from other embodiments, as well as additional or fewer features. In general, any process discussed in this disclosure that is understood to be computer-implementable, such as the process illustrated inFIGS.2and3, may be performed by one or more processors of a computer system, such as computer system135, as described above. A process or process step performed by one or more processors may also be referred to as an operation. The one or more processors may be configured to perform such processes by having access to instructions (e.g., software or computer-readable code) that, when executed by the one or more processors, cause the one or more processors to perform the processes. The instructions may be stored in a memory of the computer system. A processor may be a central processing unit (CPU), a graphics processing unit (GPU), or any suitable types of processing unit. A computer system, such as computer system135, may include one or more computing devices. If the one or more processors of the computer system135are implemented as a plurality of processors, the plurality of processors may be included in a single computing device or distributed among a plurality of computing devices. 
If a computer system135comprises a plurality of computing devices, the memory of the computer system135may include the respective memory of each computing device of the plurality of computing devices. FIG.4is a simplified functional block diagram of a computer system400that may be configured as a device for executing the methods ofFIGS.2and3, according to exemplary embodiments of the present disclosure.FIG.4is a simplified functional block diagram of a computer that may be configured as the computer system135according to exemplary embodiments of the present disclosure. In various embodiments, any of the systems herein may be an assembly of hardware including, for example, a data communication interface420for packet data communication. The platform also may include a central processing unit (“CPU”)402, in the form of one or more processors, for executing program instructions. The platform may include an internal communication bus408, and a storage unit406(such as ROM, HDD, SSD, etc.) that may store data on a computer readable medium422, although the system400may receive programming and data via network communications. The system400may also have a memory404(such as RAM) storing instructions424for executing techniques presented herein, although the instructions424may be stored temporarily or permanently within other modules of system400(e.g., processor402and/or computer readable medium422). The system400also may include input and output ports412and/or a display410to connect with input and output devices such as keyboards, mice, touchscreens, monitors, displays, etc. The various system functions may be implemented in a distributed fashion on a number of similar platforms, to distribute the processing load. Alternatively, the systems may be implemented by appropriate programming of one computer hardware platform. Program aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of executable code and/or associated data that is carried on or embodied in a type of machine-readable medium. “Storage” type media include any or all of the tangible memory of the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide non-transitory storage at any time for the software programming. All or portions of the software may at times be communicated through the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, from a management server or host computer of the mobile communication network into the computer platform of a server and/or from a server to the mobile device. Thus, another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links. The physical elements that carry such waves, such as wired or wireless links, optical links, or the like, also may be considered as media bearing the software. As used herein, unless restricted to non-transitory, tangible “storage” media, terms such as computer or machine “readable medium” refer to any medium that participates in providing instructions to a processor for execution.
While the presently disclosed methods, devices, and systems are described with exemplary reference to preserving a user session in a public environment, it should be appreciated that the presently disclosed embodiments may be applicable to transmitting data and may be applicable to any environment, such as a desktop or laptop computer, a banking environment, a kiosk environment, etc. Also, the presently disclosed embodiments may be applicable to any type of Internet protocol. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims. In general, any process discussed in this disclosure that is understood to be performable by a computer may be performed by one or more processors. Such processes include, but are not limited to: the processes shown inFIGS.2and3, and the associated language of the specification. The one or more processors may be configured to perform such processes by having access to instructions (computer-readable code) that, when executed by the one or more processors, cause the one or more processors to perform the processes. The one or more processors may be part of a computer system (e.g., one of the computer systems discussed above) that further includes a memory storing the instructions. The instructions also may be stored on a non-transitory computer-readable medium. The non-transitory computer-readable medium may be separate from any processor. Examples of non-transitory computer-readable media include solid-state memories, optical media, and magnetic media. It should be appreciated that in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of this invention. Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form different embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination. Thus, while certain embodiments have been described, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as falling within the scope of the invention. For example, functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. 
Steps may be added or deleted to methods described within the scope of the present invention. The above disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other implementations, which fall within the true spirit and scope of the present disclosure. Thus, to the maximum extent allowed by law, the scope of the present disclosure is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description. While various implementations of the disclosure have been described, it will be apparent to those of ordinary skill in the art that many more implementations are possible within the scope of the disclosure. Accordingly, the disclosure is not to be restricted except in light of the attached claims and their equivalents. | 42,490 |
11861042 | DETAILED DESCRIPTION OF THE INVENTION FIG.1is a diagram of an example computer system10for enhancing the security of user data. The computer system10includes an example central user data server12, a server14that provides additional services to users, an example point of service (POS) computer system16, an example computing device18, an example authentication computer system20, and example individual data units (IDUs)22-1to22-nthat communicate over a network24. The central user data server12and the server14constitute an array of servers that communicate with each other via the network24. The server14may include any number of the same or different servers that communicate with each other via the network24. For example, the server14may include a web server, an application server, an authentication server, an email server, e-discovery servers, or any servers associated with any services provided over the network24. Alternatively, the computer system10may not include the server14. The designation “n” as used in conjunction with the IDUs22-1to22-nis intended to indicate that any number “n” of IDUs may be included in the computer system10. Although the example computer system10includes one POS computer system16and one computing device18, the example computer system10may alternatively include any number of POS computer systems16and computing devices18. For example, there may be millions of computing devices18, typically one or perhaps more for each user whose data is stored in the computer system10. Any networking scheme and any stack of network protocols may be used to support communications over the network24between the central user data server12, the server14, the POS computer system16, the computing device18, the authentication computer system20, the example individual data units (IDUs)22-1to22-n, and any computer systems (not shown) and computing devices (not shown) that communicate over the network24. One example of a networking scheme and stack of protocols is Transport Control Protocol (TCP)/Internet Protocol (IP). Any type of network protocol may be used that facilitates the security of user data as described herein. A person who obtains or purchases goods or services during a network-based transaction, or who obtains or purchases goods or services in a brick and mortar store, is referred to herein as a user. Typically, entities, for example, merchants require that users be successfully authenticated before conducting a network-based transaction with the user. The central user data server12includes subcomponents such as, but not limited to, one or more processors26, a memory28, a bus30, and a communications interface32. General communication between the subcomponents in the central user data server12is provided via the bus30. The processor26executes instructions, or computer programs, stored in the memory28. As used herein, the term processor is not limited to just those integrated circuits referred to in the art as a processor, but broadly refers to a computer, a microcontroller, a microcomputer, a programmable logic controller, an application specific integrated circuit, and any other programmable circuit capable of executing at least a portion of the functions and/or methods described herein. 
The above examples are not intended to limit in any way the definition and/or meaning of the term “processor.” As used herein, the term “computer program” is intended to encompass an executable program that exists permanently or temporarily on any non-transitory computer-readable recordable medium that causes the central user data server12to perform at least a portion of the functions and/or methods described herein. Application programs34, also known as applications, are computer programs stored in the memory28. Application programs34include, but are not limited to, an operating system, an Internet browser application, authentication applications and any special computer program that manages the relationship between application software and any suitable variety of hardware that helps to make-up a computer system or computing environment. The central user data server12manages user data for any type of entity, for example, a merchant. As such, the central user data server12performs functions including, but not limited to, establishing a central user data server token and sharing a key for validating the central user data server token, registering new user accounts, registering new POS computer systems16, accepting new or revised data from registered users, and conducting authentication transactions. New or revised user data may include user contact information, reference authentication data, hash codes for user data, and keys to validate tokens for computing devices18, POS computer systems16, and IDUs22-1to22-n. Additionally, the central user data server12may compute and compare hash codes for new or updated user data, and temporarily accept and use copies of encryption keys to be applied to user data being stored on the IDUs22-1to22-n. Such temporarily accepted copies of encryption keys are securely destroyed immediately after use. The memory28may be any non-transitory computer-readable recording medium used to store data such as, but not limited to, computer programs, decryption keys36for logical addresses of IDUs22-1to22-n, decryption keys38for access codes of IDUs22-1to22-n, decryption keys40for user data records, keys42to validate tokens from POS computer systems16and computing devices18, encryption keys44used to encrypt user data records, and a central user data server token46. The memory28may additionally include, or alternatively be, a disk storage unit (not shown) coupled to and in communication with the central user data server12. As used herein, a logical address includes any addressing scheme that can ultimately be used to resolve the logical address to a specific physical address within the network24. In a TCP/IP scheme this would resolve to an IP address. An example IP address using IPv6 might be: 2001:0db8:85a3:0000:0000:8a2e:0370:7334. A logical address is typically a URL (Uniform Resource Locator) that is resolved into an IP address by a Domain Name Server (DNS). Media Access Control (MAC) addresses that are physically embedded within each device are automatically resolved using the associated protocols of TCP/IP networks. Depending on how IP addresses for IDUs are assigned and maintained, the logical addresses for IDUs described herein may be the respective IP address of each IDU. The decryption keys40correspond to the encryption keys44used to encrypt respective user data records. The encryption44and decryption40keys are different from each other for asymmetric encryption and may be the same for symmetric encryption. 
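As a small illustration of the logical-to-physical resolution described above, the following snippet resolves the host portion of a URL-style logical address to an IP address through ordinary DNS resolution. The example URL is a placeholder rather than an actual IDU address.

```python
# Illustrative sketch: resolve a URL-style logical address to an IP address.
import socket
from urllib.parse import urlparse


def resolve_logical_address(url: str) -> str:
    host = urlparse(url).hostname
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples;
    # the first element of sockaddr is the IP address.
    address_info = socket.getaddrinfo(host, None)
    return address_info[0][4][0]


# Example (requires network access):
#   resolve_logical_address("https://example.com/idu-22-1")
```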
In the computer system10, all encryption-decryption pairs of keys are asymmetric cryptographic keys. However, symmetric keys may alternatively be used should the computer system10use asymmetric key pairs to securely transmit symmetric keys. The encryption44and decryption40keys are different for each user data record. Because entities like merchants may collect data for millions of customers, millions of decryption keys36,38, and40may be stored in the memory28. Although the central user data server12stores the decryption keys36,38, and40, the central user data server12typically does not store information regarding the physical location or the logical address of the user data records corresponding to any of the decryption keys36,38, and40. As a result, if a cyber-criminal successfully attacked the central user data server12, the cyber-criminal would be able to steal the decryption keys36,38, and40but not information regarding the physical or logical address of the user data record. The physical or logical address as well as the access code are necessary to access the user data record corresponding to decryption keys36,38, and40. Consequently, the decryption keys36,38, and40by themselves are useless to cyber-criminals. Non-transitory computer-readable recording media may be any tangible computer-based device implemented in any method or technology for short-term and long-term storage of information or data. Moreover, the non-transitory computer-readable recording media may be implemented using any appropriate combination of alterable, volatile or non-volatile memory or non-alterable, or fixed, memory. The alterable memory, whether volatile or non-volatile, can be implemented using any one or more of static or dynamic RAM (Random Access Memory), a floppy disc and disc drive, a writeable or re-writeable optical disc and disc drive, a hard drive, flash memory or the like. Similarly, the non-alterable or fixed memory can be implemented using any one or more of ROM (Read-Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), an optical ROM disc, such as a CD-ROM or DVD-ROM disc, and disc drive or the like. Furthermore, the non-transitory computer-readable recording media may be implemented as smart cards, SIMS, any type of physical and/or virtual storage, or any other digital source such as a network or the Internet from which a central user data server can read computer programs, applications or executable instructions. The communications interface32provides the central user data server12with two-way data communications. Moreover, the communications interface32may enable the central user data server12to conduct wireless communications such as cellular telephone calls or to wirelessly access the Internet over the network24. By way of example, the communications interface32may be a digital subscriber line (DSL) card or modem, an integrated services digital network (ISDN) card, a cable modem, or a telephone modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communications interface32may be a local area network (LAN) card (e.g., for Ethernet™ or an Asynchronous Transfer Model (ATM) network) to provide a data communication connection to a compatible LAN. 
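The split described above (ciphertexts of the logical address and access code stored on the computing device 18, the matching decryption keys 36 and 38 held only by the central user data server 12) can be illustrated with the following sketch, which uses RSA-OAEP from the third-party cryptography package as a stand-in for whatever asymmetric scheme an implementation would choose. Key sizes, record shapes, and the sample address and access code are assumptions made for the sketch.

```python
# Illustrative sketch: the device stores only ciphertexts; the server holds
# the private keys. Neither side's data is useful on its own.
from typing import Dict, Tuple

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Server side: one key pair per protected item; private halves never leave.
address_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
access_code_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Device side: store only ciphertexts produced with the public halves.
device_record: Dict[str, bytes] = {
    "encrypted_logical_address": address_private_key.public_key().encrypt(
        b"https://idu-22-1.example.net", OAEP),
    "encrypted_access_code": access_code_private_key.public_key().encrypt(
        b"access-code-placeholder", OAEP),
}


def server_decrypt(record: Dict[str, bytes]) -> Tuple[bytes, bytes]:
    """Only the server, holding the private keys, can recover the logical
    address and access code needed to reach the user's data record."""
    address = address_private_key.decrypt(record["encrypted_logical_address"], OAEP)
    code = access_code_private_key.decrypt(record["encrypted_access_code"], OAEP)
    return address, code


if __name__ == "__main__":
    print(server_decrypt(device_record))
```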
As yet another example, the communications interface32may be a wire or a cable connecting the central user data server12with a LAN, or with accessories such as, but not limited to, keyboards or biometric capture devices used to support login by system administrators. Further, the communications interface32may include peripheral interface devices, such as a Universal Serial Bus (USB) interface, a PCMCIA (Personal Computer Memory Card International Association) interface, and the like. Thus, it should be understood the communications interface32may enable the central user data server12to conduct any type of wireless or wired communications such as, but not limited to, accessing the Internet. The communications interface32also allows the exchange of information across the network24. The exchange of information may involve the transmission of radio frequency (RF) signals through an antenna (not shown). Moreover, the exchange of information may be between the central user data server12and any other POS computer system16, computing devices18, and IDUs capable of communicating over the network24. The network24may be a 5G communications network. Alternatively, the network24may be any wireless network including, but not limited to, 4G, 3G, Wi-Fi, Global System for Mobile (GSM), Enhanced Data for GSM Evolution (EDGE), and any combination of a LAN, a wide area network (WAN) and the Internet. The network24may include Radio Frequency Identification (RFID) subcomponents or systems for receiving information from other devices. Alternatively, or additionally, the network24may include subcomponents with Bluetooth, Near Field Communication (NFC), infrared, or other similar capabilities. The network24may also be any type of wired network or a combination of wired and wireless networks. The POS computer system16may store data such as, but not limited to, a logical address48for the central user data server12, a POS system token50for the POS computer system16, keys52for validating tokens from other POS computer systems (not shown) and other computing devices (not shown), and transaction numbers and audit data54. One example of a POS computer system16is a service provider computer system that functions as a concentrator and a firewall that users communicate with to remotely obtain goods or services via the Internet. Other examples include, but are not limited to, computerized registers typically used to purchase goods inside a brick and mortar store. The POS computer system16performs functions such as, but not limited to, establishing the POS computer system token50, and sharing with other POS computer systems (not shown) and other computing devices (not shown) the key used to validate the token50. The POS computer system16may also register other central user data servers (not shown), register the authentication computer system20, conduct authentication transactions, create user data stored in a user data record, update user data stored in a user data record, and retrieve user data from a user data record. Additionally, the POS computer system16may include policies for determining levels of risk acceptable to the service provider for conducting different types of network-based transactions. Alternatively, the policies for determining acceptable levels of risk may be included in other computer systems (not shown). Moreover, the POS computer system16may access any other data or services provided by any other POS computer system (not shown). 
POS computer systems16that are computerized registers are typically found in a brick and mortar store and typically accept payments or otherwise authenticate users. Such POS computer systems may perform other functions including, but not limited to, creating user data stored in a user data record, updating user data stored in a user data record, and retrieving user data from a user data record. One example of the computing device18is a smart phone. Other examples include, but are not limited to, tablet computers, phablet computers, laptop computers, and desktop personal computers. The computing device18is typically associated with a user or with any type of entity including, but not limited to, commercial and non-commercial entities. The computing device18associated with each respective user stores an encrypted logical address56of the IDU associated with the user and an encrypted access code58required to access the data record of the respective user. The logical address56is different for each IDU as is the access code58. The computing device18may also store a computing device token60and keys62to validate tokens from the central user data server12, the server14, the POS system16, the authentication system20, and any other computer systems (not shown) and any other computing devices (not shown) operable to communicate over the network24. The logical address56and access code58are encrypted before being stored in the computing device18. Some users might be associated with more than one computing device18. For example, some users may be associated with a smart phone, a tablet computer, and a laptop computer. When a user is associated with more than one computing device18, the encrypted logical address56and the encrypted access code58may be stored on each computing device18associated with the user. The central user data server12receives the encrypted logical address56and the encrypted access code58from the user computing device18, and decrypts the encrypted logical address56and the encrypted access code58with the decryption keys36and38, respectively. Should the computing device18associated with a user be stolen or successfully compromised by a cyber-criminal, the encrypted logical address56and the encrypted access code58would be useless unless the central user data server12was also hacked and the corresponding decryption keys obtained. The computing device18performs functions including, but not limited to, establishing a computing device token and sharing the key that validates the computing device token. Moreover, the computing device18may validate hash codes, and accept and store encrypted logical addresses and encrypted access codes for IDUs. Additionally, the computing device18may collect user data, and securely send the user data to the central user data server12, which arranges for the user data to be stored on an IDU22-1. The collected user data may include user authentication data captured by the user computing device18. The computing device18may also include policies for determining levels of risk acceptable to a user for conducting different types of network-based transactions. The authentication computer system20may store authentication policies, user liveness detection applications, authentication applications, and reference authentication data records. Authentication policies facilitate determining authentication data to be obtained from users during authentication transactions.
Some policies may consider the maximum level of acceptable risk for a desired network-based transaction acceptable to the user and the service provider when determining the authentication data to be obtained from a user during an authentication transaction. User liveness detection applications enable determining whether or not obtained biometric authentication data is of a live person. Authentication applications enable conducting user verification and identification transactions with any type of authentication data. The process of verifying the identity of a user is referred to as a verification transaction. Typically, during a verification transaction authentication data is captured from a user. The captured authentication data is compared against corresponding reference authentication data previously collected and stored on the authentication server and typically a matching score is calculated for the comparison. When the matching score meets or exceeds a threshold score, the captured and reference data are judged a match and the identity of the user is verified as true. Authentication data is the identifying data desired to be used during a verification or identification transaction. Authentication data as described herein includes, but is not limited to, data of a biometric modality, combinations of data for different biometric modalities, pass-phrases, personal identification numbers (PIN), physical tokens, global positioning system coordinates (GPS), and combinations thereof. Example biometric modalities include, but are not limited to, face, iris, finger, palm, and voice. Data for such biometric modalities is typically captured as an image or an audio file that may be further processed into templates for facilitating rapid comparisons with live authentication data captured during a verification transaction. User authentication data records include reference authentication data which is used in authentication transactions. Reference authentication data is the data registered for each user to establish his or her identity using different techniques. When authentication is based on data of a biometric modality, the reference authentication data may be as captured from a user or may be a template derived from the captured data. The authentication computer system20may store reference authentication data for different users on different storage devices (not shown) which may be located in different geographical locations. The authentication computer system20may include servers to facilitate performing complex biometric or other comparisons between captured and reference authentication data. A merchant may conduct out-of-store network-based transactions by having the central user data server12communicate directly with the computing device18during the transactions. Alternatively, or additionally, merchants may include points of service16between the computing devices18and the central user data server12to minimize the number of direct connections to the central user data server12. Such designs may include thousands of POS computer systems16. The POS computer system16, computing device18, and authentication computer system20typically include subcomponents similar to the subcomponents included in the central user data server12. 
That is, the central user data server12, the POS computer system16, the computing device18, and the authentication computer system20are typically general purpose computers capable of performing any of thousands of different functions when properly configured and programmed. The POS computer system16and the computing device18may also include a user interface (not shown) and a display (not shown). The display (not shown) may include a visual display or monitor that displays information to a user. For example, the display may be a Liquid Crystal Display (LCD), active matrix display, plasma display, or cathode ray tube (CRT). The user interface (not shown) may include a keypad, a keyboard, a mouse, a light source, a microphone, cameras, and/or speakers. Moreover, the interface and the display may be integrated into a touch screen display. Accordingly, the display may also be used to show a graphical user interface, which can display various data and provide “forms” that include fields that allow for the entry of information by the user. Touching the screen at locations corresponding to the display of a graphical user interface allows the user to interact with the POS computer system16or the computing device18to enter data, change settings, control functions, etc. Consequently, when the touch screen is touched, the interface communicates this change to the processor in the POS computer system16or user computing device18, and settings can be changed or information can be captured and stored in the memory. The subcomponents of the central user data server12, the server14, POS computer system16, computing device18, and authentication computer system20involve complex hardware, and may include such hardware as large scale integrated circuit chips. Such complex hardware is difficult to design, program, and configure without flaws. As a result, the central user data server12, the server14, the POS computer system16, the computing device18, and the authentication computer system20typically include design or configuration flaws. The central user data server12, the server14, POS computer system16, computing device18, and authentication computer system20also run large numbers of sophisticated and complex software applications which typically include bugs, or flaws. Such sophisticated software programs typically have hundreds or thousands of known and documented flaws, and an unknown number of unknown flaws. The general purpose nature of computers like the central user data server12, POS computer system16, computing device18, and authentication computer system20enables them to be economically manufactured based on the production of high volumes of identical computers, each of which is custom configured by an administrator to perform desired functions. However, such economical manufacturing introduces additional complexity and the potential for human error because a person may make an error while customizing the configuration or administering day-to-day operations of the computer. Thus, the human factor involved in customizing the configuration and programming for these computers adds to security vulnerabilities that can be exploited by cyber-criminals. As a result, the overall complexity of general purpose devices introduces additional flaws that may be exploited by cyber-criminals during cyber-attacks. Any device that can serve unlimited multiple purposes is inherently more complex than a device that serves a limited purpose only. 
In view of the above, it can be seen that the central user data server12, the server14, POS computer system16, computing device18, and authentication computer system20are vulnerable to cyber-attacks due to the complexity of their hardware and internal firmware/software, the potential for human errors, and potentially inconsistent administrative management during a lifetime of operational use. It is the flaws introduced by at least these factors that are typically exploited by cyber-criminals during cyber-attacks. FIG.2is a block diagram of an example IDU22-1. Because each IDU22-1to22-nis the same, IDU22-1only is described. Subcomponents of the IDU22-1include a processor64, a memory66, a bus68, and a communications interface70. Communication between the subcomponents is provided via the bus68. The processor64executes instructions from applications78and other computer programs stored in the memory66. The memory66may be any non-transitory computer-readable recording medium used to store data including, but not limited to, applications78, a user data record72, an IDU token74, and keys76to validate tokens from POS computer systems16and computing devices18. The encrypted user data record72may be decrypted by the decryption key40. Additionally, the IDU22-1may include security features that are common on Hardware Security Modules such as tamper resistance and detection. The communications interface70performs functions similar to those described herein for the communications interface32. However, the communications interface70does not require the broad array of different communications options required of a general purpose computer. In addition, the application programs78of the IDU22-1are far less complex than the central data server application programs34described herein because the IDU22-1is specifically designed to perform a limited number of functions in a secure manner. A limited number of functions may be as many as several dozen functions, but it is not thousands, and the functions are not unlimited in scope—the functions are focused solely on operations that facilitate securing user data. These simpler applications, in turn, may be run on a smaller and less powerful processor64communicating across a simpler and less general purpose bus68. As a result, the subcomponents of the IDU22-1are easier to design and are thus less likely to include design flaws versus general purpose computers. Likewise, the software subcomponents underlying the applications78of the IDU22-1are also orders of magnitude less complex than those in the central user data server12, the server14, the POS computer system16, or the computing device18. For instance, a general purpose operating system such as WINDOWS® or LINUX® may not be required on an IDU. The IDU22-1is thus not a general purpose computer. The IDU22-1is physically smaller, simpler, less expensive, and more secure than a general purpose computer because the IDU22-1is dedicated to the single purpose of securing user data. As the number of functions that can be performed by the IDU22-1increases, the number of potential design flaws increases as does the software complexity. As a result, the IDU22-1becomes more vulnerable to successful cyber-attacks. As the number of functions that can be performed by the IDU22-1decreases, the number of potential design flaws decreases as does the software complexity. As a result, the IDU22-1is less vulnerable to successful cyber-attacks and thus facilitates increasing the security of the user data record72stored therein. 
In view of the above, it can be seen that security is facilitated to be maximized when the IDU22-1is specifically designed and manufactured to facilitate a single function like securely storing the data of one user. The IDU22-1may be any device capable of running simple application programs78that enable performing basic functions only. The IDU22-1may alternatively perform more functions than the basic functions described herein. However, there is a tradeoff in that additional functions imply increased complexity which in turn increases the susceptibility to design or implementation flaws that can be exploited by cyber-criminals. In addition, as the complexity increases so does the possibility of errors resulting from the human factor. The IDU22-1may be no larger in physical size or logical complexity than an electronic car key fob. Example basic functions include, but are not limited to, receiving and transmitting the data in the user data record72, and storing and retrieving the data in the user data record72. The application programs78are very small and simple, and thus are easily verifiable and auditable and typically include few if any flaws. Consequently, the IDU22-1has fewer software flaws that can be exploited by cyber-criminals conducting cyber-attacks. Because the IDU22-1has fewer design and software flaws, the IDU22-1is more secure against cyber-attacks than general purpose devices like mobile phones or small laptop computers. Other basic functions that may be performed by the IDU22-1include, but are not limited to, establishing the IDU token74, sharing a key used to validate the IDU token, and receiving and storing setup information from a user via a computing device18or buttons and displays on the IDU itself. Such setup information may include information for connecting the IDU to the network24, information required to establish the physical and logical address of the IDU22-1, information to establish keys to validate tokens from servers or devices that may communicate directly with the IDU22-1, and information for establishing restrictions on which sources of network messages may be processed. The IDU22-1may also receive via the network24the encryption key44and a user data record, encrypt the user data record with the encryption key44, and store the encrypted user data record as the user data record72in the memory66. Additionally, the IDU22-1may receive the encryption key44and store the key44in the memory66. Upon receiving a user data record and a request from the central user data server12, the IDU22-1may encrypt the received user data record and store the encrypted user data record as the user data record72in the memory66. The IDU22-1may also receive the decryption key40and decrypt the user data record72stored therein using the decryption key40. Alternatively, or additionally, the IDU22-1may encrypt the logical address56and access code58, compute a hash code for the user data record72, and send the encrypted logical address56, the encrypted access code58, and the hash code to the computing device18. The IDU22-1may also send to the central user data server12decryption keys for the logical address56and access code58of the IDU22-1as well as the decryption key40for the user data record72, and a hash code for the user data record72. The IDU22-1may also obfuscate physical addresses or use dark net technologies to improve anonymity of the IDU and protect the IDU against sniffing and traffic analysis threats.
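The limited function set just listed, accepting the encryption key44, encrypting and storing the user data record72, returning it on request, and producing a hash code that supports tamper detection, can be expressed in a few lines. The Python sketch below is an illustrative assumption rather than the disclosed firmware; the class and method names are invented, the cryptography package is assumed, and real IDU hardware would add tamper resistance, tokens, and network handling.

# Illustrative sketch of the limited IDU function set; names are assumptions.
import hashlib
from cryptography.fernet import Fernet

class IndividualDataUnit:
    def __init__(self, access_code_58: str) -> None:
        self._access_code_58 = access_code_58
        self._encryption_key_44 = None
        self._user_data_record_72 = None  # held only in encrypted form

    def receive_encryption_key(self, key_44: bytes) -> None:
        self._encryption_key_44 = key_44

    def store_user_data_record(self, plaintext_record: bytes) -> str:
        """Encrypt and store the record; return a hash code for tamper detection."""
        self._user_data_record_72 = Fernet(self._encryption_key_44).encrypt(plaintext_record)
        return hashlib.sha256(self._user_data_record_72).hexdigest()

    def retrieve_user_data_record(self, presented_access_code: str) -> bytes:
        """Return the encrypted record only when the presented access code matches."""
        if presented_access_code != self._access_code_58:
            raise PermissionError("access code mismatch")
        return self._user_data_record_72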
Additionally, the IDU22-1may back-up the user data record72to any computing device or computer system included in the computer system10. The IDU22-1may also detect networks, accept inputs to complete connection to a network, automatically restore connection to the network after the connection is temporarily disrupted, display the status of a network connection, and restrict network access to the IDU22-1to only specified computing devices and computer systems. The inputs may be entered using buttons or displays on the IDU22-1. The IDU22-1may also include basic functions to change the physical or logical addresses of the IDU, and change the access code58required to access the user data record72. A most likely additional function of the IDU22-1is storing multiple data records72for one user who interacts with multiple service providers. When multiple data records72are stored in the IDU22-1, the data from one service provider is not disclosed to a different service provider. Moreover, an access code for each of the multiple data records72is stored in the IDU22-1. Each different access code corresponds to a different service provider with whom the user interacts. The IDU22-1responds to an incoming communication only when an access code in the communication matches one of the multiple access codes stored therein. Alternatively, the IDU may use a completely separate access code for authorizing access to the IDU that is different from the access codes used to authorize access to specific user data records. Adding such simple functions to the IDU does not remove the characteristic of being orders of magnitude less complex than a general purpose computer and thus far more secure. The user data record72may include any information about a user as well as information collected by a service provider about the user. For example, data collected by airlines for a passenger may include the name, date of birth, passport number, billing address, credit card information, and various flight preferences such as aisle or window seating of the passenger. Thus, the user data record72of an airline passenger, or user, typically includes such information. Additional data that may be stored in a user data record72includes, but is not limited to, reference authentication data, and the user's gender, age, citizenship and marital status. Although the user data record72is stored in the IDU22-1, the central user data server12orchestrates access to the user data record72. Each IDU22-1to22-nincluded in the computer data system10is associated with a respective user and stores the data for that respective user in a user data record72. Storing the data for each user in a respective user data record72decentralizes user data. This decentralization combined with secure distributions of data and decryption keys results in the user data records72constituting a less attractive target to cyber-criminals than a centralized data base because a limited number of successful cyber-attacks will only compromise the data of one or a few users. That is, decentralization of the user data records72enhances security for user data by both increasing the cost and decreasing the benefit for cyber-criminals to conduct attacks. Compromise as used herein is intended to mean gaining access to data on a computing device or computer system that was intended to be secret. 
For example, in order to compromise all of the user data managed by the central user data server12, a cyber-criminal would need to successfully gain access to all the encrypted logical addresses56, all the encrypted access codes58, as well as the central user data server12itself. Additionally, if a cyber-criminal compromised the IDU or computing device18of a user, the cyber-criminal would not have sufficient information to access and decrypt the data record72of that user. The IDU for each respective user may be located at and operated from a geographical location associated with the respective user. Alternatively, the IDU for each respective user may be located at and operated from geographic locations not associated with the respective user. Such alternative locations may include co-location with the central user data server12, or locations not co-located with the central user data server12. Hence, the IDUs may be geographically distributed and may thus alternatively be referred to as distributed data units. Because the IDUs may be geographically distributed, the user data records72may also be geographically distributed. As the IDU22-1is simple, the IDU22-1is very inexpensive relative to general purpose computing devices such as laptops or mobile phones. This low cost makes the massive distribution of user data via IDUs practical. Such massive distribution and effective security would not be practical, for instance, using a second mobile phone for each user. This is because mobile phones would be orders of magnitude more costly than the IDUs. In addition, mobile phones are general purpose devices and as such are far more susceptible to successful cyber-attacks. A user is typically responsible for managing his or her IDU. Some users wish to retain personal control over their IDU to prevent mismanagement of the user data record72stored therein by a third party, and to avoid human error by a third party that may leave the user data record72stored therein more vulnerable to successful cyber-attacks. These users believe that the user data record72stored in his or her IDU is more secure when managed by his or her self. As part of managing his or her user data record72, some users may purchase several IDUs to more thoroughly distribute his or her user data and decryption keys to further enhance security against cyber-attacks. Thus, it should be understood that using multiple IDUs may facilitate increasing the security of user data records72to arbitrarily high levels. Although the access code58facilitates enhancing the security of user data records72, some users who manage the user data record72in his or her IDU may decide that an access code is not necessary to meet his or her desired level of data security. Thus, the access code58may alternatively not be stored in the computing devices18of such users. As a result, for such users the access code58is not factored into the security of his or her user data record72. It should be understood that by not using the access code58, security of the user data record72may be reduced. IDU manufacturers may omit the implementation of an access code, or for IDUs that include the option of using an access code users may indicate during setup of the IDU that the access code58is not to be used, for example, by activating a switch on the IDU22-1or by interacting with an application on the computing device18to configure the IDU. Users who decide to enhance security for his or her IDU may indicate in the same manner that the access code58is to be used.
The access code is optional because the IDU22-1provides orders of magnitude better security than current systems, even without an access code. Instead of geographically distributing the IDUs and thus the user data records72, the IDUs may alternatively be co-located with the central user data server12. Specifically, the IDUs of multiple users may be physically co-located within one or more physical devices accessed by the central user data server12. For example using Large Scale Integrated Circuits (LSICs) or Application Specific Integrated Circuits (ASICs), a single circuit board could host the equivalent of hundreds or thousands of individual IDUs. Such a large number of IDUs, whether hosted on an integrated circuit board or not, is referred to herein as a “hosted IDU platform” and all the included IDUs are managed by a third party instead of by the individual users. However, it should be understood that IDUs implemented in this manner each retain a unique logical address and a unique access code and would need to be individually compromised by a cyber-criminal attempting to access all the user data in the physical device. IDUs hosted on the IDU platform may also retain unique physical addresses within each circuit board. The IDUs may alternatively, or additionally, be similarly included on a hosted IDU platform in any other computer system (not shown) or any other computing device (not shown) capable of communicating with the central user data server over the network24. It should be understood that each IDU in a hosted IDU platform is considered a separate component for purposes of describing or calculating the security protections afforded to the user data record72of each IDU included in a hosted platform. The circuit board may have a single physical address that encompasses all of the IDUs on the circuit board, in which case there is a unique logical address or a unique access code or both for accessing the IDU of each user within the circuit board. Alternatively, each IDU on each circuit board may have a different physical address, a unique and secret logical address, and a secret access code that adds a layer of security. Each individual data unit has functions including, but not limited to, functions for power supply, external connections, tamper resistance, tamper detection, encryption, decryption, and communications with other computing devices. When the IDUs are physically co-located within one or more physical devices, it should be appreciated that some or all of these functions may be shared between the IDUs. When the circuit board assigns unique IP addresses to each IDU on the circuit board, it is possible to build dark net technologies into the circuit boards that could mask the IP addresses of the individual IDUs, thus adding yet another layer of security. The most common dark net technique for masking IP addresses includes a processor on the circuit board that acts as an intermediate web node that is the only real IP address that can be observed while monitoring network traffic. This node would assign changing virtual IP addresses to the IDUs and use these virtual IP addresses when communicating with external computers. When the non-IDU computers use the virtual IP address to respond to the IDU, this node translates the virtual IP address into the real physical IP address of the IDU being addressed. Such dark net techniques are not restricted to hosted IDU platforms and could also be applied to IDUs that are not part of a hosted platform. 
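The intermediate-node technique described above amounts to a translation table between changing virtual IP addresses and the IDUs' real physical addresses, with only the node's own address visible to traffic analysis. The following Python sketch is one hedged way to express that bookkeeping; the 10.x.x.x virtual address scheme, the class name, and the method names are assumptions, not elements of the disclosure.

# Sketch of a masking node that hands out changing virtual IP addresses for
# hosted IDUs and translates replies back to real physical addresses.
import random

class MaskingNode:
    def __init__(self, real_address_by_idu: dict) -> None:
        self._real_address_by_idu = real_address_by_idu  # IDU id -> real physical IP
        self._idu_by_virtual_ip = {}                     # virtual IP -> IDU id

    def assign_virtual_ip(self, idu_id: str) -> str:
        """Issue a fresh virtual IP; only this value is used with external computers."""
        virtual_ip = "10.{}.{}.{}".format(*(random.randint(1, 254) for _ in range(3)))
        self._idu_by_virtual_ip[virtual_ip] = idu_id
        return virtual_ip

    def real_address_for_reply(self, virtual_ip: str) -> str:
        """Translate the virtual IP on an incoming reply to the IDU's real address."""
        return self._real_address_by_idu[self._idu_by_virtual_ip[virtual_ip]]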
A hosted IDU platform utilizing multiple IDUs on a single circuit board facilitates reducing manufacturing costs and also facilitates third party management of the data records72for any user not interested in managing his or her user data record72. If dark net technologies are included on the circuit board, the hosted IDU platform facilitates further enhancing the security of user data records72stored on these IDUs. Because some users prefer to personally manage the data record72in his or her IDU and others prefer third party management, the computer system10includes both personally managed IDUs22-1to22-nas well as hosted IDU platforms (not shown) managed by third parties. Alternatively, the computer system10may include personally managed IDUs22-1to22-nonly, or hosted IDU platforms only (not shown). In current state-of-the-art computer systems with distributed and encrypted user data records, the associated addresses and decryption keys for the user data records are typically stored on the central user data server only. Storing the user data records72, encrypted logical addresses56of those records, encrypted access codes58, and associated decryption keys36,38, and40on different computer systems and different computing devices that are not all known to the central data server enhances the difficulty of compromising any user data record72in a single cyber-attack because at least two physically separate computer systems must be compromised instead of one and successfully compromising two computer systems yields only a single user data record. Compromising the data records72of all users, or a large number of users, for example a million users, requires compromising N plus one separate computers where N is the number of users. That is, the resources required to obtain user data increases proportionately to the amount of data to be compromised. In the example computer system10, a cyber-criminal needs to successfully compromise one of the following pairs of computers and/or computing devices to compromise the user data record72for a single user: 1) central user data server12and IDU22-1; 2) central user data server12and user computing device18; or, 3) user computing device18and IDU22-1. After the central user data server12is compromised in a successful cyber-attack, the central user data server12need not be attacked again because the information stored therein was already obtained. However, to compromise a million users by attacking the first and second pairs, cyber-criminals need to replicate successful attacks against either a million computing devices18or a million IDUs. To compromise a million users by attacking the third pair listed above, cyber-criminals need to replicate successful attacks against a million computing devices18as well as a million IDUs. As a result, the attractiveness of attacking the computer system10is facilitated to be reduced and the security of the user data records72is facilitated to be enhanced because a single or small number of user data records72is typically of little value to cyber-criminals. IDUs enable additional locations for storing data and keys which facilitates increasing the number of successful cyber-attacks required to compromise any of the user data in the computer system10compared to known state-of-the-art security methods. Moreover, the hacking effort required by cyber-criminals to compromise a large number of IDUs and computing devices18increases in direct proportion to the number of users being attacked. 
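The pair-wise reasoning above reduces to simple counting, restated below as a short Python check: for the first two pairs the central user data server only needs to be breached once, giving N plus one components for N users, while the third pair requires breaching both a computing device and an IDU for every user. The pair list and function names simply mirror the text and are illustrative.

# The three pairs that, if both members are compromised, expose one user's
# data record 72, and the totals when the attack is replicated for N users.
COMPROMISING_PAIRS = [
    ("central user data server 12", "IDU 22-1"),
    ("central user data server 12", "user computing device 18"),
    ("user computing device 18", "IDU 22-1"),
]

def components_via_server_pairs(number_of_users: int) -> int:
    # Pairs 1 and 2: the central server is breached once, then one per-user component per user.
    return number_of_users + 1

def components_via_device_and_idu_pair(number_of_users: int) -> int:
    # Pair 3: both a computing device and an IDU must be breached for every user.
    return 2 * number_of_users

assert components_via_server_pairs(1_000_000) == 1_000_001
assert components_via_device_and_idu_pair(1_000_000) == 2_000_000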
FIG.3is a diagram of an example computer system80that expands on the computer system10shown inFIG.1by including two IDUs22-1and22-2for one user, and showing an example distribution of encrypted logical addresses56, encrypted access codes58, and decryption keys36,38, and40E that increases the security of user data for that user. The encrypted decryption key40E for user data is the same as the decryption key40for user data shown inFIG.1; however, the decryption key40E is encrypted. Additionally, the encrypted logical addresses56, encrypted access codes58, and decryption keys36,38,40E are distributed throughout the computer system80in a manner that enhances the difficulty of compromising the user data record72because at least three physically separate components of the computer system80need to be compromised instead of two. More specifically, the encrypted logical address56and encrypted access code58for IDU22-2are stored on the POS computer system16, the decryption keys36,38for IDU22-1are stored on the IDU22-2, the encrypted user data record72is stored on the IDU22-1, and the decryption key40E is stored in the central user data server12. Additionally, the encrypted logical address56for IDU22-1, the encrypted access code58for IDU22-1, the decryption keys36,38for the IDU22-2, and a decryption key82for the decryption key40E are stored in the user computing device18. The central user data server12may also store additional encrypted decryption keys40E in the event that each IDU for that user is encrypted with a different key. That is, different users encrypt user data using different encryption keys, but a single user may encrypt his or her user data stored on multiple IDUs using either the same encryption key or different encryption keys. Associating users with two IDUs facilitates distributing the encrypted logical addresses56, the encrypted access codes58, and the decryption keys36,38,40E in a manner that requires compromising at least three components of the computer system80to gain access to the data record72of a single user. Increasing the number of IDUs associated with each user facilitates causing cyber-criminals to compromise M+1 components of the computer system80to access the data record72of a single user, where M is the total number of IDUs used by each user to distribute encrypted logical addresses56, encrypted access codes58, and decryption keys40, or fractional parts thereof. That is, for users associated with two IDUs, M=2. If all users of the computer system80use two IDUs in the described manner, then a cyber-attack would have to compromise at least (M×N)+1 components of the computer system80to compromise all the user data in the system80, where N is the number of users in the computer system80. Thus, it can be seen that the security of all user data in the computer system80is enhanced by orders of magnitude rather than incrementally. Although the user is associated with two IDUs22-1and22-2in the computer system80, each user may alternatively be associated with any number of IDUs such that any subset of users or all users are associated with multiple IDUs. When a user is associated with more than one IDU, each IDU associated with that user may store the same user data record72. The extra IDUs improve redundancy and thus the reliability of storage for the user data record72. For example, by replicating the user data record72in multiple IDUs and distributing the IDUs across different networks and different power sources, a user can protect against network or power failures. 
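For the FIG.3 style distribution, the scaling claims above can be restated as two one-line formulas, checked below in Python for the values used in the text (M equals 2 IDUs per user and N equals 1,000,000 users). The function names are illustrative.

# Component counts for the FIG. 3 distribution: M + 1 components to reach one
# user's record 72, and (M x N) + 1 components to reach every user's record.
def components_for_one_user(idus_per_user: int) -> int:
    return idus_per_user + 1

def components_for_all_users(idus_per_user: int, number_of_users: int) -> int:
    return idus_per_user * number_of_users + 1

assert components_for_one_user(2) == 3
assert components_for_all_users(2, 1_000_000) == 2_000_001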
Alternatively, a subset of users could elect to use a separate IDU for each service provider storing their user data. Security of user data records72may be further enhanced by breaking the encrypted logical addresses56, the encrypted access codes58, and the decryption keys36,38,40E into fractional parts and distributing the parts to an arbitrary number of IDUs. Allowing encrypted access code58use to be optional, associating users with more than one IDU, and breaking data into parts which are distributed amongst the components of the computer system80are factors effecting the security of user data records72. By manipulating at least these factors any single user, any subset of users, or all users can facilitate increasing the security and reliability of his or her user data records72. Thus, it should be appreciated that the level of security for the data record72of each user can be tailored by the respective user in many different ways. For example, some users may opt to use the access code58while others may not, some users may opt to use multiple IDUs while others may not, and some users may opt to break data into parts while others may not. Instead of distributing the data and keys as described herein with regard to the computer system80as shown inFIG.3, the data and keys may be distributed in the computer system80similar to the distribution described with regard to the computer system80as shown inFIG.3except as follows: adding centralized storage of encrypted user data records72attached directly to the central user data server12, or adding network24storage of encrypted user data records72accessible to the central user data server12instead of storing user data records72on IDUs; and, distributing the encrypted decryption keys for user data40E in the IDU22-1instead of in the central user data server12. Such an alternative distribution of data and keys requires the compromise of at least three separate components to compromise the data for a single user and the compromise of (2×N)+1 components to compromise all the user data for all the users. The tradeoff is that central user data server12is more vulnerable to a brute force attacks of all the encrypted user data records72stored in the central location. This tradeoff may be acceptable to many network service providers. It should be appreciated that such a distribution of data and keys facilitates achieving most of the security advantages of IDUs in legacy computer systems with centralized storage of user data records72, but without having to immediately distribute the user data records72centrally stored therein to IDUs. FIG.4is a table84which summarizes an analysis showing that compromising any two components of the computer system80is not sufficient to compromise a user data record72. For example, it can be seen from the first line of table84that when the computing device18and the IDU22-2of a user are compromised, the data record72of the user is not compromised because the cyber-criminal does not have the encrypted decryption key40E from the central user data server12. As another example, as can be seen from the seventh line of the table84, when IDU22-1and IDU22-2are compromised, the data record72of the user is not compromised because the cyber-criminal does not have the encrypted decryption key40E from the central user data server12and does not have the decryption key82from the computing device18of the user. The same is true when the POS computer system16and the IDU22-1of a user are compromised. 
As yet another example, as can be seen from the third line of the table84, when the computing device18of the user and the central user data server12are compromised, the data record72of the user is not compromised because the cyber-criminal does not have the decryption keys36and38for the encrypted logical address56and encrypted access code58, respectively, of the IDU22-1. In view of the above, it should be appreciated that cyber-criminals need to compromise at least three components of the computer system10, the computer system80, or any similar computer system to compromise the data record72of a single user, and (2×N)+1 components to compromise all the data for N users if all the users are using two IDUs configured for additional security as described herein with regard toFIG.3. By continuing to add IDUs for each user it is possible to continue increasing the security such that (M×N)+1 components must be compromised to compromise all the data for N users if all the users are using M IDUs configured for additional security. FIG.5is a flowchart86illustrating an example method for updating a user data record72in the computer system10as shown inFIG.1. The method starts88with a user operating his or her computing device18to request initiating90a network-based transaction with the POS computer system16. Such transactions include, but are not limited to, purchasing merchandise from a merchant website, purchasing an airline ticket, and accessing information from a computer system. For an airline, information that may be accessed might include the date, times, and costs of available flights. In addition to initiating90the network based transaction, the computing device18of the requesting user transmits92to the POS computer system16the encrypted logical address56and encrypted access code58for the IDU associated with the requesting user to enable retrieval of the user data record72. If the requesting user is associated with multiple IDUs, then the encrypted logical address56and encrypted access code58for each IDU associated with the requesting user are transmitted. In response, the POS computer system16continues by requesting that the authentication computer system20verify94the identity of the requesting user in a verification transaction. Alternatively, the POS computer system16may conduct the verification transaction. Verification of the user identity implicitly authorizes the requesting user to conduct the transaction. Alternatively, the POS system12may continue by determining whether or not a verified user is authorized to execute a requested transaction. When the identity of the user is not verified94, the POS computer system16does not conduct the requested network-based transaction, may notify the user of the unsuccessful verification, and processing ends96. Otherwise, the POS computer system16continues by retrieving98the user data record72. More specifically, the POS computer system16continues by automatically transmitting the encrypted logical address56and encrypted access code58to the central user data server12. The central user data server12uses the decryption keys36and38to decrypt the encrypted logical address56and the encrypted access code58, respectively, and uses the logical address56and access code58to access the user data record72in the IDU22-1of the requesting user. The central user data server12then decrypts the user data record72using the decryption key40and transmits the decrypted user data record72to the POS computer system16. 
Next, the POS computer system16continues by conducting100the transaction, which may or may not involve updates to the user data record72. Conducting100the transaction may involve multiple communications over the network24with the computing device18of the requesting user, resulting from, for example, retrieving multiple airline flight schedules before purchasing tickets. After conducting100the transaction, processing continues by deciding102whether or not to update the data record72of the requesting user. The decision to update is based on whether the requesting user changed any information stored in his or her user data record72, or whether the transaction included additional information that should be stored in the data record72of the requesting user. Such changes may include changing his or her mailing address, and such additional information may include data regarding a purchase. Alternatively, any other criteria may be used to determine if the user data record should be updated. If no update102is required, processing ends96. If the user data record72is to be updated102, the POS computer system16continues by updating the data record72of the requesting user, and requesting encryption and storage104of the updated user data record72. More specifically, the POS computer system16continues by transmitting the updated user data record72and the encrypted logical address56and encrypted access code58for the IDU associated with the requesting user to the central user data server12. The central user data server12continues processing by encrypting104the user data record72using the encryption key44, decrypting the encrypted logical address56and the encrypted access code58for the IDU of the requesting user, and storing104the updated data record72on the IDU associated with the requesting user. Next, the central user data server12continues by notifying106the POS computer system16that the user data record72was successfully updated and stored in the IDU associated with the user. In response, the POS computer system16continues by notifying the computing device18of the requesting user that the network-based transaction was completed. Depending on the type of network-based transaction, the computing device18of the requesting user may or may not display an acknowledgement for the user to see. Next, processing ends96. Although the example method for updating a user data record72implicitly releases the encrypted logical address56and encrypted access code58for the IDU of the requesting user from the computing device18of the requesting user after successful verification, this release may alternatively not be implicitly authorized after successful verification. Rather, the POS computer system16may request the encrypted logical address56and the encrypted access code58from the computing device18of the requesting user, and the requesting user may be required to explicitly authorize the release of the encrypted logical address56and the encrypted access code58from the computing device18of the requesting user. The user may authorize release in any manner, for example, by speaking a voice command into the computing device18or by pressing a button or icon on the computing device18.
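Read end to end, the FIG.5 flow is a short sequence of verification, retrieval, transaction, and conditional re-encryption and storage. The condensed Python sketch below uses toy in-memory stand-ins for the components; every class and method name is an illustrative assumption, and the encryption and decryption with keys36,38,40, and44are elided rather than implemented.

# Condensed sketch of the FIG. 5 update flow; component classes are toy stand-ins.
class Idu:
    def __init__(self) -> None:
        self.encrypted_record = b""

class CentralUserDataServer:
    def __init__(self, idus: dict) -> None:
        self.idus = idus  # logical address -> Idu

    def retrieve_record(self, enc_address: bytes, enc_code: bytes) -> bytes:
        address = enc_address.decode()                    # stand-in for decryption with key 36
        return self.idus[address].encrypted_record        # decryption with key 40 elided

    def encrypt_and_store(self, record: bytes, enc_address: bytes, enc_code: bytes) -> None:
        address = enc_address.decode()                    # stand-in for decryption with key 36
        self.idus[address].encrypted_record = record      # encryption with key 44 elided

def pos_update_flow(server: CentralUserDataServer, enc_address: bytes, enc_code: bytes,
                    user_verified: bool, updated_data) -> str:
    if not user_verified:                                 # step 94: verification failed
        return "transaction refused"
    record = server.retrieve_record(enc_address, enc_code)  # step 98: retrieve the record
    # ... step 100: conduct the transaction using `record` ...
    if updated_data is None:                              # step 102: no update required
        return "transaction complete"
    server.encrypt_and_store(updated_data, enc_address, enc_code)  # step 104
    return "record updated and stored"                    # step 106: POS and user notified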
Although the updated user data record72is encrypted and stored104after each update102in the example method for updating a user data record72, the updated user data record72may alternatively be encrypted104and stored104after the end of the user session or after a set number of network-based transactions have been conducted and the results for the set number of transactions have been accumulated by the POS computer system16. The set number of transactions may be any number that facilitates efficiently updating user data records72. Although the POS computer system16conducts the network-based transaction in the example method, the network-based transaction may alternatively be conducted directly between the computer device18of the requesting user and the central user data server12. In such network-based transactions, the user data record72is updated by the central user data server12to include data collected from the computing device18of the requesting user. Moreover, in such network-based transactions, the central user data server12may perform all the functions that the POS computer system16performs in the example method. It should be understood that communications over the network24may be secured in any manner. For example, a decrypted user data record72may be temporarily encrypted while being transmitted over the network24from the central user data server12to the POS computer system16. Although the computing device18of the requesting user transmits92the encrypted logical address56and encrypted access code58for each IDU associated with the requesting user, the computing device18of the requesting user may alternatively transmit the encrypted logical address56and encrypted access code58for a single IDU associated with the requesting user to the central user data server12. The other encrypted logical addresses56and encrypted access codes58could be sent upon central user data server request. The encrypted logical addresses56and encrypted access codes58may alternatively be sent according to many different protocols. FIG.6is a diagram of an example Identity Management System (IDMS)108for conducting authentication transactions that uses IDUs to store user data associated with the IDMS function.FIG.6includes similar information asFIG.1. Consequently, features illustrated inFIG.6that are identical to features illustrated inFIG.1are identified using the same reference numerals inFIG.6. The example IDMS108is similar to the computer system10shown inFIG.1. However, the IDMS108includes an external computer system110and the IDUs store reference authentication data only. Because the IDUs store reference authentication data only, the IDUs are described herein as authentication data IDUs and are identified with reference numerals22-1-AD to22-n-AD. The computing device18of each user stores the token60and keys62, an encrypted logical address112and an encrypted access code114for the authentication data IDU of a respective user. The IDMS108may be used to facilitate conducting verification transactions. For example, for users desiring to conduct a network-based transaction with the external computer system110using his or her computing device18, the external computer system110may communicate with the IDMS108to authenticate the user before allowing the user to conduct the desired transaction. More specifically, after receiving a request to conduct the desired transaction, the external computer system110may transmit to the POS computer system16a request to authenticate the user. 
The POS computer system16may transmit the authentication request to the authentication computer system20. By virtue of receiving and transmitting the authentication request, the POS computer system16can be said to function as a firewall. Alternatively, the external computer system110may transmit the authentication request directly to the authentication computer system20. FIG.7is a flowchart116illustrating an example method for authenticating a user using the example IDMS108and IDUs as shown inFIG.6. In this example method, reference authentication data is stored in the IDU associated with the user. The method starts118with a user initiating120a transaction with the external system110using his or her computing device18. In response, the external system110continues by requesting120that the IDMS108verify the identity of the user. More specifically, the external system110transmits the request to the POS computer system16which forwards the authentication request to the authentication computer system20together with any information required to communicate with the computing device18of the user. Next, the authentication computer system20continues by instructing the computing device18of the user to capture live authentication data from the user. In the example method, the live authentication data is data for a biometric modality. In response, the computing device18continues by prompting122the user to capture live authentication data of his or her self. Next, the user responds to the prompt by capturing124live authentication data of his or her self with the computing device18which continues by transmitting124the captured live authentication data to the authentication computer system20. The computing device18also transmits124to the authentication computer system20the encrypted logical address112and the encrypted access code114for the authentication data IDU22-1-AD of the user. The authentication computer system20continues by transmitting the captured live authentication data, the encrypted logical address112, and the encrypted access code114to the central user data server12with a request to retrieve the reference authentication data from the IDU22-1-AD associated with the user. The central user data server12continues by decrypting126the encrypted logical address112and the encrypted access code114using the decryption keys36and38, respectively, then requesting126the reference authentication data of the user from the authentication data IDU22-1-AD of the user. In response, the authentication data IDU22-1-AD of the user continues by transmitting128the encrypted reference authentication data of the user to the central user data server12. Next, the central user data server12continues by decrypting the reference authentication data using the decryption key40, computing and validating a hash code that proves the reference authentication data has not been tampered with, and transmitting the reference authentication data and validation result to the authentication computer system20. The authentication computer system20may alternatively calculate and validate the hash code for the reference authentication data. After receiving the reference authentication data and validation result from the central user data server12, the authentication computer system20continues by conducting130a verification transaction with the decrypted reference authentication data and the captured live authentication data, and transmitting130the verification transaction result to the external system110. Next, processing ends132. 
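Two checks in this verification path lend themselves to a compact illustration: validating the hash code computed over the reference authentication data, and judging the captured-versus-reference comparison against a threshold at step130. The Python sketch below is hedged: SHA-256, the SequenceMatcher stand-in for a real biometric matcher, and the threshold value are assumptions rather than elements of the disclosure.

# Sketch of the hash-code validation and the threshold-based match decision.
import hashlib
import hmac
from difflib import SequenceMatcher

MATCH_THRESHOLD = 0.90  # illustrative policy value

def hash_code(reference_authentication_data: bytes) -> str:
    return hashlib.sha256(reference_authentication_data).hexdigest()

def validate_hash_code(reference_authentication_data: bytes, stored_hash_code: str) -> bool:
    # Constant-time comparison avoids leaking how much of the digest matched.
    return hmac.compare_digest(hash_code(reference_authentication_data), stored_hash_code)

def matching_score(captured: bytes, reference: bytes) -> float:
    """Toy similarity score in [0, 1]; a real biometric matcher replaces this."""
    return SequenceMatcher(None, captured, reference).ratio()

def verify_identity(captured: bytes, reference: bytes) -> bool:
    """True when the captured and reference data are judged a match."""
    return matching_score(captured, reference) >= MATCH_THRESHOLD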
Users may use several different IDUs to partition and separately store different kinds of reference authentication data. For example, a user may store fingerprints from his or her right hand on one IDU, fingerprints from his or her left hand on a second IDU, a facial image on another IDU, and a voice print on yet another different IDU. By partitioning the reference authentication data of a user in this manner, even if one IDU was compromised, uncompromised data would still exist in another different IDU. IDUs may also be used to help create very secure email systems. Email systems may be secure or non-secure. Non-secure email systems typically include an email server which stores emails in a database and which manages access to the emails based on a password only. In a non-secure email system, if the email password of a user is compromised, all email content for that user could be compromised. If the password of an email administrator is compromised, all the emails for all the users in the email system could be compromised. Secure email systems typically include an email server, a database for non-secure email contents and a separate database for secure email contents. Generally, upon receiving a secure email an email server stores the contents of the secure email in the secure email database, assigns a transaction number to the received secure email, and creates a link between the transaction number and the stored secure email contents. Additionally, the email server typically creates a non-secure email using the addresses from the secure email and includes the transaction number as the contents of the non-secure email. Such a non-secure email is referred to herein as a cover email. The non-secure cover email is in the inboxes of all the addressees identified in the secure mail. E-mails typically include a message and perhaps attachments. The message and attachments are generally known as the contents of the email. A user who creates and sends an email is referred to herein as an originator or a sender, a user who re-sends an email but did not create the email may also be referred to herein as a sender, and a user who receives an email is referred to herein as a recipient. Recipients may include users to whom the email is addressed as well as users copied on the email. The example secure email computer system illustrated inFIG.8is similar to the IDMS illustrated inFIG.6. As such, features illustrated inFIG.8that are identical to features illustrated inFIG.6are identified using the same reference numerals used inFIG.6. FIG.8is a diagram of an example secure email computer system134for enhancing email security using IDUs while enabling e-discovery processes. The example secure email (SE) computer system134includes an IDMS that can perform the same authentication functions as the IDMS108described herein with regard toFIG.6. Additionally, the SE computer system134includes a hosted IDU platform136, and the server14includes an offline e-discovery server14-O, an e-discovery search server14-S, and an email server14-E. Although not included in the SE computer system134, the external computer system110may also be included. Each user is associated with one Secure Email IDU and one authentication data IDU. Each authentication data IDU stores reference authentication data of a respective user, and each Secure Email IDU stores encrypted secure email contents sent and received by a respective user. 
Storing both the sent and received encrypted secure email contents minimizes the number of users involved in any e-discovery process because none of the users who sent secure emails to a user of interest need be involved. Alternatively, the Secure Email IDU for each user may store the encrypted contents of sent emails only. The Secure Email IDUs and the authentication data IDUs may additionally store the same information described herein with regard to the example IDU22-1. Although the reference authentication data and encrypted contents sent and received by a respective user are stored in separate IDUs in the example SE computer system134, a single IDU may alternatively store both the reference authentication data and the encrypted email contents of a single user. In order for the SE computer system134to support e-discovery, the Secure Email IDUs are not separate devices that can be physically managed by respective users in different locations associated with each respective user. Rather, the Secure Email IDUs are physically consolidated on one or more circuit boards140included in the hosted IDU platform136. The circuit boards140constitute the hosted IDU platform136which is managed by the organizational entity responsible for compliance with e-discovery regulations. Each IDU is still a separate component within the system even though many IDUs are hosted on a single chip, circuit board, or physical device. The Secure Email IDU of each user is not physically managed in a location associated with and controlled by the respective user because such an arrangement would not comply with the regulatory obligations for e-discovery. That is, any user who owns and manages his or her Secure Email IDU could avoid having incriminating emails discovered by simply destroying the emails on his or her Secure Email IDU. Thus, user-hosted Secure Email IDUs are not included in the computer system134. Although one hosted IDU platform136managed by an organizational entity is included in the SE computer system134, the SE computer system134may alternatively include any number of hosted IDU platforms136each of which may be in the same or different geographic location. The hosted IDU platforms136may be managed by the same or different organizational entities. It should be understood that if the SE computer system134is not used for e-discovery, the hosted IDU platform136need not be included in the SE computer system134. Rather, Secure Email IDUs for system134could be implemented either using the hosted IDU platform136or separate IDUs that are physically managed by respective users in different locations associated with each respective user, or some combination of user managed and hosted IDUs. The offline e-discovery server14-O stores the encrypted logical address56for the Secure Email IDU of each user, an encrypted alternative access code142for the Secure Email IDU of each user, one or more switching addresses144of the Secure Email IDU for each user, and decryption keys146for decrypting the secure email content138of the Secure Email IDU of each respective user stored in the hosted IDU platform136. Although the switching addresses144are not encrypted, the switching addresses144may alternatively be encrypted and the decryption keys146may be stored on one or more different servers, for example, the e-discovery search server14-S. 
The switching address144is intended to denote the information required by the manager of the hosted IDU platform136in order to switch a Secure Email IDU from using the access code58to the alternative access code142. This may involve physically accessing the Secure Email IDU or electronically addressing a specific circuit board140containing multiple Secure Email IDUs using a dedicated network connection that is only accessible from a computer within a data facility. An example switching address144for manually accessing a Secure Email IDU to flip a physical switch may be rack203, circuit board in slot5of the rack, switch number105on the circuit board. An example of an electronic switching address144, using IPv4 terminology, may be circuit board switching IP address 12.34.56.78. Although the offline e-discovery server14-O stores the encrypted logical addresses56and encrypted alternative access codes142, the e-discovery server14-O is not a centralized target susceptible to remote cyber-attack by virtue of being offline and thus not hackable from a remote location. Additionally, by virtue of the offline e-discovery server14-O storing the encrypted alternative access codes142instead of the encrypted access codes58for the Secure Email IDUs, the SE computer system134is less vulnerable to attacks against the offline e-discovery server14-O perpetrated by one or more individuals associated with the organizational entity that manages the hosted IDU platform136. Specifically, stealing all the data on the offline e-discovery server14-O will not enable an external cyber-criminal to hack into Secure Email IDUs because the Secure Email IDUs typically operate based on the access code58which is not stored on the offline e-discovery server14-O. In addition, much of the data stored on the e-discovery server14-O is encrypted and would require compromising multiple additional devices in the SE computer system134to access user data for more than a single user. The dotted lines between the e-discovery search server14-S and the offline e-discovery server14-O and the hosted IDU platform136are intended to indicate that a direct electronic connection may be established between the servers14-S and14-O, and between the server14-S and the hosted IDU platform136. An electronic connection may be desirable because the administrative convenience of such a connection may outweigh the additional security afforded by remaining completely offline. Such an electronic connection could also avoid using any externally accessible network connections by plugging directly into the circuit boards of the hosted IDU platform136. The electronic connection could be made only when needed and so could be temporary, thus minimizing windows of increased vulnerability. Alternatively, the connection between the e-discovery search server14-S and the offline e-discovery server14-O may be an air gap which adds additional protection for the data stored in the offline e-discovery server14-O. Alternative access codes142are useful only when a Secure Email IDU is switched from using the access code58to the alternative access code142. The switch to using the alternative access code142is temporary and requires physical access to the hosted IDU platform136, knowledge of the switching address144secured in the offline e-discovery server14-O, and the access code58stored on the computing device18associated with a user.
An organizational entity responsible for e-discovery compliance manages the offline e-discovery server14-O and thus has control of a copy of the decryption keys146for secure email content138as well as the switching addresses144, encrypted alternative access codes142, and encrypted logical addresses56of the Secure Email IDUs. This same entity also manages the hosting of the Secure Email IDUs on the hosted IDU platform136. Alternatively, the hosted IDU platform136could be managed by a different organizational entity or be in a different geographic location, albeit with some additional complications regarding any temporary electronic connections by the e-discovery search server14-S to switch to alternative access codes142. The e-discovery search server14-S includes an application that causes the e-discovery search server14-S to conduct an e-discovery search process and an application that causes the e-discovery search server14-S to obtain data from the offline e-discovery server14-O when necessary. Obtaining such data is necessary to account for cases in which a recipient does not comply with an e-discovery request to release the encrypted logical address56and encrypted access code58for his or her Secure Email IDU. For example, this could occur because the recipient is on vacation and not responding to emails during an e-discovery process, or because the recipient does not want his or her secure emails searched during an e-discovery process. The data for non-compliant recipients obtained from the offline e-discovery server14-O may be temporarily stored in the e-discovery search server14-S. The e-discovery search application enables the search server14-S to accept search parameters useful for defining the scope of an e-discovery search. Such search parameters include, but are not limited to, a list of specific users, dates, and keywords. Additionally, the search server14-S stores the decryption keys148for encrypted logical addresses56of the Secure Email IDUs, the decryption keys152for the alternative access codes142for the Secure Email IDUs, and the decryption keys146for secure email content. The e-discovery search server14-S is configured to securely communicate over the network24. The servers14-O,14-S, and14-E include subcomponents similar to the subcomponents described herein for the additional server14. The offline e-discovery server14-O, the e-discovery search server14-S, and the email server14-E may alternatively be any type of computing device, for example, a personal computer, capable of performing the functions described herein for these servers. The e-discovery search server14-S may alternatively be included within the email server14-E, but is shown separately as this enables strong protection of the data stored in the offline e-discovery server14-O while still allowing a direct electronic connection with the offline e-discovery server14-O. The computing device18associated with each user stores the encrypted logical address56and the encrypted access code58for the Secure Email IDU of a respective user, as well as the computing device token60and keys62. The computing device18of each user also stores the decryption key146for that user's secure email content, and an encrypted logical address112and an encrypted access code114for the authentication data IDU of the user associated with the computing device18. It should be appreciated that the encrypted logical addresses56and the decryption keys146are also stored in the offline e-discovery server14-O. 
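The distribution of encrypted items and keys across components described above can be summarized in a sketch such as the following. The dictionary is illustrative only; the entries simply restate where the description places each item.

```python
# Illustrative summary of the data and key distribution described above.
DATA_DISTRIBUTION = {
    "offline_e_discovery_server_14_O": [
        "encrypted logical addresses 56",
        "encrypted alternative access codes 142",
        "switching addresses 144",
        "decryption keys 146 for secure email content 138",
    ],
    "e_discovery_search_server_14_S": [
        "decryption keys 148 for encrypted logical addresses 56",
        "decryption keys 152 for encrypted alternative access codes 142",
        "decryption keys 146 for secure email content",
    ],
    "user_computing_device_18": [
        "encrypted logical address 56 and encrypted access code 58 (Secure Email IDU)",
        "computing device token 60 and keys 62",
        "decryption key 146 for that user's secure email content",
        "encrypted logical address 112 and encrypted access code 114 (authentication data IDU)",
    ],
}
```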
The email server14-E also performs all user verifications for non-secure email functions. However, verification functions for accessing secure email content are performed by the authentication computer system20. Separating the secure from the non-secure email verification functions imposes the least impact on day-to-day use of the non-secure email system while applying the highest security standards for secure emails. Alternatively, either the email server14-E or the authentication computer system20may conduct all verification transactions. The email server14-E also performs the functions of the POS computer system16described inFIG.1. As a result, the POS computer system16is not included in the SE computer system134. The email server14-E manages all non-secure email content and includes a storage unit14-NSE, or equivalent, for storing all non-secure emails, including cover emails. The internal subcomponents of the central user data server12are not shown as they are the same as described herein with regard toFIG.1. The email server14-E may store data such as, but not limited to, decryption keys148for decrypting encrypted logical addresses56of Secure Email IDUs included in the hosted IDU platform136, decryption keys150for decrypting encrypted access codes58of Secure Email IDUs included in the hosted IDU platform136, decryption keys152for decrypting encrypted alternative access codes142of Secure Email IDUs included in the hosted IDU platform136, keys154to validate tokens from other computer systems, encryption keys156for encrypting email content138stored in the hosted IDU platform136, and an email server token158. The email server14-E may also temporarily store encrypted secure email content138as part of transmitting or buffering secure emails within the SE computer system134. FIG.9is a flowchart160illustrating an example method for transmitting a secure email within the SE computer system134. The method starts162after a sender initiates a secure email using his or her computing device18and is successfully verified as the result of a verification transaction conducted by the authentication computer system20as described herein with regard toFIG.6. Alternatively, the email server14-E may perform the verification transaction or the identity of the user may be verified in any other manner. After the authentication computer system20sends a successful verification transaction result to the email server14-E, the email server14-E continues by requesting164the secure email contents from the computing device18of the sender, the encrypted logical address56of the Secure Email IDU of the sender, and the encrypted access code58of the Secure Email IDU of the sender. In response, the computing device18of the sender continues by transmitting166the secure email contents, the encrypted logical address56, and the encrypted access code58to the email server14-E. After receiving the requested information, the email server14-E continues by encrypting168the secure email contents for each recipient of the secure email as well as the sender using the encryption key156for each respective recipient and for the sender. The e-mail content is encrypted but the email addresses of the sender and recipients are not. The computing device18of the sender may collect additional information that is not encrypted and that is transmitted with the securely encrypted email contents. For example, a non-secure email subject line could be collected for display in the cover email displayed in the inbox of each recipient. 
Instead of a subject line that discloses sensitive information, such as "Travel Plans with Vladimir Putin in Russia", the non-secure subject line might be "Travel Plans." Alternatively, the sender and recipient email addresses may be encrypted albeit with some additional steps required during e-discovery. Additionally, after receiving the requested information, the email server14-E continues by decrypting170the encrypted logical address56and encrypted access code58of the sender and storing170the encrypted secure email content in the Secure Email IDU of the sender. The email server14-E also creates separate encrypted copies of the secure email content for each recipient using the corresponding encryption key156of each respective recipient and temporarily stores the copies therein. After storing170the secure email content, the email server14-E continues by initiating172a non-secure cover email which includes the transaction number, and transmitting172the non-secure cover email to each intended recipient of the secure email. Doing so allows recipients to monitor the inbox of a single email system and be alerted when secure emails are available to be read. Next, processing ends174. The encrypted copies of the secure email content for each recipient may be temporarily stored in the email server14-E until requested by a recipient. More specifically, when a recipient requests to read his or her copy of the secure email content, the email server14-E requests the encrypted logical address56and encrypted access code58for the Secure Email IDU of the requesting recipient from the computing device18of the requesting recipient. After receiving the requested information from the computing device18, the email server14-E continues by decrypting170the received encrypted logical address56and encrypted access code58, and using the decrypted logical address56and access code58to store170the copy of the secure encrypted email content in the Secure Email IDU of the requesting recipient. Next, the email server14-E permanently deletes the temporary copy of the encrypted secure email for the requesting recipient. Thus, it should be understood that the recipient copies are temporarily stored in the email server14-E. Instead of storing the copy of the secure encrypted email in the email server14-E until the recipient attempts to read the secure email, the email server14-E may immediately store the encrypted email contents in the Secure Email IDU of each recipient so long as there is a secure mechanism by which the email server14-E can securely obtain the encrypted logical address56and encrypted access code58of the Secure Email IDU of each recipient. For example, the email server14-E could send a text message or other notification to the computing device18of a recipient notifying the recipient of an incoming secure email. The recipient could then authorize release of his or her encrypted logical address56and encrypted access code58to enable immediately storing the secure email content in the Secure Email IDU of the recipient. Although the example method of transmitting secure emails uses cover emails to notify recipients of received secure emails, any method of notifying recipients of secure emails may alternatively be used. FIG.10is a flowchart176illustrating an example method for receiving a secure email within the SE computer system134. The method starts178when a recipient of a secure email attempts to access his or her email inbox using his or her computing device18. 
The computing device18initiates a transaction with the email server14-E to read the inbox, and the email server14-E initiates a verification transaction. For each verification transaction, recipients are verified to the same security level. However, in other example methods of receiving a secure email the level of verification may be tied to the security level of the email. The identity of the recipient may be verified using the method described herein for the IDMS108or in any other manner. After the recipient is successfully verified, the email server14-E continues by transmitting180to the computing device18of the recipient a non-secure email inbox for display, the recipient selects182an email to read from the displayed email inbox, and the computing device18continues by transmitting182the selection to the email server14-E. In response, the email server14-E continues by recognizing the selection as a request to read a secure email and requesting184from the computing device18of the recipient, the encrypted logical address56and encrypted access code58for the Secure Email IDU of the recipient, as well as the decryption key146of the secure email content of the recipient. Alternatively, the email server14-E may request the decryption key146before or after requesting the encrypted logical address56and encrypted access code58. Alternatively, the encrypted logical address56, the encrypted access code58for the Secure Email IDU, and the decryption key146for the email contents may all be sent by the user computing device18at the same time as the selection of the secure email to be read. Next, the computing device18of the recipient continues by transmitting186the encrypted logical address56, encrypted access code58, and decryption key146to the email server14-E. In response, the email server14-E continues by decrypting188the encrypted logical address56and encrypted access code58using the decryption keys148,150, respectively, obtaining188the transaction number from the cover email, and using the transaction number to identify the secure email contents corresponding to the selection. Next, the email server14-E continues by deciding190whether or not this is the first time the recipient requested to read the secure email contents of this specific email. If yes, the email server14-E continues by accessing192the temporary copy of the encrypted secure email contents138stored therein using the transaction number, and storing192the encrypted email contents in the Secure Email IDU of the recipient using the decrypted logical address56and the decrypted access code58for the Secure Email IDU of the recipient. Next, the email server14-E continues by decrypting194the secure email contents using the decryption key146of the recipient and securely transmitting194the decrypted secure email contents to the computing device18of the recipient. Alternatively, the email server14-E may transmit the encrypted secure email contents to the computing device18of the recipient which decrypts the secure email contents. Next, the computing device18continues by displaying196the secure email contents for the recipient to see. After the recipient reads the secure email content, the recipient causes the computing device18to transmit a message to the email server14-E indicating the secure email contents were read. In response, the email server14-E continues by securely erasing196therefrom the temporary copy of the secure email contents. Next, processing ends198. 
If it is not the first time the recipient requested to read the secure email contents190, the email server14-E continues by retrieving200the encrypted secure email contents from the Secure Email IDU of the recipient using the transaction number with the decrypted logical address56and decrypted access code58. Next, processing continues by conducting operations194and196as described herein and processing ends198. Some email systems are required to support e-discovery in the event of litigation involving the organizational entity using the email system. FIG.11is a flowchart202illustrating an example method for conducting an e-discovery search within the SE computer system134. The method starts204with an e-discovery operator entering206e-discovery search parameters into the e-discovery search server14-S. The e-discovery operator is a person associated with the organizational entity responsible for e-discovery. The e-discovery search parameters at least identify users included in the e-discovery search. After receiving the parameters, the e-discovery search server14-S continues by securely transmitting206an e-discovery directive to the computing device18of each user identified in the search parameters. The directive requests each user to take actions that will facilitate e-discovery. At the time of transmission206, the e-discovery search server14-S also establishes206a period of time, referred to herein as a directive time, within which each user has to comply with the directive. Each identified user is considered noncompliant until complying with the directive. The directive time may be any period of time judged to facilitate complying with the legal requirements of discovery. For example, the directive time may range from five to ten days. The directive is in the form of a secure email and instructs each identified user to release the encrypted logical address56and encrypted access code58for his or her Secure Email IDU, as well as the decryption key146for his or her secure email content. Each identified user complies with the directive by verifying his or her identity which may be done using the method described herein with regard to the IDMS108or in any other manner. After successfully verifying his or her identity, the computing device18of a respective identified user receives the secure directive email and in response automatically releases and transmits the encrypted logical address56, the encrypted access code58, and the decryption key146in a secure email to the email server14-E. The identified user may also be requested to take an explicit action before the encrypted logical address56, the encrypted access code58, and the decryption key146are released and transmitted. The encrypted logical address56, the encrypted access code58, and the decryption key146may alternatively be transmitted in any manner, for example, as a direct transmission between the computing device18of the identified user and the e-discovery search server14-S. The directive may be authenticated in any other manner, for example, using Public Key Infrastructure (PKI) which supports signed transmissions that can be authenticated by the recipient. Next, processing continues by deciding208whether or not the directive time has expired. If not, processing continues by determining210whether or not any secure emails have been received in response to the e-discovery directive. If not, the e-discovery search server14-S continues by deciding208whether or not the directive time has expired. 
Otherwise, when secure email responses have been received210, the e-discovery search server14-S continues by requesting212from the email server14-E, for each received email, the encrypted logical address56, the encrypted access code58, the decryption key148for the encrypted logical address56, the decryption key150for the encrypted access code58, and the encrypted secure email content. After receiving the requested information from the email server14-E, the e-discovery search server14-S continues by accessing, decrypting, and scanning214the secure email contents of each identified user from whom a reply to the directive was received. Next, the e-discovery search server14-S continues by storing216any scanned emails that satisfy the e-discovery search parameters and registering216the identified users as compliant. A scanned email that satisfies the search parameters is referred to herein as a hit. The e-discovery search server14-S also securely erases216all the data for each identified user from whom a reply was received, but the hits are not erased. The accessing, decrypting, scanning and erasing operations require little time so most of the information is retained by the e-discovery search server14-S for only a short period of time, with only the hits retained until they are formatted and conveyed to the appropriate e-discovery manager. Next, processing continues by deciding208whether or not the directive time has expired. Identified users may not comply with the directive before the directive time expires for many different reasons such as, but not limited to, losing a computing device18, being sick or on vacation from work, or willfully obstructing the e-discovery process. When the directive time has expired208, the e-discovery search server14-S continues processing by establishing218a temporary direct electronic connection with the hosted IDU platform136as well as the offline e-discovery server14-O, selecting218an identified user, and determining220whether or not the identified user is registered as compliant. If the identified user is registered as compliant220, the e-discovery search server14-S continues by determining222whether or not any more identified users need to be evaluated for compliance. If so, processing continues by selecting224another identified user and determining220whether or not the identified user is registered as compliant. When an identified user is not registered as compliant220, the e-discovery search server14-S continues by requesting226the switching address144of the Secure Email IDU of the identified user from the offline e-discovery server14-O via the temporary electronic connection. Transferring the switching address144may alternatively be done manually to avoid connecting the offline e-discovery server14-O to any other device. In response to the request, the offline e-discovery server14-O continues by transmitting the switching address144of the identified user to the e-discovery search server14-S. After receiving the switching address144, the e-discovery search server14-S continues by decrypting the encrypted logical address56and alternative access code142of the identified user, and electronically switching228the Secure Email IDU of the identified user to use the decrypted alternative access code of the identified user. 
Next, the e-discovery search server14-S continues by accessing230the Secure Email IDU of the identified user using the decrypted logical address56, decrypted alternative access code, and decryption key146for secure email content of the identified user. Next, the e-discovery search server14-S continues by decrypting230the secure emails of the identified user using the decryption key146of the identified user, scanning230the decrypted emails based on the e-discovery search parameters, and storing230the hits. Instead of storing230the hits electronically, a printout may be generated that includes the hits. Alternatively, the hits may be put in any other form that an authorized person associated with the e-discovery would understand. Next, processing continues by deciding222if there are any more identified users whose compliance was not evaluated at operation220. If so, processing continues by selecting224another identified user and determining220whether or not the other identified user is registered as compliant. However, when there are no more identified users222to evaluate for compliance, the e-discovery search server14-S continues by creating232an e-discovery report based on the hits, destroying234data temporarily stored as part of the e-discovery process, and severing the temporary direct electronic connections with the offline e-discovery server14-O and the hosted IDU platform136. Next, processing ends236. In the example method of conducting an e-discovery search, the Secure Email IDU of each non-compliant identified user reverts to the access code58after a single request has been processed using the respective alternative access code142. The alternative access code142can be factory installed in the IDU, or updateable as one of several optional basic functions the IDU is capable of performing. As an alternative to switching access codes for a single transaction, the switch could temporarily disable the need for any access code or could enable transferring the contents of a Secure Email IDU from the hosted IDU platform136to a portable storage device that plugs into the hosted IDU platform136. Another alternative may require a facility operator (not shown) to manually switch the Secure Email IDU of a non-compliant identified user. Such an alternative eliminates the need for the temporary direct electronic connections. Although the e-discovery search server14-S establishes a direct electronic connection with the hosted IDU platform136and the offline e-discovery server14-O to facilitate transferring the switching address144, the switching addresses144may alternatively be manually transferred to avoid connecting the offline e-discovery server14-O to any other device or computer system. The security of Secure Email IDUs stored in the hosted IDU platform136during the example e-discovery search method is enhanced by the following factors: a) the logical addresses56stored in the offline e-discovery server14-O are encrypted and the offline e-discovery server14-O does not store the access codes58; b) the alternative access codes142stored in the offline e-discovery server14-O are encrypted. 
Also, switching a Secure Email IDU requires either physical access to the hosted IDU platform136or access to the temporary dedicated electronic connection, both of which require physical access to a highly protected data center; c) alternative access codes142are used briefly and temporarily; and, d) e-discovery operations can be scheduled days in advance which facilitates maintaining an exceptionally small group of people with access to the e-discovery servers14-O and14-S. This minimizes the exposure to insider attacks. In addition, e-discovery operations could be implemented as two-person functions. That is, e-discovery operations could require two separate e-discovery managers to log in before gaining access to servers14-S and14-O (a minimal sketch of such a two-person check follows this discussion). The example method of conducting an e-discovery search maintains most of the security advantages of using IDUs while enabling organizations to comply with e-discovery regulations even when a recipient may desire to conceal questionable or perhaps criminal activity by withholding the release of the encrypted logical address56and access code58of his or her Secure Email IDU. The example methods described herein may be conducted partly on the central user data server12, any server included in the additional server14, the POS computer system16, the user computing device18, the authentication computer system20, and on other computing devices (not shown) and other computer systems (not shown) operable to communicate over the network24. Moreover, the example methods described herein may be conducted entirely on the other computer systems (not shown) and other computing devices (not shown). Thus, it should be understood that the example methods may be conducted on many combinations of computers, computer systems (not shown), and computing devices (not shown). The functions described herein as being performed by the central user data server12may alternatively be performed by other components of the computer systems described herein. For example, any server included in the additional server14, the POS computer system16, the user computing device18, the authentication computer system20, or other computer systems (not shown) and computing devices (not shown) may perform the functions described herein for the central user data server12. Likewise, the functions described herein as being performed by the POS computer system16, the user computing device18, the authentication computer system20, and any server included in the additional server14may be performed by any other component of the computer systems described herein. However, the IDUs are not generally capable of performing the functions described herein for any other component, and thus cannot perform those functions. Conversely, the other general purpose components of the system are not capable of performing the functions of the IDU with the same levels of security. There are specific functions, such as encryption of the logical address of an IDU and of the user data record72, that could be delegated to IDUs to provide additional security protections by minimizing the number of computing devices that see this data in unencrypted form. 
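As noted above, e-discovery operations could be implemented as two-person functions requiring two separate e-discovery managers to log in before access to servers14-S and14-O is granted. A minimal sketch of such a check follows; the function name and the authenticate callable are hypothetical and shown only for illustration.

```python
from typing import Callable


def authorize_e_discovery_session(
    manager_a: str,
    manager_b: str,
    authenticate: Callable[[str], bool],
) -> bool:
    """Grant access to the e-discovery servers only when two distinct managers both authenticate."""
    if manager_a == manager_b:
        return False  # a single person cannot satisfy both roles of a two-person function
    return authenticate(manager_a) and authenticate(manager_b)
```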
Data described herein as being managed by the central user data server12, any server included in the additional server14, the POS computer system16, the user computing device18, and the authentication computer system20may alternatively be stored in other components of the computer systems described herein, including computer systems (not shown) and computing devices (not shown) operable to communicate with the central user data server12over the network24. Data may be partially stored on different components of the computer systems described herein. For example, the encrypted access code58may be divided into two encrypted files, one of which is stored on the user computing device18and the other of which is stored on another component of a computer system described herein (a sketch of one way to divide the code follows this discussion). Overall, the inclusion of at least one IDU per user in a computer system enables the distribution of data and keys such that security of user data is greatly enhanced. Although one or more specific distributions of data and keys that enhance security are described herein, there are other distributions that achieve similar results. There are also alternative distributions that offer different tradeoffs between added security, convenience, and other factors that are important in real world implementations. The term "components" as used herein is intended to refer to logically distinct computer devices that may be logical targets for cyber-criminals. Such components include, but are not limited to, the central user data server12, the server14, POS computer systems16, computing devices18, authentication computer systems20, and individual data units22-1to22-n, including individual IDUs included within the hosted IDU platform136. The server14includes the e-discovery search server14-S, the email server14-E, the storage unit14-NSE, and the offline e-discovery server14-O. The example methods described herein may be implemented with many different numbers and organizations of computer components and subcomponents. Thus, the methods described herein are not limited to specific computer-executable instructions. Alternative example methods may include different computer-executable instructions or components having more or less functionality than described herein. The example individual data unit described herein is a simple component specifically designed to perform a small number of functions. Such a simple design facilitates reducing design flaws of the individual data unit and the complexity of software required to cause the individual data unit to perform the functions. As a result, the individual data unit is facilitated to be less vulnerable to successful cyber-attacks than general purpose computers and computing devices, which in turn facilitates increasing the security of user data records stored in the individual data unit. As described herein, the individual data unit is not a general purpose computer. However, as technology evolves, it might become possible to formally validate the designs and eliminate exploitable flaws in progressively more complex devices. Thus, while the individual data unit is described herein as not being a general purpose computer with respect to today's state of the art, it is conceivable that future technologies would allow an individual data unit to be built upon general purpose computer technology while still retaining the necessary characteristic of being far less expensive and far less vulnerable to cyber-attacks than the other components in the system. 
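One way to realize the partial storage described above, in which the encrypted access code58is divided into two files stored on different components, is a simple XOR-based two-part split. The XOR construction is an assumption chosen for illustration; the description only requires that the two parts reside on different components and that both be needed to recover the whole.

```python
import os


def split_in_two(encrypted_access_code: bytes) -> tuple[bytes, bytes]:
    """Split a byte string into two shares; both shares are required to recover the original."""
    share_a = os.urandom(len(encrypted_access_code))  # e.g., stored on the user computing device 18
    share_b = bytes(x ^ y for x, y in zip(encrypted_access_code, share_a))  # stored elsewhere
    return share_a, share_b


def recombine(share_a: bytes, share_b: bytes) -> bytes:
    """XOR the two shares back together to recover the encrypted access code 58."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))
```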
One example computer system described herein includes a central user data server, a server, a point of service computer system, a computing device, an authentication computer system, and a plurality of individual data units that each store the data of one user in a respective user data record. The data unit of each respective user may be located at and operated from a geographic location associated with the respective user. Moreover, there may be a large number of users and associated individual data units included in the computer system. As a result, the individual data units, as well as the data stored therein, may be massively distributed. Such massive distribution to as many as millions of different locations is not practical without an IDU. The components of the computer systems described herein securely communicate with each other over a network and the central user data server manages the data record of each user. By virtue of massively distributing the individual data units, the user data is decentralized and thus constitutes a less attractive target for cyber-criminals than a centralized database containing the data of all users. Additionally, the individual data units provide additional locations for storing data and keys which increases the number of successful cyber-attacks needed to steal any data stored in the computer system. As a result, security of user data as well as of the components of the computer system is facilitated to be enhanced in a cost effective and reliable manner. An example method for updating a user data record is also disclosed. More specifically, after a user initiates a network-based transaction with his or her computing device, the computing device transmits to a POS computer system the encrypted logical address and encrypted access code for each individual data unit associated with the user. The POS computer system requests that an authentication computer system verify the identity of the user. Verification of the user implicitly authorizes the requesting user to conduct the network-based transaction. When the identity of the user is verified, the POS computer system retrieves the user data and conducts the network-based transaction. When the user changes any information stored in his or her user data record, or if the network-based transaction includes additional information that should be stored in the data record of the user, the data record of the user is updated. When the user data record is to be updated, the POS computer system updates the data record of the user and requests that the updated user record be encrypted and stored. The central user data server encrypts the user data record and arranges to store the updated data record on the IDU associated with the user. As a result, the security of user data records is facilitated to be enhanced, and the time and costs associated with updating user data records are facilitated to be reduced. An example method for authenticating a user is also disclosed. More specifically, in response to receiving a transaction request from the computing device of a user, an external computer system requests the authentication computer system to verify the identity of the user. The authentication computer system sends a capture request to the computing device of the user. In response, the computing device of the user prompts the user to capture live authentication data of himself or herself which is transmitted to the authentication computer system with other information. 
The authentication computer system transmits the captured live authentication data and other information to the central user data server which obtains reference authentication data from the IDU of the user. After validating the reference authentication data, the central user data server transmits the captured live authentication data and reference authentication data to the authentication computer system which conducts a verification transaction based on the received data. As a result, accuracy and trustworthiness of authentication transaction results are facilitated to be enhanced, and the time and costs associated with conducting verification transactions are facilitated to be reduced. An example method for transmitting a secure email is also disclosed. More specifically, a sender who initiates a secure email is successfully verified and the successful verification result is sent to an email server which requests secure email contents from the computing device of the sender and other information. In response, the computing device of the sender transmits the requested information to the email server. The email server decrypts the information and arranges to store the secure email contents in the Secure Email IDU of the sender. The email server also creates encrypted copies of the secure email content for each email recipient, initiates a non-secure cover email, and transmits the non-secure email to each intended recipient of the secure email. As a result, the security of secure emails is facilitated to be enhanced in a cost effective and reliable manner. A method of receiving a secure email is also disclosed. More specifically, when a recipient of a secure email attempts to access his or her email inbox using his or her computing device, the computing device initiates a transaction with the email server to read the inbox, and the email server initiates a verification transaction. After the recipient is successfully verified, the email server continues by transmitting to the computing device of the recipient a non-secure email inbox for display. The recipient selects an email to read from the displayed email inbox, and the computing device transmits the selection to the email server. The email server obtains a transaction number from a cover email and uses the transaction number to identify the secure email contents corresponding to the selection. If it is the first time the recipient requested to read the email, the email server accesses a temporary copy of the secure email contents stored therein using the transaction number and stores the encrypted email contents in the Secure Email IDU of the recipient. If it is not the first time, the email server retrieves the encrypted email contents from the Secure Email IDU of the recipient using the transaction number, and the email server securely transmits the secure email content to the computing device of the recipient. As a result, the security of email content is enhanced in a cost effective and reliable manner. A method for conducting e-discovery is also disclosed. More specifically, after e-discovery search parameters are entered into an e-discovery search server, the e-discovery search server securely transmits an e-discovery directive to the computing device of each user identified in the search parameters. Responding to this directive releases the encrypted logical address and encrypted access code for that user's IDU where secure email content is stored. 
If the directive has not expired and secure email responses have been received in response to the directive, the e-discovery search server requests from the email server, for each received email, the encrypted secure email content. After receiving the requested information from the email server, the e-discovery search server accesses, decrypts, and scans the secure email contents of each identified user from whom a reply to the directive was received. Next, the e-discovery search server stores any scanned emails that satisfy the e-discovery search parameters and registers the identified users as compliant. When the directive time has expired, the e-discovery search server establishes a temporary direct electronic connection with the hosted IDU platform as well as with the offline e-discovery server, selects an identified user, and determines whether or not the identified user is registered as compliant. When an identified user is not registered as compliant, the e-discovery search server continues by requesting the switching address of the Secure Email IDU of the identified user from the offline e-discovery server14-O via the temporary electronic connection. After receiving the switching address, the e-discovery search server continues by electronically switching the Secure Email IDU of the identified user to use the alternative access code of the identified user. Next, the e-discovery search server continues by accessing the Secure Email IDU of the identified user using the decrypted logical address, the alternative access code, and the decryption key for secure email content of the identified user. When there are no more identified users to evaluate for compliance, the e-discovery search server continues by creating an e-discovery report based on the hits, destroying data temporarily stored as part of the e-discovery process, and severing the temporary direct electronic connections with the offline e-discovery server and the hosted IDU platform. As a result, an e-discovery process is made practical even while retaining most of the improved security enabled by distributing secure email contents and decryption keys for those contents to separate IDUs for each user. The example methods described above should not be considered to imply a fixed order for performing the method steps. Rather, the method steps may be performed in any order that is practicable, including simultaneous performance of at least some steps. Moreover, the method steps may be performed in real time or in near real time. It should be understood that, for any process described herein, there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments, unless otherwise stated. Furthermore, the invention is not limited to the embodiments of the methods, systems and apparatus described above in detail. Rather, other variations of the methods, systems, and apparatus may be utilized within the spirit and scope of the claims. | 116,635 
11861043 | DETAILED DESCRIPTION For the purpose of promoting an understanding of the principles of the present disclosure, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same. It will, nevertheless, be understood that no limitation of the scope of the disclosure is thereby intended; any alterations and further modifications of the described or illustrated embodiments, and any further applications of the principles of the disclosure as illustrated therein are contemplated as would normally occur to one skilled in the art to which the disclosure relates. All limitations of scope should be determined in accordance with and as expressed in the claims. Whether a term is capitalized is not considered definitive or limiting of the meaning of a term. As used in this document, a capitalized term shall have the same meaning as an uncapitalized term, unless the context of the usage specifically indicates that a more restrictive meaning for the capitalized term is intended. However, the capitalization or lack thereof within the remainder of this document is not intended to be necessarily limiting unless the context clearly indicates that such limitation is intended. As used herein "original biometric representation" generally refers to encoded features obtained by a biometric feature extraction module after processing a biometric sample. In various examples, an original biometric representation is a collection of data points or a fixed-size vector derived from a biometric scan of a person's face, palm, iris, finger, and other body parts, as well as multi-modal combinations of body parts (e.g., such as a combination of data point collections for four fingers). Biometric scans, including, but not limited to, palm, finger, facial, and multi-modal scans, may be algorithmically processed and encoded into the original biometric representations described herein, and the original biometric representations may be transformed via an Evergreen Hash Transform (EGH) into secure, anonymized vector representations (referred to, in some embodiments, as "EgHashes"). According to one embodiment, the transformed representations described herein function as pseudonymous identifiers according to the ISO/IEC 24745 standard on biometric information protection. As used herein "Evergreen Hash (EGH) transform" generally refers to a particular transformation function for transforming biometric representations into secure, cancellable, irreversible biometric representations. In various embodiments, the EGH transform is a type of biometric template protection (BTP) scheme that may be a cancellable biometrics system, for example, as described in the ISO/IEC 24745 standard on biometric information protection (incorporated by reference herein). According to one embodiment, the EGH transform is an anonymization function (e.g., as opposed to a pseudonymization function). It will be understood by one of ordinary skill in the art that the systems and processes described herein are not limited to biometrics, but may be used in other applications as will become apparent. As used herein "EgHash" generally refers to an output of an EGH transform. According to one embodiment, an EgHash is a vector serving as an anonymized vector representation of a subject's biometric data. In some embodiments, the EgHash functions as a pseudonymous identifier (PI) according to the ISO/IEC 24745 standard, incorporated by reference herein. 
In some embodiments, the anonymized vector representation may be used in pseudonymous verification and identification processes while providing enhanced security due to the anonymization processes used in the representation production. As used herein "biodata" generally refers to data representing a subject's anatomical features (e.g., such as a biometric representation described herein) and "non-biodata" generally refers to a key (and, in some embodiments, other information, such as metadata). In various embodiments, an EGH transform blends biodata and non-biodata to generate an EgHash. In at least one embodiment, relative dimensions of the biodata and non-biodata are proportional to the level of security provided by the EGH transform. In one example, if biodata (e.g., a facial biometric representation) includes 128 floating point numbers, non-biodata (e.g., a key) is selected to include 128 floating point numbers. As used herein "auxiliary data" generally refers to a portion of data received upon enrolling a subject or person (e.g., receiving and transforming an original biometric scan or representation). In various embodiments, auxiliary data is used to generate an anonymized vector representation of biometric data (e.g., a biometric scan or representation). According to one embodiment, auxiliary data is used to generate a pseudonymous identifier (PI) (e.g., according to the ISO/IEC 24745 standard) in the form of an EgHash. In at least one embodiment, auxiliary data includes, but is not limited to, a key, a seed, or one or more EGH transformation parameters. In various embodiments, a "seed" is a random number used to initialize a random number generator that generates the transformation parameters and a key (e.g., non-biodata) (a sketch of one possible derivation is provided below). According to one embodiment, the seed may be derived from a password or PIN, and may be encrypted and stored, for example, as a digital token or QR code. In at least one embodiment, a key and EGH transformation parameters are transparent (e.g., viewable) to a developer using the present systems and processes. In one or more embodiments, auxiliary data, or at least a key and EGH transformation parameters thereof, are stored in a configuration file accessible by a developer. Overview Aspects of the present disclosure relate generally to systems and methods for encryption via performing one-way transforms over biometric data. To protect biometric data against theft and misuse, cryptographic methods such as encryption and hashing often cannot be used directly because biometric features contain real-valued data points which are fuzzy. As a result, to secure biometric data, several BTP schemes exist. Unfortunately, in making a biometric template more secure, previous BTP schemes can degrade the recognition performance, which is the case for transformation-based BTP schemes. Other previous approaches are based on homomorphic encryption; however, the output representations of such approaches are not easily revocable, for example, in instances where the representation is compromised by an attacker. Moreover, the homomorphic encryption schemes are computationally expensive and scale poorly for biometric identification (which involves one-to-many comparisons). As a result, previous approaches are not suitable for use in high-throughput biometric identification operations, for example, due to lack of compactness, revocability, and irreversibility, amongst other drawbacks. 
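As referenced above, a minimal sketch of how the auxiliary data (a seed derived from a password or PIN, a key of non-biodata, and EGH transformation parameters) might be produced is shown below. The specific choices, SHA-256 for seed derivation and NumPy's default random generator, are assumptions for illustration rather than the prescribed implementation.

```python
import hashlib

import numpy as np


def derive_auxiliary_data(password: str, b: int, n: int, x: int):
    """Derive a (seed, key, projection) triple from a password or PIN.

    b: dimension of the biometric representation (biodata)
    n: dimension of the key (non-biodata) padding vector
    x: output dimension of the EgHash, with x < b + n
    """
    seed = int.from_bytes(hashlib.sha256(password.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)             # random number generator initialized by the seed
    key = rng.standard_normal(n)                  # non-biodata key vector
    projection = rng.standard_normal((x, b + n))  # EGH transformation parameters
    return seed, key, projection
```

In practice the seed could be encrypted and stored as a digital token or QR code, as noted above, while the key and transformation parameters could be kept in a configuration file accessible to a developer.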
According to one embodiment, the EGH process includes transforming an original biometric scan into a compact, anonymized vector representation, referred to herein generally as an "EgHash", that can be indexed and compared in an expedient manner (i.e., in logarithmic time) to support one-to-many (1:N) matching, including identification, watch-list, and database deduplication operations. In various embodiments, the transformation operations are classified as anonymization; and as such the resultant output is non-personally identifying information (PII) that conforms to the General Data Protection Regulation (GDPR) and other similar privacy governance requirements. In one or more embodiments, even though the transformation is irreversible and cancellable, the output EgHash still retains its original purpose as an effective (but secure) biometric representation. In at least one embodiment, by using multiple EgHashes, each only a fraction of the original size by reason of lossy compression, followed by score-level fusion, biometric matching accuracy is maintained at levels comparable to (and, in some embodiments, in excess of) biometric matching accuracy achieved with original biometric scans. According to one embodiment, EgHashes are irreversible and cannot be reconstructed to their original form. In one or more embodiments, this irreversibility is warranted by the principle of lossy data compression (also known as irreversible compression) used to generate the EgHashes. In various embodiments, the lossy approach uses inexact approximations and partial data discarding to represent the biometric information derived from a biometric scan. In various embodiments, EgHashes are revocable and, thus, can be discarded at any time. For example, if a breach of EgHashes for an organization is suspected, the organization-level EgHashes can be cancelled and replacement EgHashes can be reissued by re-enrolling the organization's subjects or from securely and remotely stored original biometric templates or lossless EgHashes. In one or more embodiments, EgHashes support 1:1 and 1:N deduplication operations, and matching algorithms enable 1:N identification/deduplication to be performed efficiently (e.g., in logarithmic time). According to one embodiment, EgHashes demonstrate significantly smaller template size compared to original biometric representations, for example, because the EGH transformation results in dimension reduction of the biometric representation. In at least one embodiment, EgHashes maintain high discrimination power for biometric matching processes while providing increased security over previous BTP and other biometric approaches. In one example, the EGH transform takes an input vector which is a biometric representation of b floating-point numbers. The biometric representation is then padded with another vector (referred to as non-biodata, a key, or nonce) that is randomly generated based on a random seed in order to increase the overall dimension to b+n numbers. The dimension of the padding vector, n, is defined in relation to the original input vector, e.g., by a factor such as 2 (i.e., two times longer), 1 (same size), ½ (half the size), ¼, ⅛, etc. The higher n is, the more "noise" is added. 
In certain embodiments, the random seed is generated from a secret (a "key" that can be selected by the enterprise deploying the EGH technology) using industry-standard hashing algorithms, such as, for example, SHA256, SHA384, and SHA512 as defined in FIPS 180-2, as well as RSA's MD5 algorithm. The concatenated vector is then subjected to a matrix projection (multiplication), giving as output x numbers, and, in some embodiments, a permutation, to create a projected representation referred to as an EgHash (a sketch of this padding-and-projection process follows this passage). The output dimension of the projected representation, x, is defined in relation to the dimension of the input vector, by a factor of at most 1 (to ensure information loss), but can be smaller, e.g., ½, ⅓, ¼, or ⅛. The smaller x is, the more drastic the reduction in biometric representation size. Because x<b+n, it follows that the information is irreversibly lost, meaning that any attempt to perfectly reconstruct the original vector is futile. Furthermore, the inequality x<b+n suggests two important strategies to increase security: increasing the dimension of the non-biodata (key), n, and reducing the final output dimension, x (i.e., the final EgHash size). In various embodiments, while the former strategy increases the amount of noise injected into the final EgHash, the latter strategy determines the amount of information that is deliberately lost, which is necessary to ensure irreversibility. According to one embodiment, for each biometric modality, different dimensions of x and n are chosen to maintain accuracy whilst guaranteeing irreversibility and reducing the biometric representation size. In one or more embodiments, a data set is used to empirically estimate the actual accuracy obtained for a given configuration of x and n before the configuration is used in the EGH process. In at least one embodiment, lossy data compression and its irreversibility are underpinned by rate-distortion theory, which is a branch of information theory. In various embodiments, the theory remains the same despite potential differences in the objective of the theory's typical application (e.g., to retain as much useful information as possible in the data compression case) versus an objective to retain only biometrically-relevant information. In various embodiments, the EGH transform includes additional transformative steps described herein for further increasing the security of output EgHashes. In various embodiments, in addition to primary authentication data and a "header" with optional and limited unencrypted data, the EgHashes discussed herein can store any secondary authentication data (such as Know Your Customer (KYC), Anti-Money Laundering (AML), and other data) and can embed "pivot points" to external data. In one or more embodiments, the storage architecture for the primary authentication fields may be fixed to ensure compatibility and search-ability within and (potentially) across institutions, and additional fields may be unique to each institution with specific permissions attributed. According to various aspects of the present disclosure, each EgHash represents a "digital-DNA" that may include a global and lifelong authentication for an individual, regardless of the evolution or proliferation of authentication techniques. 
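A minimal sketch of the padding-and-projection transform described above follows, assuming NumPy and illustrative dimensions (b = 128, n = 128, x = 64). The seed handling, the Gaussian distributions, and the final permutation shown here are assumptions for illustration, not a definitive implementation of the EGH transform.

```python
import hashlib

import numpy as np


def eg_hash(biometric: np.ndarray, secret: str, n: int, x: int) -> np.ndarray:
    """Pad b biodata values with n random non-biodata values, project to x < b + n
    dimensions, and permute, producing a lossy, irreversible vector."""
    b = biometric.shape[0]
    assert x < b + n, "output must be smaller than b + n so that information is irreversibly lost"
    # Random seed derived from a secret using an industry-standard hash (SHA-256 here).
    seed = int.from_bytes(hashlib.sha256(secret.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    key = rng.standard_normal(n)                     # non-biodata ("key"/nonce) padding vector
    concatenated = np.concatenate([biometric, key])  # b + n numbers
    projection = rng.standard_normal((x, b + n))     # transformation parameters
    projected = projection @ concatenated            # lossy: dimension reduced to x numbers
    return rng.permutation(projected)                # optional permutation step


# Example with illustrative dimensions: b = 128, n = 128 (same size), x = 64 (half of b).
probe = np.random.default_rng(0).standard_normal(128)
eghash = eg_hash(probe, secret="enterprise-selected-key", n=128, x=64)
```

Because the projection discards b + n - x dimensions, the original concatenated vector cannot be perfectly reconstructed from the output, which mirrors the irreversibility argument above.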
In particular embodiments, not just the additional data stored, but also the methodology used for biometric authentication can be changed over time while preserving the architecture of processors and registries used for producing, processing, and storing the EgHashes, thereby guarding against system and data redundancy. In other words, aspects of the present disclosure allow for generation and implementation of EgHashes in new and evolving techniques in a seamless, fast, and inexpensive manner because the EgHashes and associated architecture are agnostic to the biometric matching solution into which they are integrated. In various embodiments, the hashing or transformation-based BTP techniques discussed herein may be server and/or blockchain-based, and may allow institutions to implement multiple and evolving authentication techniques into a standard record that can serve users based on the market, risk level, and circumstances. For example, various identification cards/resources such as driver's licenses, state ID cards, federal ID's, etc., may be accepted by various institutions based on predetermined standards at those institutions, and these forms of identification can be used for generating or verifying bio-hashes. In some embodiments, an institution may only accept EgHashes generated by its own systems, or it may enter into mutual recognition agreements and/or data sharing with other institutions with acceptable standards, whether for fraud detection, customer mobility or interoperability. In one example of the present system, a banking institution implements a biometric matching service for verifying customer (e.g., subject) identity to provide access to detailed account information and other PII. In previous approaches, the banking institution may use unencrypted and/or untransformed representations of biometric scans to verify user identity; however, such approaches leave the banking institution and customers vulnerable to attack because the biometric information, if stolen, may be readily used to illicitly access the secure information. In the same example, with aspects of the present system, the banking institution may transmit original biometric scans or representations to a hash controller that applies the EGH transform to generate lossy EgHashes that are irreversible and remain uniquely associated with the corresponding subjects. Continuing with the same example, a banking customer uses a user account in the banking institution's application to provide a biometric facial scan that is transmitted (along with a unique key associated with the customer) to a trusted hash processor. The trusted hash processor: 1) generates an EgHash based on the biometric facial scan and the unique key; 2) identifies a stored EgHash associated with the customer based on the unique key; and 3) performs a 1:1 comparison to compute a similarity metric between the generated and stored EgHashes. In the same example, the trusted hash processor determines that the computed similarity score satisfies a predetermined similarity threshold and, in response, transmits a notification to the banking application that the customer's identity is verified. Based on the notification of positive verification, the customer is granted access to the portions of the banking application containing the PII. In the same example, an attacker obtains the EgHash of the customer. 
In previous approaches a theft of a biometric template may constitute an irrecoverable loss of PII and leave the victim permanently vulnerable to subsequent attacks, such as identity theft. In contrast, because the EgHash of the customer is non-PII, the banking institution simply cancels the EgHash and re-enrolls the customer into the system by generating a new EgHash and, thus, the PII of the customer and the integrity of the system are not compromised. Exemplary Embodiments Referring now to the figures, for the purposes of example and explanation of the fundamental processes and components of the disclosed systems and methods, reference is made toFIG.1, which illustrates an exemplary, high-level overview of one embodiment of the present system100. As will be understood and appreciated, the exemplary system100shown inFIG.1represents merely one approach or embodiment of the present system, and other aspects are used according to various embodiments of the present system. According to one embodiment, the system100includes a controller environment101, a trusted environment111, and a semi-trusted environment117. In one or more embodiments, each environment is configured to perform only certain actions (e.g., based on policies, capabilities, and data provided to each environment). In one or more embodiments, the controller environment101, trusted environment111, and semi-trusted environment117each include one or more hash registries103configured to store EgHashes. In various embodiments, the one or more hash registries103include a plurality of secure databases configured to store transformed biometric representations in the form of reversible or irreversible EgHashes. According to one embodiment, the trusted environment111and semi-trusted environment117include only lossy-transform hash registries103that are synched with lossy-transform registries103of the trusted environment101. In one or more embodiments, each of the hash registries103is associated with a specific application. In one example, the controller environment101includes first hash registry103associated with a first biometric matching application and a second hash registry103associated with a second biometric matching application. In the same example, the first and second hash registries103may include the same subjects; however, EgHashes corresponding to the same subject are unrelated between the registries and, thus, if an attacker compromised the first hash registry103, the attacker would not be able to utilize the EgHashes therein to access services of the second biometric matching application. In one or more embodiments, the controller environment101is configured to perform functions including, but not limited to: 1) enrollment of subjects into the system100via processing of biometric information and generation and transformation of biometric representations into lossy or lossless EgHashes; 2) verification of subject identity based on comparisons between a probe EgHash and associated EgHashes stored in the controller environment101; 3) identification of subject identity based on comparisons between a probe EgHash and EgHashes stored in the controller environment101; 4) database deduplication; and 5) reverse transformation of EgHashes into source biometric representations. In one or more embodiments, the controller environment101includes a controller processor105configured to transform biometric representations into EgHashes by performing EGH transforms as described herein. 
In at least one embodiment, the controller processor105generates transformation parameters (e.g., for use in EGH transformation) based on a seed generated by a seed generator107. According to one embodiment, the controller processor105is configured to perform both forward transforms (to obtain EgHashes) and reverse transformations (to obtain source biometric representations) in either a lossy or lossless manner. In one or more embodiments, the controller processor105performs lossless transformations on original biometric representations (e.g., from a biometric or hash registry) such that only a transformed version of the biometric representations is stored in the system100. In at least one embodiment, the controller processor105performs EGH transformation on a hash registry103of unmodified biometric representations to generate a new hash registry103of EgHashes for use by a specific application. In various embodiments, the trusted environment111is configured to perform actions including, but not limited to: 1) processing biometric information including EgHashes and original biometric scans and representations; 2) receiving EgHashes and transformation parameters (e.g., excluding a seed used to determine the parameters) from the controller environment101; 3) verification of subject identity; 4) determining subject identity; 5) database deduplication; and 6) performing forward, lossy EGH transforms to generate EgHashes from biometric representations based on received transformation parameters. In at least one embodiment, the trusted environment111is configured to be incapable of performing particular functions including, but not limited to, storing historical copies of hash registries103and performing reverse transformations on EgHashes. According to one embodiment, the semi-trusted environment117includes a semi-trusted processor configured to perform lossy EGH transformation based on auxiliary data generated from a seed generator107. In at least one embodiment, the semi-trusted environment117is configured to perform actions including, but not limited to: 1) processing biometric information including EgHashes and original biometric scans and representations; 2) receiving EgHashes and transformation parameters (e.g., excluding a seed used to determine the parameters) from the controller environment101; 3) verification of subject identity; 4) determining subject identity; 5) performing forward, lossy EGH transforms to generate EgHashes from biometric representations based on received transformation parameters; and 6) database deduplication. In various embodiments, the semi-trusted environment117is configured to be incapable of reversing EGH transformations or storing historical copies of hash registries103. In at least one embodiment, the semi-trusted environment117is configured to use encrypted transformation parameters from the controller environment101. In various embodiments, the controller environment101includes a server109configured to communicate via a network102with a server115of the trusted environment111and a server121of the semi-trusted environment117.
In at least one embodiment, the server109, server115, and server121are operative to receive biometric scans and other information from sources including biometric scanners, electronic communication devices (such as smartphones), and software applications configured for secure communication via the network102. In one example, the server109receives and transmits hash registries103and metadata (such as identifiers) from and to the trusted environment111or semi-trusted environment117. The network102includes, for example, the Internet, intranets, extranets, wide area networks (WANs), local area networks (LANs), wired networks, wireless networks, or other suitable networks, etc., or any combination of two or more such networks. For example, such networks can include satellite networks, cable networks, Ethernet networks, and other types of networks. According to one embodiment, the network102is representative of a plurality of networks that are each associated with a specific trusted environment111or semi-trusted environment117. In at least one embodiment, the one or more hash registries103include one or more secure databases that store transformed biometric representations as reversible (lossless) or irreversible (lossy) EgHashes. As will be understood by one having ordinary skill in the art, the steps and processes shown inFIG.2(and those of all other flowcharts and sequence diagrams shown and described herein) may operate concurrently and continuously, are generally asynchronous and independent, and are not necessarily performed in the order shown. FIG.2shows an enrollment process200for receiving biometric information, such as a facial scan, generating a biometric representation of the information, and transforming the biometric representation into an anonymized vector representation (e.g., an EgHash). In one or more embodiments, the process200is configured to transform the biometric representation into a pseudonymous vector representation. At step201, biometric information is received. The biometric information can include a scan of a subject, such as a facial scan, palm scan, multi-modal finger scan, or a scan of another body portion, or a biometric representation previously generated from a scan of a subject. According to one embodiment, an identifier is associated (and, in some instances, concatenated) with the biometric information for the purposes of tracking the information throughout the process200and organizing a resultant EgHash. In one or more embodiments, the system100may perform actions to confirm that a biometric scan represents a live individual and not, for example, a scanned static image of an individual (that is, one form of presentation attack). In at least one embodiment, proof of liveness (technically known as presentation attack detection) is determined as described in U.S. patent application Ser. No. 15/782,940, filed Oct. 13, 2017, entitled "SYSTEMS AND METHODS FOR PASSIVE-SUBJECT LIVENESS VERIFICATION IN DIGITAL MEDIA," which is incorporated herein by reference as if set forth in its entirety. At step201, a seed generator107randomly generates a seed value that is associated with one application. According to one embodiment, the seed value is used to generate auxiliary data including a key and one or more transformation parameters used to perform the EGH transform. As noted, the received biometric information can include a biometric scan or a biometric representation encoded from the biometric scan.
In some embodiments, the system100receives and encodes a biometric scan into a biometric representation. In at least one embodiment, the encoding is performed by a deep convolutional neural network acting as an encoder. In various embodiments, transformation parameters used for encoding a scan result in the application of one or more modifications (e.g., scales, shifts, and/or rotations) being applied to the scan prior to its encoding into a vectorized biometric representation. In some embodiments, the steps of encoding are not performed, for example, when a biometric representation of a biometric scan is received. In at least one embodiment, the transformation parameters include scale and shift modifications that are applied to a received or system-generated biometric representation. In one or more embodiments, the modification of a biometric scan or representation increases the security of the system100against attacks because the number and complexity of steps required to reverse the resultant EgHashes are increased. At step203, the biometric representation, as biodata, is concatenated with the generated key, as non-biodata. In at least one embodiment, the concatenated bio- and non-bio representation is randomly permuted (e.g., based on the random seed or a second generated random seed from another source). As used herein, permutation generally refers to a random rearranging of values in the concatenated representation. In one or more embodiments, the key vector is stored in a hash registry103and a copy of the key vector is provided to the subject being enrolled into the system100. In one embodiment, the key vector is provided by the subject in future biometric verification or identification processes to uniquely associate the subject with a corresponding EgHash or set of EgHashes (e.g., such as a set of EgHashes for a particular organization). In another embodiment, a common key vector is provided by the system so that each subject does not need to carry keys that are unique to them. At step205, in one or more embodiments, a projected biometric representation is generated by randomly projecting the permuted biometric representation using matrix multiplication based on a randomly generated matrix. As used herein, random projection generally refers to techniques for reducing dimensionality of a set of points in a Euclidean space, for example, as described by the Johnson-Lindenstrauss Lemma for embedding high dimensional spaces into low dimensional spaces such that average distances between points in the spaces remain similar between the higher and lower dimensions. According to one embodiment, the projection process is performed such that the dimension of the projected biometric representation is equal to the dimension of the permuted representation, thereby resulting in lossless transformation. In alternate embodiments, the projection process is performed such that the dimension of the projected biometric representation is less than the dimension of the permuted representation, thereby resulting in lossy transformation because some amount of biometric-related information is lost permanently. According to one embodiment, EgHashes produced using lossy transformation are irreversible and, thus, are very robust to reconstruction attacks where an attacker attempts to reverse a transformation to reconstruct an original biometric representation or scan.
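The difference between lossless (equal-dimension) and lossy (reduced-dimension) projection can be illustrated with the short sketch below. It is an illustration only, using a pseudo-inverse as a stand-in for an attacker's best-effort reconstruction; the dimensions are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
d = 192                                    # dimension of the permuted (bio + key) vector
v = rng.standard_normal(d)

# Lossless: square, invertible projection -- same output dimension, reversible with the matrix
P_lossless = rng.standard_normal((d, d))
h_lossless = P_lossless @ v
v_back = np.linalg.solve(P_lossless, h_lossless)
print(np.allclose(v, v_back))              # True: the original vector is recoverable

# Lossy: output dimension x < d -- information is permanently discarded
x = 96
P_lossy = rng.standard_normal((x, d))
h_lossy = P_lossy @ v
v_guess = np.linalg.pinv(P_lossy) @ h_lossy   # best-effort reconstruction by an attacker
print(np.allclose(v, v_guess))             # False: reconstruction cannot reach 100% fidelity
print(np.linalg.norm(v - v_guess))         # residual error caused by the lossy projection
```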
In one example, an attacker possessing the full knowledge of the transformation algorithm, the transformation parameters, and/or the seed which the former depend—collectively known as auxiliary data—is still unable to reconstruct the original biometric scan or representation from an EgHash with 100% fidelity due to the irreversible loss of biometric information that occurred when generating the EgHash. At step207, the projected biometric representation is randomly permuted to produce an EgHash (a final transformed version of the biometric representation). At step207, in at least one embodiment, the process200includes checking a hash registry103to determine if a duplicate of the EgHash is stored therein. According to one embodiment, the process200only proceeds to step209upon determining that there are no duplicates of the EgHash. At step209, the EgHash is stored in one or more hash registries103based on the key and/or a specific application for which the original biometric scan was obtained. For example, if the original biometric scan is associated with a healthcare group, the EgHash may be stored in a hash registry103of EgHashes processed by (or on behalf of) the healthcare group. In at least one embodiment, if a common key was used to generate the EgHash, the common key is stored in the hash registry103(or another database in the controller environment101). In one or more embodiments, during implementation of the present EGH systems and processes, a user or institution (e.g., in control of the biometric application) may use a common key or a set of unique keys for the EGH transform process. In one example, a set of unique keys is used when subjects of the institution can carry a physical token, such as a QR code, or a digital token stored on a storage device for storing a unique key and/or a seed value associated with the subject. At step211, in various embodiments, a key is generated and associated with the EgHash, and the key is transmitted to the subject (or application) that provided the original biometric scan. In one or more embodiments, the key is used to authenticate subsequent EgHashes for the subject and to retrieve the stored EgHash for verification, identification, deduplication, and other purposes. According to one embodiment, the EgHash is further transformed according to the process200using a common key of a specific application to generate an application-specific EgHash. In some embodiments, the original EgHash is stored in a hash registry103in the controller environment101as a backup for restoration purposes, and the secondary EgHash is transmitted to a trusted environment111or a semi-trusted environment117for storage in a hash registry103thereof. In various embodiments, an EgHash generated via lossless transformation is stored in a back-up hash registry103in the controller environment101. According to one embodiment, the back-up hash registry103eliminates the need for a trusted environment111or semi-trusted environment to store an original, plaintext version of biometric representations. Consequently, according to this embodiment, the system100does not store any original, plaintext version of biometric representations. In one or more embodiments, in common key schemes, biometric scans of all subjects for a particular application are transformed using the same common key designated for the particular application. 
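The duplicate check and application-specific storage of steps207-209(together with the common-key scheme just described) might look like the following sketch; the similarity tolerance, registry layout, and application name are assumptions for illustration.

```python
import numpy as np

def is_duplicate(new_hash, registry, tol=1e-6):
    """Return True if an effectively identical EgHash is already stored (step 207 check)."""
    return any(np.linalg.norm(new_hash - stored) < tol for stored in registry)

def enroll(new_hash, app_registries, application):
    """Store the EgHash in the registry of the application it was generated for (step 209),
    but only if the duplicate check passes."""
    registry = app_registries.setdefault(application, [])
    if is_duplicate(new_hash, registry):
        return False                       # duplicate found: do not store
    registry.append(new_hash)
    return True

app_registries = {}
h = np.random.default_rng(7).standard_normal(144)
print(enroll(h, app_registries, "healthcare-group"))   # True  -- stored in that application's registry
print(enroll(h, app_registries, "healthcare-group"))   # False -- duplicate rejected
```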
In various embodiments, the common key is provided in subsequent identification, verification, and other processes to identify the stored EgHashes of a particular application. In one example, a subject is associated with multiple EgHashes for multiple applications, each application having its own common key for generating EgHashes thereof. In the same example, the EgHashes are dissimilar between applications (e.g., even though they are representative of the same subject) and, thus, if an EgHash for one application is compromised, the compromised EgHash is not usable in other applications. In various embodiments, the original EgHash (or any derivative EgHash thereof) is transformed using a unique key provided by the subject associated therewith. In one example, when a biometric scan of a subject is obtained, the subject provides a unique key that is not stored in the system100. According to one embodiment, the unique key is carried by the subject in a digital format, such as a QR code or other digital form for authentication. In one or more embodiments, the unique key serves as a second factor in a two-factor authentication, a first factor being the subject's biometric scan. In various embodiments, as discussed herein, an EgHash may be further transformed any number of times in multiple layers in succession (e.g., analogous to encrypting a secret message multiple times) for added security. In one or more embodiments, EGH transform chains include an initial lossy transformation (to prevent original reconstruction) followed by any number of additional forward transformations. According to one embodiment, reverse transformation of each link in an EGH transform chain requires the use of the same auxiliary data (e.g., seed, key, and parameters) utilized in the previously performed forward transformation. In various embodiments, without the auxiliary data, reverse transformation would require computationally expensive brute force or hill-climbing techniques. In one example, an original biometric representation of 128 bytes (e.g., 128 dimensions) is generated for a user and transformed in a lossy manner with a unique 64 byte user key to a user EgHash of 192 bytes. In the same example, the user EgHash is transformed in a lossy or lossless manner with a common 8-byte application key to an application EgHash of 200 bytes. Continuing the same example, the application EgHash is transformed in a lossy or lossless manner with a common 8 byte organization key to an organization EgHash of 208 bytes. In the same example, the final organization EgHash cannot be recovered to the user EgHash without first recovering the application EgHash. Because recovery to each iteration of the EgHash requires use of the same seed (e.g., and transformation parameters derived therefrom) used to generate the corresponding EgHash, any attempts to reverse transform any version of the EgHash by brute force or hill-climbing is computationally expensive and thus the EgHashes are substantially secure to reconstruction attacks. In the same example, even if an attacker obtains a seed and reverse transforms a subsequent iteration of the original EgHash, the compromised seed would be useless in further reverse transforming because each transform occurs with a different seed. Also in the same example, because the initial transformation was lossy, the original biometric representation or scan cannot be reconstructed. 
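The layered transform chain in the example above can be sketched as repeated application of a concatenate-permute-project step, each layer with its own seed and key. The sizes follow the example in the text (128, then 192, 200, and 208), treating bytes as vector dimensions purely for illustration; the layer_transform helper is hypothetical.

```python
import numpy as np

def layer_transform(vec, key, seed, out_dim):
    """One link in an EGH transform chain: concatenate with a key, permute, project."""
    rng = np.random.default_rng(seed)
    v = np.concatenate([vec, key])
    v = v[rng.permutation(v.size)]
    return rng.standard_normal((out_dim, v.size)) @ v

rng = np.random.default_rng(0)
bio      = rng.standard_normal(128)            # original biometric representation (128 dims)
user_key = rng.standard_normal(64)             # unique 64-dimension user key
app_key  = rng.standard_normal(8)              # common 8-dimension application key
org_key  = rng.standard_normal(8)              # common 8-dimension organization key

user_hash = layer_transform(bio,       user_key, seed=101, out_dim=192)  # user EgHash
app_hash  = layer_transform(user_hash, app_key,  seed=202, out_dim=200)  # application EgHash
org_hash  = layer_transform(app_hash,  org_key,  seed=303, out_dim=208)  # organization EgHash

# Reversing any link requires the seed (and derived parameters) used for that link; per the
# text, choosing a lossy first layer (output smaller than its input) additionally prevents
# full reconstruction of the original representation.
print(user_hash.shape, app_hash.shape, org_hash.shape)   # (192,) (200,) (208,)
```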
In various embodiments, because EgHashes may be useful only for an intended application (e.g., as a result of application-specific transform parameters), cross-application or cross-site attacks are not permitted. In one example, an attacker who illegitimately obtains a transformed EgHash for a first application and inserts the EgHash into a second application will find that the inserted EgHash fails to match the victim's identity because of the different seed (e.g., and thus different key and transformation parameters) used in generating the EgHashes. In the same example, even if the attacker is able to obtain the key used for forward and reverse transformation of the EgHash, the recovered EgHash or biometric representation will still have less than 100% fidelity due to the irreversible loss of information that occurred in lossy transformation of the EgHash (or biometric representation). FIG.3shows a subject identification process300for receiving biometric information, transforming a biometric representation generated therefrom into a secure EgHash, and identifying a subject based on comparing the EgHash to a plurality of stored EgHashes. At step301, biometric information is received from an application operative for data communication with the system100. In one example, the system100receives a facial scan from a biometric scanner. According to one embodiment, the biometric information is received at a trusted processor113or semi-trusted processor119configured to transform the biometric representation into an EgHash, before the representation is discarded or purged from the server memory. In various embodiments, the trusted processor113or semi-trusted processor119receives auxiliary data from a controller environment101and transforms the biometric information into an EgHash, for example, by performing the process200(e.g., however, the resultant output is not used to enroll the associated subject). In one or more embodiments, the trusted processor113or semi-trusted processor119includes a trusted or semi-trusted hash registry103that is synched with a control hash registry103of the controller environment101. In at least one embodiment, the synchronizing populates the trusted or semi-trusted hash registry with stored EgHashes associated with an operator of the trusted processor113or semi-trusted processor119. In one example, an operator of a trusted processor113is a banking institution and, thus, a trusted hash registry103is synched with a control registry103to provide copies of EgHashes of subjects associated with the banking institution and enrolled in the system100. At step303, a 1:N comparison process is performed between the generated probe EgHash and the EgHashes stored in the synched hash registry103(e.g., N representing the quantity of synched EgHashes). In one or more embodiments, the 1:N comparison process includes calculating an L2norm metric, Euclidean distance, or other distance metric between the generated EgHash and each of the synched EgHashes. In various embodiments, an output of the 1:N comparison process is a set of similarity scores describing the similarity between the generated EgHash and each of the synched EgHashes. According to one embodiment, a top-ranked similarity score of the set of similarity scores is identified. At step305, the top-ranked similarity score is evaluated to determine if it satisfies a predetermined similarity threshold. In at least one embodiment, the threshold is determined by statistical techniques. 
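Steps303-305(the 1:N comparison against the synched registry and the threshold test on the top-ranked score) might be sketched as follows. The negative-Euclidean-distance similarity and the example threshold are illustrative assumptions; as discussed next, the threshold is in practice chosen empirically between the mated-pair and non-mated-pair score distributions.

```python
import numpy as np

def similarity(a, b):
    # One possible similarity score: negative Euclidean distance (larger is more similar)
    return -np.linalg.norm(a - b)

def identify(probe_hash, synced_registry, threshold):
    """1:N identification (steps 303-305): score against every synced EgHash,
    take the top-ranked score, and compare it to a predetermined threshold."""
    scores = [(subject_id, similarity(probe_hash, stored))
              for subject_id, stored in synced_registry.items()]
    subject_id, top_score = max(scores, key=lambda s: s[1])
    if top_score >= threshold:
        return subject_id          # positive identification -> steps 309/311
    return None                    # no match -> step 307 failure actions

rng = np.random.default_rng(1)
registry = {f"subject-{i}": rng.standard_normal(144) for i in range(1000)}
probe = registry["subject-42"] + 0.01 * rng.standard_normal(144)   # noisy probe of an enrolled subject
print(identify(probe, registry, threshold=-1.0))                   # 'subject-42' under this toy threshold
```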
In various embodiments, the threshold value lies between the expected value of the similarity score under mated-pair (same-subject) comparisons as the upper bound and the expected value of the similarity score under non-mated-pair (different-subject) comparisons as the lower bound. According to one embodiment, since the range of feasible threshold values gives different false match (acceptance) and false non-match (rejection) rates, the exact value is determined empirically; e.g., thresholds at false acceptance rates (FAR) of 0.1%, 1%, and 5% are typically used. According to one embodiment, the similarity threshold is a programmable benchmark that, if met, may result in the system100determining an identification match and, if not met, may result in the system100determining no match. In various embodiments, the similarity threshold is used to reduce a likelihood of false positive identification (e.g., disparate subjects being identified as the same subject) whilst maximizing true positive identification (i.e., where the probe subject is indeed in the gallery). In one or more embodiments, upon determining that the top-ranked similarity score does not satisfy the similarity threshold, the process300proceeds to step307. In at least one embodiment, upon determining that the top-ranked similarity score satisfies the similarity threshold, the process300proceeds to step309. At step307, one or more predetermined failure actions are taken in response to the failure to satisfy the similarity threshold. In various embodiments, the one or more predetermined failure actions include, but are not limited to, sending an alert, ceasing one or more processes associated with the subject (e.g., processes occurring in a specific application), logging the failed identification, and other actions. In one example, an alert is transmitted to the user (e.g., to a device or biometric scanning system associated therewith) indicating the identification failure. In another example, the system100causes a user display to render a message indicating that the scanned subject's identity cannot be determined. In one or more embodiments, the EgHash of the probe subject is stored in a hash registry103configured for storing EgHashes for which identification processes failed to determine a match. At step309, a notification is transmitted to the user confirming the identification of the scanned subject. In one example, the notification confirms that the subject is affiliated with a particular organization in control of the biometric scanner. In various embodiments, the positive identification of the subject is recorded in a database used, for example, to document the providing of privileges to positively identified subjects. At step311, one or more predetermined success actions are taken in response to the satisfaction of the similarity threshold. In various embodiments, the one or more predetermined success actions include, but are not limited to, providing one or more privileges based on the positive identification, transmitting an alert, automatically activating one or more processes, logging the successful identification, and other actions. In one or more embodiments, the one or more privileges include, but are not limited to, access to a physical environment, access to a computing environment, processing of a transaction, and other privileges.
In one example, upon satisfaction of the similarity threshold, the system100automatically transmits a signal to a computing environment that, in response, causes a locking mechanism to disengage. In at least one embodiment, a privilege policy list is stored in the system100and includes identifiers for enrolled subjects of the organization as well as privileges provided to each subject. According to one embodiment, the one or more privileges provided to the subject are determined based on the privileges stored in the privilege policy list and associated with the subject. In one example, the privilege policy list is used to provide varying levels of access to subjects of an organization based on factors such as rank, seniority, experience, and other factors. As would be understood by one of ordinary skill in the art, the process300can be performed in a modified manner such that subjects for which the similarity threshold is not met (e.g., identification fails) are provided privileges, while subjects for which the similarity threshold is met are refused privileges. FIG.4shows a subject verification process400for receiving biometric information, transforming a biometric representation generated therefrom into a secure EgHash, and verifying the identity of the subject based on comparing the EgHash to a stored EgHash. At step401, biometric information is received from an application operative for data communication with the system100. In one example, the system100receives a subject's facial scan and a unique user key from an electronic device controlled by the subject. According to one embodiment, the biometric information is received at a trusted processor113or semi-trusted processor119configured to transform the biometric information into an EgHash. In various embodiments, the trusted processor113or semi-trusted processor119receives auxiliary data from a controller environment101and transforms the biometric information into an EgHash, for example, by performing the process200(e.g., however, the resultant output is not used to enroll the associated subject). In one or more embodiments, the trusted processor113or semi-trusted processor119includes hash registries103that are synched with controller hash registries103as described herein. At step403, a stored EgHash is retrieved from a synched hash registry103based on the unique user key. In some embodiments, a unique user key is used to retrieve multiple EgHashes, each EgHash being associated with the same subject. As used herein, a BTP strategy of using multiple EgHashes for the same subject is referred to as "multi-template" BTP and may improve biometric matching performance. At step403, the trusted processor113or semi-trusted processor119performs a 1:1 comparison between the generated EgHash and the retrieved EgHash. In one or more embodiments, the 1:1 comparison process includes calculating an L2norm metric, Euclidean distance, or other distance metric between the generated EgHash and the retrieved EgHash. In various embodiments, an output of the 1:1 comparison process is a similarity score describing the similarity between the generated EgHash and the retrieved EgHash. At step405, the similarity score is evaluated to determine if it satisfies a predetermined similarity threshold. According to one embodiment, the similarity threshold is a programmable benchmark that, if met, may result in the system100verifying the subject's identity and, if not met, may result in the system100determining no verification or no match.
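Steps401-405(retrieving the stored EgHash by the unique user key and performing the 1:1 comparison against a threshold) can be sketched as below; the distance-based score and the threshold value are illustrative assumptions rather than values from the disclosure.

```python
import numpy as np

def verify(probe_hash, user_key_id, synced_registry, threshold):
    """1:1 verification sketch (steps 401-405): retrieve by key, compare, threshold."""
    stored_hash = synced_registry.get(user_key_id)
    if stored_hash is None:
        return False                                  # no enrollment found for this key
    score = -np.linalg.norm(probe_hash - stored_hash) # distance-based similarity score
    return score >= threshold                         # True -> steps 409/411, False -> step 407

rng = np.random.default_rng(3)
registry = {"key-ABC123": rng.standard_normal(144)}
probe = registry["key-ABC123"] + 0.02 * rng.standard_normal(144)
print(verify(probe, "key-ABC123", registry, threshold=-1.0))                      # True: identity verified
print(verify(rng.standard_normal(144), "key-ABC123", registry, threshold=-1.0))   # False: no match
```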
In one or more embodiments, upon determining that the similarity score does not satisfy the similarity threshold, the process400proceeds to step407. In at least one embodiment, upon determining that the similarity score satisfies the similarity threshold, the process400proceeds to step409. At step407, one or more predetermined failure actions are taken in response to the failure to satisfy the similarity threshold. In various embodiments, the one or more predetermined failure actions include, but are not limited to, sending an alert, ceasing one or more processes associated with the subject (e.g., processes occurring in a specific application), logging the failed verification, and other actions. In one example, an alert is transmitted to the user (e.g., to a device or biometric scanning system associated therewith) indicating the identity verification failure. In another example, the system100causes a user display to render a message indicating that the scanned subject's identity cannot be verified. In one or more embodiments, the generated EgHash of the subject is stored in a hash registry103configured for storing EgHashes for which identity verification processes failed. At step409, a notification is transmitted to the user confirming the verification of the subject's identity. In one example, the notification confirms that the subject is affiliated with a particular organization in control of the biometric scanner. In various embodiments, the positive identification of the subject is recorded in a database used, for example, to document the providing of privileges to identity-verified subjects. At step411, one or more predetermined success actions are taken in response to the satisfaction of the similarity threshold. In various embodiments, the one or more predetermined success actions include, but are not limited to, providing one or more privileges based on the positive verification, transmitting an alert, automatically activating one or more processes, logging the successful verification, and other actions. In one or more embodiments, the one or more privileges include, but are not limited to, access to a physical environment, access to a computing environment, processing of a transaction, and other privileges. In one example, upon satisfaction of the similarity threshold, the system100automatically transmits a signal to a computing environment that, in response, causes a locking mechanism to disengage. In at least one embodiment, a privilege policy list is stored in the system100and includes identifiers for enrolled subjects of the organization as well as privileges provided to each subject. According to one embodiment, the one or more privileges provided to the subject are determined based on the privileges stored in the privilege policy list and associated with the subject. In one example, the privilege policy list is used to provide varying levels of access to subjects of an organization based on factors such as rank, seniority, experience, and other factors. FIGS.5-8show results of one or more experimental tests performed using one or more embodiments of the present BTP systems and processes. The descriptions therein are provided for the purposes of illustrating various elements of the BTP systems and processes (e.g., as observed in the one or more embodiments). All descriptions, embodiments, and the like are exemplary in nature and place no limitations on any embodiment described, or anticipated, herein.
The descriptions, embodiments, and the like are not intended to be dispositive of all data and results. FIG.5shows a chart500relating biometric matching performance501(expressed as a half total error rate (HTER) %) to EgHash and biometric representation dimension size503to demonstrate the security of the EgHash against reconstruction attacks. According to one embodiment, the chart500describes biometric matching performance operating under two conditions as the dimension size of the non-biodata key used in generating EgHashes varies. In at least one embodiment, the first operating condition is an embodiment of the present systems and processes, intended verification which involves comparisons of two EgHashes, whereas the second operating condition is under simulated reconstruction attack, which involves comparisons between an original biometric representation and its pre-image (e.g., the data reconstructed from the EgHash via the attack). In various embodiments, HTER is calculated according to Equation 1. HTER is inversely proportionate to accuracy (e.g., a lower HTER is more desirable). The chart500includes a transformed trend505and a reconstructed trend507. In at least one embodiment, the transformed trend505represents biometric performance that compares two EgHashes generated as described herein, i.e., under the normal, intended verification operation. According to one embodiment, the reconstructed trend507represents biometric performance involving the comparisons between a biometric pre-image—a template reconstructed from EgHashes—and the native template. This scenario simulates a reconstruction attack. As shown, the transformed trend505demonstrates consistently accurate biometric matching performance501across a range of dimension sizes503, while the reconstructed trend507demonstrates increasingly inaccurate biometric matching performance501across the same range. Because the accuracy of matching is preserved in the transformed mode and rapidly diminishes in the reconstructed mode, the chart500demonstrates that the EGH transform process generates EgHashes that are substantially secure against reconstruction attacks attempting to retrieve a source biometric representation from an EgHash. HTER=50%*(False Acceptance Rate+False Rejection Rate) (Equation 1) According to one embodiment, the chart500shows that the lossy transformation techniques used herein demonstrate accuracy comparable to conventional solutions, but also provide dramatically enhanced security compared to previous approaches because some of the original biometric information is irreversibly lost. Thus, in various embodiments, the present systems and processes improve upon previous BTP approaches because they do not demonstrate the previous approaches' intolerably high losses in accuracy. For example, in both lossless and lossy transformation techniques, any biometric template that varies in size (for example, a template of fingerprint minutiae) must be translated into a fixed size vector first. Transformation from variable to fixed size is an extremely challenging problem because the process of translation from a variable-dimension template to a fixed-size template invariably results in a significant drop in accuracy, and such drops in accuracy are exacerbated in previous approaches, whereas embodiments of the present systems and processes maintain sufficient biometric performance. 
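Equation 1 can be computed directly from labeled comparison scores; a small sketch follows, where the score lists and the threshold are toy values chosen only to illustrate the calculation.

```python
def hter(genuine_scores, impostor_scores, threshold):
    """Half total error rate per Equation 1: 50% * (FAR + FRR).

    genuine_scores : similarity scores from mated-pair (same-subject) comparisons
    impostor_scores: similarity scores from non-mated-pair (different-subject) comparisons
    """
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)  # false acceptance rate
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)     # false rejection rate
    return 0.5 * (far + frr)

genuine = [0.91, 0.88, 0.95, 0.65, 0.93]
impostor = [0.12, 0.35, 0.41, 0.08, 0.62]
print(hter(genuine, impostor, threshold=0.7))   # 0.1 -> 10% HTER for this toy data
```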
FIG.6shows a chart600relating biometric matching performance601(expressed as HTER %) to a number of non-biodata dimensions603(e.g., dimension of a unique key) used in generating EgHashes. The chart600includes a trend605demonstrating that, as the number of non-biodata dimensions603increases, the biometric matching performance601improves (e.g., HTER % decreases). According to one embodiment, the chart600demonstrates that blending biodata with non-biodata in the EGH transform may result in EgHashes that are extremely unique and that demonstrate ideal (e.g., 100%) verification performance in biometric matching applications. Therefore, in applications where each subject can carry a unique non-biodata representation (or key), perfect verification performance is achievable under normal operation (e.g., when the key is not compromised) because unique non-biodata increases the uniqueness of the EgHash. In contrast to using a unique non-biodata key for each subject as done inFIG.6,FIG.7shows a chart700relating biometric performance701(expressed as HTER %) to a number of non-biodata dimensions703using a common non-biodata representation (or key) used in generating EgHashes for a sample size of 1,000 subjects. The chart700includes a lossy trend705(e.g., associated with lossy EgHashes), a lossless trend707(associated with lossless EgHashes), and a gradient709. In various embodiments, the lossy trend705demonstrates that biometric performance701with lossy EgHashes decreases as the number of non-biodata dimensions703increases. In at least one embodiment, the gradient709demonstrates that biometric performance701is reduced by approximately 0.00178% per non-biodata dimension703increase (e.g., HTER % increases by 0.00178% per non-biodata dimension703increase). According to one embodiment, the lossless trend707demonstrates that biometric performance701is consistent regardless of the number of non-biodata dimensions703. FIG.8shows a chart800relating biometric matching performance801(expressed as HTER %) to a number of non-biodata dimensions803(e.g., dimension of a unique key) used in generating EgHashes in a sample size of 100,000 subjects. According to one embodiment, the chart700shows results associated with a small sample size, whereas the chart800shows results associated with a large sample size. The chart800includes a lossy trend805(e.g., associated with lossy EgHashes) and a gradient807. In various embodiments, the lossy trend805demonstrates that biometric matching performance801with lossy EgHashes decreases as the number of non-biodata dimensions803increases. In at least one embodiment, the gradient807demonstrates that biometric matching performance801is reduced by approximately 0.00132% per non-biodata dimension803increase (e.g., HTER % increases by 0.00132% per non-biodata dimension803increase). In various embodiments, a comparison of the 1,000 sample size and −0.00178% performance trend ofFIG.7and the 100,000 sample size and −0.00132% performance trend ofFIG.8suggests that biometric matching performance may degrade in varying degrees, albeit insignificantly, i.e., on the order of 0.001-0.002% per increase in the non-biodata dimension, but the level of security in terms of irreversibility, unlinkability, and revocability is vastly improved. From the foregoing, it will be understood that various aspects of the processes described herein are software processes that execute on computer systems that form parts of the system.
Accordingly, it will be understood that various embodiments of the system described herein are generally implemented as specially-configured computers including various computer hardware components and, in many cases, significant additional features as compared to conventional or known computers, processes, or the like, as discussed in greater detail herein. Embodiments within the scope of the present disclosure also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media which can be accessed by a computer, or downloadable through communication networks. By way of example, and not limitation, such computer-readable media can comprise various forms of data storage devices or media such as RAM, ROM, flash memory, EEPROM, CD-ROM, DVD, or other optical disk storage, magnetic disk storage, solid state drives (SSDs) or other data storage devices, any type of removable non-volatile memories such as secure digital (SD), flash memory, memory stick, etc., or any other medium which can be used to carry or store computer program code in the form of computer-executable instructions or data structures and which can be accessed by a computer. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such a connection is properly termed and considered a computer-readable medium. Combinations of the above should also be included within the scope of computer-readable media. Computer-executable instructions comprise, for example, instructions and data which cause a computer to perform one specific function or a group of functions. Those skilled in the art will understand the features and aspects of a suitable computing environment in which aspects of the disclosure may be implemented. Although not required, some of the embodiments of the claimed systems and processes may be described in the context of computer-executable instructions, such as program modules or engines, as described earlier, being executed by computers in networked environments. Such program modules are often reflected and illustrated by flow charts, sequence diagrams, exemplary screen displays, and other techniques used by those skilled in the art to communicate how to make and use such computer program modules. Generally, program modules include routines, programs, functions, objects, components, data structures, application programming interface (API) calls to other computers whether local or remote, etc. that perform particular tasks or implement particular defined data types, within the computer. Computer-executable instructions, associated data structures and/or schemas, and program modules represent examples of the program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represent examples of corresponding acts for implementing the functions described in such steps. 
Those skilled in the art will also appreciate that the claimed and/or described systems and methods may be practiced in network computing environments with many types of computer system configurations, including personal computers, smartphones, tablets, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, networked PCs, minicomputers, mainframe computers, and the like. Embodiments of the claimed systems and processes are practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination of hardwired or wireless links) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. An exemplary system for implementing various aspects of the described operations, which is not illustrated, includes a computing device including a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit. The computer will typically include one or more data storage devices for reading data from and writing data to. The data storage devices provide nonvolatile storage of computer-executable instructions, data structures, program modules, and other data for the computer. Computer program code that implements the functionality described herein typically comprises one or more program modules that may be stored on a data storage device. This program code, as is known to those skilled in the art, usually includes an operating system, one or more application programs, other program modules, and program data. A user may enter commands and information into the computer through keyboard, touch screen, pointing device, a script containing computer program code written in a scripting language or other input devices (not shown), such as a microphone, etc. These and other input devices are often connected to the processing unit through known electrical, optical, or wireless connections. The computer that effects many aspects of the described processes will typically operate in a networked environment using logical connections to one or more remote computers or data sources, which are described further below. Remote computers may be another personal computer, a server, a router, a network PC, a peer device or other common network node, and typically include many or all of the elements described above relative to the main computer system in which the systems and processes are embodied. The logical connections between computers include a local area network (LAN), a wide area network (WAN), virtual networks (WAN or LAN), and wireless LANs (WLAN) that are presented here by way of example and not limitation. Such networking environments are commonplace in office-wide or enterprise-wide computer networks, intranets, and the Internet. When used in a LAN or WLAN networking environment, a computer system implementing aspects of the systems and processes are connected to the local network through a network interface or adapter. When used in a WAN or WLAN networking environment, the computer may include a modem, a wireless link, or other mechanisms for establishing communications over the wide area network, such as the Internet. In a networked environment, program modules depicted relative to the computer, or portions thereof, may be stored in a remote data storage device. 
It will be appreciated that the network connections described or shown are exemplary and other mechanisms of establishing communications over wide area networks or the Internet may be used. While various aspects have been described in the context of a preferred embodiment, additional aspects, features, and methodologies of the claimed systems and processes will be readily discernible from the description herein, by those of ordinary skill in the art. Many embodiments and adaptations of the disclosure and claimed systems and processes other than those herein described, as well as many variations, modifications, and equivalent arrangements and methodologies, will be apparent from or reasonably suggested by the disclosure and the foregoing description thereof, without departing from the substance or scope of the claims. Furthermore, any sequence(s) and/or temporal order of steps of various processes described and claimed herein are those considered to be the best mode contemplated for carrying out the claimed systems and processes. It should also be understood that, although steps of various processes may be shown and described as being in a preferred sequence or temporal order, the steps of any such processes are not limited to being carried out in any particular sequence or order, absent a specific indication of such to achieve a particular intended result. In most cases, the steps of such processes may be carried out in a variety of different sequences and orders, while still falling within the scope of the claimed systems and processes. In addition, some steps may be carried out simultaneously, contemporaneously, or in synchronization with other steps. The embodiments were chosen and described in order to explain the principles of the claimed systems and processes and their practical application so as to enable others skilled in the art to utilize the systems and processes and various embodiments and with various modifications as are suited to the particular use contemplated. Alternative embodiments will become apparent to those skilled in the art to which the claimed systems and processes pertain without departing from their spirit and scope. Accordingly, the scope of the claimed systems and processes is defined by the appended claims rather than the foregoing description and the exemplary embodiments described therein. | 67,363 |
11861044 | In the appended figures, similar components and/or features can have the same reference label. Further, various components of the same type can be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. DETAILED DESCRIPTION Certain aspects and features of the present disclosure relate to systems and methods for controlling data exposure using artificial-intelligence-based (hereinafter referred to as "AI-based") profile models. Specifically, certain aspects and features of the present disclosure relate to systems and methods for providing a data protection platform that is configured to automatically manage the exposure of data privacy elements. For example, a data privacy element may be any item of data that can be exposed (e.g., accessible) to a third party, such as a hacker. Data privacy elements can be evaluated (e.g., alone or in combination with other data, such as social media profiles) to expose information about users and/or network systems (e.g., organizations). Non-limiting examples of data privacy elements include activity data (e.g., web browsing history), network data (e.g., network topology), application data (e.g., applications downloaded on the computing device), operating system data (e.g., the operating system (OS) and the corresponding version of the OS running on the computing device), hardware data (e.g., the specific hardware components that comprise the computing device), and other suitable data that exposes information about a user and/or a network. When a computing device accesses the Internet, various data privacy elements may be exposed as the computing device navigates across web servers. For example, when the computing device accesses an Internet Service Provider (ISP), certain data privacy elements may be stored at the ISP's servers as the ISP facilitates an Internet connection. However, the data privacy elements that are stored at the ISP's servers may be accessible to other network hosts, such as authorized users (e.g., network security engineers) or unauthorized users (e.g., hackers). The accessibility of the stored data privacy elements by other users exposes the data privacy elements. This data exposure creates a security risk because the data privacy elements can be used by unauthorized users, for example, to identify vulnerabilities of the computing device or of the network systems to which the computing device is connected. Identifying vulnerabilities leaves the computing device or the network to which the computing device is connected open to data breaches or other nefarious conduct. According to certain embodiments, the data protection platform can enhance data protection by controlling and/or managing the exposure of the data privacy elements. In some implementations, the data protection platform (described in greater detail atFIG.5) may include an application that is deployed in a cloud network environment. For example, the data protection platform may include an application server on which an application is stored, which, when executed, performs various operations defined by the data protection platform.
The data protection platform may also include one or more database servers on which the storage functionalities associated with the application can be performed in the cloud network environment. In some implementations, the computing device (e.g., operated by a user) can connect to the data protection platform using a platform-secured browser. For example, the platform-secured browser can be hosted by the data protection platform to avoid the Internet activity performed on the computing device being stored locally at the computing device. According to certain embodiments, while the computing device navigates the Internet using the platform-secured browser, the data protection platform can automatically, dynamically, in real-time, and/or intelligently control the exposure of data privacy elements associated with the computing device or the network to which the computing device is connected. Non-limiting examples of controlling the exposure of data privacy elements can include blocking data privacy elements from being accessible by web servers or application servers, blocking data privacy elements from being stored at web servers or application servers, modifying one or more data privacy elements according to an artificial profile model, providing the data privacy elements to web servers or application servers, detecting which data privacy elements are exposed, determining which data privacy elements are required to enable Internet activity (e.g., certain websites do not function if cookies are disabled), determining which data privacy elements are not required to enable Internet activity, modifying a feature (e.g., a time signature of keystrokes, taps, or mouse clicks) of input received from the computing device, or other suitable techniques for controlling exposure of data privacy elements. In some implementations, artificial profiles can be specific to certain organizations, industries, subject matter, or user-defined applications. For example, the artificial profiles specific to an organization would include data privacy elements that are relevant to, or consistent with, data privacy elements that would be expected for the organization. Advantageously, the data protection platform can control the exposure of data privacy elements to protect the privacy of the user, computing device, and/or network systems (e.g., operated by organizations, companies, governments, or other suitable entities) as the computing device navigates the Internet. For instance, if a network host can collect data privacy elements of users, computing devices, and/or networks (e.g., whether the collection is authorized or unauthorized), the collected data can expose information (e.g., potentially private or sensitive information) about the organization to which the users, computing devices, and/or networks belong. Thus, by using embodiments described herein for managing or controlling the exposure of data privacy elements for users, computing devices, and/or network systems of an organization, the data protection platform thereby manages or controls the exposure of potentially sensitive information about the organization itself.
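One way to picture the controls listed above is a request filter that blocks some privacy-exposing request headers and rewrites others according to an artificial profile before traffic leaves the platform-secured browser. The header names, profile values, and policy below are assumptions made for illustration and do not reflect the platform's actual interface.

```python
# Hypothetical sketch: apply an artificial profile to outgoing request headers.
ARTIFICIAL_PROFILE = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ExampleBrowser/1.0",
    "Accept-Language": "en-US,en;q=0.9",
}
BLOCKED_HEADERS = {"Cookie", "Referer"}          # elements withheld from web servers

def control_exposure(headers, profile=ARTIFICIAL_PROFILE, blocked=BLOCKED_HEADERS):
    """Block some data privacy elements and replace others per an artificial profile."""
    controlled = {}
    for name, value in headers.items():
        if name in blocked:
            continue                                 # block: never expose this element
        controlled[name] = profile.get(name, value)  # modify if the profile overrides it
    return controlled

outgoing = {
    "User-Agent": "RealBrowser/87.0 (X11; Linux x86_64)",   # real, identifying value
    "Accept-Language": "de-DE,de;q=0.8",
    "Cookie": "session=abc123",
    "Host": "example.com",
}
print(control_exposure(outgoing))
# {'User-Agent': 'Mozilla/5.0 ... ExampleBrowser/1.0', 'Accept-Language': 'en-US,en;q=0.9', 'Host': 'example.com'}
```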
Managing or controlling the exposure of data privacy elements can prevent data breaches of the users, computing devices, and/or network systems because network hosts, such as hackers, can be prevented from collecting certain data privacy elements, or can at least be prevented from collecting accurate data privacy elements, which obfuscates or masks the identities or attributes of the users, computing devices, and/or network systems. Further, the data protection platform can control the exposure of data privacy elements using artificial profiles, which are generated using an artificial profile model, to obfuscate the user and/or network in a realistic manner. In some implementations, the artificial profile model (described in greater detail with respect toFIG.7) can include a model that is generated using machine-learning techniques and/or AI techniques. For example, the artificial profile model may include data representing a relationship between two or more data privacy elements. The relationship between the two or more data privacy elements can be automatically learned using machine-learning techniques, for example, or can be defined based on one or more user-defined rules. In some implementations, when the data protection platform modifies a data privacy element to obfuscate a computing device, the modification of the data privacy element can be performed within the constraints of the relationship learned or defined by the artificial profile model. As a non-limiting example, a specific application may be downloaded on a computing device. Downloading the specific application on the computing device may also cause a specific set of fonts to be installed on the computing device. When the computing device accesses a website, the web server that provides access to the website may execute a tracking asset (e.g., a cookie) that is stored in the computing device's browser. The tracking asset can request certain data privacy elements from the computing device. For example, the tracking asset may request (from the computing device's browser) data privacy elements identifying which fonts are installed on the computing device. From the perspective of the network host (e.g., the web server providing access to the website), if the data privacy elements collected from the computing device indicate that a font is installed on the computing device, or that a font is absent from the computing device, that indication may be evaluated to determine (with some likelihood) whether or not an application has been downloaded onto the computing device. Again, from the perspective of the network host, if the exposure of data privacy elements from the computing device indicates with a certain likelihood that an application has been downloaded on the computing device, this information introduces an attack vector (e.g., known or unknown vulnerabilities or exploits associated with that application), exposes user information (e.g., the application is specific to an industry, which exposes the industry associated with the organization), or may not provide any information at all. According to certain embodiments, the data protection platform can obfuscate the identifiable attributes of the computing device by modifying the data privacy elements (i.e., the identity of the fonts that are installed on the computing device) so that the web server collects inaccurate data about the computing device when the computing device accesses the website. 
However, the modification of the data privacy elements would not appear to be realistic (e.g., to a hacker) if the identity of the fonts were modified to include a font that was inconsistent with the specific set of fonts associated with the specific application. Accordingly, in order to control the data privacy elements of the computing device in a realistic manner, the artificial profile model can include data representing the relationship between the specific application and the set of specific fonts. Thus, generating an artificial profile for the computing device may involve changing the specific application to a new application, which is exposed to the website, and also modifying the set of specific fonts to a set of new fonts associated with the new application. In this non-limiting example, the modified data privacy elements collected by the website (i.e., the identity of the new application and the set of new fonts) will seem realistic to a hacker because both data privacy elements (e.g., the application and the associated set of fonts) are consistent with each other. As an advantage of the disclosed embodiments, generating artificial profiles to be consistent with dependencies defined in the artificial profile model increases the realistic nature of the modified artificial profiles so as to enhance the data protection of computing devices and/or networks. These non-limiting and illustrative examples are given to introduce the reader to the general subject matter discussed here and are not intended to limit the scope of the disclosed concepts. For example, it will be appreciated that data privacy elements other than fonts can be collected, including, but not limited to, which plugins are installed in the browser of the computing device, or any other information collectable from a browser, computing device, or Operating System running on the computing device. The following sections describe various additional features and examples with reference to the drawings in which like numerals indicate like elements, and directional descriptions are used to describe the illustrative embodiments but, like the illustrative embodiments, should not be used to limit the present disclosure. The elements included in the illustrations herein may not be drawn to scale. FIG.1is a schematic diagram illustrating network environment100, in which exposable data can be accessed by authorized or unauthorized network hosts, according to certain aspects of the present disclosure. Network environment100can include Internet110, site network120, and home network130. Each of Internet110, site network120, and home network130can include any open network, such as the Internet, personal area network, local area network (LAN), campus area network (CAN), metropolitan area network (MAN), wide area network (WAN), wireless local area network (WLAN); and/or a private network, such as an intranet, extranet, or other backbone. In some instances, Internet110, site network120, and/or home network130can include a short-range communication channel, such as a Bluetooth or Bluetooth Low Energy (BLE) channel. Communicating using a short-range communication channel such as a BLE channel can provide advantages such as consuming less power, being able to communicate across moderate distances, being able to detect levels of proximity, achieving high-level security based on encryption and short ranges, and not requiring pairing for inter-device communications. 
In some implementations, communications between two or more systems and/or devices can be achieved by a secure communications protocol, such as secure sockets layer (SSL) or transport layer security (TLS). In addition, data and/or transactional details may be encrypted based on any convenient, known, or to-be-developed manner, such as, but not limited to, DES, Triple DES, RSA, Blowfish, Advanced Encryption Standard (AES), CAST-128, CAST-256, Decorrelated Fast Cipher (DFC), Tiny Encryption Algorithm (TEA), eXtended TEA (XTEA), Corrected Block TEA (XXTEA), and/or RC5, etc. As illustrated in the example ofFIG.1, site network120may be connected to computer160, home network130may be connected to mobile device170(e.g., a smartphone) and smart TV180(e.g., a television with Internet capabilities), and Internet110may be connected to secure server140. Site network120may be a network that is operated by or for an organization, such as a business. Computer160may connect to secure server140using site network120. Home network130may be a network that is operated by or for a residential area, such as a single-family dwelling or an apartment complex. Mobile device170and smart TV180may connect to secure server140using home network130. Secure server140may be any server connected to the Internet or a cloud network environment. For example, secure server140may be a web server that is hosting a website. It will be appreciated that, while network environment100shows a single site network and a single home network, any number of networks in any configuration can be included in network environment100. In some implementations, network host150may be a computing device (e.g., a computer) connected to a computer network, such as any of Internet110, site network120, and/or home network130. In some implementations, network host150may be any network entity, such as a user, a device, a component of a device, or any other suitable network device. In some instances, network host150may be an authorized device, such as a web server that allows users to access a website, an application server that allows users to access an application, a network security engineer, or other suitable authorized devices. In some instances, network host150may be an unauthorized network host, such as a hacker, a computer virus, or other malicious code. For example, network host150may be able to access secure server140, site network120, and/or home network130to collect exposable data privacy elements that expose information about secure server140, site network120, computer160, home network130, mobile device170, and/or smart TV180. As computer160, mobile device170, and/or smart TV180communicate over Internet110, for example, with secure server140, various exposable data privacy elements can be collected and stored at servers or databases of any of site network120, home network130, or Internet110. Either substantially in real-time (with Internet activity of computer160, mobile device170, or smart TV180) or non-real-time, network host150can access the data privacy elements that may be stored at secure server140, site network120, and/or home network130. Network host150can access the stored data privacy elements in an authorized manner (e.g., a website that allowed access after a cookie has been installed in a browser) or an unauthorized manner (e.g., secure server140may be hacked by network host150). 
Either way, network host150can evaluate the collected data privacy elements to determine whether there are any vulnerabilities in any aspects of secure server140, site network120, and/or home network130. Network host150can then use the vulnerabilities to execute a data breach. The ability of network host150to collect exposable data privacy elements is described in greater detail with respect toFIG.2. Further, according to certain embodiments described herein, the data protection platform can be used to prevent network host150from accessing or collecting the data privacy elements or to obfuscate the real data privacy elements so as to provide inaccurate or useless information to network host150. FIG.2is a schematic diagram illustrating network environment200, in which exposable data associated with computing devices can be accessed by authorized or unauthorized network hosts, according to certain aspects of the present disclosure. In some implementations, network environment200can include secure server230, network210, gateway220, mobile device250, smart TV260, and laptop270. For example, network environment200may be similar to or a more detailed example of home network130ofFIG.1. Mobile device250, smart TV260, and laptop270may be located within a defined proximity, such as within a home or residence. Secure server230may be the same as or similar to secure server140, and thus, further description is omitted here for the sake of brevity. Network210may be the same as site network120or home network130ofFIG.1, and thus, further description is omitted here for the sake of brevity. Network host240may be the same as or similar to network host150, and thus, further description is omitted here for the sake of brevity. Gateway220may be an access point (e.g., a router) that enables devices, such as mobile device250, smart TV260, and laptop270to connect to the Internet.FIG.2is provided to illustrate how network host240can collect exposable data privacy elements from secure server230based on routine and seemingly innocuous data communications between devices. As a non-limiting example, smart TV260may be configured to automatically and periodically transmit a signal to secure server230. The signal may correspond to a request for updates to the software stored on smart TV260. In this non-limiting example, secure server230may be a server that stores software updates or that controls the distribution of software updates to smart TVs like smart TV260. However, the signal transmitted from smart TV260may include data privacy elements that expose information about smart TV260, gateway220, and/or network210. For example, the signal may include a variety of data privacy elements, including, but not limited to, the version of the software currently stored on smart TV260, the viewing data collected by smart TV260(if authorized by the user), the service set identifier (SSID) of gateway220, a password to connect to gateway220, login credentials associated with a user profile recently logged into on smart TV260, information about the hardware or firmware installed in smart TV260, information about the hardware, firmware, or software recognized to be installed at gateway220, the physical location of smart TV260(e.g., determined using an Internet Protocol (IP) address), applications downloaded by a user on smart TV260, and/or application usage data. The data privacy elements included in the signal may be stored at secure server230. 
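For illustration only, the kind of exposable payload described above might look like the following sketch; every field name and value here is a hypothetical example chosen to mirror the data privacy elements listed in the preceding paragraph, not an actual smart TV update protocol.

```python
# Hypothetical example of data privacy elements that an update-check signal
# could expose; all field names and values are illustrative only.
update_check_signal = {
    "firmware_version": "4.2.1",            # software currently on the smart TV
    "hardware_model": "TV-55X-2018",        # hardware/firmware identifiers
    "gateway_ssid": "HomeNet-5G",           # SSID of the gateway it connects through
    "public_ip": "203.0.113.24",            # can reveal approximate physical location
    "installed_apps": ["video_app", "music_app"],
    "last_user_profile": "user_01",         # profile recently logged in on the device
}

# Even a seemingly innocuous field such as firmware_version can be matched
# against known vulnerabilities for that software release by a network host
# that obtains the stored signal.
```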
In some cases, if relatively sensitive information is included in the signal, such as viewing data (e.g., accessed video content) recently collected by smart TV260, secure server230may store that sensitive information securely behind protection mechanisms, such as firewalls. However, secure server230may be hacked by network host240. In this scenario, the sensitive information (i.e., the data privacy elements included in the signal and subsequently stored at secure server230) may be exposed to network host240. In some cases, if relatively innocuous information is included in the signal, such as the version of software stored on smart TV260or the SSID of gateway220, the information may be stored at secure server230without many protection mechanisms, such as firewalls. For instance, secure server230may not need to securely store the version of the software currently stored on smart TV260because this information may be relatively innocuous. However, network host240can access secure server230, either in an authorized or unauthorized manner, to obtain the exposed data privacy element of the software version. The software version can nonetheless be used maliciously by bad actors because the software version can be exploited to identify vulnerabilities in the software. The identified vulnerabilities can be used to execute a data breach or hacking of smart TV260, which places the private information associated with a user of smart TV260at risk. FIG.2illustrates the problem of data privacy elements being exposable to other hosts, such as servers, hackers, websites, or authorized users, during an interaction between devices, such as smart TV260and secure server230. Exposable data privacy elements can be exploited by unauthorized hosts, such as hackers, to identify vulnerabilities that can be exploited to attack a network or an individual device. Further, exposable data privacy elements can also be exploited by authorized hosts, such as a website, to profile users based on online activity; however, this profiling creates risks of private information being exposed. FIG.3is a schematic diagram illustrating network environment300, in which exposable data can be accessed by authorized network hosts (e.g., a web server hosting a webpage, an application server hosting an application, and so on) or unauthorized network hosts (e.g., a hacker) at various stages of a browsing session. Further,FIG.4is a schematic diagram illustrating network environment400, which is similar to network environment300, but with the addition of an exemplary data protection platform440that controls the exposure of data privacy elements to block or obfuscate private information from being exposed, according to certain embodiments. Referring again toFIG.3, network environment300can include laptop310, gateway320, ISP330, network340, and secure server350. A browser can be running on laptop310. The browser can enable a user operating laptop310to communicate with secure server350through network340. However, as the browser running on laptop310interacts with secure server350, exposable data privacy elements370can be collected at various devices connected to the Internet. For example, gateway320and ISP330can store one or more data privacy elements that can expose information about laptop310because laptop310communicates with gateway320and ISP330to connect with secure server350. 
While the exposable data privacy elements370can be collected at gateway320, ISP330, or secure server350(e.g., by network host360), gateway320, ISP330, and secure server350may or may not be the source of the exposable data privacy elements. For example, the browser running on laptop310can expose certain information about the Operating System (OS) installed on laptop310, but that OS information may be collected by a web server when the web server queries the browser, or when network host360accesses the OS information in an unauthorized manner (e.g., by hacking the web server to gain access to the stored OS information). Referring again toFIG.4, the addition of data protection platform440into network environment300(as represented by network environment400) can control the exposure of data privacy elements as laptop410navigates the Internet. InFIG.4, gateway420may be the same as or similar to gateway320, ISP430may be the same as or similar to ISP330, network450may be the same as or similar to network340, and secure server460may be the same as or similar to secure server350, and thus, a description of these devices is omitted for the sake of brevity. In some implementations, data protection platform440can provide a platform-secured browser for laptop410. As the user navigates the Internet using the platform-secured browser, data protection platform440can block, modify, and/or observe the data privacy elements (at block470) that are exposed to devices across the Internet. Continuing with the example described inFIG.3, when a web server queries the platform-secured browser, the data protection platform440can block the OS information from being provided to the web server. As another example, the data protection platform440can modify the OS information (based on an artificial profile model), and provide the modified OS information to the web server. According to certain embodiments, network host480may collect artificial exposable data privacy elements495at block490; however, the collected data privacy elements obfuscate the actual information about the user operating laptop410, the platform-secured browser, or laptop410itself. Advantageously, the collected exposable data privacy elements495would not expose any real vulnerabilities of laptop410. FIG.5is a schematic diagram illustrating data protection platform500, according to certain aspects of the present disclosure. In some implementations, data protection platform500may be implemented using cloud-based network510. For example, data protection platform500may be an application that is deployed in cloud-based network510. Data protection platform500in cloud-based network510may include an application server (not shown) that is constructed using virtual CPUs that are assigned to or reserved for use by data protection platform500. Further, data protection platform500may be implemented using one or more containers. Each container can control the exposure of data privacy elements. A container may include stand-alone, executable code that can be executed at runtime with all necessary components, such as binary code, system tools, libraries, settings, and so on. However, because a container is a package with all of the components needed to run the executable code, the container can be executed in any network environment in a way that is isolated from its environment. It will be appreciated that any number of cloud-based networks can be used to implement data protection platform500. 
For example, assuming data protection platform500is implemented using a set of containers, a subset of the set of containers can be deployed on cloud-based network510, another subset of the set of containers can be deployed on cloud-based network520, another subset of the set of containers can be deployed on cloud-based network530, and so on. It will also be appreciated that data protection platform500may or may not be implemented using a cloud-based network. Referring to the non-limiting example illustration ofFIG.5, data protection platform500can include a number of containers that are deployed using cloud-based network510. For instance, data protection platform500can include secure browser551, secure routing container552, real-time monitoring container553, profile management container554, AI container555, external integration container556, profile history database557, profile model database558, and content database559. Further, data protection platform500may control the exposure of data privacy elements that are exposable during a browsing session between a computing device (e.g., laptop410ofFIG.4) and secure server550on network540. In some implementations, secure browser551may be a container that includes executable code that, when executed, provides a virtual, cloud-based browser to the computing device. For example, the platform-secured browser running on laptop410shown inFIG.4may be provided by data protection platform500using secure browser551. In some implementations, secure routing container552may be a container that includes executable code that, when executed, provides the computing device with a virtual private network (VPN) to exchange communications between the computing device and data protection platform500. Secure routing container552can also facilitate the routing of communications from the computing device or from any container within data protection platform500to other devices or containers internal or external to data protection platform500. For example, if data protection platform500is implemented across several cloud-based networks, then secure routing container552can securely route communications between containers across the several cloud-based networks. Real-time monitoring container553can be a container including executable code that, when executed, monitors the exposable data privacy elements associated with a browsing session in real-time. For example, if a computing device connects with a web server to access a search engine website, real-time monitoring container553can monitor the user input received at the search engine website as the user types in the input. In some implementations, real-time monitoring container553can control the exposure of behavioral/real-time attribution vectors (e.g., attribution vectors730, which are described in greater detail with respect toFIG.7). For example, real-time monitoring container553may modify the input dynamics of keystroke events, as described in greater detail with respect toFIG.9. Profile management container554can include executable code that, when executed, controls or manages the artificial profiles that have been created and stored. 
For example, profile management container554can use artificial intelligence (e.g., Type II Limited Memory) provided by AI container555to generate a new artificial profile based on the artificial profile model (e.g., artificial profile model700described in greater detail with respect toFIG.7) and/or administrator-entered constraints (e.g., region, demographic, protection level requirements) to ensure that newly created or modified artificial profiles are compliant with previously generated profiles stored in the profile history database557. AI container555can include executable code that, when executed, performs the one or more machine-learning algorithms on a data set of all available data privacy elements to generate the artificial profile model. The generated artificial profile model can be stored at profile model database558. Further, external integration container556can include executable code that, when executed, enables third-party systems to integrate into data protection platform500. For example, if an organization seeks to use data protection platform500to control the exposure of data privacy elements for all employees of the organization, external integration container556can facilitate the integration of the third-party systems operated by the organization. Content database559may store content data associated with browsing sessions in a content file system. For example, if during a browsing session between a computing device and a web server, the user operating the browser determines that content data should be stored from the web server, that content data can be stored in content database559and the content file system can be updated. It will be appreciated that data protection platform500may include any number of containers to control the exposure of data privacy elements during webpage or application navigation. It will also be appreciated that data protection platform500is not limited to the use of containers to control data privacy elements. Any other system or engine may be used in data protection platform500to control data privacy elements, in addition to or in lieu of containers. FIG.6is a block diagram illustrating non-limiting example600, which includes a non-exhaustive set610of data privacy elements that can be exposed to network hosts or any other device within a network.FIG.6is provided to describe in greater detail the various data privacy elements associated with a particular browser, computing device, or network. For example, non-exhaustive set610includes the various data privacy elements that can be exposed to network hosts during online activity performed by a computing device, such as laptop310ofFIG.3. Further, the data privacy elements included in non-exhaustive set610may also be collected while the computing device is not browsing the Internet or interacting with an application. For example, even though the computing device may not currently be accessing the Internet, one or more data privacy elements may nonetheless be stored at a gateway, an ISP server, or a secure server on the Internet. The stored one or more data privacy elements may have been collected during a previous interaction with the computing device. In this example, the stored one or more data privacy elements are still exposed because a network host can access the stored one or more data privacy elements even while the computing device is not currently accessing the Internet. 
In some implementations, non-exhaustive set610may include data privacy elements620, which are related to the online activity of a user. Non-limiting examples of the activity of a user may include any interaction between user input devices and a browser (e.g., the user entering text into a website using a keyboard), the browser and a web server (e.g., the browser requesting access to a webpage by transmitting the request to a web server, the search history of a browser, the browsing history of a browser), the browser and an application server (e.g., the browser requesting access to an application by transmitting the request to the application server), the browser and a database server (e.g., the browser requesting access to one or more files stored at a remote database), the browser and the computing device on which the browser is running (e.g., the browser storing data from a cookie on the hard drive of the computing device), the computing device and any device on a network (e.g., the computing device automatically pinging a server to request a software update), and any other suitable data representing an activity or interaction. In some implementations, data privacy elements620may also include a detection of no activity or no interactions during a time period (e.g., a period of no user interaction or user activity). In some implementations, data privacy elements620may include information about input received at a browser, but that was not ultimately transmitted to the web server due to subsequent activity by the user. For example, if a user types certain text into an input field displayed on a webpage, but then deletes that text without pressing any buttons (e.g., a "send" button), that entered text may nonetheless be an exposable data privacy element that can reveal information about the user, even though that entered text was never transmitted to a web server. It will be appreciated that the present disclosure is not limited to the examples of data privacy elements620described herein. Other data privacy elements related to a user's activity or non-activity that are not mentioned here may still be within the scope of the present disclosure. In some implementations, non-exhaustive set610may include data privacy elements630, which are related to information about networks and/or network configurations. Non-limiting examples of information about a network may include a network topology (e.g., how many web servers, application servers, or database servers are included in the network, and how they are connected); network security information (e.g., which Certificate Authorities (CAs) are trusted, which security protocols are used for communicating between devices, the existence of any detected honeypots in the network, and so on); the versions of security software used in the network; the physical locations of any computing devices, servers, or databases; the number of devices connected to a network; the identity of other networks connected to a network; the IP addresses of devices within the network; particular device identifiers of devices, such as a media access control (MAC) address; the SSID of any gateways or access points; the number of gateways or access points; and any other suitable data privacy element related to network information. Network hosts can evaluate data privacy elements630to identify and exploit vulnerabilities in the network. It will be appreciated that the present disclosure is not limited to the examples of data privacy elements630described herein. 
Other data privacy elements related to a network that are not mentioned here may still be within the scope of the present disclosure. In some implementations, non-exhaustive set610may include data privacy elements640, which are related to information about applications stored on the computing device or accessed by the computing device. Non-limiting examples of application information may include an identity of one or more applications installed on the computing device; an identity of one or more applications accessed by the computing device (e.g., which web applications were accessed by the computing device); a software version of one or more applications installed on the computing device; an identity of one or more applications that were recently or not recently uninstalled from the computing device; the usage of one or more applications installed on the computing device (e.g., how many times did the user click or tap on the execution file of the application); whether an application is a native application stored on a mobile device or a web application stored on a web server or application server; an identity of one or more applications that are active in the background (e.g., applications that are open and running on the computing device, but that the user is not currently using); an identity of one or more applications that are currently experiencing user interaction; the history of software updates of an application; and any other suitable data privacy element relating to applications. It will be appreciated that the present disclosure is not limited to the examples of data privacy elements640described herein. Other data privacy elements related to an application that are not mentioned here may still be within the scope of the present disclosure. In some implementations, non-exhaustive set610may include data privacy elements650, which expose information about the OS installed on the computing device. Non-limiting examples of OS information may include an identity of the OS installed on the computing device; a version of the OS installed on the computing device; a history of the updates of the OS; an identity of a destination server with which the computing device communicated during any of the updates; an identification of patches that were downloaded; an identification of patches that were not downloaded; an identification of updates that were downloaded, but not properly installed; system configurations of the OS; the settings or the hardware-software arrangement; system setting files; activity logged by the OS; an identity of another OS installed on the computing device, if more than one; and any other suitable data privacy element relating to the OS currently installed or previously installed on the computing device. It will be appreciated that the present disclosure is not limited to the examples of data privacy elements650described herein. Other data privacy elements related to the OS that are not mentioned here may still be within the scope of the present disclosure. In some implementations, non-exhaustive set610may include data privacy elements660, which expose information about the hardware components of the computing device. 
Non-limiting examples of hardware information may include an identity of the various hardware components installed on the computing device; an identity of any firmware installed on the computing device; an identity of any drivers downloaded on the computing device to operate a hardware component; configuration settings of any hardware component, firmware, or driver installed on the computing device; a log of which external hardware devices have been connected to the computing device and which ports were used (e.g., Universal Serial Bus (USB) port); the usage of a hardware component (e.g., the CPU usage at a given time); an identity of any hardware components that are paired with the computing device over a short-range communication channel, such as Bluetooth (e.g., has the computing device connected to a smart watch, a virtual-reality headset, a Bluetooth headset, and so on); and any other data privacy elements that relate to hardware information. It will be appreciated that the present disclosure is not limited to the examples of data privacy elements660described herein. Other data privacy elements related to the hardware components of the computing device or other associated devices (e.g., a virtual-reality headset) that are not mentioned here may still be within the scope of the present disclosure. It will also be appreciated that non-exhaustive set610may also include data privacy elements670that are not described above, but that are within the scope of the present disclosure. Further, there may or may not be overlap between data privacy elements620,630,640,650,660, and670. WhileFIG.6illustrates a non-exhaustive set of data privacy elements that may be exposed by the user, the browser running on the computing device, the computing device itself, or any device that the computing device interacted with, certain embodiments of the present disclosure include generating a model for creating artificial profiles based on the non-exhaustive set610of data privacy elements. The model may be generated using one or more machine-learning techniques and/or one or more AI techniques, as described in further detail with respect toFIG.7. FIG.7is a block diagram illustrating a non-limiting example of an artificial profile model700, according to certain aspects of the present disclosure. As described above, certain embodiments provide for generating an artificial profile model, which can be used as the basis for creating artificial profiles for users navigating the Internet. The advantage of using an artificial profile model as the basis for creating or modifying artificial profiles is that the artificial profile model ensures that the newly created or modified artificial profiles are consistent with constraints, relationships, and/or dependencies between data privacy elements. Maintaining consistency with the constraints, relationships, and/or dependencies that are defined in the artificial profile model makes for more realistic artificial profiles. Further, realistic artificial profiles advantageously decrease the likelihood that a network host will flag an artificial profile as fake, while at the same time obfuscating or blocking information about the user, browser, or computing device. In some implementations, artificial profile model700may be trained by executing one or more machine-learning algorithms on a data set including non-exhaustive set610ofFIG.6. 
For example, one or more clustering algorithms may be executed on the data set including non-exhaustive set610to identify clusters of data privacy elements that relate to each other or patterns of dependencies within the data set. The data protection platform can execute the clustering algorithms to identify patterns within the data set, which can then be used to generate artificial profile model700. Non-limiting examples of machine-learning algorithms or techniques can include artificial neural networks (including backpropagation, Boltzmann machines, etc.), Bayesian statistics (e.g., Bayesian networks or knowledge bases), logistical model trees, support vector machines, information fuzzy networks, Hidden Markov models, hierarchical clustering (unsupervised), self-organizing maps, clustering techniques, and other suitable machine-learning techniques (supervised or unsupervised). For example, the data protection platform can retrieve one or more machine-learning algorithms stored in a database (not shown) to generate an artificial neural network in order to identify patterns or correlations within the data set of data privacy elements (i.e., within non-exhaustive set610). As a further example, the artificial neural network can learn that when data privacy element #1 (in the data set) includes value A and value B, then data privacy element #2 is predicted as relevant data for data privacy element #1. Thus, a constraint, relationship, and/or dependency can be defined between data privacy element #1 and data privacy element #2, such that any newly created or modified artificial profiles should be consistent with the relationship between data privacy elements #1 and #2. In yet another example, a support vector machine can be used either to generate output data that is used as a prediction, or to identify learned patterns within the data set. The one or more machine-learning algorithms may relate to unsupervised learning techniques; however, the present disclosure is not limited thereto. Supervised learning techniques may also be implemented. In some implementations, executing the one or more machine-learning algorithms may generate a plurality of nodes and one or more correlations between at least two nodes of the plurality of nodes. For example, the one or more machine-learning algorithms in these implementations can include unsupervised learning techniques, such as clustering techniques, artificial neural networks, association rule learning, and so on. In some implementations, the data protection platform can map data privacy elements to a machine-learning model (e.g., artificial profile model700), which includes a plurality of nodes and one or more correlations between at least two nodes. Based on the mapping and the one or more correlations, the data protection platform can intelligently predict or recommend other data privacy elements that are related to, dependent upon, and/or correlated with data privacy elements included in an existing artificial profile (e.g., in the case of modifying an artificial profile). The execution of the one or more machine-learning algorithms can generate a plurality of nodes and one or more correlations between at least two nodes of the plurality of nodes. Each node can represent a value associated with a data privacy element and correspond to a weight determined by the machine-learning algorithms. 
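As a hedged illustration of the clustering step described above, the sketch below one-hot encodes a few hypothetical observed profiles and clusters the element values so that values which always co-occur (for example, an application and its associated font) land in the same cluster. The element names, the use of scikit-learn's KMeans, and the cluster count are assumptions made for this sketch, not the platform's actual algorithm.

```python
# Illustrative sketch: clustering encoded data privacy elements to surface
# groupings akin to the attribution vectors of FIG. 7. Element names, the
# choice of KMeans, and the cluster count are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import OneHotEncoder

# Each row is one observed profile; each column is a categorical data
# privacy element (browser, OS, installed application, and font family).
observations = np.array([
    ["chrome",  "windows", "app_a", "font_x"],
    ["chrome",  "windows", "app_a", "font_x"],
    ["firefox", "linux",   "app_b", "font_y"],
    ["firefox", "linux",   "app_b", "font_y"],
])

encoder = OneHotEncoder()
indicators = encoder.fit_transform(observations).toarray()  # rows: profiles
element_values = encoder.get_feature_names_out()            # one-hot column names

# Cluster the element values themselves (transpose so each row is one value).
# Values that always appear together across profiles fall into the same
# cluster, hinting at a dependency the artificial profile model must respect
# (here, app_a with font_x and app_b with font_y).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(indicators.T)
for value, label in zip(element_values, labels):
    print(label, value)
```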
In the case of creating new artificial profiles, the data privacy elements included in the newly-created profiles can include a set of data privacy elements that are consistent with any relationships or dependencies identified in artificial profile model700, and thus, realistic artificial profiles can be created. In the case of modifying existing artificial profiles, the data privacy elements included in the existing artificial profile can be modified in a manner that is consistent with the relationships and dependencies that are identified in artificial profile model700, and thus, existing artificial profiles can be obfuscated, such that the obfuscated profile would appear to be realistic. To illustrate and only as a non-limiting example, artificial profile model700may be the result of executing one or more clustering algorithms on non-exhaustive set610. The clustering algorithm may have identified that non-exhaustive set610included several distinct groupings or clusters of data privacy elements. For example, the clusters may be identified based on one or more similarities between values of the data privacy elements. In some implementations, the clusters of data privacy elements may be referred to as attribution vectors710. Further, the clusters of data privacy elements may include environment/non-interactive attribution vector720, behavior/real-time attribution vector730, behavioral/non-real-time attribution vector740, and activity and patterns attribution vector750. It will be appreciated that any number of attribution vectors or clusters may be determined in artificial profile model700, and that environment/non-interactive attribution vector720, behavior/real-time attribution vector730, behavioral/non-real-time attribution vector740, and activity and patterns attribution vector750are merely non-limiting examples of identifiable clusters of data privacy elements. The present disclosure is not limited to the attribution vectors illustrated inFIG.7. Continuing with the non-limiting example, environmental/non-interactive attribution vector720may correspond to data privacy elements that are clustered together based on environmental or non-interactive attributes of a computing device or browser. Environmental or non-interactive attributes, in this example, may refer to attributes that are not related to or dependent upon a user interaction with a webpage, or that are related to environment attributes of a computer. For example, attribution vectors720may include data privacy elements relating to hardware components of a computing device; browser attributes, such as fonts used, browser type, or installed web apps; and OS attributes, such as fonts used by the OS, OS version, information about software updates (e.g., update schedule and IP addresses of update distribution servers), and applications installed in the OS. Additionally, the machine-learning algorithms may have identified patterns in the data privacy elements clustered as environment/non-interactive attribution vectors720. For example, the dashed line between "hardware" and "browser" inFIG.7indicates that the hardware information is relevant data for the browser information (e.g., the types of browsers that can be downloaded on the computing device are constrained by the hardware information). As another example, the dashed line between "fonts" and "applications" inFIG.7indicates that the data privacy elements relating to the fonts available in the OS are correlated with or dependent on the applications installed in the OS. 
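To make the font/application dependency concrete, the following sketch modifies an exposed profile while keeping the font set consistent with the newly reported application. The dependency table, application names, and font names are hypothetical; in the embodiments described above, these relationships would come from artificial profile model700rather than a hard-coded dictionary.

```python
# Illustrative sketch of dependency-consistent profile modification. The
# dependency table, application names, and font names are hypothetical; a
# real artificial profile model would supply these learned relationships.
import random

# Learned or user-defined dependency: each application implies a font set.
APP_FONT_DEPENDENCY = {
    "design_suite_a": ["FontAlpha", "FontBeta"],
    "office_suite_b": ["FontGamma", "FontDelta"],
    "dev_tools_c": ["FontEpsilon"],
}


def modify_profile(profile: dict, rng: random.Random) -> dict:
    """Swap the exposed application, then update the fonts to match, so the
    modified profile stays consistent with the learned dependency."""
    alternatives = [app for app in APP_FONT_DEPENDENCY
                    if app != profile["application"]]
    new_app = rng.choice(alternatives)
    return {
        **profile,
        "application": new_app,
        "fonts": list(APP_FONT_DEPENDENCY[new_app]),  # fonts follow the app
    }


real_profile = {
    "application": "design_suite_a",
    "fonts": ["FontAlpha", "FontBeta"],
    "browser": "browser_type_a",
}
artificial_profile = modify_profile(real_profile, random.Random(7))
print(artificial_profile)  # application and fonts change together, realistically
```

Because the substituted fonts always match the substituted application, a network host that checks the two elements against each other sees a mutually consistent, realistic-looking profile.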
In some implementations, behavioral/real-time attribution vector730may correspond to data privacy elements that are clustered together based on real-time attributes of a user input (e.g., input or keystroke dynamics of user input received at a browser). Behavioral real-time attributes, in this example, may refer to attributes that are related to or dependent upon real-time user interaction with a webpage, such as mouse movements, mouse clicks, or text inputs. For example, attribution vectors730may include data privacy elements relating to input profiling based on keystroke events and/or mouse movements. Input profiling will be described in greater detail below with respect toFIG.9. Data privacy elements relating to real-time input can be exposed to network hosts and exploited to reveal information about the user. In some implementations, behavior/non-real-time attribution vector740may correspond to data privacy elements that are clustered together based on non-real-time attributes of a user input. Behavioral non-real-time attributes, in this example, may refer to attributes that are determined based on aggregated information from previous online activity performed by the user. For example, attribution vectors740may include data privacy elements relating to the average duration of activity on webpages, a bounce rate indicating an average time spent on a webpage before navigating away from the webpage, statistics about clickstream data, and other suitable non-real-time attributes of user input. Attribution vectors730and740differ in that the data privacy elements relating to attribution vector730are based on in-the-moment text input or mouse movements, whereas data privacy elements relating to attribution vector740are based on an evaluation of aggregated data associated with user input. In some implementations, activity and patterns attribution vector750may correspond to data privacy elements that are clustered together based on the content of user input. Activity and patterns attributes, in this example, may refer to attributes that are determined based on the content of the input entered into a browser by a user. For example, attribution vectors750may include a data privacy element that exposes the browsing history of the user, the dialect or idiosyncrasies used by the user, the user's engagement with content (e.g., tapping or clicking on advertisement content), and/or any other suitable activity- or pattern-based data privacy elements. It will be appreciated that artificial profile models may be used by data broker companies (e.g., in an advertising context), while still protecting user privacy. As a non-limiting example and for illustrative purposes only, a user of the data protection platform may utilize a profile to interact with another user or party. Through a trust relationship with that other user or party, the user may select which data privacy elements to expose to the other user or party. As non-limiting examples, the selected data privacy elements can be exposed to the other user or party by passing information along via HTTP headers, HTTP verbs (e.g., POST), or other techniques, such as YAML (YAML Ain't Markup Language) or XML (Extensible Markup Language). In some implementations, the selected data privacy elements can last for the duration of an online session, can be manually or automatically modified during the online session, or can be automatically modified after each session. For example, an online session may begin when a user logs into the data protection platform. 
When the user logs into the data protection platform, an artificial profile may be generated for the user, and that artificial profile may include data privacy elements that are the same as or different (entirely or partially) from the data privacy elements of the last artificial profile generated for the user. Further, since many existing exploits and exploit techniques are detectable by modern firewalls, the data protection platform can generate artificial profiles to overtly pretend to have vulnerabilities that an organization is capable of defending against. Accordingly, network attacks by network hosts, such as hackers, are inhibited because the network hosts may attempt network attacks based on inaccurate information and the network's firewalls can stop the attack attempts (and any network attacks that do succeed in accessing the network will likely fail because the data protection platform may be a hybrid mix of containers and inaccurate information). FIGS.8A-8Bare block diagrams illustrating artificial profiles generated using the artificial profile model illustrated inFIG.7, according to certain aspects of the present disclosure.FIG.8Aillustrates artificial profile800A, which represents the data privacy elements that are exposed to a web server when a computing device loads a website, for example. For the purpose of illustration and only as a non-limiting example, artificial profile800A may include four attribution vectors. The four attribution vectors may include environmental/non-interactive attribution vector810, behavioral real-time attribution vector820, behavioral non-real-time attribution vector830, and activity and patterns attribution vector840. In some implementations, an attribution vector may be a category, grouping, or classification of data privacy elements. Environmental/non-interactive attribution vector810may be detected when the computing device loads the webpage. Environment/non-interactive attribution vector810may include data privacy element815, which indicates a type of browser running on the computing device. For example, browser type A (e.g., the GOOGLE CHROME browser may be a browser type, and the MOZILLA FIREFOX browser may be another browser type) may be a value of data privacy element815, which may be detected when the computing device loads the webpage. Behavioral real-time attribution vector820may include data privacy element825, which indicates a real-time input signature associated with the input received at the computing device by the user. The input signature of input received at the computing device is described in greater detail with respect toFIG.9. For example, an input signature of "English" (e.g., detected based on the key dynamics of the input indicating that the letters "TING" are typed sequentially without a pause by the user) may be a value of data privacy element825, which may be detected when the computing device interacts with the webpage. Behavioral non-real-time attribution vector830may include data privacy element835, which indicates a non-real-time input signature associated with previous inputs received at the computing device while accessing the website or other websites. For example, an input signature of "English" may be a value of data privacy element835, which may be detected when the computing device interacts with the webpage or any other webpage at a previous time. 
Behavioral real-time attribution vector820detects, analyzes, and profiles input in real-time as the inputs are being entered by the user operating the computing device, whereas behavioral non-real-time attribution vector830represents a behavioral pattern associated with the user operating the computing device, but which occurred in the past. Lastly, activity and patterns attribution vector840may include data privacy element845, which indicates an activity or pattern of the Operating System (OS) installed on the computing device. For example, an activity or pattern of the detected OS may be that the OS transmits a signal to XYZ.com daily at 6:00 a.m. For example, XYZ.com may be a website that stores or distributes patches for the OS. The signal that is transmitted daily from the OS of the computing device may correspond to a request to download new patches, if any. While artificial profile800A represents the real data privacy elements that were exposed to the web server hosting the website accessed by the computing device, new artificial profile800B represents the modified artificial profile. For example, the data protection platform can generate new artificial profile800B by modifying data privacy elements of artificial profile800A. Further, the data protection platform may modify artificial profile800A based on an artificial profile model. The artificial profile model may be a model that is generated using machine-learning techniques, and that includes one or more dependencies or relationships between two or more data privacy elements. Accordingly, when new artificial profile800B is generated, the data privacy elements of artificial profile800A are modified within the constraints of the artificial profile model, so as to obfuscate the user with a realistic artificial profile. Advantageously, obfuscating information about a user in a realistic manner is more likely to cause a potential hacker to accept the obfuscated information as the real information of the user. Conversely, if artificial profiles are modified without being consistent with the underlying dependencies and relationships between data privacy elements, a potential hacker may recognize the inconsistency as a flag indicating that the artificial profile includes inaccurate or obfuscated information. If a potential hacker recognizes that the collected data privacy elements are obfuscated, the potential hacker may be more likely to continue a data breach using alternative approaches, potentially elevating the severity of an attack on the network. Continuing with the non-limiting example illustrated inFIG.8B, the data protection platform can generate new artificial profile800B (e.g., a modified version of artificial profile800A) for the user to obfuscate or mask the user's real data privacy elements (e.g., the data privacy elements included in profile800A). In some implementations, new artificial profile800B may include the same attribution vectors as artificial profile800A; however, the present disclosure is not limited thereto. In some implementations, new artificial profile800B may include more or fewer attribution vectors than the underlying artificial profile that is being modified. 
Environmental/non-interactive attribution vector850, behavioral real-time attribution vector860, behavioral non-real-time attribution vector870, and activity and patterns attribution vector880may each correspond to its respective attribution vector in artificial profile800A; however, the value (e.g., the data underlying the data privacy element) may have been changed. For example, the data protection platform may modify data privacy element815from "Browser type A" to "Browser type B" (e.g., from a GOOGLE CHROME browser to a FIREFOX browser). In some implementations, data privacy element815is modified before a network host, such as a web server providing access to a webpage, can collect any data from the browser of the computing device or from the computing device itself. When the network host collects data privacy elements from the computing device (e.g., a web server collecting data privacy elements from the browser operating on the computing device), the network host will collect the obfuscated data privacy element855, which indicates that Browser type B is being used, instead of data privacy element815, which indicates the actual browser being used by the user. The data protection platform may modify data privacy element825from "input signature=English" to "input signature=Undetectable." In some implementations, data privacy element825is modified before a network host, such as a web server providing access to a webpage, can collect any data from the browser of the computing device or from the computing device itself. When the network host collects data privacy elements from the computing device (e.g., a web server receiving input entered by the user at the computing device), the network host will collect the obfuscated data privacy element865, which indicates that the input signature is undetectable, instead of data privacy element825, whose input signature indicates a likelihood that the user is an English speaker. The data protection platform can change the input signature (e.g., input dynamics) of user input received at the computing device using techniques described in greater detail with respect toFIG.9. However, as a brief summary, the data protection platform can change the time signature associated with the inputted keystroke events so as to obfuscate any detectable key event features, such as the letters "TING" being typed together without a pause (indicating that the user is likely a native English speaker). Similarly, the data protection platform can modify data privacy element835from "previous input signature=English" to "previous input signature=undetectable." Just as with the modification of data privacy element825to data privacy element865, the data protection platform can modify data privacy element835to data privacy element875using the same or similar technique (e.g., the techniques described inFIG.9). The data protection platform may modify data privacy element845from "Operating System pings XYZ.com daily at 0600 for patches" to "Operating System pings A1B2C3.com biweekly at 2300 for patches" (e.g., changing one Operating System's automatic update procedure to another Operating System's automatic update procedure). 
In some implementations, data privacy element845is modified before a network host, such as a web server providing access to a webpage, can collect any data from the browser of the computing device or from the computing device itself. When the network host collects data privacy elements from the computing device (e.g., a web server collecting data privacy elements from the browser operating on the computing device), the network host will collect the obfuscated data privacy element885, which indicates that the OS pings an external server on a regular schedule, instead of data privacy element845, which indicates the actual automatic update schedule of the OS installed on the computing device. Had the network host collected data privacy element845from the browser of the computing device, the network host could have identified and exploited a vulnerability in the OS installed on the computing device, or a vulnerability in the servers of XYZ.com. However, advantageously, since the network host instead collected modified data privacy element885(as part of collecting modified artificial profile800B from the browser or computing device), the network host collected realistic, yet obfuscated, information about the browser and computing device. Thus, the network host cannot effectively mount an attack on the network or the computing device because modified artificial profile800B does not expose any real vulnerabilities existing in the browser or the computing device. In some implementations, the data protection platform does not need to generate artificial profile800A, which includes data privacy elements that were actually detected from the browser or computing device. Instead, the data protection platform can automatically and dynamically generate modified artificial profile800B while, or in conjunction with, the user browsing webpages on the Internet. In these implementations, the data protection platform does not need to detect the actual data privacy elements exposed by the computing device, but rather, the data protection platform can generate an artificial profile for the user, browser, or computing device, so as to obfuscate any potentially exposable data privacy elements. FIG.9is a diagram illustrating process flow900for controlling input signatures during an interaction session, according to certain aspects of the present disclosure. Process flow900may be performed at least in part at data protection platform950. Data protection platform950may be the same as or similar to data protection platform510ofFIG.5, and thus, a description of data protection platform950is omitted here. Process flow900may be performed to modify input signatures associated with input received at a platform-secured browser, such as the platform-secured browser ofFIG.4. In some implementations, an input signature may include a feature that characterizes an input received at the platform-secured browser. For example, a feature may be the time signature of keystrokes inputted at the platform-secured browser; however, the present disclosure is not limited thereto. Another example of a feature that characterizes an input may be movement associated with a cursor or mouse clicks. The feature of an input can be exposed as a data privacy element when a computing device accesses a website. To illustrate process900and only as a non-limiting example, computer910may be operated by a user. For instance, the user may be navigating a website or application using a platform-secured browser.
The website displayed on the browser of computer910may include input element920. Input element920may be a text box displayed on a webpage for a search engine. Further, input element920may be configured to receive input from the user operating computer910. Continuing with the non-limiting example, the user may type the phrase "interesting news" into input element920. The natural keystroke event timing associated with inputting the letters "interesting news" into input element920is shown in keystroke time signature930. For example, the user may naturally input the letters of "interesting news" in the following pattern: "IN," then a pause, "TERES," then a pause, "TING," then a pause, "NEW," then a pause, and finally the letter "S." The pauses of the pattern may occur naturally as the user types the phrase. The user may move or adjust his or her fingers to continue typing. Naturally, certain letters are more likely to be typed together quickly, such as "TING," and for other letters, there may be a need for a brief pause while the user's fingers adjust or find the next letter on a keyboard. However, keystroke dynamics, such as a keystroke time signature, can be a data privacy element that exposes information about the user operating computer910. For example, an input profiling technique can be used to determine that keystroke time signature930indicates that the user is an English speaker. Letter grouping940(i.e., the letters "TING") is often used in the English language, but is not often used in other languages. Accordingly, the keystroke time signature930can be evaluated to detect certain letter groupings, such as letter grouping940of "TING" typed sequentially without pauses. The detected letter groupings can reveal information about the user to a web server, such as the language of the user. According to certain embodiments, data protection platform950can modify keystroke time signature930to obfuscate or block any information that could be extracted from keystroke time signature930. For example, data protection platform950can receive the input of "interesting news" from the platform-secured browser, and data protection platform950can detect keystroke time signature930from the received input before transmitting the input to the web server hosting the website that includes input element920. Instead of transmitting the received input in the pattern of keystroke time signature930, data protection platform950can transmit the letters "interesting news" to the web server with the characteristic of modified keystroke time signature960. Modified keystroke time signature960can indicate that all letters of "interesting news" are typed one-after-another without any pauses. Thus, while the network host, for example, the web server hosting the website that includes input element920, can gain access to or detect the time signature of the received input of "interesting news," the detected time signature at the web server would be modified keystroke time signature960, instead of the real keystroke time signature930. Advantageously, keystroke time signature930, which represents the natural keystroke dynamics of the user operating computer910, can be obfuscated so as to prevent an accurate input profiling of the received text. In some implementations, data protection platform950can automatically (or potentially not automatically) modify features of the received input.
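As a rough sketch of the time-signature normalization just described, the following illustrative Python function replaces captured inter-keystroke timing with a uniform gap, so that letter groupings such as "TING" typed without pauses no longer stand out. The function name, data layout, and timing values are hypothetical and are not part of the disclosure.

```python
def normalize_time_signature(keystrokes, uniform_gap_ms=80):
    """keystrokes: list of (character, timestamp_ms) pairs as captured.
    Returns the same characters with a flat, uniform time signature."""
    if not keystrokes:
        return []
    start = keystrokes[0][1]
    return [(ch, start + i * uniform_gap_ms)
            for i, (ch, _) in enumerate(keystrokes)]

# Captured (character, timestamp_ms) pairs; the closely spaced "t","i","n","g"
# run near the end is the kind of grouping that hints at an English speaker.
captured = [("i", 0), ("n", 70), ("t", 400), ("e", 480), ("r", 560),
            ("e", 640), ("s", 720), ("t", 1100), ("i", 1160), ("n", 1220),
            ("g", 1280)]
print(normalize_time_signature(captured))
```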
For example, to modify the keystroke time signature of input text received at an input element, data protection platform950can provide an intermediary, such as an invisible overlay over the websites accessed by the platform-secured browser. In some implementations, the intermediary may intercept the input text received at the input element (e.g., before the text is transmitted to the web server), modify the time signature of the input text, and then transmit the input text with the modified time signature to the web server. Other techniques for performing the modification may include modifying input streams, providing on-screen input methods, and other suitable techniques. In some implementations, data protection platform950may provide additional information to the user, instead of modifying an input stream. For example, data protection platform950can notify the user that the input text is defined by a keystroke time signature that may reveal the language of the input text. In some implementations, the time signature of the input text can be modified immediately (e.g., in real-time) upon being received at the input element, whereas, in other implementations, the time signature of the input text can be modified over a period of time or at a later time. In some implementations, data protection platform950can impose an effect on inputted text or inputted mouse interactions, such that the effect automatically changes the browser to modify a time signature of the inputted text or mouse interactions. For example, data protection platform950can include a shim that serves as a wedge between the OS and the browser (or application, if being used). The shim can influence or modify how the OS reports inputs received at a keyboard or a mouse. The shim may be used to modify how the OS reports the time signature of inputted text, for example. In some implementations, an intermediary may not be used, but rather the native environment of the application or browser may be structured so that inputs received at the browser are outputted with a defined time signature. In these implementations, the input text or mouse interaction is not intercepted at the browser, but rather, the input text or mouse interaction is defined so as to have a particular time signature. The present disclosure is not limited to detecting the keystroke time signature of inputted text. In some implementations, mouse movement can also be detected as a data privacy element, and subsequently modified by data protection platform950to remove any extractable characteristics. It will be appreciated that the input may also include video signals, audio signals, motion signals, and/or haptic signals (e.g., received from a haptic glove). For example, in the context of a virtual-reality headset, the inputs received at a web server may comprise much more data than text or mouse interactions. Using the techniques described above, data protection platform950can modify the inputted video signals, audio signals, motion signals, and/or haptic signals, so as to obfuscate information about the user operating the virtual-reality headset. The foregoing description of the embodiments, including illustrated embodiments, has been presented only for the purpose of illustration and description and is not intended to be exhaustive or limiting to the precise forms disclosed. Numerous modifications, adaptations, and uses thereof will be apparent to those skilled in the art.
11861045 | DETAILED DESCRIPTION The principle of the present disclosure will be described below with reference to several example embodiments illustrated in the accompanying drawings. Although the drawings show preferred embodiments of the present disclosure, it should be understood that these embodiments are merely described to enable those skilled in the art to better understand and further implement the present disclosure, and not to limit the scope of the present disclosure in any way. The term “include” and variants thereof used herein indicate open-ended inclusion, that is, “including but not limited to.” Unless specifically stated, the term “or” means “and/or.” The term “based on” means “based at least in part on.” The terms “an example embodiment” and “an embodiment” indicate “at least one example embodiment.” The term “another embodiment” indicates “at least one additional embodiment.” The terms “first,” “second,” and the like may refer to different or the same objects. Other explicit and implicit definitions may also be included below. As previously mentioned, some software may be generalized to more platforms for use. When used on multiple platforms, some features in the software may not be supported on some platforms. Unsupported features need to be disabled or hidden for specific platforms. Otherwise, it will bring bad user experience during use. Because the user will not know whether this is a bug or because it is not supported. However, there are too many features running in software, and there are also many supporting platforms. If there is no unified method to manage the feature availability, the user interface (UI) will be troubled by a lot of code logic. Moreover, if enabled states of certain features need to be modified in the future (for example, a client terminal purchases a different license, or the client terminal enables or disables certain features as needed), the UI code will have to be modified and refactored. A solution for system feature management is proposed in the embodiments of the present disclosure, so that features of software can be managed in a unified manner. According to various embodiments of the present disclosure, a feature item set is loaded, and the feature item set includes multiple feature items respectively corresponding to multiple microservices. In response to an availability indicator of a first feature item indicating that the first feature item is unavailable, the first feature item is disabled. According to the embodiments described herein, software runtime failures may be avoided by disabling, when the software is booted, those features that are not supported by the platform on which the software is running. In this way, software can be made to better adapt to more platforms. In addition, even if the user re-purchases a different service, the operator only needs to enable or disable the corresponding feature item, and does not need to redeploy to the user, thereby saving operating costs. The user may also choose to enable or disable the purchased service according to his/her own needs, so that the resources consumed by microservices running in the backend can be reduced. The basic principle and some example implementations of the present disclosure will be described below with reference to the accompanying drawings. 
It should be understood that these example embodiments are given only to enable those skilled in the art to better understand and thus implement the embodiments of the present disclosure, and are not intended to limit the scope of the present disclosure in any way. FIG.1is a schematic diagram of an example environment in which embodiments of the present disclosure can be implemented. As shown inFIG.1, environment100includes system feature manager110, client terminal120, microservice130, feature item library140, and license manager150. Client terminal120may communicate with system feature manager110, microservice130, and feature item library140through API gateway160. Microservice130may provide various services to client terminal120. Although microservice130is shown as a whole, it should be understood that this is for ease of illustration only. In fact, microservice130may include one or more microservices (collectively or individually referred to as "microservice130") that may be communicatively coupled to each other and independently deployed. These microservices communicate with each other through a representational state transfer (REST) API. Any two of system feature manager110, microservice130, and feature item library140may communicate with each other. System feature manager110may obtain data from license manager150. In some embodiments, system feature manager110and microservice130may be arranged together. In some embodiments, system feature manager110and microservice130may be arranged separately, but can communicate with each other. The platform on which system feature manager110and microservice130run may be a local virtual machine, a public cloud, a cloud virtual machine, or the like. The scope of the present disclosure is not limited in this regard. It should be understood that the structure and functions of environment100are described for illustrative purposes only and do not imply any limitation to the scope of the present disclosure. For example, the embodiments of the present disclosure may also be applied to an environment different from environment100. FIG.2is a flow chart of example method200for system feature management according to an embodiment of the present disclosure. For example, method200may be performed by system feature manager110as shown inFIG.1. It should be understood that method200may also include additional actions not shown and/or may omit actions shown, and the scope of the present disclosure is not limited in this regard. Method200will be described in detail below with reference toFIG.1andFIG.3. At block210, system feature manager110loads a feature item set that includes multiple feature items corresponding to multiple microservices, respectively.FIG.3shows code defining a feature item according to some embodiments of the present disclosure. As shown inFIG.3, in some embodiments, a feature item may include feature identifier310, feature descriptor320, availability indicator330, and status indicator340. Feature identifier310is an identifier that can uniquely identify the feature item. Feature descriptor320can describe the feature item and may be presented on a user interface in the form of options for presenting the feature item to a user. Availability indicator330indicates whether the feature item is available. For example, if the service purchased by client terminal120does not include a microservice corresponding to the feature item, then availability indicator330should indicate that it is unavailable.
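As a point of reference, a loaded feature item might be represented as in the following minimal Python sketch. The fields mirror the elements described forFIG.3(feature identifier310, feature descriptor320, availability indicator330, status indicator340, association indicator350, API rule360) and the dependency indicators discussed below, but the exact keys and values shown here are assumptions made for illustration only.

```python
# Illustrative representation of one feature item; key names and values are
# assumptions, not the code of FIG. 3.
feature_item = {
    "id": "reporting.export",          # feature identifier (310)
    "description": "Export reports",   # feature descriptor (320)
    "available": True,                 # availability indicator (330)
    "status": "ENABLED",               # status indicator (340)
    "associatedService": None,         # association indicator (350), if any
    "apiRule": ["/api/v1/reports/export/**"],  # API rule (360)
    "dependsOn": ["reporting.core"],   # upper-level feature items
}
```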
In some embodiments, loading a feature item set may include obtaining license information for the multiple microservices. Then, availability indicators330in the feature item set are determined based on the license information. In the embodiment shown inFIG.3, availability indicator330is represented by a field “available,” which indicates available when a value thereof equals to “true.” Conversely, a value “false” (not shown) may be used for indicating unavailable. In some other embodiments, “true” and “false” or “1” and “0” may be used for indicating available or unavailable, respectively, and the scope of the present disclosure is not limited in this regard. Back toFIG.2, at block220, in response to availability indicator330of a first feature item in the feature item set indicating that the first feature item is unavailable, system feature manager110disables the first feature item. In this manner, even if the user re-purchases a different service, the operator only needs to enable or disable the corresponding feature item, and does not need to redeploy to the user, thereby saving operating costs. In some embodiments, in response to status indicator340of the first feature item indicating that the first feature item is disabled, system feature manager110disables the first feature item. Status indicator340may indicate whether the feature item is enabled while the feature item is available. For example, although the service purchased by client terminal120includes the microservice corresponding to the feature item, client terminal120may choose to enable or disable the feature item according to actual situation and needs. The enabling or disabling action of client terminal120may be transferred as metadata to feature item library140, and stored as cached data for the feature item set in feature item library140. In the embodiment shown inFIG.3, status indicator340is represented by a field “status,” which indicates enabled when a value thereof equals to “ENABLED.” Conversely, a value “DISABLED” (not shown) may be used for indicating disabled. In some other embodiments, “ENABLED” and “DISABLED” or “1” and “0” may be used for indicating enabled or disabled, respectively, and the scope of the present disclosure is not limited in this regard. In some embodiments, the feature item set may include a platform indicator for indicating the platform for which the feature item set is targeted. Based on the platform indicator of the feature item set, the feature item unsupported by the platform as indicated by the platform indicator is disabled. In this way, software runtime failures may be avoided by disabling, when the software is booted, those features that are not supported by the platform on which the software is running. In this way, software can be made to better adapt to more platforms. In some embodiments, loading the feature item set may include determining status indicators340in the feature item set based on cached data for the feature item set. In this way, the user may choose to enable or disable the purchased service according to his/her own needs, so that the resources consumed by microservice130running in the backend can be reduced. The feature item also includes API rule360as shown inFIG.3, and API rule360indicates part of microservice130corresponding to the feature item. When a feature item is disabled, part of microservice130corresponding to the feature item may be disabled according to the API rule of the feature item. 
In this way, instead of disabling the entire microservice corresponding to the feature item, part of the services may be selectively disabled, and only some necessary services are retained. Therefore, the energy consumption is reduced while ensuring the normal operation of the software. In some embodiments, some feature items in the feature item set may include, for example, association indicator350inFIG.3. The association indicator indicates that the feature item is associated with corresponding microservice130. At this point, if the feature item is disabled, microservice130associated with it will be disabled. In this way, because the feature item is associated with corresponding microservice130, entire microservice130can be disabled by disabling the feature item without considering what kind of service is defined by the API rule in the feature item. There may be dependencies between feature items. That is, microservice130corresponding to one feature item or part of this microservice130may depend on microservice130corresponding to another feature item or part of its service. In some embodiments, a second feature item in the feature item set may include one or more dependency indicators for indicating one or more upper-level feature items on which the second feature item depends. When the second feature item having upper-level feature items is being enabled, the dependency relationship needs to be considered.FIG.4is a flow chart of example method400of enabling a second feature item depending on upper-level feature items according to some embodiments of the present disclosure. Method400may be performed also by system feature manager110as shown inFIG.1. It should be understood that method400may also include additional actions not shown and/or may omit actions shown, and the scope of the present disclosure is not limited in this regard. Method400will be described in detail below with reference toFIG.1andFIG.3. At block410, system feature manager110receives a request to transition the second feature item from disabled to enabled. Then, at block420, system feature manager110determines whether availability indicator330of the second feature item indicates available. If availability indicator330indicates unavailable (“No” branch), it means that the service corresponding to the second feature item is unavailable (for example, the corresponding service has not been purchased), the second feature item cannot be enabled, and therefore, the request will be rejected by system feature manager110at block450. If availability indicator330of the second feature item indicates available (“Yes” branch), system feature manager110determines at block430whether status indicators340of one or more upper-level feature items of the second feature item all indicate enabled. As long as there is one upper-level feature item whose status indicator340indicates disabled, the second feature item cannot be enabled. This is because its corresponding service needs to depend on other services to be implemented. Therefore, the request will be rejected by system feature manager110at block450. If it is determined that status indicators340of the one or more upper-level feature items all indicate enabled, at block440, system feature manager110enables the second feature item. If an upper-level feature item is a parent feature item with a strong association relationship with the second feature item, the dependency relationship also needs to be considered when the upper-level feature item is disabled. 
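The enabling check of method400can be summarized in a short sketch. The following illustrative Python function assumes feature items are held in a dictionary keyed by feature identifier, with the "available", "status", and "dependsOn" fields sketched earlier; these names are assumptions, not the disclosed implementation.

```python
def try_enable(feature_id: str, items: dict) -> bool:
    """Attempt to transition a feature item from disabled to enabled."""
    item = items[feature_id]
    # Block 420: reject if the corresponding service is unavailable.
    if not item["available"]:
        return False  # block 450: request rejected
    # Block 430: every upper-level feature item must already be enabled.
    for parent_id in item.get("dependsOn", []):
        if items[parent_id]["status"] != "ENABLED":
            return False  # block 450: request rejected
    # Block 440: enable the second feature item.
    item["status"] = "ENABLED"
    return True
```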
If a request to transition a parent feature item from enabled to disabled is received, because the second feature item is a child feature item thereof, the second feature item, as the child feature item, is also disabled when the parent feature item is disabled. If there is another child feature item of the parent feature item, the another child feature item is also disabled. If the dependency between the upper-level feature item and the second feature item is weak dependency, the second feature item may or may not be disabled when the upper-level feature item is disabled. In this way, it can be ensured that enabling or disabling operations of feature items will not cause the features of software to be disordered. FIG.5is a schematic block diagram of example device500that can be configured to implement an embodiment of the present disclosure. For example, system feature manager110as shown inFIG.1may be implemented by device500. As shown inFIG.5, device500includes central processing unit (CPU)501which may perform various appropriate actions and processing according to computer program instructions stored in read-only memory (ROM)502or computer program instructions loaded from storage unit508to random access memory (RAM)503. Various programs and data required for operations of device500may also be stored in RAM503. CPU501, ROM502, and RAM503are connected to each other through bus504. Input/output (I/O) interface505is also connected to bus504. A plurality of components in device500are connected to I/O interface505, including: input unit506, such as a keyboard and a mouse; output unit507, such as various types of displays and speakers; storage unit508, such as a magnetic disk and an optical disc; and communication unit509, such as a network card, a modem, and a wireless communication transceiver. Communication unit509allows device500to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunication networks. The various methods and processes described above, such as method200and method400, may be performed by processing unit501. For example, in some embodiments, method200and method400may be implemented as a computer software program that is tangibly included in a machine-readable medium, such as storage unit508. In some embodiments, part of or all the computer program may be loaded and/or installed to device500via ROM502and/or communication unit509. One or more actions of method200and method400described above may be performed when the computer program is loaded into RAM503and executed by CPU501. The present disclosure may be a method, an apparatus, a system, and/or a computer program product. The computer program product may include a computer-readable storage medium on which computer-readable program instructions for performing various aspects of the present disclosure are loaded. The computer-readable storage medium may be a tangible device that may retain and store instructions used by an instruction-executing device. For example, the computer-readable storage medium may be, but is not limited to, an electric storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. 
More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical encoding device, for example, a punch card or a raised structure in a groove with instructions stored thereon, and any suitable combination of the foregoing. The computer-readable storage medium used herein is not to be interpreted as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through electrical wires. The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to various computing/processing devices or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the computing/processing device. The computer program instructions for executing the operation of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, status setting data, or source code or object code written in any combination of one or more programming languages, the programming languages including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the C language or similar programming languages. The computer-readable program instructions may be executed entirely on a user computer, partly on a user computer, as a stand-alone software package, partly on a user computer and partly on a remote computer, or entirely on a remote computer or a server. In a case where a remote computer is involved, the remote computer may be connected to a user computer through any kind of networks, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, connected through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), is customized by utilizing status information of the computer-readable program instructions. The electronic circuit may execute the computer-readable program instructions to implement various aspects of the present disclosure. Various aspects of the present disclosure are described here with reference to flow charts and/or block diagrams of the method, the apparatus (system), and the computer program product according to the embodiments of the present disclosure. 
It should be understood that each block of the flow charts and/or the block diagrams and combinations of blocks in the flow charts and/or the block diagrams may be implemented by computer-readable program instructions. These computer-readable program instructions may be provided to a processing unit of a general-purpose computer, a special-purpose computer, or a further programmable data processing apparatus, thereby producing a machine, such that these instructions, when executed by the processing unit of the computer or the further programmable data processing apparatus, produce means for implementing functions/actions specified in one or more blocks in the flow charts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium, and these instructions cause a computer, a programmable data processing apparatus, and/or other devices to operate in a specific manner; and thus the computer-readable medium having instructions stored includes an article of manufacture that includes instructions that implement various aspects of the functions/actions specified in one or more blocks in the flow charts and/or block diagrams. The computer-readable program instructions may also be loaded to a computer, a further programmable data processing apparatus, or a further device, so that a series of operating steps may be performed on the computer, the further programmable data processing apparatus, or the further device to produce a computer-implemented process, such that the instructions executed on the computer, the further programmable data processing apparatus, or the further device may implement the functions/actions specified in one or more blocks in the flow charts and/or block diagrams. The flow charts and block diagrams in the drawings illustrate the architectures, functions, and operations of possible implementations of the systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flow charts or block diagrams may represent a module, a program segment, or part of an instruction, the module, program segment, or part of an instruction including one or more executable instructions for implementing specified logical functions. In some alternative implementations, functions marked in the blocks may also occur in an order different from that marked in the accompanying drawings. For example, two successive blocks may actually be executed in parallel substantially, and sometimes they may also be executed in a reverse order, which depends on involved functions. It should be further noted that each block in the block diagrams and/or flow charts as well as a combination of blocks in the block diagrams and/or flow charts may be implemented by using a special hardware-based system that executes specified functions or actions, or implemented by using a combination of special hardware and computer instructions. The embodiments of the present disclosure have been described above. The above description is illustrative, rather than exhaustive, and is not limited to the disclosed various embodiments. Numerous modifications and alterations are apparent to persons of ordinary skill in the art without departing from the scope and spirit of the illustrated embodiments. 
The selection of terms used herein is intended to best explain the principles and practical applications of the various embodiments or the improvements to technologies on the market, or to enable other persons of ordinary skill in the art to understand the embodiments disclosed here.
11861046 | DETAILED DESCRIPTION The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. A system may be required to ensure safety and security of data (e.g., a message payload) that is transferred, stored, or otherwise processed by the system. Conventionally, a first check value is used for checking safety of the data, while a second (i.e., separate) check value is used for checking security of the data. The safety check value may be, for example, a cyclic redundancy check (CRC) value, while the security check value may be, for example, an integrity check value or message authentication code (MAC). The safety check value and the security check value are typically attached to a message payload and both check values are transferred or stored along with the payload. In some scenarios, to reduce overhead, a system can be configured to use a single check value for evaluating both safety and security of data. Here, rather than attaching both a safety check value and a security check value to the message payload, a single safety and security check value can be attached to the payload. The use of the single safety and security check value reduces overhead associated with transferring the payload, meaning that effective communication bandwidth is increased. In some cases, the safety and security check value (i.e., the single value based on which both safety and security can be evaluated) may be an integrity check value or a MAC. Here, evaluation of the safety and security check value for a message (e.g., a message carrying data from a braking sensor in an automotive application) received by a system (e.g., a microcontroller, an electronic control unit (ECU), or the like), forms the so-called “last mile” of safety protection before data carried in the message is used by an application to execute commands based on the data (e.g., “brake now”). Due to the criticality of the protection measure that evaluates data safety and security, implementation of the integrity check value or the MAC, and the evaluation of the result provided by the protection measure, should follow applicable safety standards for end-to-end protection. In the automotive context, an Automotive Safety Integrity Level (ASIL) scheme may be used to dictate functional safety requirements for a protection measure associated with evaluating safety and security of data. The ASIL scheme is a risk classification scheme defined by the International Organization for Standardization (ISO) 26262 standard (titled Functional Safety for Road Vehicles), which provides a standard for functional safety of electrical and/or electronic systems in production automobiles. An ASIL classification defines safety requirements necessary to be in line with the ISO 26262 standard. An ASIL is established by performing a risk analysis of a potential hazard by looking at severity, exposure, and controllability of a vehicle operating scenario. A safety goal for that hazard in turn carries the ASIL requirements. There are four ASILs identified by the standard: ASIL A, ASIL B, ASIL C, and ASIL D. ASIL D dictates the highest integrity requirements, while ASIL A dictates the lowest. A hazard with a risk that is low (and, therefore, does not require safety measures in accordance with ISO 26262) is identified as quality management (QM). 
In some cases, it is desirable or required that a protection measure for evaluating safety and security achieves a high ASIL. For example, it may be desirable or required that such a protection measure used in a braking sensor application achieves ASIL D. According to ASIL D, a malfunctioning behavior within a data path of the protection measure or a malfunctioning behavior associated with the evaluation performed by the protection measure should be detected with a probability of 99.9% and systematic faults must be detected with at least 99% coverage. Further, any malfunction (e.g., due to a single point fault) that can cause a falsified message to be consumed by an application that could result in harm to a person must be prevented. This means, in the context of the braking application for example, that a falsified message must be detected before a braking action is triggered based on data included in the falsified message. For hardware, providing such protection means that any malfunctioning behavior within the data path of the protection measure or with respect to the evaluation performed by the protection measure during operation should be detected. Conventional hardware platforms that enable utilization of the combined safety and security check value described above do not fulfill these criteria. As a result, a handshake protocol between software tasks dedicated to and qualified for safety, and software tasks dedicated to and qualified for security, is needed. However, such a handshake protocol is complex and costly in terms of system performance and, therefore, is undesirable. Some implementations described herein provide techniques and apparatuses for an improved safety and security check that utilizes a single safety and security check value. In some implementations, the improved safety and security check is enabled by a single cryptographic accelerator and a pair of redundant comparators (e.g., a first comparator and a second comparator). Here, the cryptographic accelerator generates a check value based on a payload received in a message and provides the generated check value to a first comparator and to a second comparator. In some implementations, the first comparator receives the generated check value from the cryptographic accelerator, determines whether the generated check value matches a check value received in the message, and provides a first output indicating whether the generated check value matches the received check value. Similarly, the second comparator receives the generated check value from the cryptographic accelerator, determines whether the generated check value matches the received check value, and provides a second output indicating whether the first check value matches the second check value. Here, the use of the redundant comparators (i.e., the first comparator and the second comparator) enables detection of a hardware fault associated with the comparison of the check value generated by the cryptographic accelerator and the check value in the message. Such a hardware fault can be safety related (e.g., a stuck-at fault, a random hardware fault due to alpha particles, or the like) or can be security related (e.g., a fault caused by a laser attack, needle forcing, or the like). In case of a hardware fault, a given comparator could erroneously provide an output indicating that the generated check value matches the received check value even though the generated check value differs from the received check value. 
However, the redundant comparators prevent such false-positives from being missed by the system. In some implementations, because a single safety and security check value is used, overhead associated with transferring or storing the payload is reduced (e.g., as compared to using separate safety and security check values). Further, an amount of area required to implement hardware for performing the improved safety and security check is relatively small (e.g., since a single cryptographic accelerator is needed). Additionally, the need for a complex and costly handshake protocol between software tasks is eliminated. FIG.1is a diagram of an example implementation100of an improved safety and security check in accordance with aspect of the present disclosure. As shown inFIG.1, example implementation100includes a system200comprising a cryptographic accelerator202, a comparator204a, and a comparator204b. In some implementations, the cryptographic accelerator202, the comparator204a, and the comparator204bare implemented in hardware. The components of system200are described in more detail below in connection withFIG.2. In some implementations, the system200receives a message M that includes a payload P and a safety and security check value SSR. For example, the system200may be implemented on a microcontroller or an ECU associated with controlling braking in an automotive application, and may receive, from a braking sensor, a message M that includes a payload P carrying data based on which a braking action may be decided. Here, the check value SSRmay be, for example, a MAC or an integrity check value generated by the braking sensor and appended to the payload P. As shown inFIG.1by reference102, the cryptographic accelerator202is provided with the payload P of a message M. As shown by reference104, the cryptographic accelerator202generates a safety and security check value SSGbased on the payload P. That is, the cryptographic accelerator202generates the check value SSGbased on the payload P received by the cryptographic accelerator202. In some implementations, the check value SSGmay be, for example, a MAC or an integrity check value. In some implementations, the cryptographic accelerator202is configured to generate the check value SSGusing a key configured on the cryptographic accelerator202. In some implementations, safety of the key used by the cryptographic accelerator202is provided by a safe key write. Alternatively, safety of the key used by the cryptographic accelerator202may be provided by a key check operation performed after a non-safe key write. Additionally, in some implementations, the cryptographic accelerator202is configured to generate the check value SSGaccording to a message specific configuration. As shown by reference106, the cryptographic accelerator202provides the check value SSGto the comparator204aand to the comparator204b, and the comparator204aand the comparator204breceive the check value SSG, accordingly. As shown by reference108, the comparator204aand the comparator204balso receive the check value SSR(i.e., the safety and security check value received in the message M). As shown by reference110, the comparator204adetermines whether the check value SSGmatches the check value SSR. That is, the comparator204adetermines whether the check value SSGgenerated by the cryptographic accelerator202matches (e.g., is equal to or different from by less than a tolerance) the check value SSRreceived in the message M. 
As shown by reference112, the comparator204athen provides an output A indicating whether the check value SSGmatches the check value SSRas determined by the comparator204a. In some implementations, the comparator204aprovides the output A to a processor (e.g., a CPU) of the system200(not shown). Additionally, as shown by reference114, the comparator204bdetermines whether the check value SSGmatches the check value SSR. That is, the comparator204bmakes a redundant determination of whether the check value SSGgenerated by the cryptographic accelerator202matches the check value SSRreceived in the message M. As shown by reference116, the comparator204bthen provides an output B indicating whether the check value SSGmatches the check value SSRas determined by the comparator204b. In some implementations, the comparator204bprovides the output B to the processor of the system200(not shown). In some implementations, the comparator204aand the comparator204boperate in lockstep when determining whether the check value SSGmatches the check value SSRand providing the output A and the output B. That is, the comparator204aand the comparator204bmay perform the same set of operations (e.g., determining whether the check value SSGmatches the check value SSRand providing an output) at the same time, in parallel. In some implementations, the processor to which the comparator204aand the comparator204bprovide the output A and the output B can detect hardware faults associated with the cryptographic accelerator202, the comparator204a, or the comparator204bbased on the output A and the output B. For example, the processor may detect a hardware fault when the output A does not match the output B. Conversely, the processor may detect no hardware fault when the output A matches the output B. Further, in some implementations, the processor to which the comparator204aand the comparator204bprovide the output A and the output B can evaluate authenticity of the message M and safety of the payload P based on the output A and the output B. For example, the processor may authenticate the message M and determine that the payload P is safe when both output A and output B indicate that the check value SSGmatches the check value SSR. Conversely, the processor may not authenticate the message M or may determine that the payload P is not safe when at least one of the output A and output B indicates that the check value SSGdoes not match the check value SSR. That is, in some implementations, the output A and the output B are provided to software running on the processor and the software can determine, based on the output A and the output B, whether the message M and the payload P have passed the safety and security check or whether the message M should be discarded. Notably, only a single cryptographic accelerator202is included in the system200, meaning that the cost and size of the system200are reduced (e.g., as compared to a safety and security solution that requires multiple cryptographic accelerators). Further, the system200can detect a malfunctioning behavior on a data path associated with the safety and security check performed by the system200. In some implementations, detection of a malfunctioning behavior is enabled due to argumentation of the mathematics of the cryptographic accelerator202(e.g., when a Hamming distance of an algorithm used in the cryptographic accelerator202is sufficient to hold the safety argumentation, no double computation of the check value SSGis required). As indicated above,FIG.1is provided as an example.
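The decision logic described above can be sketched behaviorally as follows. This illustrative Python snippet uses HMAC-SHA256 as a stand-in for whatever MAC or integrity algorithm the cryptographic accelerator202implements; the key, function names, and example payload are assumptions. Note that in pure software the two redundant comparisons will always agree, so the "hardware fault" branch only illustrates how the processor208would treat disagreeing outputs; it does not model a physical fault.

```python
import hmac, hashlib

KEY = b"configured-device-key"  # assumed stand-in for the configured key

def generate_check_value(payload: bytes) -> bytes:
    # Role of cryptographic accelerator 202: derive SSG from the payload P.
    return hmac.new(KEY, payload, hashlib.sha256).digest()

def check_message(payload: bytes, ss_received: bytes) -> str:
    ss_generated = generate_check_value(payload)
    # Comparators 204a and 204b perform the same comparison redundantly.
    output_a = hmac.compare_digest(ss_generated, ss_received)
    output_b = hmac.compare_digest(ss_generated, ss_received)
    if output_a != output_b:
        return "hardware fault suspected"      # outputs disagree
    if output_a and output_b:
        return "message authenticated, payload safe"
    return "discard message"                   # at least one mismatch

payload = b"brake pressure = 42"
print(check_message(payload, generate_check_value(payload)))
print(check_message(payload, b"\x00" * 32))
```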
Other examples may differ from what is described with regard toFIG.1. For example, while example implementation100is described in the context of a use case associated with a payload P carried in a message M received by the system200, other use cases are possible, such as authentication of memory content (e.g., when software code stored in a memory is protected with an authentication code). In this example, during boot of the system200, the cryptographic accelerator202and the comparators204can be used to evaluate authenticity of a software image stored in the memory. In general, the system200(e.g., including the cryptographic accelerator202and the comparators204) can be used in any application in which a safety and security check of two values is needed and where software can use a result of a redundant comparison for decision making or program flow control. Further, the number and arrangement of components shown inFIG.1are provided as an example. In practice, there may be additional components, fewer components, different components, or differently arranged components than those shown inFIG.1. Furthermore, two or more components shown inFIG.1may be implemented within a single component, or a single component shown inFIG.1may be implemented as multiple, distributed components. Additionally, or alternatively, a set of components (e.g., one or more components) shown inFIG.1may perform one or more functions described as being performed by another set of components shown inFIG.1. FIG.2is a diagram of an example system200in which a cryptographic accelerator202and a group of comparators204(e.g., comparator204a, comparator204b) described herein may be implemented. As shown, the system200may include the cryptographic accelerator202, the comparators204(e.g., including the comparator204aand the comparator204b), a communication component206, a processor208, a memory210, and a bus212. Cryptographic accelerator202is a component to perform cryptographic operations associated with system200, as described herein. In some implementations, the cryptographic accelerator202is capable of generating a check value SSG(e.g., a MAC, an integrity check value, or the like) based on a payload P of a message M received by the system200, as described above. In some implementations, the cryptographic accelerator202is implemented in hardware. In some implementations, cryptographic accelerator202improves performance of the system200by providing hardware for performance of cryptographic operations (rather than cryptographic operations being performed by software and/or by a general purpose CPU of the system200). Comparator204includes two or more components (e.g., the comparator204aand the comparator204b) capable of determining whether the check value SSG(i.e., a check value generated by the cryptographic accelerator202based on the payload P of the message M) matches the check value SSR(i.e., a check value received in the message M), as described above. In some implementations, the comparator204includes the comparator204aand the comparator204b. That is, in some implementations, the comparator204includes one or more redundant comparators (e.g., in order to enable the improved safety and security check described herein). Communication component206includes a component that enables system200to communicate with other devices or systems. 
For example, communication component206may include a receiver, a transmitter, a transceiver, a modem, or another type of component that enables system200to communicate with other devices or systems. In some implementations, the system200may receive the message M (e.g., including the payload P and the check value SSR) via the communication component206, and the communication component206may provide the message M the payload P, or the check value SSRto one or more other components of the system200. Processor208includes a processor capable of receiving the output A and the output B (e.g., from the comparator204aand the comparator204b, respectively), and detecting a hardware fault or evaluating safety and security associated with the message M based on the output A and the output B, as described above. In some implementations, the processor208includes a CPU, a graphics processing unit, a microprocessor, a controller, a microcontroller, a digital signal processor, a field-programmable gate array, an application-specific integrated circuit, and/or another type of processing component. In some implementations, the processor208is implemented in hardware, firmware, or a combination of hardware and software. Memory210is a component to store and provide data to be processed by a component of system200, such as the cryptographic accelerator202, the comparator204(e.g., the comparator204a, the comparator204b), the processor208, or the like. In some implementations, memory210may include a RAM, a read only memory (ROM), and/or another type of memory (e.g., a flash memory, a magnetic memory, and/or an optical memory). Bus212is a component that enables communication among the components of system200. For example, the bus212may enable communication between the cryptographic accelerator202and the comparator204, and may enable communication between the comparators204and the processor208. As indicated above,FIG.2is provided as an example. Other examples may differ from what is described with regard toFIG.2. The number and arrangement of components shown inFIG.2are provided as an example. In practice, there may be additional components, fewer components, different components, or differently arranged components than those shown inFIG.2. Furthermore, two or more components shown inFIG.2may be implemented within a single component, or a single component shown inFIG.2may be implemented as multiple, distributed components. Additionally, or alternatively, a set of components (e.g., one or more components) shown inFIG.2may perform one or more functions described as being performed by another set of components shown inFIG.2. FIG.3is a flowchart of an example process relating to the improved safety and security check in accordance with the present disclosure. In some implementations, one or more process blocks ofFIG.3may be performed by one or more components of a system (e.g., system200), such as a cryptographic accelerator (e.g., cryptographic accelerator202), a plurality of comparators (e.g., comparator204aand comparator204b), a communication component (e.g., communication component206), a processor (e.g., a processor208), a memory (e.g., the memory210), or the like. As shown inFIG.3, process300may include receiving a payload of a received message (block310). For example, the cryptographic accelerator may receive a payload P of a received message M, as described above. As further shown inFIG.3, process300may include generating a first check value based on the payload (block320). 
For example, the cryptographic accelerator may generate a check value SSGbased on the payload P, as described above. As further shown inFIG.3, process300may include obtaining the first check value generated by the cryptographic accelerator (block330). For example, each comparator of the plurality of comparators may obtain the first check value generated by the cryptographic accelerator, as described above. As further shown inFIG.3, process300may include obtaining a second check value, wherein the second check value is a check value included in the received message (block340). For example, each comparator of the plurality of comparators may obtain the check value SSR, wherein the check value SSRis a check value included in the received message M, as described above. As further shown inFIG.3, process300may include determining whether the first check value matches the second check value (block350). For example, each comparator of the plurality of comparators may determine whether the check value SSGmatches the check value SSR, as described above. As further shown inFIG.3, process300may include providing a respective output, of a plurality of outputs, wherein each output of the plurality of outputs is provided by a different comparator of the plurality of comparators (block360). For example, each comparator of the plurality of comparators may provide a respective output, of a plurality of outputs, wherein each output of the plurality of outputs is provided by a different comparator of the plurality of comparators, as described above. Process300may include additional implementations, such as any single implementation or any combination of implementations described below and/or in connection with one or more other processes described elsewhere herein. In a first implementation, the cryptographic accelerator and the plurality of comparators are hardware components of a system. In a second implementation, alone or in combination with the first implementation, the first check value is generated using a key configured on the cryptographic accelerator, wherein safety of the key is provided by either a safe key write or a key check operation performed after a non-safe key write. In a third implementation, alone or in combination with one or more of the first and second implementations, determining whether the first check value matches the second check value and providing the plurality of outputs are performed by the plurality of comparators in lockstep. In a fourth implementation, alone or in combination with one or more of the first through third implementations, the first check value is a first MAC and the second check value is a second MAC. In a fifth implementation, alone or in combination with one or more of the first through fourth implementations, process300includes receiving the plurality of outputs and checking for a hardware fault associated with the cryptographic accelerator or the plurality of comparators. In a sixth implementation, alone or in combination with one or more of the first through fifth implementations, process300includes receiving the plurality of outputs and evaluating authenticity of the message and safety of the payload. AlthoughFIG.3shows example blocks of process300, in some implementations, process300may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.3. Additionally, or alternatively, two or more of the blocks of process300may be performed in parallel. 
The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise forms disclosed. Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the implementations. As used herein, the term “component” is intended to be broadly construed as hardware, firmware, and/or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code—it being understood that software and hardware can be designed to implement the systems and/or methods based on the description herein. As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, or the like. Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiple of the same item. No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, or a combination of related and unrelated items), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”). | 27,758 |
11861047 | DETAILED DESCRIPTION Embodiments described herein provide a method and system for gate-level masking of secret data during a cryptographic process to prevent external power analysis from determining secret keys. In the context of power analysis countermeasures, the term “masking” can refer to strategies which divide a secret value into two or more shares, each of which can be independent of the original secret, i.e., an individual share is not indicative of the original secret. Masking can incorporate additional unpredictable (or random) data to accomplish the division into shares that are independent of the secret. According to an embodiment, a mask share and masked data values are determined, where a first portion of the mask share includes a first (e.g., “X”) number of zero-values and a second (e.g., “Y”) number of one-values, and a second portion of the mask share includes the first (“X”) number of one-values and the second (“Y”) number of zero-values. Masked data values and the first portion of the mask share are input into a first portion of masked gate logic, and the masked data values and the second portion of the mask share are input into a second portion of the masked gate logic. A first output from the first portion of the masked gate logic and a second output from the second portion of the masked gate logic are identified, wherein either the first output or the second output is a zero-value. A final output can be based on the first output from the first portion of the masked gate logic and the second output of the second portion of the masked gate logic. The final output cannot be analyzed by an attacker to determine the original secret value based only on the masked data values. In some embodiments, the operations to generate the share can be temporally separated from the operations of determining the final output using the share to further prevent an attacker from inferring the original secret value. In some embodiments, the first output and the second output of the masked gate logic can undergo additional processing by other masked gate logic. The first portion of the masked gate logic can include a first four AND gates and a first OR gate, where the first OR gate receives outputs of the first four AND gates. The second portion of the masked gate logic can include a second four AND gates and a second OR gate, where the second OR gate receives outputs of the second four AND gates. For the AND gates, a high output results only if the inputs to the AND gate are all high. If one of the inputs to the AND gate is not high, then a low output results. In an embodiment, the output of exactly one of the eight AND gates (the eight AND gates comprising the first four AND gates and the second four AND gates) will rise (i.e., a high output on the AND gate), and the output of the OR gate receiving the output of that particular AND gate will also rise (i.e., a high output on the OR gate). In one embodiment, one or more of the AND gates and the OR gates are configured to receive a precharge signal as described herein. Each propagation path of an integrated circuit may emit a power profile on an output that an attacker attempts to detect via power analysis. In an embodiment, the masked gate structure described herein results in signals taking each possible propagation path with equal probability, reducing the amount of information that may be obtained through power analysis.
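As a simple software illustration of the share-splitting idea described above (an explanatory sketch, not part of the disclosed circuitry), the following Python fragment divides a one-bit secret into a random mask and a masked value; each share, viewed alone, is uniformly random and therefore independent of the secret.

    import secrets

    def split_bit(a: int) -> tuple:
        # Divide a one-bit secret into two shares: a random mask m and a ^ m.
        # Each share, observed alone, is uniformly distributed and therefore
        # carries no information about the secret.
        m = secrets.randbits(1)
        return m, a ^ m

    def recombine(mask: int, masked: int) -> int:
        # XOR of the two shares recovers the original secret value.
        return mask ^ masked

    secret = 1
    mask, masked = split_bit(secret)
    assert recombine(mask, masked) == secret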
According to some embodiments, the masked gate logic structure described herein can prevent glitches and early propagation of the output so as to mask the secret values so that they are not detectable by power analysis. Additionally, precomputation of a contribution of one share (e.g., the mask share) to the masking operation can reduce the number of distinct propagation paths that may exist in the circuit. Here, the precomputed contribution can be stored in registers in one clock cycle and the masking computation can be completed in a later clock cycle to further prevent the attacker from detecting the secret values. FIG.1is a block diagram of a cryptographic system100, illustrating a cryptographic device102including a cryptographic module104with masked gate logic110coupled to a power delivery network106, where power is supplied to the cryptographic device102from the power delivery network106, according to one embodiment.FIG.1also shows an external monitoring system150that can monitor power supply noise via path152. The cryptographic device102can be any device that performs operations on secret data during the use of the cryptographic device102. Examples of a cryptographic device can include, but are not limited to, a television set top box, a smart card, a network firewall, a mobile phone, a tablet computer, a laptop computer, a desktop computer, an embedded system, a server computer, an authentication device (e.g., a token), a telecommunications device, a component in public infrastructure (e.g., smart meters), an automotive system, a defense system, a printer cartridge, or the like. The cryptographic module104of the cryptographic device102can perform cryptographic algorithms for key generation, digital signatures, and message authentication codes (MACs), encryption, and decryption algorithms, such as Data Encryption Standard (DES), Advanced Encryption Standard (AES), Elliptic Curve Cryptography, Rivest-Shamir-Adleman (RSA), etc. A secret key can be generated for use in encrypting and decrypting a message according to a particular algorithm to attempt to prevent others from determining the contents of the message, and hashing and/or signing a message to prevent others from duplicating, modifying, or counterfeiting a message. However, execution of the algorithm requires the cryptographic module104to perform certain mathematical operations, and the performance of each mathematical operation consumes a certain amount of power. In other words, a measurement of electrical current or other phenomena in the power delivery network106along path108may vary according to the mathematical operation being performed. For example, the shape of a waveform corresponding to a multiplying operation can be different from the shape of a waveform corresponding to a squaring operation. The external monitoring system150, e.g., operated by an attacker, can attempt to monitor electrical activity variations via path152, and gather information about electrical activity variations. Such variations can be detected by the external monitoring system150by, for example, using an antenna to monitor changes in the electromagnetic field near the cryptographic device102, or by attaching probes (e.g., oscilloscope probes) to the cryptographic device. The attacker could attempt to use information gathered by the external monitoring system150for analysis, e.g., by SPA or DPA, to determine the cryptographic keys used by the cryptographic device102. 
For example, the attacker could attempt to use recorded power supply variations over time to determine the mathematical operations being performed and to compute the secret key being used by the cryptographic device102. If the attacker determines the secret key, the attacker can intercept and decrypt messages (e.g., secret messages) being sent by the cryptographic module104that the user or manufacturer of the cryptographic device102does not want others to know. However, as described in various embodiments of the masked gate logic110, the masked gate logic110conceals the secret key such that it is more difficult for an attacker to determine the secret key through analysis of the electrical activity variations from the power delivery network106along path108, or through other techniques that measure small variations in electrical activity inside cryptographic device102. Masking can be applied at different levels of abstraction. For example, masking can be performed at a gate level. Applying masking at the gate level can be beneficial because existing designs can be modified by applying a relatively simple transformation. However, circuits can exhibit glitches and early propagation, which can interfere with the effectiveness of gate-level masking, even when the masking technique is mathematically correct. Two common masking strategies are additive and multiplicative. Additive represents the original data as the sum of two shares, each of which is unpredictable. Multiplicative represents the original data as the product of two shares. To illustrate masking techniques, the following description refers to Advanced Encryption Standard (AES). It should be noted that masking may be employed by embodiments of other cryptographic standards, as well. In AES, the non-linear operation required to compute an AES S-box transformation is inversion in GF(2^8). The S-box (substitution box) effects a permutation of a set of 8-bit values (i.e., of [0, 255]). Inversion in GF(2^8) can be computed by independent operations on two multiplicative shares. However, multiplicative masking may require treating a value of zero as a special case. An additive masking of secret data A can use two shares M (i.e., a mask share) and A⊕M (i.e., a masked data share). When the input to the AES S-box is represented in this manner, there likely is no simple mathematical way to perform the S-box transformation by operating on these shares independently. In other words, other than possibilities which require a lookup in a 256×8 table of masked values, it is not known what functions f and g would satisfy the criteria f(M)⊕g(A⊕M)=A^−1. For reasons of brevity and clarity, the discussion that follows refers to two-input functions operating on secret data a and b and producing an output q. The techniques presented can be readily extended to functions of more than two inputs. Gate-level masking strategies can be used to mask data using standard Boolean logic gates (e.g., AND, OR, or the like). For example, given a two-input Boolean function f: a, b→q, two common masked versions of f are:
g(a⊕ma, b⊕mb, ma, mb, mq)=f(a, b)⊕mq
h(a⊕m, b⊕m, m)=f(a, b)⊕m
The former is appropriate for “re-masking” the data with a fresh mask after each Boolean function, while the latter is appropriate for using a single mask throughout. The masked gates, as described herein, can be used for computing Boolean functions without leaking the original values of the secret data a, b, and q.
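For exposition only, the following Python sketch checks the contract of the masked function g described above for f(a, b)=a AND b. The particular algebraic expansion used inside g is an assumption chosen for the software check; it says nothing about gate ordering or leakage in hardware.

    from itertools import product

    def f(a: int, b: int) -> int:
        # The underlying two-input Boolean function, here f(a, b) = a AND b.
        return a & b

    def g(am: int, bm: int, ma: int, mb: int, mq: int) -> int:
        # One algebraic form of a masked AND: expand (a^ma) & (b^mb) and
        # re-mask the result with the fresh output mask mq. The evaluation
        # order is irrelevant for this functional check, although it matters
        # greatly for leakage in hardware.
        return (am & bm) ^ (am & mb) ^ (ma & bm) ^ (ma & mb) ^ mq

    for a, b, ma, mb, mq in product((0, 1), repeat=5):
        am, bm = a ^ ma, b ^ mb
        assert g(am, bm, ma, mb, mq) == f(a, b) ^ mq
    print("g(a^ma, b^mb, ma, mb, mq) equals f(a, b)^mq for every input combination")

This purely functional check does not model the glitch and early-propagation effects discussed below, which are the reason the hardware structure of the masked gate matters.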
The masked gates can be used as building blocks to logic that performs more complex functions. Alternatively, given a cryptographic circuit that uses standard Boolean gates already, the existing gates in that circuit can be swapped for equivalent masked gates to yield a masked implementation of the existing circuit. In order to illustrate the advantages of the present disclosure, deficiencies of other masking techniques will now be discussed in more detail. For a masking technique to be effective, the masked implementation (for example, logic implementing masked functions g or h above) must not leak information about the secret data a, b, and q. As an example, an implementation which removes masking from the inputs, applies f, and then reapplies masking to the output, would leak information about the secret data. One example of a masked gate that may leak information includes four AND gates and four XOR gates. This masked gate implements the masked function g for f(a, b)=a & b. The gate computes ma& mb, (a⊕ma) & mb, ma& (b⊕mb), and (a⊕ma) & (b⊕mb), then XORs all of these values along with mqin a specific order. When viewed as a sequence of mathematical operations, none of the intermediate values in this circuit are correlated with (i.e., leak information about) the secret data. However, when implemented in hardware, the inputs to the gate will arrive at different times. This will expose other intermediate values that do not appear in the mathematical model of the gate. Here, the leakage may be due to glitches. In another example of a conventional masking technique, the masked gate structure, which also leaks information, implements the masked function h using a pair of Majority gates, again for f(a, b)=a & b. This gate can have significant leakage due to early propagation, which refers to the possibility that the value of the gate's output may be fully determined by a subset of the inputs. In the case of a masked AND gate using this technique, a masking value m and unmasked values amand bmare input to a Majority gate. If a masking value, m, and either of the unmasked data values, amor bm, are both zero or both one, then the value of the third input does not matter. This condition occurs if the unmasked value of the corresponding input is zero. The masked gate of this example also uses a single mask bit for the entire circuit. The single mask bit does not provide sufficient randomness to effectively conceal the secret data being operated upon. As described herein, an attacker can analyze the electrical behavior of logic gates in silicon. The electrical behavior of logic gates in silicon can depend on numerous factors, some of which can be readily predicted and modeled in simulation. For example, one modeling strategy can describe the behavior of a gate in terms of two measurements. The first, propagation delay through the gate, may be measured as the time from when the input crosses the voltage level that is 50% of the operating voltage to the time when the output does so. The second, transition time, may be measured as the interval from when the output of the gate reaches 10% of the operating voltage to the time when it reaches 90% in the case of a rising transition, and may be the opposite in the case of a falling transition. The value of these measurements for a switching event can depend on many factors, for example, the transition time at the input of a gate, output load (e.g., wire capacitance) of the gate, and the state of other inputs (including non-switching inputs) of the gate. 
Any variation in these factors (e.g. a difference in the transition time at the output of a gate) that does not occur with equal probability regardless of the value taken by a secret, may allow an external monitoring system such as external monitoring system150to obtain information about the secret. A conventional masking strategy might seek to ensure that the probability of a masked gate output having a value of one (vs. zero) at the end of a clock cycle is the same regardless of the value of a secret. However, if there are multiple electrical signal paths associated with the masked gate output having a final value of one, an external monitoring system may be able to obtain information about a secret by exploiting differences among the signal paths. A masking strategy employing an activity image metric might seek to ensure that each electrical signal path in the masked gate is excited with the same probability regardless of the value of a secret. Here, “activity image” refers to some or all of the states and transitions in the masked gate and connected logic that may influence the electrical behavior of the masked gate output. Some embodiments of gate-level masking may “precharge” circuit nodes. During a precharge event, the circuit nodes are driven to an electrical potential (voltage) that is independent of data values operated upon by the gate. For example, the circuit nodes may be precharged to the ground potential. Precharge events may occur between each useful operation (or “evaluation”) performed by the masked gate. The precharge step serves to reduce interaction between successive evaluations, and to even out the power consumed upon each evaluation. Precharge may be accomplished, for example, by activating transistors dedicated to such purpose, or by applying a special input vector which is known to cause the circuit to settle at the desired precharge voltage(s). The previously mentioned masked gate using Majority primitives typically incorporates a precharge step. For a three-input majority gate, there are eight possible input vectors. At the transition from precharge phase to evaluate phase, each of the three inputs may either be low and stable, or may rise. The analysis for the transition from the evaluate phase to the precharge phase can be the same, other than the substitution of falling edges for rising edges. The output of the majority gate computes the function (A & B)|(A & C)|(B & C). Here, “&” represents the Boolean operation AND, and “|” represents the Boolean operation OR. An analysis of activity images for this gate might consist, in part, of the following table.

A&B     A&C     B&C     Output     Likelihood when A {circumflex over ( )} C = 0     Likelihood when A {circumflex over ( )} C = 1
0       0       0       0          0.5                                                0.5
0       0       Rise    Rise       0                                                  1
0       Rise    0       Rise       1                                                  0
Rise    0       0       Rise       0                                                  1
Rise    Rise    Rise    Rise       1                                                  0

In this analysis, A and B are the masked inputs, and C is the mask. A XOR C, also referred to herein as A{circumflex over ( )}C, where “{circumflex over ( )}” can be defined as “exclusive or” (XOR), is the unmasked value of one of the secret inputs. To avoid leakage, the activity in the circuit should be independent of this unmasked value. As seen in the table, the likelihood of observing a rising transition at the output of each of the AND gates is not independent of the secret value A{circumflex over ( )}C, even though the likelihood of observing a rising transition at the final output is independent of A{circumflex over ( )}C.
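The imbalance summarized in the table can be reproduced with a short enumeration. The following Python sketch (assuming the three inputs are uniform and independent at the precharge-to-evaluate transition) tallies how often each internal AND product and the majority output rise, conditioned on the unmasked value A XOR C.

    from itertools import product

    def rise_probabilities():
        # Tally rising transitions at the AND products and the majority output,
        # conditioned on the unmasked secret A XOR C, assuming A, B, and C each
        # stay low or rise with equal, independent probability.
        stats = {0: [0, 0, 0, 0, 0], 1: [0, 0, 0, 0, 0]}   # [A&B, A&C, B&C, out, cases]
        for a, b, c in product((0, 1), repeat=3):
            row = stats[a ^ c]
            row[0] += a & b
            row[1] += a & c
            row[2] += b & c
            row[3] += (a & b) | (a & c) | (b & c)
            row[4] += 1
        for secret, (ab, ac, bc, out, n) in stats.items():
            print(f"A XOR C = {secret}: P(A&B rises)={ab/n}, P(A&C rises)={ac/n}, "
                  f"P(B&C rises)={bc/n}, P(output rises)={out/n}")

    rise_probabilities()

Consistent with the table, the probability that the final output rises is 0.5 for either secret value, while the probability that the A&C product rises is 0.5 when A XOR C=0 and 0 when A XOR C=1, which is exactly the dependence an external monitoring system could exploit.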
Aspects of the present invention address deficiencies of conventional masking techniques discussed above by avoiding glitches and early propagation and by substantially balancing an activity image leakage metric. Further, aspects of the present disclosure can precompute a contribution of a mask share to the output to reduce the number of distinct propagation paths that may exist in the circuit. As described herein, the precomputed contribution can be stored in registers in one clock cycle and the masking computation can be completed in a later clock cycle. In one embodiment, a mask share is determined, where a first portion of the mask share includes a first (e.g., X) number of zero-values and a second (e.g., Y) number of one-values, and a second portion of the mask share includes the first (e.g., X) number of one-values and the second (e.g., Y) number of zero-values. Masked data values and the first portion of the mask share are input into a first portion of masked gate logic, and the masked data values and the second portion of the mask share are input into a second portion of the masked gate logic. A first output from the first portion of the masked gate logic and a second output from the second portion of the masked gate logic are identified, where either the first output or the second output is a zero-value. A final output can be based on the first output and the second output. The final output cannot be analyzed by an attacker to identify the original secret value based only on the masked data values. In some embodiments, the operations to generate a mask share can be temporally separated from the operations of determining the final output using the mask share to further prevent an attacker from inferring the original secret value. In some embodiments, the first output and the second output of the masked gate logic can undergo additional processing by other masked gate logic. FIG.2Ais a block diagram illustrating a general structure of an embodiment of masked gate logic270(e.g., masked gate logic110ofFIG.1), wherein a first portion272and a second portion274can receive a precharge signal. The first portion272can also receive a first portion of a mask share and masked data values, and the second portion274can also receive a second portion of a mask share and masked data values. The first portion272can output a first output, and the second portion274can output a second output. In some embodiments, the precharge signal may be omitted. The precharge signal may be omitted, for example, because the precharge signal is effected by the presence of a certain state on the masked data values or the mask share inputs, or because no precharge signal is used. FIG.2Bis a block diagram illustrating masked gate logic200(e.g., masked gate logic270ofFIG.2A) using AND and OR gates according to one embodiment. In this example, masked gate logic200includes a first portion210(e.g., first portion272ofFIG.2A) including AND gates212,214,216, and218, and OR gate220, and a second portion250(e.g., second portion274ofFIG.2A) including AND gates252,254,256, and258, and OR gate260. Masked data values represent a portion, or all, of secret data along with additional data, referred to herein as masking values (e.g., m, ma, mb, and mq). In one embodiment, the masked data values can be derived by performing a Boolean operation between the cipher input and the masking value. 
As illustrated inFIG.2B, the masked data values of the logic gate200are represented in a one-hot encoding, with a pair of complementary wires for each bit of data. A one-hot encoding may, for example, represent a logical value of zero by driving a first wire to a zero state and a second wire to a one state, and represent a logical value of one by driving the first wire to a one state and the second wire to a zero state. As will be discussed later, the one-hot encoding allows for a precharge mechanism. As illustrated inFIG.2B, the wires representing the masked data can have the values a{circumflex over ( )}ma, ˜a{circumflex over ( )}ma, b{circumflex over ( )}mb, and ˜b{circumflex over ( )}mb, where “{circumflex over ( )}” can be defined as “exclusive or” (XOR), and “˜” can be defined as the complement (or inverted signal). The complement of a signal in dual rail logic can also be indicated with “′”. A mask share can include multiple portions. For an n-input masked gate, the number of bits in each portion of the mask share is 2^n. As illustrated inFIG.2B, a first portion of a mask share includes mask share values t7, t6, t5, and t4, and a second portion of the mask share includes mask share values t3, t2, t1, and t0. Here, the first portion of the mask share corresponds to the first portion210of the masked data logic200, and the second portion of the mask share corresponds to the second portion250of the masked data logic200. These mask share values may be stored in a storage element, such as a look up table (LUT), registers, random access memory (RAM), first-in-first-out (FIFO) buffer (which can be implemented in memory, registers, or other storage mechanisms), or may be immediately presented to the masked gate logic. As illustrated inFIG.2B, the AND gates each receive one of the mask share values. These mask share values can be computed using various Boolean operations upon multiple masking values, ma, mb, mq. In one embodiment, the mask share values tnfor a masked gate computing f(a, b)=a AND b can be computed as follows:
t0=mb&ma&mq|(~mb|~ma)&~mq
t1=mb&~ma&mq|(~mb|ma)&~mq
t2=~mb&ma&mq|(mb|~ma)&~mq
t3=~mb&~ma&mq|(mb|ma)&~mq
t4=~t0
t5=~t1
t6=~t2
t7=~t3
Here, maand mbare input masking values and mqis the output masking value. In one embodiment, the first portion (e.g., t0-t3above) of the mask share contains three zeros and a one, and the second portion (e.g., t7-t4above) of the mask shares is its complement. The current values of the masked data shares (a{circumflex over ( )}ma, b{circumflex over ( )}mband their complements) are combined with the mask share values as input into the AND gates. In one embodiment, a single random bit can be used for generation of each set of mask share values ti. Here, the input masks (e.g. maand mb) can be either all-zero or all-one, and mqcan have the same values as the input masks, which could be useful when only a limited amount of unpredictable data can be obtained for masking. Other masked gate logic, including masked gate logic having n>2 inputs, can be implemented by changing the mask share values tiappropriately. In general, mask share values t(2^n) to t(2^(n+1)−1) for performing a masked computation of an n-input function f(x) given an n-bit input mask m and a 1-bit output mask mqcan be computed as:
t(i+2^n)=f(i⊕m)⊕mq
Mask share values t0 to t(2^n−1) are the complements of entries t(2^n) to t(2^(n+1)−1). When implementing a masked gate with more than two inputs, an embodiment might have 2^n AND gates in each of the first portion and the second portion of the masked gate logic.
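For illustration, the general table construction above can be expressed in a few lines of Python. In this sketch (an explanatory model, not the disclosed hardware), the table index lookup stands in for the one-hot selection performed by the AND gates, and the function f_and3 is an arbitrary example of a three-input Boolean function.

    from itertools import product

    def make_table(f, n: int, m: int, mq: int) -> list:
        # General mask share table for an n-input Boolean function f: entries
        # 2^n .. 2^(n+1)-1 hold f(i XOR m) XOR mq; entries 0 .. 2^n-1 are their
        # complements.
        upper = [f(i ^ m) ^ mq for i in range(2 ** n)]
        return [v ^ 1 for v in upper] + upper

    def masked_eval(table: list, masked_input: int, n: int) -> tuple:
        # Select one entry from each half of the table using the masked input
        # word; the index lookup stands in for the one-hot AND-gate selection.
        return table[2 ** n + masked_input], table[masked_input]

    def f_and3(x: int) -> int:
        # Example three-input function: the AND of all three input bits.
        return 1 if x == 0b111 else 0

    n = 3
    for secret, m, mq in product(range(2 ** n), range(2 ** n), (0, 1)):
        table = make_table(f_and3, n, m, mq)
        q, q_bar = masked_eval(table, secret ^ m, n)
        assert q == f_and3(secret) ^ mq and q_bar == q ^ 1
    print("t(i+2^n) = f(i XOR m) XOR mq reproduces f(secret) XOR mq for n = 3")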
In one example, ⊕ is the Boolean operation “exclusive or” or “XOR”. In one embodiment, mask shares can also be generated for other types of masking (e.g., using one of the input masks as the output mask, or restricting all the input masks to have the same Boolean value). In one embodiment, switching is possible between different types of masking during the operation of the masked gate logic, depending on the degree of side-channel attack resistance needed for each cryptographic operation. The variable masking strategy can trade off the cost of random bits for masking against the amount of DPA resistance obtained. When more DPA resistance is desired despite greater cost, the mask share may be generated with n+1 random bits, and when less DPA resistance is needed and it is desirable to reduce the number of random bits used, a single random bit may be replicated to create the n-bit input mask m, and that random bit may also be used for the output mask mq. In an embodiment, a circuit implementation of the masked gate logic can be driven to a precharge state between each evaluation, for example, by applying an all-zero input vector. Assuming this is done, then in each evaluation, the output of exactly one of the eight AND gates rises, and the output of the OR gate driven by that AND gate also rises. The precharging can occur prior to inputting the masked data values, the first portion of the mask share, and the second portion of the mask share. Alternatively, precharging can occur subsequent to inputting the masked data values, or precharging may not occur at all. Returning toFIG.2B, inputs to AND gate212include masked data share a{circumflex over ( )}ma, masked data share b{circumflex over ( )}mb, and mask share t7. Inputs to AND gate214include masked data share ˜a{circumflex over ( )}ma, masked data share b{circumflex over ( )}mb, and mask share t6. Inputs to AND gate216include masked data share a{circumflex over ( )}ma, masked data share ˜b{circumflex over ( )}mb, and mask share t5. Inputs to AND gate218include masked data share ˜a{circumflex over ( )}ma, masked data share ˜b{circumflex over ( )}mb, and mask share t4. Inputs to AND gate252include masked data share a{circumflex over ( )}ma, masked data share b{circumflex over ( )}mb, and mask share t3. Inputs to AND gate254include masked data share ˜a{circumflex over ( )}ma, masked data share b{circumflex over ( )}mb, and mask share t2. Inputs to AND gate256include masked data share a{circumflex over ( )}ma, masked data share ˜b{circumflex over ( )}mb, and mask share t1. Inputs to AND gate258include masked data share ˜a{circumflex over ( )}ma, masked data share ˜b{circumflex over ( )}mb, and mask share t0. Inputs to OR gate220include outputs from AND gates212,214,216, and218. Inputs to OR gate260include outputs from AND gates252,254,256, and258. The output of OR gate220can undergo further operations. In an embodiment, the output of OR gate220during the evaluation phase can have the value (a&b){circumflex over ( )}mq, where {circumflex over ( )} represents the XOR operation. Similarly, the output of OR gate260during the evaluation phase can have the value ˜(a&b){circumflex over ( )}mq. A final output can be based on the output from the first portion210and the output from the second portion250. The final output may not be determinable based only on the masked data values. 
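As a software cross-check of the structure described above (a behavioral sketch only; the helper names are illustrative), the following Python fragment computes t0 through t7 from the masking values using the formulas given earlier, applies theFIG.2B wiring of AND and OR gates to dual-rail masked inputs, and verifies that the two OR-gate outputs equal (a AND b) XOR mq and its complement for every combination of secrets and masks.

    from itertools import product

    def mask_share(ma: int, mb: int, mq: int) -> list:
        # Mask share values t0..t7 for f(a, b) = a AND b, following the
        # formulas given above; t4..t7 are the complements of t0..t3.
        nma, nmb, nmq = ma ^ 1, mb ^ 1, mq ^ 1
        t0 = (mb & ma & mq) | ((nmb | nma) & nmq)
        t1 = (mb & nma & mq) | ((nmb | ma) & nmq)
        t2 = (nmb & ma & mq) | ((mb | nma) & nmq)
        t3 = (nmb & nma & mq) | ((mb | ma) & nmq)
        return [t0, t1, t2, t3, t0 ^ 1, t1 ^ 1, t2 ^ 1, t3 ^ 1]

    def masked_and_gate(am: int, bm: int, t: list) -> tuple:
        # FIG. 2B structure: each AND gate combines one rail of each dual-rail
        # masked input with one mask share value; each OR gate merges four
        # AND-gate outputs.
        na, nb = am ^ 1, bm ^ 1
        out_first = (am & bm & t[7]) | (na & bm & t[6]) | (am & nb & t[5]) | (na & nb & t[4])
        out_second = (am & bm & t[3]) | (na & bm & t[2]) | (am & nb & t[1]) | (na & nb & t[0])
        return out_first, out_second

    for a, b, ma, mb, mq in product((0, 1), repeat=5):
        t = mask_share(ma, mb, mq)
        q, q_bar = masked_and_gate(a ^ ma, b ^ mb, t)
        assert q == (a & b) ^ mq and q_bar == q ^ 1
    print("OR-gate outputs equal (a AND b) XOR mq and its complement for all inputs")

In every iteration exactly one of the eight AND terms is one, mirroring the statement that exactly one AND gate rises in each evaluation; the sketch does not model precharge, glitches, or transition timing.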
In one embodiment, the mask share can be determined in a clock cycle that is temporally separated from the clock cycle where the final output is determined, which will be discussed below in greater detail. According to one embodiment, the output from the first portion210and the output from the second portion250can undergo additional processing by other masked gate logic. FIG.2Bshows one possible embodiment of the masked gate logic. Other combinations of gates may be used to implement the masked gate while still minimizing or eliminating glitches and early propagation, and substantially balancing an activity image metric. In another possible embodiment, each of gates212,214,216,218,252,254,256,258,220, and260may instead be NAND gates. When implementing the masked gate, the circuit should be verified to be free of logic hazards, for example, by constructing a Karnaugh Map. Logic hazards may manifest as glitches on the output of the masked gate. Early propagation may be avoided by ensuring that an invalid state on an input propagates to an invalid state on an output. In the embodiment shown inFIG.2B, the AND/OR structure ensures that when either masked data input pair has two zero values, those values will propagate to cause the outputs to both be zero. Other embodiments may use an OR/AND structure in which an all-ones input propagates to an all-ones output. The AND/OR and OR/AND structure are offered here as illustrations, however the purpose of selecting among the mask share values according to a masked data value can be accomplished using other structures. In other possible embodiments, each portion of the masked gate may be mapped to one or more LUT primitives in an FPGA. For example, each of gates212,214,216,218,220,252,254,256,258, and260may be implemented in a separate LUT. Other embodiments may implement the function of gates212and214in a first LUT, the function of gates216and218in a second LUT, and combine the output of the first LUT and the second LUT in a third LUT, thus computing the same value that would be computed by OR gate220. In an embodiment, the masked gate may be implemented in semi-custom logic or fully-custom logic. Devices using semi-custom logic and fully-custom logic can be more expensive to develop (e.g. due to the extra care needed when working at the transistor level), but can also use less silicon area, thus reducing manufacturing costs, or can consume less power. An example embodiment using custom logic using pass transistors is described below with respect toFIG.6. Embodiments of the masked gate described herein need not utilize complementary metal-oxide-semiconductor (CMOS) logic. The masked gate may be implemented using, for example, transistor-transistor logic (TTL) or emitter-coupled logic (ECL). The masked gate may also utilize multiple-gate field-effect transistors. FIG.3is a block diagram illustrating a cipher implementation300incorporating masked gate logic according to one embodiment. The cipher implementation300can be included in cryptographic module104ofFIG.1. Cipher implementation300includes mask generator302, table generator304, mask share FIFO buffer306, and masked gate logic308. Masked gate logic308can be masked gate logic200ofFIG.2B. Here, mask generator302generates a masking value (e.g., ma, mb, and mq) to be used by mask share logic and table generation304to generate a mask share t, including a first portion and a second portion. 
Mask generator302may generate masking values, for example, by using a pseudo-random number generator or by using a true-random number generator. Mask generator302may also receive masking values as input, for example, from a different component of cryptographic module104or cryptographic device102. Mask generator may also generate masking values using logic functions. In embodiments where the output of a first masked gate is connected to the input of a second masked gate, mask generator302may set the input mask for the second masked gate (e.g. ma2) to equal the output mask for the first gate (e.g. mq1). In embodiments where the outputs of multiple masked gates are processed by other logic (e.g. the masked XOR described below), mask generator302may set the input mask for a third masked gate to a function of the outputs of a first and a second masked gate (e.g. ma3=mq1{circumflex over ( )}mq2). In one embodiment, the mask share t is stored in a first-in-first-out (FIFO) buffer (mask share FIFO buffer306) until a later time when the masked gate logic operates on the mask share and masked data values. The masking value can also be used to mask a cipher input to determine masked data values, e.g., via a Boolean operation310such as XOR. The masked gate logic308can receive the masked data values, along with the mask share from the mask share FIFO buffer306. The masked gate logic308then determines a first output, based on a first portion of the mask share and the masked data values, and a second output, based on a second portion of the mask share and the masked data values. The first output and the second output can be used to determine a final output, or the first output and the second output can be separately received by one or more other gates or devices. InFIG.3, a Boolean operation312, e.g., XOR, can be performed on the output of the data share logic308and a masking value generated by mask generator302. For purposes of resistance to higher order DPA, the operations on the mask share can be temporally separated from the operations on the associated masked data values. In one possible embodiment, the mask share operations and table generation are performed first, and the generated tables are buffered in a FIFO buffer until needed for use in the masked gate logic308. In one embodiment, a FIFO buffer can also be present between the cipher input and the masked gate logic308. FIG.4is a diagram illustrating a precharge-evaluate sequence over time according to one embodiment. For example, the charging sequence can be a charging sequence applied in cipher implementation300. In the precharge state, inputs may, for example, all be set to zero. For an embodiment using one-hot masked data value input pairs, the zero/zero state is an invalid state, meaning it does not correspond to a masked data value of either zero or one. Placing the input pairs in an invalid state during the precharge state helps to avoid early propagation. The mask share is loaded in a mask share evaluation stage, which occurs after the precharge state and prior to applying the other inputs, according to one embodiment. In the mask share evaluation stage, precomputed values are applied to the masked gate logic at time402. In other embodiments, the mask share is not loaded prior to applying other inputs. In the evaluation stage, each input, masked data values A, B, A′, and B′, transitions at times404,406,408, and410, respectively, into a “0” or “1” state in the masked gate logic. 
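The dataflow ofFIG.3can be summarized in a behavioral sketch. The following Python fragment (an assumption-laden model: the masked gate is collapsed to a table lookup and the FIFO is an ordinary software queue) generates mask share tables in advance, buffers them, and consumes one table per evaluation, masking the cipher input before the lookup and unmasking the result afterward.

    from collections import deque
    from itertools import product
    import secrets

    def and_table(ma: int, mb: int, mq: int) -> list:
        # Precomputed mask share table for f(a, b) = a AND b: the upper half
        # holds f(i XOR m) XOR mq, the lower half its complement.
        m = (mb << 1) | ma
        upper = [(((i ^ m) >> 1) & ((i ^ m) & 1)) ^ mq for i in range(4)]
        return [v ^ 1 for v in upper] + upper

    class MaskedAndPipeline:
        def __init__(self):
            self.fifo = deque()                    # stands in for mask share FIFO buffer 306

        def table_generation_cycle(self):
            # Mask generator 302 plus table generation 304, run ahead of time.
            ma, mb, mq = (secrets.randbits(1) for _ in range(3))
            self.fifo.append((ma, mb, mq, and_table(ma, mb, mq)))

        def evaluation_cycle(self, a: int, b: int) -> int:
            # A later cycle: mask the cipher input, consume one buffered table,
            # then remove the output mask.
            ma, mb, mq, table = self.fifo.popleft()
            index = ((b ^ mb) << 1) | (a ^ ma)     # Boolean operation 310 (input masking)
            return table[4 + index] ^ mq           # Boolean operation 312 (output unmasking)

    pipe = MaskedAndPipeline()
    for _ in range(8):
        pipe.table_generation_cycle()              # tables buffered in advance of use
    assert all(pipe.evaluation_cycle(a, b) == (a & b) for a, b in product((0, 1), repeat=2))
    print("pipeline recovers a AND b after unmasking")

The separation of table_generation_cycle and evaluation_cycle mirrors the temporal separation described above; the actual number of clock cycles between the two, and the depth of the FIFO, are implementation choices.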
However, these transitions can occur at varied times, as shown. In one example, each input is precharged to a “0” value. In the evaluation stage, each input can either stay at a “0” value or transition to a “1” value. In the output state, when all inputs are available, outputs, Out and Out′, are determined and output at time412and414, respectively. For example, a valid final output can be determined. In another embodiment, Out and Out′ can be separately input to one or more other gates or devices. Here, loading of the mask share, evaluation of the masked data values, and determination of the output are temporally separated as a countermeasure to power analysis attack. FIG.5is a diagram illustrating timing of operations according to one embodiment. In other words, operations are shown along a timeline. For example, the operations can be performed by cipher implementation300ofFIG.3. Clock502is a signal that oscillates between a high and a low state and is utilized to coordinate actions of circuits. For example, clock502can be produced by a clock generator. In one embodiment, clock502approximates a square wave with a 50% duty cycle with a regular frequency. Circuits using the clock502for synchronization may become active at either the rising edge or falling edge. Table generation504(e.g., mask share generation) runs in advance of table use506(e.g., masked gate evaluation). For example, one table (e.g., Table A, Table B, Table C, etc.) can be generated in each clock cycle of clock502, as shown, or multiple tables can be produced in each clock cycle. Also, tables can be generated every certain number of clock cycles (e.g., every other clock cycle). An arbitrary amount of time may pass between table generation and table use. Table generation may be performed immediately prior to table use. However, table generation performed immediately prior to table use may be less resistant to higher order DPA. Table use506shows that no table use may be performed for a certain period (i.e., Idle state). Each masked gate can be precharged to a certain value (e.g., “0”) between evaluations. Here, the precharge occurs in alternating clock cycles. If a circuit instantiates a single masked gate, then each evaluation cycle can consume one table. If the circuit instantiates multiple masked gates (not shown), then multiple tables may be consumed in each evaluation. In one example, after a precharge is performed in clock cycle Prch, Table A (which was generated a certain number of clock cycles previously) can be evaluated in clock cycle Eval A. In one embodiment, the table generation logic does not need to be precharged. In the implementation shown, where table generation and table consumption each have a rate of one table per clock, the table generation logic can have idle cycles in the steady state. The idle cycles are shown concurrent with the masked gate evaluation cycles, however, this is not essential. The table generation idle cycles could also be concurrent with the precharge cycles, or not synchronized with the evaluation sequence at all. The table generation may also be performed on a different clock508from the masked gate evaluation. In one embodiment, a logic function does not require any circuit modification to accommodate masked data generated by the masked gate logic. For example, in the case of Boolean masking, the exclusive or (XOR) operation is linear with respect to the masking, so does not require modification. 
However, when incorporating such operations among the masked gates, care must be taken to maintain the glitch- and early-propagation-free characteristics of the signals in the circuit. One possible glitch- and early-propagation-free implementation of an XOR operation is as follows:
i0=AND(am′,bm′)
i1=AND(am,bm)
i2=AND(am,bm′)
i3=AND(am′,bm)
om=OR(i2,i3)
om′=OR(i0,i1)
Another operation that does not require modification to work on masked data is logical inversion (NOT). A NOT operation among the masked gates can be accomplished by swapping the wires of a complementary pair, rather than by using inverters. FIG.6is a circuit diagram illustrating masked gate logic600according to one embodiment. For example, masked gate logic600can be an implementation of masked gate logic200ofFIG.2B. Masked gate logic600includes a first portion601that includes pass transistors602,604,606,608,610,612,614,616, and buffer640, and a second portion621that includes pass transistors622,624,626,628,630,632,634,636, and buffer650. In first portion601, mask share value t0 and masked data value B′ are input to pass transistor602, and the output of pass transistor602and masked data value A′ are input to pass transistor610. Mask share value t1and masked data value B′ are input to pass transistor604, and the output of pass transistor604and masked data value A are input to pass transistor612. Mask share value t2and masked data value B are input to pass transistor606, and the output of pass transistor606and masked data value A′ are input to pass transistor614. Mask share value t3and masked data value B are input to pass transistor608, and the output of pass transistor608and masked data value A are input to pass transistor616. The output of pass transistors610,612,614, and616are input to buffer640, which has an output Q′. In second portion621, mask share value t4and masked data value B′ are input to pass transistor622, and the output of pass transistor622and masked data value A′ are input to pass transistor630. Mask share value t5and masked data value B′ are input to pass transistor624, and the output of pass transistor624and masked data value A are input to pass transistor632. Mask share value t6and masked data value B are input to pass transistor626, and the output of pass transistor626and masked data value A′ are input to pass transistor634. Mask share value t7and masked data value B are input to pass transistor628, and the output of pass transistor628and masked data value A are input to pass transistor636. The output of pass transistors630,632,634, and636are input to buffer650, which has an output Q. FIG.7illustrates a method700for a countermeasure to side channel analysis attacks according to one embodiment. For example, the method700can be performed via masked gate logic200ofFIG.2B. Though the operations are shown in a particular order, the operations of method700can be performed in a different order, more or fewer operations can be performed, and operations can be performed in the same or different clock cycles. At block701, a mask share including a first portion and a second portion is determined. At block703, masked data values and the first portion of the mask share (e.g., from a FIFO buffer) are input in a first portion of masked gate logic. Also, masked data values and the second portion of the mask share (e.g., from a FIFO buffer) are input in a second portion of the masked gate logic.
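For exposition, the dual-rail XOR listed above can be checked exhaustively in software. The following Python sketch (operating on valid evaluate-phase values only; it does not model the precharge state) confirms that the structure computes XOR on the masked rails and that the output therefore carries the combined mask ma XOR mb, consistent with the earlier statement that XOR is linear with respect to Boolean masking.

    from itertools import product

    def masked_xor(am: int, bm: int) -> tuple:
        # Dual-rail XOR built only from AND and OR, as listed above. The
        # complement rails are taken from the valid evaluate-phase values;
        # the precharge (all-zero) state is not modeled here.
        amn, bmn = am ^ 1, bm ^ 1
        i0 = amn & bmn
        i1 = am & bm
        i2 = am & bmn
        i3 = amn & bm
        return i2 | i3, i0 | i1        # (om, om')

    # XOR is linear in a Boolean mask: inputs masked with ma and mb yield an
    # output masked with ma ^ mb, with no correction logic required.
    for a, b, ma, mb in product((0, 1), repeat=4):
        om, om_bar = masked_xor(a ^ ma, b ^ mb)
        assert om == (a ^ b) ^ (ma ^ mb) and om_bar == om ^ 1
    print("dual-rail XOR output carries the combined mask ma ^ mb")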
At block705, a first output from the first portion of the masked gate logic is identified, and a second output from the second portion of the masked gate logic is identified. At block707, whether the output of the masked gate logic is needed at another gate is determined. At block709, if the output of the masked gate logic is needed at another gate, then the first and second portions are routed as separate wires to the other gate. At block711, if the output of the masked gate logic is not needed at another gate, then a final output is determined based on the first output and the second output. Use of method700provides a countermeasure to side channel analysis attacks because an attacker is less likely to be able to successfully use side channel analysis to determine a secret key or other secret information being used by the cryptographic module. As those of ordinary skill in the art will appreciate, the techniques described above are not limited to particular host environments or form factors. Rather, they can be used in a wide variety of applications, including without limitation: application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), systems on chip (SoC), microprocessors, secure processors, secure network devices, cryptographic smartcards of all kinds (including without limitation smartcards substantially compliant with ISO 7816-1, ISO 7816-2, and ISO 7816-3 (“ISO 7816-compliant smartcards”)); contactless and proximity-based smartcards and cryptographic tokens (including without limitation smartcards substantially compliant with ISO 14443); stored value cards and systems; cryptographically secured credit and debit cards; customer loyalty cards and systems; cryptographically authenticated credit cards; cryptographic accelerators; gambling and wagering systems; secure cryptographic chips; tamper-resistant microprocessors; software programs (including without limitation to programs for use on personal computers, servers, etc. and programs that can be loaded onto or embedded within cryptographic devices); key management devices; banking key management systems; secure web servers; defense systems; electronic payment systems; micropayment systems and meters; prepaid telephone cards; cryptographic identification cards and other identity verification systems; systems for electronic funds transfer; automatic teller machines; point of sale terminals; certificate issuance systems; electronic badges; door entry systems; physical locks of all kinds using cryptographic keys; systems for decrypting television signals (including without limitation, broadcast television, satellite television, and cable television); systems for decrypting enciphered music and other audio content (including music distributed over computer networks); systems for protecting video signals of all kinds; content protection and copy protection systems (such as those used to prevent unauthorized copying or use of movies, audio content, computer programs, video games, images, text, databases, etc.); cellular telephone scrambling and authentication systems (including telephone authentication smartcards); secure telephones (including key storage devices for such telephones); cryptographic PCMCIA cards; portable cryptographic tokens; and cryptographic data auditing systems. In the above description, numerous details are set forth. 
It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the description. Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared and otherwise manipulated. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “encrypting,” “decrypting,” “providing,” “receiving,” “generating,” or the like, refer to the actions and processes of a computing device that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computing system's registers and memories into other data similarly represented as physical quantities within the computing system memories or registers or other such information storage, transmission or display devices. The words “example” or “exemplary” are used herein to mean serving as an example, instance or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an embodiment” or “one embodiment” throughout is not intended to mean the same embodiment unless described as such. The above description sets forth numerous specific details such as examples of specific systems, components, methods and so forth, in order to provide a good understanding of several embodiments of the present invention. It will be apparent to one skilled in the art, however, that at least some embodiments of the present invention may be practiced without these specific details. 
In other instances, well-known components or methods are not described in detail or are presented in simple block diagram format in order to avoid unnecessarily obscuring the present invention. Thus, the specific details set forth above are merely exemplary. Particular implementations may vary from these exemplary details and still be contemplated to be within the scope of the present invention. It is to be understood that the above description is intended to be illustrative and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. | 49,235 |
11861048 | Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of examples. The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the examples, instances, and aspects illustrated so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein. DETAILED DESCRIPTION OF THE INVENTION A converged wireless communication device (for example, a converged communication device) is a device capable of communicating within multiple communication systems implementing different communication modalities. For example, a converged device may communicate simultaneously in both a Land Mobile Radio (LMR) communication system and a cellular communication system. Some converged devices incorporate multiple subsystems of different types, for example, a cellular subsystem and an LMR subsystem. In some devices, the LMR subsystem can be connected to an entity's private, secure LMR network, while the cellular subsystem may connect to the entity's private cellular system or a public cellular system, which has access to, among other things, the public Internet. To mitigate information leakage risks, some converged communication devices are configured to operate in independent secure and non-secure operation modes. Different data partitions, using different encryption keys or methods, are used with each mode. Access to public networks is restricted while operating in the secure operation mode. The operation mode is selected upon device powerup, and only secure or non-secure modules are loaded and activated, depending on the operation mode selected. For certain types of users (for example, public safety personnel), land-mobile communications are a critical aspect of their use of the communication device. Accordingly, an LMR subsystem boots in a relatively short period of time (for example, three to seven seconds). However, a cellular subsystem may take a relatively long time to boot up (for example, thirty seconds or longer). As a consequence, it may be possible for the LMR subsystem to be capable of secure or privileged communications with a private LMR network before it has been established whether the cellular subsystem will operate in a secure or non-secure operation mode. Accordingly, examples described herein provide, among other things, a converged communication device, which allows for the manual or automatic selection of an operation mode for a cellular subsystem while providing operation mode synchronization with an LMR subsystem. Using examples provided herein, a converged communication device is capable of automatically enabling and disabling communication modalities based on the last operation mode for the device. In one example, the LMR subsystem will powerup and operate to communicate based on the last operation mode, and, if necessary, alter its operation based on a mode notification from the cellular subsystem. Using such examples, users are able to access both cellular and LMR communications, while maintaining desired security levels. 
One example provides a converged communication device including a first subsystem and a second subsystem. The first subsystem includes a first electronic processor and a first communication interface configured to communicate wirelessly using a first communication modality. The second subsystem includes a second electronic processor and a second communication interface configured to communicate wirelessly using a second communication modality. The first electronic processor is coupled to the first communication interface and configured, during a startup sequence of the first subsystem, to determine a last operation mode for the first subsystem. The first electronic processor is configured to detect whether a subscriber identity module is installed in the converged communication device. The first electronic processor is configured to, responsive to detecting that a subscriber identity module is installed in the converged communication device, determine a network type for the subscriber identity module. The first electronic processor is configured to control the first communication interface based on the network type and the last operation mode. The first electronic processor is configured to, responsive to detecting that a subscriber identity module is not installed in the converged communication device, control the first communication interface to not communicate wirelessly. The second electronic processor is coupled to the second communication interface and configured, during the startup sequence of the second subsystem, to determine the last operation mode for the first subsystem. The second electronic processor is configured to control the second communication interface based on the last operation mode. Another example provides a method for operating a converged communication device including a first subsystem and a second subsystem. The method includes determining, during a startup sequence of the first subsystem and with a first electronic processor of the first subsystem, a last operation mode for the first subsystem. The method includes detecting whether a subscriber identity module is installed in the converged communication device. The method includes determining, responsive to detecting that a subscriber identity module is installed in the converged communication device, a network type for the subscriber identity module. The method includes controlling, with the first electronic processor, a first communication interface based on the network type and the last operation mode. The method includes determining, during a startup sequence of the second subsystem and with a second electronic processor of the second subsystem, the last operation mode for the first subsystem. The method includes controlling, with the second electronic processor, a second communication interface based on the last operation mode. For ease of description, each of the example systems presented herein may be illustrated with a single exemplar of each of its component parts. Some examples may not describe or illustrate all components of the systems. Actual applications of the example systems described herein may include more or fewer of each of the illustrated components, may combine some components, or may include additional or alternative components. FIG.1illustrates a converged communication device102according to one example. In the example illustrated, the converged communication device102includes two subsystems, a cellular subsystem104and a land mobile radio (LMR) subsystem106. 
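As a behavioral illustration of the startup logic summarized above (pseudocode-level; mapping the first subsystem to cellular and the second to LMR follows the description, and the specific policy shown for a public-network subscriber identity module in the secure operation mode is an assumption for illustration), the following Python sketch shows how the two subsystems might act on the last operation mode and on the subscriber identity module check.

    from enum import Enum

    class Mode(Enum):
        SECURE = "secure"
        NON_SECURE = "non-secure"

    class NetworkType(Enum):
        PRIVATE = "private"
        PUBLIC = "public"

    def cellular_startup(last_mode: Mode, sim_installed: bool, sim_network=None) -> str:
        # Subsystem that holds the subscriber identity module: with no SIM the
        # interface stays silent; otherwise it is controlled based on the SIM's
        # network type and the last operation mode.
        if not sim_installed:
            return "cellular interface disabled (no SIM installed)"
        if last_mode is Mode.SECURE and sim_network is NetworkType.PUBLIC:
            # Assumed policy for illustration: public network access is
            # restricted while the device is in the secure operation mode.
            return "cellular interface disabled (public SIM in secure mode)"
        return "cellular interface enabled on the " + sim_network.value + " network"

    def lmr_startup(last_mode: Mode) -> str:
        # LMR subsystem: boots quickly and communicates according to the last
        # operation mode, pending a later mode notification from cellular.
        return "LMR interface enabled in " + last_mode.value + " mode"

    print(lmr_startup(Mode.SECURE))
    print(cellular_startup(Mode.SECURE, sim_installed=True, sim_network=NetworkType.PUBLIC))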
In the example shown, the cellular subsystem104and the land mobile radio (LMR) subsystem106are communicatively coupled to one another via an inter processor link (IPC)108. As described herein, the cellular subsystem104is configured to connect to and wirelessly communicate using a first private network110or a public network112, based on an operation mode for the cellular subsystem104, while the LMR subsystem106is configured to connect to and wirelessly communicate using a second private network114based on the operation mode for the cellular subsystem104.FIG.1illustrates a single converged communication device102configured to communicate wirelessly using the illustrated example networks. This is provided as a non-limiting example. In other instances, the converged communication device102may communicate using multiple different private and public networks. AlthoughFIG.1illustrates a single converged communication device102, the methods described herein are applicable to instances where multiple networks of differing types operate to provide communications for tens, hundreds, or thousands of converged communication devices. The first private network110is an example communication network, which includes wireless connections, wired connections, or combinations of both, operating according to an industry standard cellular protocol, for example, the Long Term Evolution (LTE) (including LTE-Advanced or LTE-Advanced Pro compliant with, for example, the 3GPP TS 36 specification series), or the 5G (including a network architecture compliant with, for example, the 3GPP TS 23 specification series and a new radio (NR) air interface compliant with the 3GPP TS 38 specification series) standard, among other possibilities, and over which, among other things, an open mobile alliance (OMA) push to talk (PTT) over cellular (OMA-PoC), a voice over IP (VoIP), or a PTT over IP (PoIP) application may be implemented. The first private network110is, for example, a corporate or government network, which provides access only to authorized users of particular organizations or agencies. Consumer cellular devices are not allowed to authenticate to, roam on, or otherwise access the first private network110. The public network112is an example communication network operating according to cellular protocols, as described above with respect to the private network110. The public network112is provided by a carrier, which sells access to ordinary consumers (for example, private citizens). Consumer cellular devices from other public networks may be able to roam on or otherwise access the public network112. The second private network114is a land mobile radio network, which includes wireless connections, wired connections, or combinations of both, operating according to the Project 25 (P25) standard defined by the Association of Public Safety Communications Officials International (APCO), the Terrestrial Trunked Radio (TETRA) standard defined by the European Telecommunication Standards Institute (ETSI), the Digital Private Mobile Radio (dPMR) standard also defined by the ETSI, among other possibilities, and over which multimedia broadcast multicast services (MBMS), single site point-to-multipoint (SC-PTM) services, or Mission Critical Push-to-talk (MCPTT) services may be provided. The second private network114may operate using talkgroups, which are virtual radio channels used to provide communication for groups of converged communication devices and other types of LMR subscriber units. 
The second private network114is, for example, a corporate or government network, which provides access only to authorized users of particular organizations or agencies. In some instances, the first private network110and the second private network114are operated by or for the same entity, for example, a law enforcement agency. The converged communication device102may include other components, for example, one or more antennas, a land-mobile radio modem, a baseband modem, a microphone, a speaker, and other processors and chipsets (not shown). In the illustrated example, the cellular subsystem104includes an electronic processor205, a memory210, an input/output interface215, a firmware220, a communication interface225, and a subscriber identity module230. The illustrated components, along with other various modules and components are coupled to each other by or through one or more control and/or data buses that enable communication therebetween (for example, a communication bus235). The electronic processor205obtains and provides information (for example, from the memory210, the input/output interface215, the firmware220, and combinations thereof), and processes the information by executing one or more software instructions or modules, capable of being stored, for example, in a random access memory (“RAM”) area of the memory210or a read only memory (“ROM”) of the memory210, the firmware220, or another non-transitory computer readable medium (not shown). The software can include firmware, one or more applications, program data, filters, rules, one or more program modules, and other executable instructions. The electronic processor205is configured to retrieve, from the memory210and the firmware220, and execute, among other things, software related to the control processes and methods described herein. The memory210can include one or more non-transitory computer-readable media and includes a program storage area and a data storage area. The program storage area and the data storage area can include combinations of different types of memory, as described herein. In the example illustrated, the memory210stores, among other things, a secure partition240and a non-secure partition245. As described herein, the electronic processor205boots the cellular subsystem104using either the secure partition240or the non-secure partition245based on an operation mode for the cellular subsystem104. The secure partition240is used to store and allow to access data (for example, the user data250) and applications (for example, the secure applications255) securely on the converged communication device102or in a remote environment (for example, a cloud-based secure computing environment accessible via the first private network110). The secure partition240is, for example, an authenticated, encrypted area of the memory210, which can be used to insulate sensitive information from non-secure partition245. The secure partition240allows a user of the converged communication device102to access the secured data, applications, or remote environments, but only allows authorized functions or applications on the converged communication device102to access data, applications or other functions inside the secure partition240or the remote environment. For example, the LMR subsystem106may be able to access or store data from the secure partition240. 
Similarly, applications running on the secure partition240(for example, a computer aided dispatch client) may be able to operate the LMR subsystem106to communicate via the second private network114. The non-secure partition245is used to store and allow to access data (for example, the user data260) and applications (for example, the applications265) on the converged communication device102or in a remote environment (for example, a cloud-based computing environment accessible via the public network112or the Internet). In some instances, the non-secure partition245may be used to provide a user access to smart telephone functions and applications, for example, when the private network110or another private cellular network is unavailable. In other instances, the non-secure partition245may provide an official user with a non-official personal device persona to use while not operating in an official capacity, for example, rather than operating a bring your own device (BYOD) environment. Regardless of its purpose, the non-secure partition245is not allowed access to the private networks to which the secure partition240is allowed access. Similarly, the non-secure partition245is not allowed to access the functions of the LMR subsystem106. In this description, the terms “secure” and “non-secure” are used to distinguish, in a general way, between how data and applications in those partitions may be secured from unauthorized access, for example, through the use of different authentication mechanisms, encryption mechanisms, network security mechanisms, and the like. The terms, however, are not meant to imply that anything so labeled is superior or inferior. “Secure” partitions utilize mechanisms that provide a relatively higher security level relative to “non-secure” partitions. The converse is also true. Partitions labeled “non-secure” do not lack all security, but rather utilize mechanisms that provide a relatively lower security level relative to “secure” partitions. The input/output interface215is configured to receive input and to provide system output. The input/output interface215obtains information and signals from, and provides information and signals to, (for example, over one or more wired and/or wireless connections) devices both internal and external to the converged communication device102. The input/output interface215may include one or more human machine interfaces that enable a user to interact with and control the cellular subsystem104and other aspects of the converged communication device102. For example, the input/output interface215may include a display (for example, a liquid crystal display (LCD) touch screen, an organic light-emitting diode (OLED) touch screen, and the like) and suitable physical or virtual selection mechanisms (for example, buttons, keys, knobs, switches, and the like). In some instances, the input/output interface215implements a graphical user interface (GUI) (for example, generated by the electronic processor205, from instructions and data stored in the memory210, and presented on a suitable display), that enables a user to interact with the converged communication device102. In one instance, the firmware220is a non-volatile, electrically-rewritable computer storage medium, which includes a bootloader270and an operating system275. In some instances, all or part of the bootloader270, the operating system275, or both may be stored in a read only memory of the memory210or in another suitable electronic memory. 
The communication interface225includes components operable to communicate wirelessly with the first private network110, the public network112, and other networks using a cellular communication modality, as described herein. The communication interface225may include, for example, one or more baseband processors, transceivers, antennas, as well as various other digital and analog components, which for brevity are not described herein and which may be implemented in hardware, software, or a combination of both. Some instances may include separate transmitting and receiving components, for example, a transmitter and a receiver, instead of or in addition to a combined transceiver. The subscriber identity module (SIM)230includes various subscription profiles, access credentials, and configuration information (for example, the network type280) used by the cellular subsystem104to authenticate to and communicate via the first private network110, the public network112, and other networks using a cellular communication modality. In some instances, the SIM230is removable from the converged communication device102. In one example, the SIM230is a universal integrated circuit card (UICC). In one example, the electronic processor205is configured to, upon powerup or reboot of the converged communication device102, execute the bootloader270. The bootloader270is configured to initiate start-up of the cellular subsystem104by retrieving the operating system275from the firmware220and placing it into memory210. As described herein, the electronic processor205reads the operating system275from the memory210and boots the operating system275using either the secure partition240or the non-secure partition245, based on a selected operation mode for the cellular subsystem104. In some instances, the operating system275remains in, and is executed from, the firmware220. In one example, the bootloader270operates to read and write data to and from the cellular subsystem104and the LMR subsystem106via the inter-processor communication link108. The operating system275is, for example, a Unix operating system variant such as Android™. Before the cellular subsystem104can be used (for example, by executing applications stored on one of the partitions), it must boot. The boot time for the operating system275(that is, the time between power up and when the operating system275is ready for operation) is, for certain operating systems, for example, thirty seconds or longer. The LMR subsystem106includes an electronic processor305, a memory310, an input/output interface315, a firmware320, and a communication interface325. The illustrated components, along with other various modules and components are coupled to each other by or through one or more control and/or data buses that enable communication therebetween (for example, a communication bus330). The electronic processor305, memory310, input/output interface315, and firmware320are similar and operate similarly to their respective counterparts in the cellular subsystem104. In one example, the input/output interface315includes a push-to-talk (PTT) button for activating components of the communication interface325to transmit voice or other communications (not shown). The PTT button may be implemented, for example, as a physical switch or by using a soft key or icon in the graphical user interface on a display of the input/output interface315or, as noted above, the input/output interface215. 
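By way of a non-limiting illustration of the partition selection just described, the following C-language sketch shows how boot logic might mount one of two data partitions depending on the selected operation mode. This is a minimal sketch assuming a Linux-style mount interface; the device paths, mount point, file system type, and function names are assumptions made for illustration only and do not describe the actual bootloader270or operating system275. A production implementation would additionally authenticate and decrypt the secure partition before mounting it.

#include <stdio.h>
#include <sys/mount.h>

/* Illustrative only: stands in for the selected operation mode that chooses
 * between the secure partition240and the non-secure partition245. */
enum operation_mode { MODE_SECURE, MODE_NON_SECURE };

/* Hypothetical block device nodes backing the two data partitions. */
#define SECURE_PARTITION_DEV     "/dev/block/by-name/userdata_secure"
#define NON_SECURE_PARTITION_DEV "/dev/block/by-name/userdata_nonsecure"
#define DATA_MOUNT_POINT         "/data"

/* Mount the data partition that corresponds to the selected operation mode.
 * Verification and decryption of the partition are omitted in this sketch. */
static int mount_data_partition(enum operation_mode mode)
{
    const char *dev = (mode == MODE_SECURE) ? SECURE_PARTITION_DEV
                                            : NON_SECURE_PARTITION_DEV;

    if (mount(dev, DATA_MOUNT_POINT, "ext4", MS_NOSUID | MS_NODEV, NULL) != 0) {
        perror("mount");
        return -1;
    }
    printf("mounted %s at %s\n", dev, DATA_MOUNT_POINT);
    return 0;
}

int main(void)
{
    /* In practice the mode would come from the last operation mode flag
     * described later; MODE_SECURE is used here only as a placeholder, and
     * running the sketch requires appropriate privileges. */
    return mount_data_partition(MODE_SECURE) == 0 ? 0 : 1;
}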
The communication interface325includes components operable to communicate wirelessly with the second private network114and other networks using a land mobile radio communication modality, as described herein (for example, using or according to the LMR data335and other LMR applications340). The communication interface325may include, for example, one or more baseband processors, transceivers, antennas, as well as various other digital and analog components, which for brevity are not described herein and which may be implemented in hardware, software, or a combination of both. Some instances may include separate transmitting and receiving components, for example, a transmitter and a receiver, instead of or in addition to a combined transceiver. In one example, the electronic processor305is configured to, upon powerup or reboot of the converged communication device102, execute the bootloader345. The bootloader345is configured to initiate start-up of the LMR subsystem106by retrieving the operating system350from the firmware320and placing it into memory310. In one example, the operating system350contains or executes software for communicating over a land mobile radio network (for example, using the LMR data335and other LMR applications340). In one example, the operating system350is a real-time operating system (RTOS). Similar to the cellular subsystem104, before the LMR subsystem106can be used, it must boot. The boot time for the LMR operating system350may be much shorter than the boot time for cellular operating system275. In some instances, for example, the boot time for the operating system350may be between three and seven seconds. When the converged communication device102is powered up or rebooted, both the cellular subsystem104and the LMR subsystem106begin their startup routines (also referred to herein as startup sequences). Before either of the subsystems and its respective functions can be used, it must complete start-up. The startup routines for the subsystems include the booting of the processor and the loading of their respective operating systems. As noted above, the LMR subsystem106may boot up significantly faster than the cellular subsystem104. In one example, the start-up for the LMR subsystem106completes well before the start-up for the cellular subsystem104. As a consequence, it may be possible for the LMR subsystem106to be capable of secure or privileged communications with the second private network114before it has been established whether the cellular subsystem104will operate in a secure or non-secure operation mode. FIG.2illustrates one example method400for operating a converged communication device. The method400is described as being executed by the electronic processors205and305(also referred to herein as the first and second electronic processors). However, in some examples, aspects of the method400may be performed by other components of the converged communication device102. For example, some or all of the method400may be performed by the electronic processors in conjunction with their respective bootloaders. As an example, the method400is described in terms of a first subsystem (the cellular subsystem104) and a second subsystem (the LMR subsystem106). At block402, the converged communication device102boots. The bootup may be the result of a powerup from powered off state or the result of a reboot initiated during a previous powered on state. 
As illustrated inFIG.2, the first and second subsystems perform their respective powerup sequences as a result of the converged communication device102booting. At block404, the electronic processor205determines a last operation mode for the first subsystem (for example, the cellular subsystem104). An operation mode of the first subsystem may be, for example, a secure mode and a non-secure mode. For example, when the first subsystem is operating in a secure mode, it has been booted using the secure partition240, and when the first subsystem is operating in a non-secure mode, it has been booted using the non-secure partition245. The last operation mode identifies the operation mode of the first subsystem immediately prior to the current powerup sequence. For example, if the cellular subsystem104is operating in a secure mode (that is, from the secure partition240) and is powered down, the last operating mode when it powers up is the secure mode. Similarly, if the cellular subsystem104is operating in a non-secure mode (that is, from the non-secure partition245) and is rebooted, the last operating mode when it begins booting up is the non-secure mode. In one example, the electronic processor205determines the last operation mode by reading from a non-volatile memory of the cellular subsystem104. For example, a sequence of bits identifying the current operation mode may be written to a boot sector of the memory210while the cellular subsystem104is operating. Upon a subsequent bootup, the current operation mode that was saved in the boot sector represents the last operation mode of the cellular subsystem104. In another example, the last operation mode is selected by the user of the converged communication device prior to device reboot and the electronic processor205determines the last operation mode by retrieving the user-selected operation mode from a memory of the converged communication device (for example, the memory210). For example, as described herein, a power menu of the device may allow for a user to select the operation mode prior to a reboot. In such instances, the “last operation mode” may not refer to the last operation mode in which the device was operating, but rather the last operation mode selected by the user. At block406, the electronic processor205detects whether a subscriber identity module is installed in the converged communication device102. For example, the electronic processor205may put out a query on the bus235and listen for a response from the SIM230. In another example, the SIM230may trigger a signal to the electronic processor205when present in the device. At block408, the electronic processor205, responsive to detecting that a subscriber identity module is installed in the converged communication device (at block406), determines a network type for the subscriber identity module. For example, the electronic processor205may query the SIM230and receive the network type280in reply. In another example, the powerup sequence may include a step where the SIM230provides, among other things, the network type280to the electronic processor205. At block410, the electronic processor205controls the first communication interface (for example, the communication interface225) based on the network type and the last operation mode. 
For example, where the last operation mode was a secure mode and the network type is private, the electronic processor205may control the first communication interface to communicate wirelessly with a private communication network (for example, the first private network110) using a first communication modality (for example, an LTE cellular protocol). Other examples of controlling the first communication interface based on the network type and the last operation mode are described below with respect toFIG.3andFIG.4. At block412, the electronic processor205, responsive to detecting that a subscriber identity module is not installed in the converged communication device (at block406), controls the first communication interface (for example, the communication interface225) to not communicate wirelessly. For example, the electronic processor205may issue a command to the communication interface225to enter an “airplane mode” (that is, to cease transmitting and receiving). At block414, as the converged communication device is booting (at block402), the electronic processor305, during the powerup sequence, determines the last operation mode for the first subsystem. As noted, upon bootup, the LMR subsystem106is not able to access the cellular subsystem104(and vice versa). Accordingly, the last operation mode is stored in the LMR subsystem106. For example, the electronic processor305may retrieve the last operation mode information from a non-volatile memory of the LMR subsystem106(for example, the memory310). In one example, the last operation mode stored in the memory310was received (as the then-current operation mode) from the cellular subsystem104during the prior operation cycle. In another example, as described herein, the last operation mode is selected by a user of the converged communication device102and stored in the memory310. Because, as described herein, the last operation mode may be user selected, from the standpoint of the second subsystem106, the last operation mode for the first subsystem104is the last known operation mode, and may not be the actual operation mode, in which the first subsystem104was last operating. At block416, the electronic processor305controls the second communication interface (for example, the communication interface325) based on the last operation mode. For example, where the last operation mode was a secure mode, the electronic processor305may control the second communication interface to communicate wirelessly with a second private communication network (for example, the second private network114) using a second communication modality (for example, a digital LMR protocol). Other examples of controlling the second communication interface based on the last operation mode are described below with respect toFIG.3andFIG.4. FIG.3illustrates another example method500for operating a converged communication device. The method500is described as being executed by the converged communication device102and, in particular, by the electronic processors205and305. However, in some examples, aspects of the method500may be performed by other components of the converged communication device102. For example, some or all of the method500may be performed by the electronic processors in conjunction with their respective bootloaders. As an example, the method500is described in terms of a first subsystem (the cellular subsystem104) and a second subsystem (the LMR subsystem106). At block502, the converged communication device102boots. 
The bootup may be the result of a powerup from powered off state or the result of a reboot initiated during a previous powered on state. The first and second subsystems perform their respective powerup sequences as a result of the converged communication device102booting. At block504, the converged communication device102determines the last operation mode for the cellular subsystem104. As described above with respect toFIG.2, in some instances, this determination is made by both electronic processors205and305. In some instances, as described herein, the last operation mode is selected by a user of the converged communication device102. At block506, when the last operation mode was a secure mode, the electronic processor205detects a SIM and, if detected, determines a network type (for example, public or private). At block508, responsive to determining that the last operation mode for the first subsystem was a secure mode and that the network type is private, the electronic processor205initiates a boot up sequence for a secure data partition of the first subsystem and controls the first communication interface to communicate wirelessly with a private communication network using the first communication modality. For example, the electronic processor205may load and execute the bootloader270to launch the operating system275using the secure partition240and, when the operating system275launch has progressed sufficiently, control the communication interface225to communicate wirelessly with the first private network110using an LTE cellular protocol. In addition, in one example, responsive to determining that the last operation mode for the first subsystem was the secure mode, the electronic processor305controls the second communication interface to communicate wirelessly with a second private communication network (for example, the second private network114) using a second communication modality (for example, a digital LMR protocol), and initiates a secure inter-processor communication link with the first subsystem (as described with respect toFIG.4). At block510, when the last operation mode was a non-secure mode, the electronic processor205detects a SIM and, if detected, determines a network type (for example, public or private). At block512, responsive to determining that the last operation mode for the first subsystem was a non-secure mode and that either the network type is commercial (that is, public) or that there is no SIM installed, the electronic processor205initiates a boot up sequence for a non-secure data partition of the first subsystem and controls the first communication interface to communicate wirelessly with a public communication network using the first communication modality. For example, the electronic processor205may load and execute the bootloader270to launch the operating system275using the non-secure partition245and, when the operating system275launch has progressed sufficiently, control the communication interface225to communicate wirelessly with the public network112using an LTE cellular protocol. In addition, in one example, responsive to determining that the last operation mode for the first subsystem was the non-secure mode, the electronic processor305controls the second communication interface to not communicate wirelessly and initiates a secure inter-processor communication link with the first subsystem (as described with respect toFIG.4). 
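By way of a non-limiting illustration, the decision logic of blocks 504 through 512 may be summarized in the following C-language sketch. The sketch assumes an automatic user boot mode and defers the mismatch and reboot handling of blocks 514 through 526, which are described below; the function and type names are illustrative assumptions and are not part of method500.

#include <stdbool.h>
#include <stdio.h>

enum operation_mode { MODE_SECURE, MODE_NON_SECURE };
enum network_type   { NETWORK_NONE, NETWORK_PRIVATE, NETWORK_PUBLIC };

/* Illustrative decision logic for the cellular (first) subsystem: pick the
 * data partition to boot and decide how the cellular radio may communicate,
 * given the last operation mode and the SIM-derived network type. */
static void cellular_startup(enum operation_mode last_mode,
                             bool sim_present,
                             enum network_type net)
{
    if (last_mode == MODE_SECURE && sim_present && net == NETWORK_PRIVATE) {
        /* Block 508: boot the secure partition and allow communication with
         * the private cellular network. */
        printf("boot secure partition; enable cellular radio on the private network\n");
    } else if (last_mode == MODE_NON_SECURE &&
               (!sim_present || net == NETWORK_PUBLIC)) {
        /* Block 512: boot the non-secure partition; with no SIM the radio is
         * kept off, consistent with block 412 of method 400. */
        printf("boot non-secure partition; %s\n",
               sim_present ? "enable cellular radio on the public network"
                           : "keep the cellular radio off (no SIM)");
    } else {
        /* Mode and SIM type do not match: handled by the manual/automatic
         * user boot mode paths described below (blocks 514 through 526). */
        printf("mode/SIM mismatch; defer to the user boot mode handling\n");
    }
}

int main(void)
{
    cellular_startup(MODE_SECURE, true, NETWORK_PRIVATE);
    cellular_startup(MODE_NON_SECURE, false, NETWORK_NONE);
    return 0;
}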
In some examples, the electronic processor205is configured to retrieve a user boot mode selection and control the communication interface225based on the network type, the last operation mode, and the user boot mode selection. The user boot mode determines whether a user of the converged communication device102can select the operation mode for the device. In one example, the user boot mode may be set to manual (that is, the user is allowed to switch operation modes) or automatic (that is, the operation mode is set based on the type of SIM inserted into the device). In some instances, the user boot mode also determines how a power menu for the converged communication device is presented. For example, when the user boot mode is manual, a power menu may display the current operation mode (that is, secure or non-secure) and graphical user interface control elements that allow the user to select between restarting in the current operation mode or switching from the current operation mode to another available operation mode and restarting the device. In another example, when the user boot mode is automatic, a power menu may display the current operation mode (that is, secure or non-secure) and a graphical user interface control element that allows the user to restart in the current operation mode. Returning toFIG.3, responsive to determining that the last operation mode for the first subsystem was a secure mode (at block504), determining that the user boot mode selection is manual (at514), and determining that either the network type is commercial (that is, public) or that there is no SIM installed (at block506), the electronic processor205(at block516) initiates a boot up sequence for a secure data partition of the first subsystem, and controls the first communication interface to not communicate wirelessly. At block518, responsive to determining that the last operation mode for the first subsystem was a secure mode (at block504), determining that the user boot mode selection is automatic (at514) and determining that the network type is commercial (that is, public), the electronic processor205sets the last operation mode for the first subsystem to indicate a non-secure mode (for example, by writing a value to the memory210), provides a mode notification indicating the non-secure mode to the second subsystem via an inter-processor communication link (for example, as described with respect toFIG.4), and initiates a reboot sequence for the first subsystem (at block520). In one example, the electronic processor205provides the mode notification using a suitable electronic messaging protocol. In another example, the inter-processor communication link is a connection between general-purpose input/outputs (GPIOs) of the electronic processor205and the electronic processor305and the electronic processor205provides the mode notification by application of logic levels, which are predetermined to indicate particular operation modes. Upon receipt of such notification, the electronic processor305writes the operation mode to a non-volatile memory. In some examples, a reboot is not initiated and ordinary device operations resume. 
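By way of a non-limiting illustration, the GPIO logic-level mode notification just described may be sketched in C as follows. The gpio_set_level, gpio_get_level, and nvram_write_mode helpers are hypothetical stand-ins for board-support code (stubbed here so the sketch is self-contained), and the pin number is arbitrary; the sketch does not describe the actual inter-processor communication link108.

#include <stdio.h>

enum operation_mode { MODE_NON_SECURE = 0, MODE_SECURE = 1 };

#define MODE_NOTIFY_GPIO 42               /* illustrative pin on the IPC link */

static int sim_pin_level[64];             /* simulated GPIO lines for this sketch */

/* Hypothetical board-support primitives, stubbed for illustration. */
static void gpio_set_level(unsigned pin, int level) { sim_pin_level[pin] = level; }
static int  gpio_get_level(unsigned pin)            { return sim_pin_level[pin]; }
static void nvram_write_mode(enum operation_mode m) { (void)m; /* persist stub */ }

/* Cellular side (first subsystem): drive the predetermined logic level that
 * encodes the selected operation mode. */
static void notify_operation_mode(enum operation_mode m)
{
    gpio_set_level(MODE_NOTIFY_GPIO, m == MODE_SECURE ? 1 : 0);
}

/* LMR side (second subsystem): sample the line and record the mode so that it
 * becomes the last operation mode at the next startup sequence. */
static enum operation_mode receive_operation_mode(void)
{
    enum operation_mode m =
        gpio_get_level(MODE_NOTIFY_GPIO) ? MODE_SECURE : MODE_NON_SECURE;
    nvram_write_mode(m);
    return m;
}

int main(void)
{
    notify_operation_mode(MODE_SECURE);
    printf("LMR subsystem received mode: %s\n",
           receive_operation_mode() == MODE_SECURE ? "secure" : "non-secure");
    return 0;
}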
In alternative examples, responsive to determining that the last operation mode for the first subsystem was a secure mode (at block504), determining that the user boot mode selection is automatic (at514) and that there is no SIM installed (at block506), the electronic processor205sets the last operation mode for the first subsystem to indicate a non-secure mode (for example, by writing a value to the memory210), provides a mode notification indicating the non-secure mode to the second subsystem via an inter-processor communication link (for example, as described with respect toFIG.4), and initiates a reboot sequence for the first subsystem (at block520). At block524, responsive to determining that the last operation mode for the first subsystem was a non-secure mode (at block504), determining that the user boot mode selection is manual (at block522), and determining that the network type is private (at block510), the electronic processor205initiates a boot up sequence for a non-secure data partition of the first subsystem, and controls the first communication interface to not communicate wirelessly. The electronic processor305, responsive to determining that the last operation mode for the first subsystem was a non-secure mode (at block504), controls the second communication interface to not communicate wirelessly. At block526, responsive to determining that the last operation mode for the first subsystem was a non-secure mode (at block504), determining that the user boot mode selection is automatic (at block522), and determining that the network type is private (at block510), the electronic processor205sets the last operation mode for the first subsystem to indicate a secure mode (for example, by writing a value to the memory210), provides a mode notification indicating the secure mode to the second subsystem via an inter-processor communication link (for example, as described with respect toFIG.4), and initiates a reboot sequence for the first subsystem (at block520). In some examples, a reboot is not initiated and ordinary device operations resume. FIG.4illustrates another example method600for operating a converged communication device. The method600is described as being executed by the converged communication device102and, in particular, by the electronic processors205and305. However, in some examples, aspects of the method600may be performed by other components of the converged communication device102. For example, some or all of the method600may be performed by the electronic processors in conjunction with their respective bootloaders. As an example, the method600is described in terms of a first subsystem (the cellular subsystem104) and a second subsystem (the LMR subsystem106). At block602, the converged communication device102boots. The bootup may be the result of a powerup from powered off state or the result of a reboot initiated during a previous powered on state (for example, as described with respect toFIG.3). The first and second subsystems perform their respective startup sequences as a result of the converged communication device102booting. At block604, the electronic processor205selects an operational mode for the cellular (first) subsystem104. In one example, the selection is performed according to the method500, as described herein (that is, automatically or manually). At block606, the electronic processor205provides a mode notification indicating the selected mode to the LMR (second) subsystem via an inter-processor communication link, as described herein. 
At block608, the electronic processor205mounts either the secure partition240or the non-secure partition245based on the mode selection (at block604) and continues with device boot (at block610). As noted, the LMR subsystem106typically boots faster than the cellular subsystem104. In such instances, the LMR subsystem106will powerup according to the sequence612, based on the last operation mode, and, if necessary, alter its operation based on the mode notification from the cellular subsystem104, as described below. At block614, the electronic processor305determines the last operation mode, as described herein. At block616, responsive to determining that the last operation mode for the first subsystem was the secure mode, the electronic processor305controls the second communication interface to communicate wirelessly with a second private communication network (for example, the second private network114) using a second communication modality (for example, an LMR protocol). In some instances, the electronic processor305retrieves from the firmware320a talkgroup identifier and controls the second communication interface to join the identified talkgroup upon joining the LMR network. As described herein, it is possible, in some instances, for the cellular subsystem to boot into a non-secure operation mode after having operated in a secure operation mode. Because the LMR subsystem106typically boots faster than the cellular subsystem104, it will allow secure LMR communications based on the knowledge of the last operation mode. However, to prevent a non-secure operation of the cellular subsystem104from accessing the LMR systems, the electronic processor305, at block618, initiates a secure inter-processor communication link with the first subsystem. Such a communication link only allows the electronic processor305to receive mode notification messages and prevents access to other aspects of the LMR subsystem106by the cellular subsystem104. As illustrated inFIG.4, when the last operation mode is a non-secure mode, the electronic processor305does not enable the communication interface325, but instead initiates a secure inter-processor communication link with the first subsystem. At block620, the electronic processor305waits to receive the mode selection notification from the cellular subsystem104. At block624, responsive to receiving the secure mode notification via the inter-processor communication link with the first subsystem (at block622), the electronic processor305controls the second communication interface to communicate wirelessly with a second private communication network using the second communication modality, enabling it in the case where it had yet to be started (at block616) and continues to operate in the secure mode (with LMR communications active) at block626. At block628, responsive to receiving the non-secure mode notification via the inter-processor communication link with the first subsystem (at block622), the electronic processor305controls the second communication interface to not communicate wirelessly, disabling it in the case where it was already started (at block616) and continues to operate in the non-secure mode (with LMR communications turned off) at block630. In the description above, the terms "cellular" and "land mobile radio" or "LMR" are used to distinguish between components included in a converged communication device that implement different communication modalities, for example, the long term evolution (LTE) cellular protocol and the Terrestrial Trunked Radio (TETRA) land mobile radio protocol. 
In addition, the terms "first" and "second" are used, in some instances, in place of the terms "cellular" and "LMR." These terms, however, are not meant to imply that any of the components so labeled are superior or inferior to other components or arranged in a particular order. Nonetheless, in some of the foregoing examples, the "second" LMR subsystem106is subordinate to the "first" cellular subsystem104in the sense that the cellular subsystem104may include software and hardware for controlling certain aspects of the LMR subsystem106or of the converged communication device102, upon which the LMR subsystem106depends. The systems and methods described herein, although described in terms of a converged communication device, are not limited in their applicability to a converged communication device. In view of the description above, a person of ordinary skill in the art could implement the examples described in many different types of electronic devices that include multiple subsystems where one subsystem is capable of booting multiple partition types and another subsystem is not. For example, a device capable of dual communications, including an LMR subsystem and a broadband (though not necessarily cellular) capable subsystem could operate using the methods described herein. In the foregoing specification, specific examples have been described. However, one of ordinary skill in the art appreciates that various modifications and changes may be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued. Moreover, in this document relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," "has," "having," "includes," "including," "contains," "containing," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by "comprises . . . a," "has . . . a," "includes . . . a," or "contains . . . a" does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, contains the element. The terms "a" and "an" are defined as one or more unless explicitly stated otherwise herein. 
The terms “substantially,” “essentially,” “approximately,” “about,” or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting example the term is defined to be within 10%, in another example within 5%, in another example within 1% and in another example within 0.5%. The term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is “configured” in a certain way is configured in at least that way but may also be configured in ways that are not listed. It will be appreciated that some examples may be comprised of one or more generic or specialized processors (or “processing devices”) such as microprocessors, digital signal processors, customized processors and field programmable gate arrays (FPGAs) and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the two approaches could be used. Moreover, an example may be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (for example, comprising a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such software instructions and programs and ICs with minimal experimentation. The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it may be seen that various features are grouped together in various examples for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed examples require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed example. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter. | 51,467 |
11861049 | DETAILED DESCRIPTION The embodiments set forth below represent the necessary information to enable those skilled in the art to practice the embodiments and illustrate the best mode of practicing the embodiments. Upon reading the following description in light of the accompanying drawing figures, those skilled in the art will understand the concepts of the disclosure and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure and the accompanying claims. It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. It will be understood that when an element such as a layer, region, or substrate is referred to as being “on” or extending “onto” another element, it can be directly on or extend directly onto the other element or intervening elements may also be present. In contrast, when an element is referred to as being “directly on” or extending “directly onto” another element, there are no intervening elements present. Likewise, it will be understood that when an element such as a layer, region, or substrate is referred to as being “over” or extending “over” another element, it can be directly over or extend directly over the other element or intervening elements may also be present. In contrast, when an element is referred to as being “directly over” or extending “directly over” another element, there are no intervening elements present. It will also be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. Relative terms such as “below” or “above” or “upper” or “lower” or “horizontal” or “vertical” may be used herein to describe a relationship of one element, layer, or region to another element, layer, or region as illustrated in the Figures. It will be understood that these terms and those discussed above are intended to encompass different orientations of the device in addition to the orientation depicted in the Figures. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including” when used herein specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. 
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms used herein should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. A system and method for defense against cache timing channel attacks using cache management hardware is provided. Sensitive information leakage is a growing security concern exacerbated by shared hardware structures in computer processors. Recent studies have shown how adversaries can exploit cache timing channel attacks to exfiltrate secret information. To effectively guard computing systems against such attacks, embodiments disclosed herein provide practical defense techniques that are readily deployable and introduce only minimal performance overhead. In this regard, a new protection framework is provided herein that makes use of commercial off-the-shelf (COTS) hardware to identify and thwart cache timing channels. It is observed herein that cache block replacements by adversaries in cache timing channels lead to a distinctive pattern in cache occupancy profiles. Such patterns are a strong indicator of the presence of timing channels. Embodiments disclosed herein leverage cache monitoring (e.g., Intel's Cache Monitoring Technology (CMT), available in recent server-class processors) to perform fine-grained monitoring of cache (e.g., last level cache (LLC)) occupancy for individual application domains. Suspicious application domains are identified, such as by applying signal processing techniques that characterize the communication strength of spy processes in cache timing channels. In some examples, cache way allocation (e.g., Intel's Cache Allocation Technology) is repurposed as a secure cache manager to dynamically partition the cache for suspicious application domains and disband any timing channel activity. This approach avoids preemptively separating application domains and consequently does not result in high performance overheads to benign application domains. I. Timing Channel Attacks The term “timing channel” is used herein to denote a class of attacks that rely on timing modulation using a shared resource (e.g., a cache, such as LLC). Cache timing channels can manifest either as side or covert channels. There are typically two processes involved in cache timing channels: a trojan and spy in covert channels, and victim and spy in side channels. The term “trojan,” as used herein, refers generally to trojans in covert channels as well as victims in side channels unless otherwise noted. Since direct communication between these pairs is explicitly prohibited by the underlying system security policy, the spy process turns to infer secrets by observing the modulated latencies during cache accesses, as further explained below with reference toFIGS.1A-1D. Cache timing channel protocols can be categorized along two dimensions: time and space. 
In the time dimension, (1) serial protocols operate by time-interleaving the cache accesses by the trojan and spy in a round-robin fashion (note that such serial protocols are more conducive to covert channels where the trojan can explicitly control synchronization); and (2) parallel protocols do not enforce any strict ordering of cache accesses between the trojan and spy, and let the spy decode the bits in parallel (observed more commonly in side channels). The spy generally takes multiple measurements to eliminate bit errors due to concurrent accesses. In the space dimension, the attacks can be classified based on the encoding scheme used to communicate secrets: (1) On-off encoding works by manipulating the cache access latencies of a single group of cache sets; and (2) pulse position encoding uses multiple groups of cache sets. Both encoding schemes, using a prime+probe approach, are demonstrated inFIGS.1A-1D. These encoding schemes can operate simply by creating cache conflict misses with their own (private) data blocks. FIG.1Ais a graphical representation of a cache timing channel attack with on-off encoding, illustrating a cache miss profile of applications over time.FIG.1Bis a graphical representation of a cache timing channel attack with on-off encoding, illustrating a cache hit. In cache timing channels with on-off encoding, the trojan and spy contend on a single group of cache sets (e.g., the first 4 blocks inFIGS.1A and1B). During the prime phase, the spy fills cache sets with its own data (blocks with horizontal lines). The trojan either 1) accesses the same group of cache sets to fill them with its own data (illustrated inFIG.1Ausing blocks with vertical lines), or 2) remains idle and the spy's contents are left intact (illustrated inFIG.1B). The spy probes these cache blocks and measures access latencies. Longer latency values indicate cache conflict misses (marked as m inFIG.1A), while shorter latencies indicate cache hits (marked as h inFIG.1B). Secret bits are deciphered based on these cache latencies. FIG.1Cis a graphical representation of a cache timing channel attack with pulse-position encoding using odd cache sets.FIG.1Dis a graphical representation of a cache timing channel attack with pulse-position encoding using even cache sets. In cache timing channels with pulse-position encoding, the trojan and spy exploit two distinct groups of cache sets to communicate the bits. Initially, the spy primes both groups of cache sets by filling all of the ways with its own data. The trojan may either replace contents in the first (odd, illustrated inFIG.1C) or second (even, illustrated inFIG.1D) group of cache sets. The spy probes both groups of cache sets, and depending on the group with higher cache access latency, the secret bits are decoded. This encoding scheme can be generalized to multi-bit symbols when multiple groups of cache sets are chosen for communication. FIG.2Ais a graphical representation of cache occupancy changes for the cache timing channel attack ofFIG.1A. In on-off encoding, when the trojan accesses the cache (e.g., LLC), the trojan's cache occupancy should first increase (due to the trojan fetching its cache blocks) and then decrease (during the spy's probe phase when trojan-owned blocks are replaced). 
Similarly, the spy's cache footprint would first decrease (due to the trojan filling in the cache blocks) and then increase (when the spy probes and fills the cache with its own data).FIG.2Bis a graphical representation of cache occupancy changes for the cache timing channel attack ofFIG.1B. When the trojan doesn't access the cache, neither of the processes change their respective LLC occupancies. FIG.2Cis a graphical representation of cache occupancy changes in odd sets for the cache timing channel attack ofFIG.1C.FIG.2Dis a graphical representation of cache occupancy changes in even sets for the cache timing channel attack ofFIG.1D. Under pulse-position encoding, regardless of the trojan's activity, a seesaw (swing) pattern is observed in their LLC occupancies. FIG.3Ais a graphical representation of an exemplary LLC occupancy rate of change for a trojan and spy pair.FIG.3Ashows a representative window capturing the rate of change in LLC occupancy over time (illustrated in number of cycles). Due to the timing channel, the trojan's cache occupancy gains in proportion to the spy's loss, and vice versa. Besides timing channel variants in the space dimension, note that this phenomenon exists along the time dimension as well. In a parallel protocol, since the spy decodes a single bit with multiple measurements, there will be a cluster of such swing patterns during every bit transmission, whereas serial protocols will likely show a single swing pattern. FIG.3Bis a graphical representation of an exemplary LLC occupancy rate of change for a benign application pair.FIG.3Acan be contrasted withFIG.3B, which illustrates regular applications that have no known timing channels. A representative benign application pair is shown from SPEC2006 benchmarks with relatively high LLC activity, namely lbm and gobmk. These application pairs do not usually show any repetitive pulses or negative correlation in their occupancy rates. The occupancy patterns are rarely correlated (e.g., no obvious swing pattern). As such, there are time periods when both applications have unaligned negative dips, or one application's LLC occupancy fluctuates while the other remains unchanged, or the two LLC occupancies almost change in the same direction. Based on the discussion above, the following key observation is made: Timing channels in caches fundamentally rely on conflict misses (that influence the spy's timing) and create repetitive swing patterns in cache occupancy regardless of the specific timing channel protocols. By analyzing these correlated swing patterns, there is a potential to uncover the communication strength in such attacks. It should be noted that merely tracking cache misses on an adversary will not be sufficient, as an attacker may inflate cache misses (through issuing additional cache loads that create self-conflicts) on purpose to evade detection. In contrast, cache occupancy cannot be easily obfuscated by an attacker on its own. Manipulation of cache occupancy will require collusion with an external process (that may, in turn, reveal swing patterns in cache occupancies with the attacker) or through using a clflush instruction (that may be monitored easily). Addressing such approaches is discussed further below with respect to Section V. II. System Design FIG.4is a schematic diagram of an exemplary embodiment of a system for defense against timing channel attacks, referred to herein as COTSknight10. 
COTSknight10comprises three main components: an LLC occupancy monitor12(e.g., cache occupancy monitor), an occupancy pattern analyzer14, and a way allocation manager16. The LLC occupancy monitor12creates LLC occupancy data, which can include traces of LLC occupancy patterns among mutually distrusting application domains18. The occupancy pattern analyzer14identifies suspicious pairs of the application domains18that are very likely to be involved in timing channel-based communication. The way allocation manager16dynamically partitions cache ways among suspicious application domains18(e.g., using a CAT interface20) to prevent information leakage through the cache (e.g., LLC). Embodiments of COTSknight10make use of processor hardware, such as illustrated inFIGS.5A and5B, to assist in monitoring cache occupancy and to provide secure cache management to thwart cache timing channels. This section further discusses cache occupancy monitoring with reference toFIGS.6A and6B. Occupancy trace analysis is further discussed with reference toFIGS.7A-7D. The way allocation mechanism is then discussed, which dynamically partitions the cache to prevent potential information leakage. COTSknight10is discussed herein with particular reference to implementation on an LLC of a processor. This is due to the shared nature of the LLC in multi-core processors, as well as the larger area of attack such that the LLC is a more likely target of timing channel attacks. It should be understood that other embodiments may implement COTSknight10on other cache levels (e.g., L1 cache, L2 cache, L3 cache, etc.) per design and security needs. A. Processor Hardware Cache monitoring resources (e.g., Intel's CMT) in a processor allow for uniquely identifying each logical core (e.g., hardware thread) with a specific resource monitoring identification (RMID). Each unique RMID can be used to track the corresponding LLC usage by periodically reading from a corresponding register (e.g., model specific register (MSR)). It is possible for multiple application threads to share the same RMID, allowing for their LLC usage to be tracked together. Such a capability enables flexible monitoring at user-desired domain granularity such as a core, a multi-threaded application, or a virtual machine. Additionally, cache way allocation (e.g., Intel's CAT) enables an agile way for partitioning the LLC ways in a processor. With cache way allocation, the LLC can be configured to have several different partitions on cache ways, called class(es) of service (CLOS). A hardware context that is restricted to certain ways can still read the data from other ways where the data resides; however, it can only allocate new cache lines in its designated ways. Accordingly, evicting cache lines from another CLOS is not possible. The default for all applications is CLOS0, where all cache ways are accessible. FIG.5Ais a block schematic diagram of application domains18to RMID and CLOS mapping in an exemplary processor22. In this example, a first application24is mapped to a first RMID26and a first CLOS28. A second application30is mapped to a second RMID32and a second CLOS34(e.g., separate from the first CLOS28). A third application36and a fourth application38are mapped to a shared third RMID40and a shared third CLOS42(e.g., separate from the first CLOS28and the second CLOS34). In this manner, the third application36and the fourth application38may not be mutually suspicious, while the first application24and the second application30can be monitored separately. 
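As a rough illustration of the query-based occupancy monitoring described above, the sketch below reads a per-RMID LLC occupancy value through the CMT model-specific registers on Linux. The MSR addresses (IA32_QM_EVTSEL at 0xC8D, IA32_QM_CTR at 0xC8E) and the L3 occupancy event ID follow my reading of the Intel SDM and should be verified for the target part; the byte scaling factor is hard-coded as an assumption and would normally be read from CPUID leaf 0xF. This is not the disclosure's implementation, just a minimal user-space sketch that requires root and the msr kernel module.

```python
# Minimal sketch of querying per-RMID LLC occupancy through Intel CMT MSRs
# on Linux. MSR addresses and bit layouts are my reading of the Intel SDM;
# verify them for the target processor. SCALE_BYTES is an assumed value.
import os, struct

IA32_QM_EVTSEL = 0xC8D
IA32_QM_CTR    = 0xC8E
L3_OCCUPANCY_EVENT = 0x01
SCALE_BYTES = 65536        # assumed upscaling factor; platform-specific

def read_llc_occupancy(rmid, cpu=0):
    """Return LLC occupancy in bytes for one RMID, or None if unavailable."""
    fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_RDWR)
    try:
        # Select <event, RMID>: event ID in bits 7:0, RMID in bits 41:32.
        os.pwrite(fd, struct.pack("<Q", (rmid << 32) | L3_OCCUPANCY_EVENT),
                  IA32_QM_EVTSEL)
        raw = struct.unpack("<Q", os.pread(fd, 8, IA32_QM_CTR))[0]
    finally:
        os.close(fd)
    if raw & (1 << 63) or raw & (1 << 62):   # error / data-unavailable bits
        return None
    return (raw & ((1 << 62) - 1)) * SCALE_BYTES

if __name__ == "__main__":
    for rmid in (1, 2):   # RMIDs previously assigned to monitored domains
        print(rmid, read_llc_occupancy(rmid))
```

A separate, non-intrusive sampling thread can call such a reader for every watched RMID, which is essentially what the LLC occupancy monitor12does at a configurable rate.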
FIG.5Billustrates an exemplary configuration of CLOS in the exemplary processor22ofFIG.5A. As shown inFIG.5B, IA32_L3_MASK_n_MSRs are set to configure the specific ways to a certain CLOS partition. By writing to the per-logical core IA32_PQR_ASSOC_MSR, each application can be associated with a certain RMID and CLOS. Note that both the cache monitoring resources (e.g., CMT) and the cache way allocation (e.g., CAT) can be reconfigured at runtime without affecting the existing application domains18. Also, not all pairs of application domains18need to be monitored, and monitoring can be limited to mutually distrusting or suspicious ones. B. LLC Occupancy Monitor With continuing reference toFIGS.4and5A, from the architecture perspective, the finest granularity for the LLC occupancy monitor12is at the level of logical cores that can be readily set up with the cache monitoring resources of the processor22(e.g., CMT or another built-in cache monitoring infrastructure of the processor22normally used for observing performance and/or improving application runtime). However, this requires every thread migration between cores to be manually bookmarked. To counter this problem, application-level and virtual machine (VM) level monitoring are available, which can automatically manage remapping of RMIDs (e.g.,26,32,40) when applications or VM guests swap in or out of logical cores. Also, in some examples, the cache monitoring resources of the processor22integrate a query-based model where any core in a processor package can query the LLC occupancy of other cores. Certain embodiments of COTSknight10capitalize on this capability and use a separate, non-intrusive thread to collect LLC occupancy traces for all of the currently running application domains18. FIG.6Ais a graphical representation of an exemplary LLC occupancy trace for a trojan and spy pair in a covert channel with a serial protocol and on-off encoding.FIG.6Bis a graphical representation of an exemplary LLC occupancy trace for a victim and spy pair in a side channel with a parallel protocol and pulse-position encoding. The LLC occupancy monitor12produces occupancy data, which can include the occupancy traces illustrated inFIGS.6A and6B, for analysis by the occupancy pattern analyzer14. C. Occupancy Pattern Analyzer With continuing reference toFIGS.4,6A, and6B, once LLC traces are gathered, the occupancy pattern analyzer14checks for any potential timing channel activity. Note that the timing channel attacks can happen within a certain period during the span of the entire program execution. Accordingly, embodiments of the occupancy pattern analyzer14adopt a window-based analysis of LLC occupancy traces. The window size can be chosen by a system administrator based on needs (e.g., swiftness of defense vs. runtime overhead trade-offs). FIGS.7A-7Dillustrate exemplary results of LLC trace analysis by the occupancy pattern analyzer14. In this regard, it can be assumed that there are n windows (indexed by i) of raw LLC occupancy traces for a pair of application domains18(D1, D2). x_i and y_i (0≤i≤n−1) are the LLC occupancy sample vectors obtained by reading LLC occupancy MSRs periodically within the i-th window for application domains18D1and D2, respectively (assuming that there are p+1 samples within each window). The time-differentiated cache occupancy traces are computed to extract the information on LLC occupancy changes: $$\Delta x_{i,j} = x_{i,j+1} - x_{i,j}, \qquad \Delta y_{i,j} = y_{i,j+1} - y_{i,j} \quad \text{(Equation 1)}$$ where x_{i,j} and y_{i,j} are the j-th MSR samples (0≤j≤p−1) in the i-th window for application domains18D1and D2.
Exemplary time-differentiated LLC occupancy traces for covert and side channels are illustrated inFIGS.6A and6B, as discussed further above. As the second step, the occupancy pattern analyzer14focuses on finding mirror images of pulses in the two time-differentiated cache occupancy traces. As discussed above with respect toFIGS.1A-1D and2A-2D, the spy and trojan communicate by growing their own cache space through taking away the corresponding cache space from each other to create conflict misses that alter cache access timing for the spy. To filter the noise effects from surrounding cache activity, embodiments of the occupancy pattern analyzer14take the product (z_i) of Δx_{i,j} and Δy_{i,j} and zero out all non-negative values that do not correspond to gain-loss swing patterns in LLC occupancy: $$z_{i,j} = \begin{cases} \Delta x_{i,j} \cdot \Delta y_{i,j}, & \Delta x_{i,j} \cdot \Delta y_{i,j} < 0 \\ 0, & \Delta x_{i,j} \cdot \Delta y_{i,j} \geq 0 \end{cases} \quad \text{(Equation 2)}$$ The above equation elegantly captures the swing pattern and cancels noise from other background processes. When the cache occupancy of one process changes while the other one remains stationary, the product at that point would be zero. When two processes are both influenced by a third-party process, their cache occupancy might change in the same direction, so that the product of two time-differentiated occupancy trace points would be positive. Negative values occur when the cache occupancy patterns of the two processes move in opposite directions due to mutual cache conflicts. In effect, the series z_i contains information about mutual eviction behavior between the two processes. The occupancy pattern analyzer14can then check if the z series contains repeating patterns that may be caused by intentional eviction over a longer period of time (denoting illegal communication activity). For every window, the occupancy pattern analyzer14computes the autocorrelation function r_i for z_i: $$r_i(m) = \begin{cases} \sum_{j=0}^{p-m-1} z_{i,j} \cdot z_{i,j+m}, & m \geq 0 \\ r_i(-m), & m < 0 \end{cases} \quad \text{(Equation 3)}$$ where m (samples) is the lag of series z_i and m∈[−p+1, p−1]. The autocorrelation function is normalized to detect the linear relationship between Δx_i and Δy_i. The normalized autocorrelation function r_i′ is defined as: $$r_i'(m) = \frac{r_i(m)}{\sqrt{\left(\sum_{j=0}^{p-1} \Delta x_{i,j}^4\right) \cdot \left(\sum_{j=0}^{p-1} \Delta y_{i,j}^4\right)}} \quad \text{(Equation 4)}$$ According to the Cauchy-Schwarz Inequality, if the time-differentiated curves Δx_i and Δy_i are strictly linearly dependent, r_i′(0) would be equal to 1. Conversely, the lack of linear dependency between Δx and Δy would be indicated by r_i′(0) being close to 0. Note that benign applications may also exhibit short swing patterns on LLC occupancy, but are highly unlikely to repeat them over a longer period. To cancel noise from such short swings, embodiments of the occupancy pattern analyzer14take an average of the normalized autocorrelation functions r_i′ over n windows. The mean autocorrelation function is defined as: $$r'(m) = \frac{1}{n} \sum_{i=0}^{n-1} r_i'(m) \quad \text{(Equation 5)}$$ With an increase in lag value (m), the eviction pattern would begin to mismatch more heavily. Consequently, the normalized autocorrelation at lag m, r_i′(m), would begin to decrease. When the lag m equals the length of the complete pattern (wavelength, m_w), some of the patterns would rematch and r′(m_w) would rise back to higher values. Note that there still might exist a small offset in the repetitive pattern, and this may cause r′(m_w) to be not as high as r′(0). However, r′(m_w) is extremely likely to be a local maximum in the presence of timing channel activity. As m increases further, the local maxima caused by rematched patterns would begin to appear repeatedly.
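A compact NumPy rendering of Equations 1 through 5 may help make the window analysis concrete. This is a sketch under my own naming (x_win and y_win are the n raw occupancy sample vectors for the two domains), not code from the disclosure; the square-root normalization mirrors Equation 4 as reconstructed above.

```python
# Sketch of the window analysis in Equations 1-5 using NumPy. x_win and
# y_win are lists of n occupancy sample vectors (p+1 samples each) for the
# two domains; names are illustrative.
import numpy as np

def mean_normalized_autocorrelation(x_win, y_win):
    r_primes = []
    for x, y in zip(x_win, y_win):
        dx = np.diff(np.asarray(x, dtype=float))        # Equation 1
        dy = np.diff(np.asarray(y, dtype=float))
        prod = dx * dy
        z = np.where(prod < 0, prod, 0.0)               # Equation 2
        # Full autocorrelation of z gives r_i(m) for m in [-p+1, p-1].
        r = np.correlate(z, z, mode="full")             # Equation 3
        denom = np.sqrt(np.sum(dx**4) * np.sum(dy**4))  # Equation 4
        r_primes.append(r / denom if denom > 0 else np.zeros_like(r))
    return np.mean(r_primes, axis=0)                    # Equation 5, r'(m)
```

The guard on a zero denominator simply avoids dividing by zero for idle windows with no occupancy change in either domain.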
FIG.7Ais a graphical representation of a normalized autocorrelation function of the LLC occupancy trace for the trojan and spy pair ofFIG.6A(e.g., a covert channel). In this example, r′(0) is very close to one, so the two time-differentiated LLC occupancies are linearly dependent. The Fourier transform is a powerful tool to extract the repetitive patterns in signals. Embodiments of the occupancy pattern analyzer14further compute the discrete Fourier transform of the autocorrelation function r′: $$R(k) = \sum_{m=-p+1}^{p-1} r'(m) \cdot W_{2p-1}^{m \cdot k} \quad \text{(Equation 6)}$$ where $W_{2p-1} = e^{-2\pi i/(2p-1)}$ and i is the imaginary constant (i²=−1). Here R is the power spectrum of z. The presence of a single or equally-spaced multiple spikes with concentrated (very high) signal power outside of frequency 0 in R indicates a repetitive pattern in the underlying sequence. Note that this is a typical characteristic of timing channels. FIG.7Bis a graphical representation of a power spectrum of the LLC occupancy trace ofFIG.6A. Repeated occurrence of local maxima and a sharp peak around a frequency of 150 in the power spectrum can be visually observed, which indicates timing channel activity. Similarly,FIG.7Cis a graphical representation of a normalized autocorrelation function of the LLC occupancy trace for the victim and spy pair ofFIG.6B(e.g., a side channel).FIG.7Dis a graphical representation of a power spectrum of the LLC occupancy trace ofFIG.6B. In this example, r′(0) is very close to one (as depicted inFIG.7C), indicating linear dependency, and a sharp peak is observed in the power spectrum (as depicted inFIG.7D) around a frequency of 290. Using such analysis techniques, the occupancy pattern analyzer14identifies a potential timing attack involving a pair of processes (e.g., the victim and spy pair). The occupancy pattern analyzer14may further provide RMIDs for the pair of processes (e.g., application domains18) involved in the potential timing attack for cache access segregation or another action to disband the timing channel. In principle, using advanced communication protocols, it is possible for the trojan and spy to pseudo-randomize the intervals between two consecutive bits to obscure the periodicity in the channel. However, in practice, cache timing channels with randomized bit intervals are very hard to synchronize at these random times in a real system environment amidst noise stemming from hardware, OS and external processes. As such, these attacks can be subject to a severely reduced bit-rate and high transmission errors. No such cache attacks with pseudo-random intervals are reported in the literature. Even in such hypothetical cases, the repetitive swing pattern can be recovered with proper signal filtering (discussed further below with respect to Section V). D. Way Allocation Manager With continuing reference toFIG.4, after the way allocation manager16receives RMIDs of identified suspicious application domains18from the occupancy pattern analyzer14, it will configure LLC ways to fully or partially isolate the suspicious pairs. Note that all of the newly created application domains18(e.g., newly spawned processes) may be initially set to a default CLOS (e.g., CLOS0) with access to all LLC ways. Consider a newly discovered suspicious pair (D1, D2). The way allocation manager16can simply create two non-overlapping CLOS (e.g., CLOS1 and CLOS2, which are separate and disjoint) for assignment to D1and D2.
In this manner, COTSknight10heuristically assigns ways to each application domain18(e.g., due to each CLOS having a predefined cache ways accessible to its corresponding application process(es)) based on their ratio of LLC occupancy sizes during the last observation period. To avoid starvation, in some examples a partition policy of the way allocation manager16sets a minimum number of ways for any application domain18(e.g., the minimum can be set to four, which works reasonably well as demonstrated in Section IV below). The way allocation manager16can apply different allocation policies to manage the partitioned application domains18at runtime. Two exemplary allocation policies are discussed: 1) an aggressive policy that partitions the two suspicious application domains18and keeps them separated until one of them finishes execution. This policy guarantees the highest level of security, and removes the need to track already separated application domains18. 2) A jail policy that partitions the two application domains18for a period of time, and then allows access to all of the LLC partitions upon timeout. This policy provides the flexibility to accommodate benign application pairs that need to be partitioned tentatively. It should be understood that other embodiments of the way allocation manager16may implement other policies, such as a combination of the jail policy and the aggressive policy based on repetition of suspected timing channels and/or degree of certainty of timing channel activity. For long-running applications, restricting the cache ways over time may not be desirable, and the way allocation manager16may instead implement a policy for migrating suspected spy processes to other processors. This may be a better option, especially for victims in side channels. III. Implementation FIG.8is a schematic diagram of an exemplary implementation of COTSknight10in a computer system44. The computer system44comprises any computing or electronic device capable of including firmware, hardware, and/or executing software instructions that could be used to perform any of the methods or functions described above, such as identifying (and guarding against) a cache timing channel attack. In this regard, the computer system44may be a circuit or circuits included in an electronic board card, such as a printed circuit board (PCB), a server, a personal computer, a desktop computer, a laptop computer, an array of computers, a personal digital assistant (PDA), a computing pad, a mobile device, or any other device, and may represent, for example, a server or a user's computer. The computer system44in this embodiment includes a processing device or processor22and a system memory46which may be connected by a system bus (not shown). The system memory46may include non-volatile memory (e.g., read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM)) and volatile memory (e.g., random-access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM)). The computer system44may be implemented with a user space48and an operating system50, each of which may reside in the system memory46and interact with the processor22. One or more application domains18reside in the user space48and represent a wide array of computer-executable instructions corresponding to programs, applications, functions, and the like, which are executed by the processor22. 
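Returning to the way-partitioning step of Section II-D above, the following sketch shows one way the two non-overlapping CLOS masks might be sized by occupancy ratio (with a minimum of four ways) and programmed through the CAT MSRs on Linux. The MSR addresses and bit layouts (IA32_L3_MASK_n at 0xC90 plus the CLOS index, IA32_PQR_ASSOC at 0xC8F with the RMID in bits 9:0 and the CLOS in bits 63:32) reflect my understanding of the Intel SDM; in practice the resctrl filesystem is the safer interface. All function names are illustrative, not taken from the disclosure.

```python
# Sketch of heuristic way partitioning for a suspicious domain pair using
# Intel CAT via MSRs. MSR numbers and field layouts are assumptions from my
# reading of the Intel SDM; masks must be contiguous, which they are here.
import os, struct

IA32_PQR_ASSOC = 0xC8F
IA32_L3_MASK_BASE = 0xC90
TOTAL_WAYS = 20        # e.g., a 20-way LLC
MIN_WAYS = 4           # avoid starving either domain

def split_ways(occ_a, occ_b):
    """Contiguous, non-overlapping way masks sized by LLC occupancy ratio."""
    ways_a = round(TOTAL_WAYS * occ_a / max(occ_a + occ_b, 1))
    ways_a = min(max(ways_a, MIN_WAYS), TOTAL_WAYS - MIN_WAYS)
    mask_a = (1 << ways_a) - 1
    mask_b = ((1 << TOTAL_WAYS) - 1) & ~mask_a
    return mask_a, mask_b

def wrmsr(cpu, msr, value):
    fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_WRONLY)
    try:
        os.pwrite(fd, struct.pack("<Q", value), msr)
    finally:
        os.close(fd)

def isolate_pair(cpu_a, cpu_b, rmid_a, rmid_b, occ_a, occ_b,
                 clos_a=1, clos_b=2):
    mask_a, mask_b = split_ways(occ_a, occ_b)
    # Program the two CLOS way masks (package-scoped registers).
    wrmsr(cpu_a, IA32_L3_MASK_BASE + clos_a, mask_a)
    wrmsr(cpu_a, IA32_L3_MASK_BASE + clos_b, mask_b)
    # Re-associate each suspicious logical core with its RMID and CLOS.
    wrmsr(cpu_a, IA32_PQR_ASSOC, (clos_a << 32) | rmid_a)
    wrmsr(cpu_b, IA32_PQR_ASSOC, (clos_b << 32) | rmid_b)
```

Under the jail policy, a timer would later rewrite both cores back to the default CLOS; under the aggressive policy, the masks stay in place until one domain exits.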
However, the user space48interfaces with the operating system50, and the operating system50interfaces with the processor22, such that application domains18access the processor22via the operating system50. Accordingly, in an exemplary aspect, some or all of the COTSknight10resides on the operating system50to facilitate monitoring, analyzing, and guarding against potential cache timing channel attacks. The processor22represents one or more commercially available or proprietary general-purpose processing devices, such as a microprocessor, central processing unit (CPU), or the like. More particularly, the processor22may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or other processors implementing a combination of instruction sets. The processor22is configured to execute processing logic instructions for performing the operations and steps discussed herein. In an exemplary aspect, the processor22includes two or more processor cores52,54for executing instructions in parallel. In this regard, the various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with the processor22, which may be a microprocessor, field programmable gate array (FPGA), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. The processor22may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). In an exemplary implementation used for evaluating an embodiment of the COTSknight10(described further below in Section IV), the processor22is an Intel Xeon E5-2698 v4 processor. The operating system50is Centos 7.0 with Linux kernel 4.10.12. However, it should be understood that this is illustrative in nature, and in other embodiments the processor22may be any conventional or custom processor, controller, microcontroller, or state machine, and the operating system50may be any conventional or custom operating system. As illustrated inFIG.8, COTSknight10is deployed as an operating system50level service that has two major modules, the LLC occupancy monitor12and a COTSknight kernel56. That is, the cache occupancy monitor can be deployed on a combination of firmware and management layers operating on a processor, such as the operating system50. LLC Occupancy Monitor12. The LLC occupancy monitor12dynamically traces the LLC occupancy for a watch-list of application domains18. It designates newly created application domains18(e.g., VMs, applications) with RMIDs, and also performs recycling of RMIDs. By default, all running application domains18are monitored separately. The LLC occupancy monitor12can export interface to a system administrator58(e.g., resident in the user space48) to override domain configurations. For instance, multiple application domains18belonging to the same user can be grouped together. 
In an exemplary aspect, the LLC occupancy monitor12periodically queries the LLC occupancy MSRs in the processor22(e.g., via a CMT interface60, which may include or be separate from the CAT interface20) at a configurable sampling rate (set up by the system administrator58). The cache occupancy data for all the monitored application domains18are stored in a memory buffer62, which may be a first-in-first-out (FIFO) buffer resident on the system memory46. When the memory buffer62is full (or has gathered sufficient cache occupancy data), the LLC occupancy monitor12notifies the COTSknight kernel56for signal analysis. In some examples, when the LLC occupancy monitor12receives notification from the COTSknight kernel56about partitioned application domains18, the LLC occupancy monitor12can remove the partitioned application domains18(e.g., with access to disjoint cache sets) from its watch-list (e.g., temporarily or for the duration of application runtime, per COTSknight10policy). In other examples, the LLC occupancy monitor12can continue to monitor all application domains18. COTSknight Kernel56. The COTSknight kernel56module combines the occupancy pattern analyzer14and the way allocation manager16. It periodically empties the memory buffer62by reading the LLC occupancy traces for the monitored application domains18, and performs signal analysis based on the approach discussed above in Section II-C. Once newly suspicious application domains18are recognized, the COTSknight kernel56generates a domain to CLOS mapping so that these application domains18will be isolated and potential timing channels can be annulled. The COTSknight kernel56can flexibly manage the partitioned application domains18based on the partition policy inputs provided by the system administrator58(discussed above in Section II-D). IV. Evaluation An embodiment of the COTSknight10is evaluated using the implementation described above with respect toFIG.8, wherein the processor22is an Intel Xeon v4 with 16 CLOS and 20 LLC slices, and each LLC slice has 20×2048 64-byte blocks. By default, all logical cores are assigned RMID0 (the default RMID), and the associated CLOS configuration MSR is set to 0xFFFFF. This means that all application domains18can use all LLC ways initially. COTSknight10initializes the memory buffer to accumulate LLC MSR readings sampled at 1,000 per second (the maximum stable rate supported by the current hardware). The occupancy pattern analyzer14processes n consecutive windows of occupancy samples, where n is set to 5 and the window size is equal to 500 ms. Attack scenarios are evaluated along both time and space dimensions, as detailed in Table I. Each variant is configured to perform the prime+probe attack using a specific number of cache sets (32˜128). For serial-onoff and para-onoff, all target cache sets are treated as one group, and for serial-pp and para-pp, two equally-sized groups of cache sets are generated.

TABLE I
Cache timing attack classes evaluated
Abbreviation    Encoding          Timing
para-onoff      On-off            Parallel
serial-onoff    On-off            Serial
para-pp         Pulse-position    Parallel
serial-pp       Pulse-position    Serial

Each attack variant shown in Table I is set up to run for 90 seconds (s) on the Intel Xeon v4 server. To emulate a real system environment, two SPEC2006 benchmarks are co-scheduled alongside the trojan and spy. Each attack variant is run multiple times with different co-scheduled process pairs and numbers of target sets.
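One analysis round of the monitor/analyzer/allocator pipeline described above might be organized as in the sketch below, using the 1 kHz sampling rate, 500 ms windows, and n=5 windows just mentioned. The occupancy reader, the Equation 1-5 analysis, and the isolation action are passed in as callables (for example, the earlier sketches), and the trigger threshold is a configurable policy value rather than anything fixed here; a production monitor would sample from a kernel-level thread rather than a Python loop with time.sleep().

```python
# Sketch of one analysis round: sample two domains, run the Equation 1-5
# analysis, take the power spectrum (Equation 6), and trigger isolation if
# a strong peak appears outside frequency 0. All parameters are assumptions.
import time
import numpy as np

SAMPLE_HZ = 1000
WINDOW_SAMPLES = 500          # 500 ms windows at 1 kHz sampling
N_WINDOWS = 5
POWER_THRESHOLD = 50.0        # assumed, administrator-configurable trigger

def analysis_round(rmid_a, rmid_b, read_occupancy, analyze, isolate):
    xs, ys = [], []
    for _ in range(N_WINDOWS):
        x, y = [], []
        for _ in range(WINDOW_SAMPLES + 1):
            x.append(read_occupancy(rmid_a))
            y.append(read_occupancy(rmid_b))
            time.sleep(1.0 / SAMPLE_HZ)
        xs.append(x)
        ys.append(y)
    r_prime = analyze(xs, ys)                 # mean normalized autocorrelation
    spectrum = np.abs(np.fft.fft(r_prime))    # power spectrum R(k)
    if spectrum[1:].max() > POWER_THRESHOLD:  # peak power outside frequency 0
        isolate(rmid_a, rmid_b)               # e.g., repartition the LLC ways
```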
The occupancy pattern analyzer14performs pair-wise normalized autocorrelation on time-differentiated LLC occupancy traces for six combination pairs of application domains18. In all cases, the trojan-spy pair consistently had the highest autocorrelation 0-lag (≥0.93), which is much higher than the other pairs of application domains18(<0.5). FIGS.9A-9Dshow results of the occupancy pattern analyzer14on representative windows for trojan-spy pairs.FIG.9Ais a graphical representation of a power spectrum of LLC occupancy data for a serial-onoff timing channel attack variant. In this example, a high normalized autocorrelation (0-lag) value of 0.93 is observed. An isolated, sharp peak in the corresponding frequency domain at310denotes concentrated power corresponding to transmission activity. FIG.9Dis a graphical representation of a power spectrum of LLC occupancy data for a para-pp timing channel attack variant. A similar behavior is observed in this example, where the signal power is even higher due to a larger number of repetitive swing patterns in LLC occupancy between trojan-spy. FIG.9Bis a graphical representation of a power spectrum of LLC occupancy data for a serial-pp timing channel attack variant.FIG.9Cis a graphical representation of a power spectrum of LLC occupancy data for a para-onoff timing channel attack variant. Interestingly, in the attack variants illustrated inFIGS.9B and9C, there exist two sharp peaks. This can be explained as follows: In some cache timing channels, there are usually two repetitive sets of behaviors at different frequency levels—1) prime+probe operations by the spy, and 2) cache accesses by the trojan. For example, in serial-pp, the spy performs cache evictions during prime+probe periodically and the trojan activity can create variations in eviction patterns. This creates two different frequencies that are observed as two separate peaks in the power spectrum ofFIG.9B. Similarly, in para-onoff, for every trojan operation, the spy performs repeated multiple probes and during each probe, it causes repetitive cache set evictions. These two aspects are represented as periodic signals with two frequencies in the power spectrum ofFIG.9C. In addition, the embodiment of COTSknight10is evaluated on benign workloads using two sets of benchmarks, namely SPEC2006 and CloudSuite (video streaming and memcached). Combinations of SPEC2006 benchmarks are run with reference inputs that exhibit various level of cache intensiveness. The two CloudSuite benchmarks are both cache-intensive workloads that are used for virtualized environments. To generate benign workloads, SPEC2006 benchmarks are first classified into two groups: 1) H-Group, that has cache-sensitive applications with high rate of misses per kilo instructions (MPKI) and LLC accesses (including GemsFDTD, leslie3d, mcf, lbm, milc, soplex, bwaves, omnetpp, bzip2); and 2) L-Group, that contains the rest of the applications with relatively low cache-sensitivity. Workloads are generated with three levels of cache sensitivity from these two groups: (i) highly cache-intensive workloads (hh-wd) where all four applications are assembled from within H-Group; (ii) medium cache-intensive workloads (hl-wd) with two applications randomly selected from H-Group and the other two from L-Group; (iii) low cache-intensive workloads (II-wd) where all four applications are chosen from L-Group. FIGS.10A-10Dillustrate results of the occupancy pattern analyzer14on representative windows for benign workloads. 
Sixty benign multi-program workloads are run (20 in each sensitivity level) where each application is an individual application domain18. The results show that a vast majority of domain pairs (79%) in benign workloads have very low normalized autocorrelation (0-lag) for the time-differentiated LLC occupancy traces. FIG.10Ais a graphical representation of a power spectrum of LLC occupancy data for a benign ll-wd workload (cal, hmm, gob, lib).FIG.10Bis a graphical representation of a power spectrum of LLC occupancy data for a benign hl-wd workload (Gem, hmm, xal, bwa).FIG.10Cis a graphical representation of a power spectrum of LLC occupancy data for a benign hh-wd workload (lbm, mil, sop, Gem). The power spectrums in these examples show no observable peaks. FIG.10Dis a graphical representation of a power spectrum of LLC occupancy data for another benign hh-wd workload (Gem, mcf, bzi, bwa). This example shows an interesting hh-wd workload where there is a high normalized autocorrelation (0-lag) and a number of small peaks in the frequency domain, (corresponding to GemsFDTD and mcf). However, note that the peaks are simply numerous (unlike timing channels) and their relative signal strengths are weak (<20). It was found that the high autocorrelation (0-lag) results from a series of swing pulses due to cache interference between GemsFDTD and mcf, and the cache timing modulation is simply too chaotic (at many different frequencies) for any real communication to take place. FIG.11is a graphical representation of a cumulative distribution function of peak signal power among benign workloads. The cumulative distribution function (CDF) is shown in thousands of analysis window samples (2.5 s) during execution of workloads. The peak signal power is observed to be less than 5 about 80% of the time, and higher than 50 for only about 2% of the time. This shows that a vast majority of benign workload samples do not exhibit high signal power, and are significantly less than any known timing channels (which usually have signal strength at well above 100). Effectiveness of the embodiment of COTSknight10is evaluated on two aspects: 1) ability to counter cache timing channels, and 2) partition trigger rate and performance impact on benign workloads. To minimize performance impact on the victim in side channels, it is noted that migrating the spy to a different server may be also considered as an alternative mitigation strategy. Defeating LLC Timing Channels. Multiple instances of cache timing channel attack variants were run with different background processes, as well as with varying numbers of target cache sets. It is observed that the power peaks are well above 100 a vast majority of time in all timing channels. There are a few windows during the attack setup phase where the peak values drop slightly below 100. To avoid any false negatives on real attacks, a very conservative signal power threshold of 50 was chosen to trigger LLC partitioning. Evaluation results show that COTSknight10identifies all of the trojan-spy domain pairs within five consecutive analysis windows (500 ms each) after they start execution. Under stronger security constraints, the analysis window sizes can be set to lower values. Partition Trigger Rate and Performance Impact for Benign Workloads. On benign workloads in ll-wd category, LLC partitioning was never triggered during their entire execution. 
Among all workloads with low to high cache intensiveness, only 6% of the domain pair population had LLC partitioning—these benchmarks covered 2% of the analysis window samples. FIG.12Ais a graphical representation of performance impact on benign workloads where COTSknight10triggers an LLC partition under an aggressive policy.FIG.12Bis a graphical representation of performance impact on benign workloads where COTSknight10triggers an LLC partition under a jail policy. Performance impact is represented as normalized instructions per cycle (IPC) for the workloads that were partitioned at runtime. LLC partitioning minimally impacts most of the applications (less than 5% slowdown), and interestingly, a performance boost is observed for many of them (up to 9.2% performance speedup). The overall impact on all the applications that ran with partitioned LLC was positive (about 0.4% speedup). This is because even benign applications can suffer from significant cache contention and LLC partitioning can be beneficial (e.g., soplex and omnetpp). The results show that the aggressive policy ofFIG.12A(that fully partitions suspicious pairs) shows higher variations in both performance gains and losses, while the jail policy ofFIG.12B(that partitions tentatively for 30 s until timeout) incurs lesser performance penalties (as well as lesser performance gains). Runtime Overhead. COTSknight10implements the non-intrusive LLC occupancy monitoring for only mutually distrusting application domains18identified by the system administrator58. The time lag to perform the autocorrelation and power spectrum analysis for the domain pairs is 25 ms, which means that COTSknight10offers rapid response to cache timing channel attacks. Overall, COTSknight10incurs less than 4% CPU utilization with 4 active mutually-distrusting application domains18. Note that the runtime overhead of COTSknight10does not necessarily scale quadratically with the number of application domains18in the system since not all domains would have active LLC traces in each analysis window and only mutually-distrusting domain pairs would need to be analyzed. FIG.13is a graphical representation of peak signal power for one hour of system operation, illustrating launch of an attack followed by COTSknight10mitigation through way allocation. This example implements a para-onoff attack that works cross-VM. For this, four KVM VMs were set up where the trojan and spy run on two of the VMs, and simultaneously, two other VMs co-run representative cloud benchmarks, namely video streaming (stream) and memcached (memcd) from CloudSuite, both of which are highly cache-intensive. Each VM instance runs Ubuntu-14.04 with 4 logical cores and 2 GB DRAM. A single RMID is assigned to each VM instance that runs for an hour. The trojan/spy pair is set to start the para-onoff attack at a random time between 0 and 300 s. In this example, COTSknight10is configured to use the aggressive policy to demonstrate the effectiveness of LLC partitioning. As illustrated inFIG.13, the trojan and spy start to build communication at around 188 s (when increasing signal power is observed). The peak signal power between the trojan and spy domain pair quickly climbs up to 126 at time 192.5 s, which indicates a strong presence of timing channel activity in the current analysis window. This quickly triggers the way allocation manager16of COTSknight10, which splits the LLC ways between trojan and spy VMs. 
Consequently, the maximum signal power drops back to nearly zero for the rest of execution, effectively preventing any further timing channels. Note that during the one hour experiment, the peak signal power values for the other domain pairs (involving CloudSuite applications) remained flat at values <3. V. Sophisticated Adversaries COTSknight10offers a new framework that builds on COTS hardware and uses powerful signal filtering techniques to eliminate noise, randomness or distortion to unveil timing channel activity. Filtering non-negatively correlated patterns and window-based averaging techniques to eliminate short swings were discussed above. This section discusses additional monitoring support and signal processing to detect sophisticated adversaries. A. Transmission at Random Intervals In theory, sophisticated adversaries may use randomized interval times between bit transmissions. For example, a trojan and spy can be imagined which set up a pre-determined pseudo-random number generator to decide the next waiting period before bit transmission. It should be noted that there does not exist any such demonstrated cache attack in the literature, and such an attack would be hard to synchronize under real system settings. Nevertheless, even if such attacks were feasible, COTSknight10can be adapted to recognize the attack through a signal pre-processing procedure called time warping that removes irrelevant segments from the occupancy traces (for which Δx, Δy are 0 in Equation 1) and aligns the swing patterns. After this step, the periodic patterns are reconstructed, and the cadence of cache accesses from adversaries can be recovered.FIGS.14A,14B, and15demonstrate detection of this attack scenario by COTSknight10. FIG.14Ais a graphical representation of an exemplary LLC occupancy trace for timing channel with transmission at random intervals. For illustration, this futuristic attack is implemented by setting up the trojan and spy as two threads within the same process, with the main thread configured to control the synchronization. As shown inFIG.14A, the LLC occupancy trace for this attack has random distances between the swing pulses. FIG.14Bis a graphical representation of the LLC occupancy trace ofFIG.14Aafter time-warping.FIG.15is a graphical representation of a power spectrum of the LLC occupancy trace ofFIG.14B. With time warping, high signal power peaks are observed. Additionally, when this signal compression pre-processing step is applied on benign workloads, no increase in partition trigger rate is observed. It should be noted that other heuristic-based filtering, such as rate of swing patterns per second, may also be used to reduce false triggering on benign applications (if needed). B. Other Potential Evasion Scenarios and Counter-Measures Attackers may also attempt to distort swing patterns in other ways. While these are hypothetical cases (often difficult to implement practically), they are discussed here to emphasize the robustness of COTSknight10even under extreme cases. Using clflush to Deflate LLC Occupancy. An adversary may try to compensate the increase in its own cache occupancy by issuing a clflush instruction. To handle such scenarios, clflush commands by suspicious application domains18may be tracked and the associated memory sizes can be accounted back to the issuing core, thus restoring original occupancy data for analysis. Using External Processes to Deflate LLC Occupancy. A spy may deflate its LLC occupancy changes by involving another helper process. 
Note that the suspect swing patterns in LLC occupancy will essentially appear in a trojan-helper pair instead of a trojan-spy pair. When COTSknight10isolates the helper, the trojan-spy pair will begin to show swing patterns in LLC occupancy. Self-Deflation of LLC Occupancy. Theoretically, another way to distort swings in LLC occupancy is to have the trojan and spy maintain shadow cache sets and perform the opposite operations to the ones performed on the transmission cache sets. However, completely eliminating the swing patterns requires the strong assumption that the spy (being the receiver) will know the change of occupancy patterns ahead of actual communication, which obviates the need for communication in the first place. On the other hand, if the trojan and spy fail to perform the perfect compensation, they will actually create a superposition of two swing patterns, which will also be a swing pattern. Note that, for side channels, it is impossible for the spy to enact this evasion method with a non-colluding victim. Creating Irregular Swing Patterns. The trojan/spy pair may hypothetically create irregular swings by working with an arbitrary number of cache sets at irregular intervals (the hardest of these to realize as a practical attack). To handle such cases, signal quantization techniques that abstract out the specific shape of the swing pulse through rounding and truncation may be used. After this step, the repetitive swing patterns will be recovered. Those skilled in the art will recognize improvements and modifications to the preferred embodiments of the present disclosure. All such improvements and modifications are considered within the scope of the concepts disclosed herein and the claims that follow. | 53,364
11861050 | DETAILED DESCRIPTION Physical Unclonable Functions (PUFs) have emerged as a promising solution to identify and authenticate integrated circuits (ICs). Generally, a physical unclonable function acts as one-way function that maps certain stable inputs (challenges) to pre-specified outputs (responses) in a semiconductor device. In accordance with the present disclosure, a Set/Reset (SR) Flip-Flop (FF) based PUF can generate challenge-response pairs within a design resulting from the manufacturing process variations. For example, an SR-FF can store a 1-bit signal depending on the valid input signals applied to its inputs. However, for invalid signals, SR-FF can output a valid signal due to relative timing differences created by manufacturing variations. Accordingly, in accordance with various embodiments, the present disclosure presents a novel NAND-based Set-Reset (SR) Flip-flop (FF) PUF design, such as for security enclosures of the area- and power-constrained Internet-of-Things (IoT) edge node, among other devices. An exemplary SR-FF based PUF is constructed during a unique race condition that is (normally) avoided due to inconsistency. The present disclosure shows, when both inputs (S and R) are logic high (‘1’) and followed by logic zero (‘0’), the outputs Q andQcan settle down to either 0 or 1 or vice-versa depending on statistical delay variations in cross-coupled paths. During experimental testing, the process variations were incorporated during SPICE-level simulations to leverage the capability of SR-FF in generating the unique identifier of an IC. Experimental results for 32 nm, 45 nm, and 90 nm process nodes show the robustness of SR-FF based PUF responses in terms of uniqueness, randomness, uniformity, and bit(s) biases. Furthermore, physical synthesis was performed to evaluate the applicability of an SR-FF based PUF on five designs from OpenCores in three design corners (best-case, typical-case, and worst-case). The estimated overhead for power, timing, and area in these three design corners are negligible. An exemplary embodiment of the SR-FF based PUF circuit presented herein can be employed in a resource constrained IoT (Internet of Things) device to perform secure authentication. Without any deployment of additional circuitry, an embodiment of the SR-FF based PUF can act as a frontier for systems that use Non-Volatile Memory (NVM) and computation-intensive cryptographic protocols. Given a design with memory elements implemented with SR-FF(s), an exemplary method of the present disclosure utilizes variations in transistor length and threshold voltage to generate PUF responses, in one embodiment. Such a method does not introduce any new circuit elements. Instead, it selectively chooses the response (output voltage) of the SR-FF(s) when a set of input signals are applied to it, in one embodiment. Comparatively, in recent years, a wide variety of PUF architectures have been investigated that can transform device properties (e.g. threshold voltage, temperature, gate length, oxide thickness, edge roughness) to a unique identifier or key of a certain length. In general, a PUF is a digital fingerprint that serves as a unique identity to silicon ICs and characterized by inter-chip and intra-chip variations. Inter-chip offers the uniqueness of a PUF that helps to conclude that the key or unique identifier produced for a die is different from other keys. Intra-chip determines the reliability of the key produced that should not change for multiple iterations on the same die. 
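As a hedged illustration of the kind of lightweight authentication such a PUF could support on an IoT edge node, the sketch below enrolls a handful of challenge-response pairs and later verifies a fresh readout within a Hamming-distance tolerance. The puf_response callable stands in for the hardware readout (applying a challenge and sampling the selected SR-FF outputs); the database layout and the 10% tolerance are illustrative choices of mine, not part of the disclosure.

```python
# Sketch of challenge-response authentication backed by a PUF readout.
# puf_response(challenge) -> integer response is a hypothetical stand-in
# for the hardware; tolerance and pair count are illustrative.
import secrets

HD_TOLERANCE = 0.10   # tolerate up to 10% noisy bits between readouts

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def enroll(puf_response, n_pairs=16, n_bits=128):
    """Verifier-side enrollment: record challenge/response pairs once."""
    db = {}
    for _ in range(n_pairs):
        challenge = secrets.randbits(n_bits)
        db[challenge] = puf_response(challenge)
    return db

def authenticate(puf_response, crp_db, n_bits=128):
    """Pick an unused enrolled challenge and check the fresh response."""
    challenge, expected = crp_db.popitem()     # each CRP used only once
    fresh = puf_response(challenge)
    return hamming_distance(fresh, expected) <= HD_TOLERANCE * n_bits
```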
For a signal, metastability occurs when the specifications for setup and hold time are not met and unpredictable random value appears at the output. Although metastable is an unstable condition, due to process variations, such metastability generates a stable but random state (either ‘0’ or‘1’), which is not known apriori. In previous works, metastability in cross-coupled paths has been exploited to design a PUF with a SR latch and Ring Oscillator (RO) circuitry. Although latch-based PUF designs offer unique signatures to ICs, they suffer from signal skew and delay imbalance in signal routing paths. Thus, additional hardware, such as Error Correction Code (ECC) circuitry, is commonly employed to post-process the instable PUF responses. For example, in a publication titled “Register PUF with No Power-Up Restrictions,” in 2018 IEEE ISCAS, pages 1-5 (May 2018), Su et al. presented cross-coupled logic gates to create a digital ID based on threshold voltage, in which the architecture was composed of a latch followed by a quantizer and a readout circuit to produce the PUF ID. However, a readout circuit is generally expensive and limits its application to a low-power device. FPGA-based SR-latch PUF was presented in a Habib et al. publication titled “Implementation of Efficient SR-Latch PUF on FPGA and SOC Devices,” in Microprocessors and Microsystems, 53:92-105 (2017), and an Ardakani et al publication titled “A Secure and Area-Efficient FPGA-Based SR-Latch PUF,” in 2016 IST, pages 94-99 (September 2016). Due to temporal operating conditions, ECC was employed to reliably map a one-to-one challenge-response pair in both approaches. To alleviate power-up values from a memory-based PUF, registers based on edge-triggered D-FF were proposed in the Su et al. publication. Here, the authors suggested to include an expensive synchronizer in Clock Domain Signal (CDC) signals to get a stable PUF response. A framework of ‘body-bias’ adjusted voltage on SR-latch timing using FD-SOI (Fully Depleted Silicon on Insulator) technology was presented in a Danger et al. publication titled “Analysis of Mixed PUF-TRNG Circuit Based on SR Latches in FD-SOI Technology,” in 2018 DSD, pages 508-515 (August 2018). To get a correct PUF response, authors employed buffers along the track at a top and bottom of latches that suffer from responses biasedness. Transient Effect Ring Oscillator (TERO) PUF, as described in L. Bossuet et al, “A PUF Based on a Transient Effect Ring Oscillator and Insensitive to Locking Phenomenon,” IEEE TETC, 2(1):30-36 (2014), utilized metastability to generate the responses with a binary counter, accumulator, and shift register. Although the architecture was scalable, it required large hardware resources. Thus, a TERO-PUF in the Bossuet publication incurred significant area overhead that included a counter, an accumulator, and a shift register. The foregoing deficiencies can be overcome by harvesting deep-metastability in bi-stable memory with SR-FF to design a low-cost PUF and high quality challenge-response pairs (CRPs) in accordance with the embodiments of the present disclosure. 
While the majority of works utilizing metastability to design a PUF employ additional hardware to count the oscillation frequency, the present disclosure is unlike these previous studies in that it (a) employs SR-FF (without additional hardware) to construct a low-cost PUF and (b) reuses the SR-FF already in the original intellectual property (IP) circuitry by varying channel length and threshold voltage to account for intra- and inter-chip variations. Accordingly, the present disclosure designs and analyzes an embodiment of a novel SR-FF based PUF. For a NAND gate based SR-FF, the input condition for S(Set)='1' and R(Reset)='1' must be avoided as it produces an inconsistent condition. In particular, when S=R='1' is applied followed by S=R='0', the outputs Q and Q̄ will undergo a race condition. Due to manufacturing variations, the state due to the race condition will settle as either '0' or '1'. Further, due to intra-chip process variations, some flip flops in a chip will settle in a '0' state, while others will settle in a '1' state, and, due to inter-chip variations, such a signature will be different across the chips. The present disclosure presents a PUF design that relies on the cross-coupled path in an SR-FF configuration. Each bit of a PUF response can be extracted from a metastability-induced random value in the output (Q) due to a particular input sequence at the SR-FF. This random value will eventually evaluate to a stable logic due to process variability. A clock enabled cross-coupled NAND-based SR-FF construction is shown inFIG.1which does not require an additional synchronizer to control the input conditions. The Set-Reset (SR) latch has the forbidden input combination, namely S=R=1, which results in both Q and Q̄ equal to 1. After the S=R=1 input, if both inputs are lowered (S=R=0), there is a race condition between the two cross-coupled NAND gates (ND1 and ND2), making Q and Q̄ linger around a Vdd/2 value. Although such a race condition is prohibited during normal or regular circuit operation, it can influence the output to generate a state determined by the mismatch in the underlying device parameters (such as transistor length, threshold voltage, etc.). An analysis of the race behavior is seemingly dependent on the precise phase relation between clock and input data. Such an input-referred event sequence can be exploited to generate a PUF response in accordance with the present disclosure. Next,FIG.2shows a transistor level schematic of an SR-FF for device variability analysis. As the input stimulus (logic '1') is applied to M2-M3, the PMOS devices of ND1 will be turned on and will produce logic '1' on OUT3. If the next set of (low) input signals appears within the active edge of CLK, a random binary value (e.g., high impedance) would appear on both OUT3 and OUT4. Hence, to reduce the possibility of a race condition, the transistor length or threshold voltage of one of the output NAND gates can be varied to increase the delay variability and generate a stable response. For example, to propagate the inputs from ND2 to OUT4 quickly, the transistors' (M12-M15) length can be sampled. Therefore, device parameter mismatch in a set of transistors can aid in evaluating a state faster, and those transistors (M8-M11) with smaller mismatch will fall behind in the race. Hence, the precise tuning of gate length not only helps to generate a PUF response but also helps to recover from metastability.
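The race behavior just described can be mimicked, purely behaviorally, with the toy model below: each die gets its own random delay mismatch between the two cross-coupled NAND feedback paths, and the sign of that mismatch decides how Q settles. The normal distribution and the sigma value are arbitrary stand-ins for manufacturing variation; this is not a substitute for the SPICE-level analysis discussed later, only an illustration of how per-instance mismatch yields a stable, chip-specific bit.

```python
# Behavioral illustration (not a SPICE model) of the S=R=1 -> S=R=0 race
# resolving into a chip-specific bit via delay mismatch. Distribution and
# sigma are arbitrary assumptions standing in for process variation.
import random

def sr_ff_puf_bit(rng, nominal_delay=1.0, sigma=0.05):
    """Return 1 if the ND1 feedback path wins the race (Q settles high)."""
    d_nd1 = rng.gauss(nominal_delay, sigma)   # feedback delay through ND1
    d_nd2 = rng.gauss(nominal_delay, sigma)   # feedback delay through ND2
    return 1 if d_nd1 < d_nd2 else 0

def chip_signature(seed, n_bits=16):
    """Each seed stands in for one die; intra-die variation gives its bits."""
    rng = random.Random(seed)
    return [sr_ff_puf_bit(rng) for _ in range(n_bits)]

if __name__ == "__main__":
    for die in range(3):                      # three hypothetical dies
        print(die, chip_signature(die))
```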
For analysis and comparison purposes withFIG.4(below),FIG.3shows the architecture of dual-mode n-bit array SR-FFs with an input multiplexer (MUX) to select either a PUF mode or a regular mode. As each SR-FF would generate a single bit key, a PUF signature of the maximum size of FF instances can be obtained. However, the PUF signature suffers from a multiplexer output that has to be sufficiently long to reach all SR-FF instances. The depicted architecture also increases the delay to produce random output at Q depending on the longest distance from MUX output to an SR-FF instance. As a result, both higher wire length from the MUX output and the maximum transition time due to metastability will decrease the timing performance of an SR-FF based PUF during a regular operation. Furthermore, such architecture may be susceptible to a key-guessing attack under a single clock pulse. Hence, the architecture inFIG.3is biased towards variations in the connecting wire length and width. This, in turn, reduces the impact of the transistors' local variation. In short, the higher the depth of PUF timing paths, the less its response will depend on the transistors' behavior. Next,FIG.4shows a centroid architecture of 16-bit SR-FFs (e.g., a 4×4 grid) built upon the architecture ofFIG.3with additional MUXs to improve (or reduce) the delay and thwart any potential key-guessing attack. The architecture ofFIG.4also results in improved bit distribution by preventing edge-effects. In the figure, each multiplexer has a three selector bit, of which, two are used to select an SR-FF in a grid and the remaining bit is used for determining mode (PUF (non-regular)) or normal (regular)) selection. In various embodiments, a controller is embedded in the architecture to aid in the signal extraction process. Depending on the number of controllable MUXs, the size of the partitions or grids can increase or decrease. During analysis and testing, delay variations are investigated in NAND gates of the feedback path that most affect the gate delay. The disclosed concepts are validated with SPICE-level simulations for 32 nm, 45 nm, and 90 nm process nodes to establish the robustness of the proposed PUF responses for 16-, 32-, 64-, and 128-bit responses. In particular, Monte Carlo (MC) simulations of SR-FF PUF at SPICE level are performed using Synopsys HSPICE for three CMOS processes (32 nm, 45 nm, and 90 nm). MC can perform device variability analysis within six-sigma limit, hence the Challenge-Response Pairs (CRPs) collected using MC is comparable to CRPs from manufactured ICs. The PUF structure is simulated for 1000 iterations, analogous to 1000 different dies on a 300 mm wafer at nominal voltage (1V). Several works, such as D. Lim et al, “Extracting Secret Keys from Integrated Circuits, IEEE TVLSI, 13(10):1200-1205 (October 2005), G. E. Suh and S. Devadas, “Physical Unclonable Functions for Device Authentication and Secret Key Generation,” in 2007 44th ACM/IEEE DAC, pages 9-14 (June 2007), and U. Rhrmair et al, “PUF Modeling Attacks on Simulated and Silicon Data,” IEEE TIFS, 8(11):1876-1891 (November 2013), in the literature have validated PUF design through SPICE level simulations. PUF responses are then evaluated according to parameters proposed by a Maiti et al. publication titled “A Systematic Method to Evaluate and Compare the Performance of Physical Unclonable Functions” (2011) which include uniqueness, reliability, uniformity/randomness, and bit aliasing/response collision. 
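The evaluation metrics named above are commonly computed as in the following sketch (my own helper functions, following the usual definitions from the PUF literature rather than any formula given in this disclosure): uniqueness as the average pairwise inter-chip Hamming distance, uniformity as the fraction of ones in a single response, and bit-aliasing as the per-position fraction of ones across chips.

```python
# Sketch of standard PUF quality metrics. `responses` is a list of equal-
# length bit lists, one per chip, all answering the same challenge.
from itertools import combinations

def uniqueness(responses):
    """Average pairwise inter-chip Hamming distance as a fraction (ideal 0.5)."""
    n_bits = len(responses[0])
    hds = [sum(a != b for a, b in zip(r1, r2)) / n_bits
           for r1, r2 in combinations(responses, 2)]
    return sum(hds) / len(hds)

def uniformity(response):
    """Fraction of 1s in one chip's response (ideal 0.5)."""
    return sum(response) / len(response)

def bit_aliasing(responses):
    """Per bit position, fraction of chips producing a 1 (ideal 0.5 each)."""
    n_chips = len(responses)
    return [sum(bits) / n_chips for bits in zip(*responses)]
```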
Although process variations impact the channel length, length variability is maintained within (intra-die) 15% and across (inter-die) 33% of nominal value to generate CRPs. The performance overhead of physical synthesis is also analyzed for five register-transfer-level (RTL) designs with centroid architecture. As discussed above, PUF responses may be evaluated in terms of uniqueness, reliability, uniformity/randomness, and bit aliasing/response collision. Uniqueness provides a measurement of interchip variation. The uniqueness can be measured by calculating Hamming Distance (HD) of two pair-wise dies. Ideally, two dies (chips) show a distinguishable response (HD˜50%) to a common challenge.FIGS.5A-5Cshows inter-chip HD of four different key sequences. For all keys, two thousand comparisons were made to verify uniqueness. One can see that the average HD for all key-lengths are close to 49%. Next, the reliability can be measured from Bit Error Rate (BER) of PUF responses for intra-chip variation. Ideally, a PUF should maintain the same response (100% reliable or 0% variation) on different environmental variations (supply voltage, temperature) under the same challenge.FIGS.5D-5Fshow the intra-die HD for five key length in three process nodes at a different temperature (0° C. to 80° C.). The reliability (HD=0) for 16-, 32-, 64-, and 128-bit registers are 92.3%, 92.2%, 90.7%, and 92.7% respectively. For uniformity/randomness, uniformity measures the ability of a PUF to generate uncorrelated ‘0’s and ‘1’s in the response. Ideally, PUF should generate ‘0’s and ‘1’s with equal probability in a response. This ensures the resilience of guessing PUF response from a known challenge. The probability of zero is bound within 0.5 and 0.7 for four different key lengths inFIGS.6A-6C. Although the ideal value of uniform probability should be 0.5, variability in gate delay due to process variability impacts the even distribution of ‘0’s and ‘1’s. To evaluate the bit aliasing, the same set of responses in uniqueness are used, in which the average probability of collision is less than 30%, as shown inFIGS.7A-7C. As the reference response is chosen randomly and compared to the rest of the responses, an adversary can guess, on average, less than 30% of the correct responses. Hence, the generated responses are resistant to a key-guessing attack. For physical synthesis analysis, Table I (see below) lists the required resistance and capacitance (routing and parasitic) values during cell characterization for achieving metastable state in one embodiment being tested for three design corners (best-case, typical-case, and worst-case). Accordingly, the inter-transistor routing across all wire load models are presented in Table II (see below). For this analysis, input voltages (0.7V-1.32V) are varied with on-chip variation enabled during synthesis. The number of bits in Table III (see below) represent the possible key length of design. Across different wire load models of a particular design corner, more delay and power variations are observed due to variable resistance and capacitance. For an 8-bit microprocessor (μP), the centroid architecture is adjacent to high-activity logic; hence, increased PPA (power, performance, area) overhead is seen. In the remaining designs, best-case minimizes the area and delay overhead and during worst-case, a reduction in power overhead is seen. 
TABLE I
Wireload    Best                     Typical                  Worst
Model       Cap.       Res.          Cap.       Res.          Cap.       Res.
8000        0.00028    1.42E−03      0.000312   1.57E−03      0.000343   1.73E−03
16000       0.000512   1.15E−03      0.000569   1.28E−03      0.000625   1.41E−03
35000       0.000243   1.07E−03      0.00027    1.19E−03      0.000297   1.31E−03
70000       0.000128   9.00E−03      0.000143   1.00E−02      0.000157   1.10E−02

TABLE II
Wire Width      (0.45, 0.9, 1.35, 1.8, 2.25)
Wire Spacing    (0.45, 0.9, 1.35, 1.8, 2.25, 2.7, 3.15, 3.6, 4.05, 4.5, 4.95, 5.4, 5.85, 6.3, 6.75, 7.2)

TABLE III
                      No. of   Best-Case                      Typical-Case                   Worst-Case
Design                Bits     Area (%)  Power (%)  Delay (%)  Area (%)  Power (%)  Delay (%)  Area (%)  Power (%)  Delay (%)
AES128                1072     0.009     4.065      2.622      0.017     1.301      3.836      0.320     0.473      6.671
DES                   1827     0.022     0.963      0.824      0.037     0.39       1.923      0.604     0.058      3.968
Triple DES            2083     0.010     0.698      0.781      0.035     0.510      1.858      0.711     0.067      3.095
8-bit uP              386      0.584     4.884      0          2.51      2.658      0.175      4.185     0.021      6.096
Canny edge Detector   2027     0.109     1.587      1.354      2.487     0.164      5.812      3.585     0.101      6.676
Average                        0.146     2.439      1.116      1.017     1.004      2.720      1.881     0.144      5.301

In general, embodiments of the present disclosure use the existing SR flip-flop device in a new SR-FF based PUF design to quantify its race condition for PUF implementation. In various embodiments, the present disclosure embeds a centroid architecture with SR-FFs so that PUF responses conform to local transistor variations only. The generated responses exhibit better uniqueness, randomness, and reliability and reduced bit-aliasing compared to other metastability-based PUFs. In various embodiments, the present disclosure also performs layout-level simulation with foundry data on multiple designs (e.g., 5 designs) that incorporate SR-FFs and presents their figures of merit (power, timing, and area). Accordingly, embodiments of an SR-FF based PUF device in accordance with the present disclosure utilize SR-FFs already present in the registers of a design without any ECC or helper data. The responses are free from multiple key establishments, which can thwart a reliability-based attack. Additionally, various embodiments of the SR-FF based PUF device can produce or generate an input-dependent, random yet stable binary sequence aided by unpredictable manufacturing variability. Depending on the input challenges, only a fraction or subset of the SR-FFs may be utilized to create a unique device signature. Therefore, using only a subset of the available SR-FFs increases an attacker's reverse-engineering effort to determine the exact location of the SR-FFs that participate in PUF response generation. Additionally, various embodiments of the SR-FF PUF device are implemented having a centroid architecture such that only the surrounding (local) transistor variations affect the PUF responses, and the associated overhead can be evaluated through layout-based synthesis. It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations, merely set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims. | 20,222
11861051 | DETAILED DESCRIPTION OF THE EMBODIMENTS In an embodiment, a cryptographic accelerator (processor) retrieves data blocks for processing from a memory. These data blocks arrive and are stored in an input buffer in the order they were stored in memory (or other known order)—typically sequentially according to memory address (i.e., in-order.) The processor waits until a certain number of data blocks are available in the input buffer and then randomly selects blocks from the input buffer for processing. This randomizes the processing order of the data blocks. The processing order of data blocks may be randomized within sets of data blocks associated with a single read transaction, or across sets of data blocks associated with multiple read transactions. Randomizing the processing order of the data blocks provides resistance to side-channel analysis techniques—such as differential power analysis (DPA). This randomization of processing order provides resistance to DPA and related attacks by making it difficult for an attacker to match up the side-channel information collected with the precise block of data being processed. In an embodiment, the processed data blocks are written to memory sequentially (i.e., in-order.) In another embodiment, the processed data blocks are written to memory out-of-order. The processed data blocks written to memory out-of-order may be reordered by software. FIG.1is a block diagram of a cryptographic processing system. InFIG.1, cryptographic processing system100comprises cryptographic processor110and memory system160. Cryptographic processor110includes cryptographic engine111, memory request engine115, read buffer130, write buffer131, and block selector150. Block selector150includes random number generator151. Read buffer130is operatively coupled to block selector150, memory system160, and cryptographic engine111. Block selector150is operatively coupled to cryptographic engine111and read buffer130. Memory request engine115is operatively coupled to cryptographic engine111and memory system160. Write buffer131is operatively coupled to cryptographic engine111and memory system160. Cryptographic engine111can perform cryptographic processing on fixed length strings of bits. These fixed length strings of bits are referred to as blocks. The length of this bit string is the block size. For example, cryptographic engine111can perform cryptographic processing (e.g., cipher and decipher) that follows the Data Encryption Standard (DES) which uses a block size of 64 bits (8 bytes). In another example, cryptographic engine111can perform cryptographic processing that conforms to the Advanced Encryption Standard (AES) which uses a block size of 128 bits (16 bytes). Cryptographic engine111may also perform cryptographic processing according to other block cipher algorithms. Memory request engine115is responsive to cryptographic engine111. Memory request engine115is responsive to cryptographic engine111to generate read requests to retrieve data from memory system160for use by cryptographic engine111. Memory request engine115is responsive to cryptographic engine111to generate write requests to store data processed by cryptographic engine111into memory system160. Memory request engine115may use direct memory access (DMA) techniques and/or protocols to interface with memory system160. In response to read requests from memory request engine115, memory system160returns read data to cryptographic processor110. 
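Before the individual components are described further, the overall flow introduced above (fetch blocks in order, buffer them, process them in a randomized order, and write the results back to their original positions) can be summarized with a short software model. The sketch below is a hypothetical behavioral model and not a description of cryptographic processor 110 itself; the XOR "cipher" and the function names are placeholders used only for illustration.

```python
import secrets

BLOCK_BYTES = 16  # e.g., the AES block size

def process_buffer_randomized(read_buffer, encrypt_block):
    """Process every block in read_buffer exactly once, in a random order,
    and return results placed at the same indices as their inputs."""
    order = list(range(len(read_buffer)))
    # Fisher-Yates shuffle driven by a cryptographic RNG (models the role of
    # the random number generator used by the block selector).
    for i in range(len(order) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        order[i], order[j] = order[j], order[i]
    write_buffer = [None] * len(read_buffer)
    for idx in order:                      # randomized processing order
        write_buffer[idx] = encrypt_block(read_buffer[idx])
    return write_buffer

# Toy stand-in for a real block cipher (for illustration only).
toy_key = secrets.token_bytes(BLOCK_BYTES)
toy_cipher = lambda blk: bytes(a ^ b for a, b in zip(blk, toy_key))

blocks = [bytes([i]) * BLOCK_BYTES for i in range(8)]   # blocks A..H
out = process_buffer_randomized(blocks, toy_cipher)
assert [toy_cipher(b) for b in blocks] == out           # results end up in order
```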
Read data returned from memory system160is written into read buffer130by cryptographic processor110. Data in read buffer130is stored in read buffer130until it is sent to cryptographic engine111for processing. Memory system160can comprise a memory controller, memory modules, and/or memory devices. Memory system160may include a memory controller and memory components that are integrated circuit type devices, such as are commonly referred to as “chips.” A memory controller manages the flow of data going to and from memory devices and/or memory modules. A memory controller may couple to multiple processing devices. For example, in addition to cryptographic processor110, memory system160may couple data going to and from memory devices to at least one additional processor. This processor may be referred to as a “compute engine,” “computing engine,” “graphics processor,” “rendering engine,” “processing unit,” “accelerator”, “offload engine,” and/or GPU. This processor may include and/or be a heterogeneous processing unit that includes the functions of one or more of a CPU, GPU, video processor, etc. This processor may include, or be, a serial-ATA (SATA), serial attached SCSI (SAS), eSATA, PATA, IEEE 1394, USB (all revisions), SCSI Ultra, FiberChannel, Infiniband, Thunderbolt, or other industry standard I/O interfaces (such as PCI-Express—PCIe). This processor may include, or be, a network processor unit (NPU) such as a TCP offload engine (TOE), a protocol translator (e.g., TCP over SATA, TCP over PCI-Express, accelerated SCSI interconnect, etc.), and/or a protocol packet translator. This processor may include, or be, a fixed function graphics processing unit, a digital signal processor (DSP), a signal path processor, a Fourier transform processor, an inverse Fourier transform processor, and/or a media format encoder/decoder (e.g., JPEG, DVX, AVI, MP2, MP3, MP4, Blu-ray, HD-DVD, DVD, etc.). Memory components may be standalone devices, or may include multiple memory integrated circuit dies—such as components of a multi-chip module. A memory controller can be a separate, standalone chip, or integrated into another chip. For example, a memory controller may be included on a single die with a microprocessor (and/or with a cryptographic processor—e.g., cryptographic processor110), or included as part of a more complex integrated circuit system such as a block of a system on a chip (SOC). Memory system160can include multiple memory devices coupled together to form a block of storage space. Memory system160can include, but is not limited to, SRAM, DDR3, DDR4, DDR5, XDR, XDR2, GDDR3, GDDR4, GDDR5, LPDDR, and/or LPDDR2 and successor memory standards and technologies. Memory system160can include a stack of devices such as a through-silicon-via (TSV) stack and/or a hybrid memory cube (HMC). Further information about HMC is available from the Hybrid Memory Cube Consortium (http://hybridmemorycube.org/). Read requests from request engine115may instruct memory system160to provide multiple data blocks. For example, a single read request from request engine115may instruct memory system160to provide 256 bits of data to be stored in read buffer130. This is equivalent to 4 blocks of 64 bits, which is the cipher block size of DES. Likewise, write requests from request engine115may instruct memory system160to write multiple data blocks stored in write buffer131. In an embodiment, block selector150randomly selects entries (data blocks) in read buffer130for cryptographic processing by cryptographic engine111. 
This random selection may be based on one or more random numbers generated by random number generator151. The random selection may also be based on random numbers provided by a random number generator external to cryptographic processor110. In this case, random number generator151may not be present. The data blocks selected by block selector150may be successively confined to a single defined group until all of the data blocks in the group are selected (processed). In other words, block selector150may randomly select data blocks among a group (set) of data blocks until all of the data blocks in the group have been selected. Once all of the data blocks in the group have been selected, block selector150may proceed to a second group and start randomly selecting from among that group. In this manner, all of the data blocks within a group are processed before cryptographic engine111starts processing data blocks from another group. In an embodiment, these data block groups correspond to sets of data blocks that are received in response to a single read request sent to memory system160. In another embodiment, the data blocks selected by block selector150may span multiple groups. In other words, block selector150may randomly select data blocks among multiple groups (sets) of data blocks until all of the data blocks in those groups have been selected. Once all of the data blocks in the multiple groups have been selected, block selector150may proceed to a second set of multiple groups and start randomly selecting from among the set that spans these multiple groups. In an embodiment, these multiple data block groups correspond to sets of data blocks that are received in response to a corresponding multiple number of single read requests sent to memory system160. In another embodiment, the data blocks selected by block selector150may comprise any unprocessed (i.e., yet to be selected) data blocks stored in read buffer130. In other words, block selector150may randomly select data blocks from any valid (i.e., unprocessed) location within read buffer130. As new data blocks arrive in read buffer130from memory system160, they become valid entries in read buffer130and available for random selection by block selector150. After processing by cryptographic engine111, the results (i.e., processed data) are stored in write buffer131. In an embodiment, the processed data stored to write buffer131may be stored in a random access fashion such that the location of the results in write buffer131corresponds to the order of locations of the associated input blocks of data in memory system160. In an embodiment, the processed data stored to write buffer131may be stored in write buffer131in the random order the associated data blocks were processed (i.e., selected from read buffer130.) When the processed data is stored in write buffer131in a random location order, the data may be written to memory system160in a random access fashion such that the order of the locations of the results in memory system160corresponds to the order of associated read data in memory system160. Alternatively, when the processed data is stored in write buffer131in a random location order, the data may be written to memory system160in that random location order and then reordered by software. FIG.2Ais an illustration of randomized cryptographic processing within read sets of blocks. The operations illustrated inFIG.2Amay be performed by one or more elements of cryptographic processing system100. 
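Before walking through the FIG.2A example, the selection behavior described above can be modeled in a few lines of software. The sketch below is a hypothetical model of a block selector operating in the mode where any valid (unprocessed) read-buffer entry is eligible; the class and method names are invented for illustration and do not describe the actual circuitry of block selector 150.

```python
import secrets

class BlockSelector:
    """Model of a block selector that picks a random unprocessed read-buffer
    entry each time it is asked. New entries become eligible as reads complete."""

    def __init__(self):
        self.valid = []        # indices of unprocessed entries in the read buffer

    def on_read_data(self, indices):
        """Mark newly arrived read-buffer entries as valid (eligible)."""
        self.valid.extend(indices)

    def next_block(self):
        """Randomly select (and retire) one eligible entry, or None if empty."""
        if not self.valid:
            return None
        pick = secrets.randbelow(len(self.valid))
        return self.valid.pop(pick)

selector = BlockSelector()
selector.on_read_data(range(0, 4))    # read set #1 (blocks A-D)
selector.on_read_data(range(4, 8))    # read set #2 (blocks E-H)
order = [selector.next_block() for _ in range(8)]
print(order)                          # a random permutation spanning both sets
```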
InFIG.2A, blocks of data A-H are stored sequentially in memory system260starting at location zero (0). In other words, data block “A” is stored at memory location “0”, data block “B” is stored at memory location “1”, data block “C” is stored at memory location “2”, and so on. Also illustrated inFIG.2A, processed blocks of data AC-HCare stored sequentially in memory system260starting at location N. In other words, data block “AC” is stored at memory location “N”, data block “BC” is stored at memory location “N+1”, data block “CC” is stored at memory location “N+2”, and so on. Each data block A-H and processed data block AC-HCrepresent blocks of data that correspond in size to the block size of the cipher algorithm being performed. It should be understood that the use of the specific addresses “0” to “7” and “N” to “N+7” is for illustration purposes. The description given here can be extended to read and write areas of varying size and location. Additionally, the addresses used in memory request engine115may be logical addresses. The logical addresses may not directly identify physical storage elements. Physical storage locations may be identified by physical addresses, which may be obtained from the logical addresses by applying one or more translations. The translations may be performed in memory system160, or elsewhere. Data is read from memory system260and stored in read buffer230in sets of blocks that comprise multiple blocks of data. This is illustrated inFIG.2Aby the arrow labeled READ #1indicating the copying of blocks A to D from locations0to3, respectively, in memory system260to locations0to3in read buffer230. Thus, locations0to3in read buffer230are part of a read set of blocks that was retrieved in response to READ #1. This operation is also illustrated by the arrow labeled READ #2indicating the copying of blocks E to H from locations4to7, respectively, in memory system260to locations4to7in read buffer230. Thus, locations4to7in read buffer230are part of a read set of blocks that was retrieved in response to READ #2. Block selector250randomly selects blocks from a read set until all of the blocks in a read set are processed. In other words, block selector250first selects blocks randomly from among blocks A-D until all of blocks A-D are processed, then selects blocks randomly from among blocks E-H until all of blocks E-H are processed, and so on. In this manner, the blocks associated with READ #1are processed in an order that is a random permutation of the order they were copied into read buffer130(and/or were stored in memory system260.) Likewise, the blocks associated with READ #2are processed in an order that is a random permutation of the order they were copied into read buffer130(and/or were stored in memory system260.) The processing of the blocks associated with READ #1is illustrated inFIG.2Aby the arrows from blocks A to D leading through block selector250to processing order212. Processing order212illustrates an example random selection by block selector250where block C was processed first, block A second, block D third, and block B fourth. The processing of the blocks associated with READ #2is illustrated inFIG.2Aby the arrows from blocks E to H leading through block selector250to processing order212. Processing order212illustrates an example random selection by block selector250where block E was processed fifth, block H sixth, block F seventh, and block G eighth. It should be understood thatFIG.2Aillustrates one of many possible sequences for processing order212. 
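To quantify "one of many possible sequences": confining the randomization to each four-block read set (as in the FIG.2A example) allows 4!×4!=576 distinct processing orders for blocks A-H, while randomizing across both read sets (as in the later FIG.2B through FIG.2D examples) allows 8!=40,320. A minimal calculation, included only as an illustration:

```python
from math import factorial

# Possible processing orders for the eight blocks A-H in the examples above.
per_read_set = factorial(4) * factorial(4)   # randomization confined to each 4-block read set
across_sets = factorial(8)                   # randomization spanning both read sets
print(per_read_set, across_sets)             # 576 40320
```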
In an embodiment, the sequences for processing order212may be determined by, for example, block selector150, using random numbers as discussed herein. The use of random numbers to help determine processing order212can, upon commencement or execution of a cipher operation by cryptographic processor110, make the order (i.e., processing order212) in which the blocks (e.g., blocks A-H) are processed unpredictable (or at least more difficult to predict than a predetermined order.) It should also be understood thatFIG.2Aillustrates an embodiment where multiple data blocks (i.e., set of blocks) are received (retrieved) from memory in response to a single request. These blocks are then cryptographically processed in a random order. The randomization of the processing order is limited to the blocks received in response to a single read transaction. In other words, the blocks associated with a first read (e.g., READ #1) are processed in a random order, but are all processed before the blocks associated with the next read (e.g., READ #2) are processed. Thus,FIG.2Ais an illustration of randomized cryptographic processing within read sets of blocks. After processing, cryptographically processed (e.g., encrypted or decrypted) versions of data blocks A-H are placed in write buffer231. The cryptographically processed versions of data blocks A-H are illustrated as processed data blocks AC-HC, respectively. The placement of processed data blocks AC-HCin write buffer231is illustrated inFIG.2Aby an arrow from processing order212to locations0to3of write buffer231. Processed blocks ACto DCare shown being placed in locations0to3, respectively, of write buffer231. It should be noted that processed blocks ACto DCare placed in write buffer231in the same location order that the corresponding unprocessed blocks A to D were placed in read buffer230—even though blocks A to D were processed in a random order (by, for example, cryptographic engine111) into cryptographically processed versions ACto DC. Data is written to memory system260from write buffer231in sets of blocks that comprise multiple blocks of data. This is illustrated inFIG.2Aby the arrow labeled WRITE #1indicating the copying of processed blocks ACto DCfrom locations0to3, respectively, in write buffer231to locations N to N+3 of memory system260. Thus, locations0to3in write buffer231are part of a write set of blocks that is written in response to WRITE #1. This operation is also illustrated inFIG.2Aby the arrow labeled WRITE #2indicating the copying of processed blocks ECto HCfrom locations4to7, respectively, in write buffer231to locations N+4 to N+7 of memory system260. Thus, locations4to7in write buffer231are part of a write set of blocks that is written in response to WRITE #2. Thus, it should be apparent that after being processed in a random order (at least within sets of blocks), the memory location order (in memory system260) of blocks A-H corresponds to the memory location order (in memory system260) of processed blocks AC-HC. FIG.2Bis an illustration of randomized cryptographic processing across read sets of blocks and using ordered write sets. The operations illustrated inFIG.2Bmay be performed by one or more elements of cryptographic processing system100. InFIG.2B, blocks of data A-H are stored sequentially in memory system260starting at location zero (0). In other words, data block “A” is stored at memory location “0”, data block “B” is stored at memory location “1”, data block “C” is stored at memory location “2”, and so on. 
Also illustrated inFIG.2B, processed blocks of data AC-HCare stored sequentially in memory system260starting at location N. In other words, data block “AC” is stored at memory location “N”, data block “BC” is stored at memory location “N+1”, data block “CC” is stored at memory location “N+2”, and so on. Each data block A-H and processed data block AC-HCrepresent blocks of data that correspond in size to the block size of the cipher algorithm being performed. Data is read from memory system260and stored in read buffer230in sets of blocks that comprise multiple blocks of data. This is illustrated inFIG.2Bby the arrow labeled READ #1indicating the copying of blocks A to D from locations0to3, respectively, in memory system260to locations0to3in read buffer230. Thus, locations0to3in read buffer230are part of a read set of blocks that was retrieved in response to READ #1. This is also illustrated by the arrow labeled READ #2indicating the copying of blocks E to H from locations4to7, respectively, in memory system260to locations4to7in read buffer230. Thus, locations4to7in read buffer230are part of a read set of blocks that was retrieved in response to READ #2. Block selector250randomly selects blocks from read buffer230for processing. In an embodiment, block selector250randomly selects blocks from a plurality of read sets until all of the blocks in those read sets are processed. In other words, block selector250may first select blocks randomly from among blocks A-H until all of blocks A-H are processed, then selects blocks randomly from among other blocks in read buffer230, and so on. In another embodiment, block selector250may randomly select blocks from read buffer230without regard to which read request caused a particular block to be read from memory system260. A weighting or queueing scheme (e.g., random fair queueing, random early detection, weighted random early detection, and random early detection In/Out) to the random selection of blocks in read buffer230may be implemented to ensure blocks that have been in read buffer230are eventually selected within a reasonable period of time. In this manner, the blocks associated with READ #1and READ #2are processed in an order that is a random permutation of the order they were read into read buffer130(and/or were stored in memory system260.) The processing of the blocks in read buffer230is illustrated inFIG.2Bby the arrow from blocks A to H leading through block selector250to processing order212. Processing order212illustrates an example random selection where block C (from READ #1) was processed first, block G (from READ #2) second, block E third (from READ #2), block B (from READ #1) fourth, block A (from READ #1) fifth, block F (from READ #2) sixth, block D (from READ #1) seventh, and block H (from READ #2) eighth. Accordingly, it should be understood thatFIG.2Billustrates an embodiment where multiple data blocks (i.e., set of blocks) are received (retrieved) from memory in response to multiple requests. These blocks are then cryptographically processed in a random order. The randomization of the processing order is limited to the blocks already received and not processed, but is also not limited to those blocks received in response to a single read transaction. In other words, the blocks associated with a first read (e.g., READ #1) are processed in a random order randomly intermingled with the processing of randomly selected blocks associated with at least one other (e.g., the next—READ #2) read transaction. 
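The weighting or queueing idea mentioned above can be approximated in software by biasing each random draw toward entries that have waited longer, which bounds how long any block can remain unselected. The sketch below only illustrates that general idea with a simple age-based weight; it is not an implementation of the random-early-detection schemes named above, and the field names are assumptions made for the example.

```python
import random

def select_with_aging(entries, rng=random):
    """entries: list of dicts like {'index': buffer_slot, 'age': cycles_waiting}.
    Draw one entry at random, weighting older entries more heavily so that every
    block is eventually selected within a bounded time; the rest are aged."""
    weights = [1 + e['age'] for e in entries]              # older -> more likely
    pick = rng.choices(range(len(entries)), weights=weights, k=1)[0]
    chosen = entries.pop(pick)
    for e in entries:
        e['age'] += 1
    return chosen['index']

pending = [{'index': i, 'age': 0} for i in range(8)]
order = [select_with_aging(pending) for _ in range(8)]     # random, but age-bounded
print(order)
```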
Thus,FIG.2Bis an illustration of cryptographic processing with a processing order randomized across read sets of blocks. After processing, cryptographically processed (e.g., encrypted or decrypted) versions of data blocks A-H are placed in write buffer231. The cryptographically processed versions of data blocks A-H are illustrated as processed data blocks AC-HC, respectively. The placement of processed data blocks AC-HCin write buffer231is illustrated inFIG.2Bby an arrow from processing order212to locations0to7of write buffer231. Processed blocks ACto HCare shown being placed in locations0to7, respectively, of write buffer231. It should be noted that processed blocks ACto HCare placed in write buffer231in the same location order that the corresponding unprocessed blocks A to H were placed in read buffer230—even though blocks A to H were processed in a random order (by, for example, cryptographic engine111) into cryptographically processed versions ACto HC. Data is written to memory system260from write buffer231in sets of blocks that comprise multiple blocks of data. This is illustrated inFIG.2Bby the arrow labeled WRITE #1indicating the copying of processed blocks ACto DCfrom locations0to3, respectively, in write buffer231to locations N to N+3 of memory system260. Thus, locations0to3in write buffer231are part of a write set of blocks that is written in response to WRITE #1. This is also illustrated inFIG.2Bby the arrow labeled WRITE #2indicating the copying of processed blocks ECto HCfrom locations4to7, respectively, in write buffer231to locations N+4 to N+7 of memory system260. Thus, locations4to7in write buffer231are part of a write set of blocks that is written in response to WRITE #2. Thus, it should be apparent that after being processed in a random order that encompasses multiple read sets, the memory location order (in memory system260) of blocks A-H corresponds to the memory location order (in memory system260) of processed blocks AC-HC. FIG.2Cis an illustration of randomized cryptographic processing across read sets of blocks with write transactions ordering the blocks in memory. The operations illustrated inFIG.2Cmay be performed by one or more elements of cryptographic processing system100. InFIG.2C, blocks of data A-H are stored sequentially in memory system260starting at location zero (0). In other words, data block “A” is stored at memory location “0”, data block “B” is stored at memory location “1”, data block “C” is stored at memory location “2”, and so on. Also illustrated inFIG.2C, processed blocks of data AC-HCare stored sequentially in memory system260starting at location N. In other words, data block “AC” is stored at memory location “N”, data block “BC” is stored at memory location “N+1”, data block “CC” is stored at memory location “N+2”, and so on. Each data block A-H and processed data block AC-HCrepresent blocks of data that correspond in size to the block size of the cipher algorithm being performed. Data is read from memory system260and stored in read buffer230in sets of blocks that comprise multiple blocks of data. This is illustrated inFIG.2Cby the arrow labeled READ #1indicating the copying of blocks A to D from locations0to3, respectively, in memory system260to locations0to3in read buffer230. Thus, locations0to3in read buffer230are part of a read set of blocks that was retrieved in response to READ #1. 
This is also illustrated by the arrow labeled READ #2indicating the copying of blocks E to H from locations4to7, respectively, in memory system260to locations4to7in read buffer230. Thus, locations4to7in read buffer230are part of a read set of blocks that was retrieved in response to READ #2. Block selector250randomly selects blocks from read buffer230for processing. In an embodiment, block selector250randomly selects blocks from a plurality of read sets until all of the blocks in those read sets are processed. In other words, block selector250may first select blocks randomly from among blocks A-H until all of blocks A-H are processed, then selects blocks randomly from among other blocks in read buffer230, and so on. In another embodiment, block selector250may randomly select blocks from read buffer230without regard to which read request caused a particular block to be read from memory system260. A weighting or queueing scheme (e.g., random fair queueing, random early detection, weighted random early detection, and random early detection In/Out) to the random selection of blocks in read buffer230may be implemented to ensure blocks that have been in read buffer230are eventually selected within a reasonable period of time. In this manner, the blocks associated with READ #1and READ #2are processed in an order that is a random permutation of the order they were read into read buffer130(and/or were stored in memory system260.) The processing of the blocks in read buffer230is illustrated inFIG.2Cby the arrow from blocks A to H leading through block selector250to processing order212. Processing order212illustrates an example random selection where block C (from READ #1) was processed first, block G (from READ #2) second, block E third (from READ #2), block B (from READ #1) fourth, block A (from READ #1) fifth, block F (from READ #2) sixth, block D (from READ #1) seventh, and block H (from READ #2) eighth. Accordingly, it should be understood thatFIG.2Cillustrates an embodiment where multiple data blocks (i.e., set of blocks) are received (retrieved) from memory in response to multiple requests. These blocks are then cryptographically processed in a random order. The randomization of the processing order is limited to the blocks already received and not processed, but is not limited to those blocks received in response to a single read transaction. In other words, the blocks associated with a first read (e.g., READ #1) are processed in a random order randomly intermingled with the processing of randomly selected blocks associated with at least one other (e.g., the next—READ #2) read transaction. Thus,FIG.2Cis an illustration of at least randomized cryptographic processing across read sets of blocks. After processing, cryptographically processed (e.g., encrypted or decrypted) versions of data blocks A-H are placed in write buffer231in a location order that corresponds to the order they were processed. The cryptographically processed versions of data blocks A-H are illustrated as processed data blocks AC-HC, respectively. The placement of processed data blocks AC-HCin write buffer231is illustrated inFIG.2Cby an arrow from processing order212to locations0to7of write buffer231.
InFIG.2C, block CC, which was processed first, is placed in location0of write buffer231; block GC, which was processed second, is placed in location1; block EC, which was processed third, is placed in location2; block BC, which was processed fourth, is placed in location3; block AC, which was processed fifth, is placed in location4; block FC, which was processed sixth, is placed in location5; block DC, which was processed seventh, is placed in location6; and, block HC, which was processed eighth, is placed in location7. It should be noted that processed blocks ACto HCare placed in write buffer231in a location order that corresponds to the random order blocks A-H were processed (by, for example, cryptographic engine111) into cryptographically processed versions ACto HC. Data is written to memory system260from write buffer231such that the memory location order (in memory system260) of blocks A-H corresponds to the memory location order (in memory system260) of processed blocks AC-HC. This may require write transactions that write less than a whole set of data blocks (e.g., writes of only one data block.) This is illustrated inFIG.2Cby the arrows labeled WRITE #1through WRITE #4indicating the copying of processed blocks CC, GC, EC, and BCfrom locations0to3, respectively, in write buffer231to locations N+2, N+6, N+4, and N+1 of memory system260. Thus, it should be apparent that after being processed in a random order that encompasses multiple read sets, the memory location order (in memory system260) of blocks A-H corresponds to the memory location order (in memory system260) of processed blocks AC-HC. FIG.2Dis an illustration of randomized cryptographic processing across read sets of blocks with randomly ordered block sets written to memory. The operations illustrated inFIG.2Dmay be performed by one or more elements of cryptographic processing system100. InFIG.2D, blocks of data A-H are stored sequentially in memory system260starting at location zero (0). In other words, data block “A” is stored at memory location “0”, data block “B” is stored at memory location “1”, data block “C” is stored at memory location “2”, and so on. Also illustrated inFIG.2D, processed blocks of data AC-HCare stored out-of-order in memory system260starting at location N. In other words, data block “CC” is stored at memory location “N”; data block “GC” is stored at memory location “N+1”; data block “EC” is stored at memory location “N+2”; data block “BC” is stored at memory location “N+3”; data block “AC” is stored at memory location “N+4”; data block “FC” is stored at memory location “N+5”; data block “DC” is stored at memory location “N+6”; and, data block “HC” is stored at memory location “N+7”. Each data block A-H and processed data block AC-HCrepresent blocks of data that correspond in size to the block size of the cipher algorithm being performed. Data is read from memory system260and stored in read buffer230in sets of blocks that comprise multiple blocks of data. This is illustrated inFIG.2Dby the arrow labeled READ #1indicating the copying of blocks A to D from locations0to3, respectively, in memory system260to locations0to3in read buffer230. Thus, locations0to3in read buffer230are part of a read set of blocks that was retrieved in response to READ #1. This is also illustrated by the arrow labeled READ #2indicating the copying of blocks E to H from locations4to7, respectively, in memory system260to locations4to7in read buffer230.
Thus, locations4to7in read buffer230are part of a read set of blocks that was retrieved in response to READ #2. Block selector250randomly selects blocks from read buffer230for processing. In an embodiment, block selector250randomly selects blocks from a plurality of read sets until all of the blocks in those read sets are processed. In other words, block selector250may first select blocks randomly from among blocks A-H until all of blocks A-H are processed, then selects blocks randomly from among other blocks in read buffer230, and so on. In another embodiment, block selector250may randomly select blocks from read buffer230without regard to which read request caused a particular block to be read from memory system260. A weighting or queueing scheme (e.g., random fair queueing, random early detection, weighted random early detection, and random early detection In/Out) to the random selection of blocks in read buffer230may be implemented to ensure blocks that have been in read buffer230are eventually selected within a reasonable period of time. In this manner, the blocks associated with READ #1and READ #2are processed in an order that is a random permutation of the order they were read into read buffer130(and/or were stored in memory system260.) The processing of the blocks in read buffer230is illustrated inFIG.2Dby the arrow from blocks A to H leading through block selector250to processing order212. Processing order212illustrates an example random selection where block C (from READ #1) was processed first, block G (from READ #2) second, block E third (from READ #2), block B (from READ #1) fourth, block A (from READ #1) fifth, block F (from READ #2) sixth, block D (from READ #1) seventh, and block H (from READ #2) eighth. Accordingly, it should be understood thatFIG.2Dillustrates an embodiment where multiple data blocks (i.e., set of blocks) are received (retrieved) from memory in response to multiple requests. These blocks are then cryptographically processed in a random order. The randomization of the processing order is limited to the blocks already received and not processed, but is also not limited to those blocks received in response to a single read transaction. In other words, the blocks associated with a first read (e.g., READ #1) are processed in a random order randomly intermingled with the processing of randomly selected blocks associated with at least one other (e.g., the next—READ #2) read transaction. Thus,FIG.2Dis an illustration of at least randomized cryptographic processing across read sets of blocks. After processing, cryptographically processed (e.g., encrypted or decrypted) versions of data blocks A-H are placed in write buffer231in a location order that corresponds to the order they were processed. The cryptographically processed versions of data blocks A-H are illustrated as processed data blocks AC-HC, respectively. The placement of processed data blocks AC-HCin write buffer231is illustrated inFIG.2Dby an arrow from processing order212to locations0to7of write buffer231.
InFIG.2D, block CC, which was processed first, is placed in location0of write buffer231; block GC, which was processed second, is placed in location1; block EC, which was processed third, is placed in location2; block BC, which was processed fourth, is placed in location3; block AC, which was processed fifth, is placed in location4; block FC, which was processed sixth, is placed in location5; block DC, which was processed seventh, is placed in location6; and, block HC, which was processed eighth, is placed in location7. It should be noted that processed blocks ACto HCare placed in write buffer231in a location order that corresponds to the random order blocks A-H were processed (by, for example, cryptographic engine111) into cryptographically processed versions ACto HC. Data is written to memory system260from write buffer231in sets of blocks that comprise multiple blocks of data. This is illustrated inFIG.2Dby the arrow labeled WRITE #1indicating the copying of processed blocks CC, GC, EC, and BCfrom locations0to3, respectively, in write buffer231to locations N to N+3 of memory system260. Thus, locations0to3in write buffer231are part of a write set of blocks that is written in response to the single transaction WRITE #1. This is also illustrated inFIG.2Dby the arrow labeled WRITE #2indicating the copying of processed blocks AC, FC, DC, and HCfrom locations4to7, respectively, in write buffer231to locations N+4 to N+7 of memory system260. Thus, locations4to7in write buffer231are part of a write set of blocks that is written in response to WRITE #2. Thus, it should be apparent that after being processed in a random order that encompasses multiple read sets, the memory location order (in memory system260) of processed blocks AC-HCcorresponds to the random processing order of blocks A-H. In an embodiment, tags (TG) are also written to write buffer231. These tags are written to memory system260. This is illustrated inFIG.2Dby the arrow labeled TAG WRITE indicating the copying of TG from write buffer231to memory system260. These tags convey information about the location ordering of processed blocks AC-HC. This information is sufficient for software (not shown inFIG.2D) to reorder processed blocks AC-HCin memory system260such that the memory location order (in memory system260) of blocks A-H corresponds to the memory location order (in memory system260) of the reordered processed blocks AC-HC. FIG.3is a flowchart illustrating cryptographic processing randomized within sets of data blocks. The steps illustrated inFIG.3may be performed by one or more elements of cryptographic processing system100. As a first set, an input set of data blocks are received in an input order (302). For example, read buffer130may receive, in response to a read request, a set of data blocks from memory system160. These data blocks may arrive in a sequential order that corresponds to the locations in memory160where they were stored. Each data block of the set may have a size that corresponds to the block size of the cryptographic processing to be performed on the data blocks of the set. Each of the input set of data blocks are cryptographically processed in a processing order that is a random permutation of the input order (304). For example, cryptographic engine111may repeatedly randomly select, for cryptographic processing, unprocessed data blocks from the set of data blocks until all of the data blocks received in response to the read request have been processed.
This results in the data blocks in the set of data blocks being cryptographically processed in an order that is a random permutation of the order the data blocks were received (i.e., a random permutation of the sequential order in which the blocks were received/stored in memory160.) In an output order, as a second set, a processed set of data blocks that comprise cryptographically processed versions of the input set of data blocks are output (306). For example, cryptographically processed versions of the input set of data blocks (e.g., AC-HC) may be output for storage in memory system160. These cryptographically processed versions of the input set of data blocks may be output for storage in memory160in a sequential order that corresponds to the locations in memory160where the corresponding input data blocks were stored. These cryptographically processed versions of the input set of data blocks may be output for storage in memory160in an order that corresponds to the order these cryptographically processed versions were generated. FIG.4is a flowchart illustrating cryptographic processing that is randomized across sets of data blocks. The steps illustrated inFIG.4may be performed by one or more elements of cryptographic processing system100. In response to a first memory request, a first set of input data blocks ordered in a first input order are received (402). For example, in response to a first memory request (e.g., READ #1), a first set of input data blocks (e.g., data blocks A-D) may be received in the sequential order corresponding to how they were stored in memory system160. In response to a second memory request, a second set of input data blocks ordered in a second input order are received (404). For example, in response to a second memory request (e.g., READ #2), a second set of input data blocks (e.g., data blocks E-H) may be received in the sequential order corresponding to how they were stored in memory system160. Each of the first set of input data blocks and the second set of input data blocks are cryptographically processed in a processing order that comprises a random permutation of a combination of the first input order and the second input order (406). For example, cryptographic engine111may process the first set of data blocks and the second set of data blocks by randomly selecting unprocessed data blocks from both the first set of data blocks and the second set of data blocks until all of the data blocks received in response to the first and second read requests have been processed. This results in the data blocks in the first and second sets of data blocks being cryptographically processed in an order that is a random permutation of the order the data blocks were received (i.e., a random permutation of the sequential order in which the first and second sets of data blocks were received/stored in memory160.) A processed set of data blocks that comprise cryptographically processed versions of the first input set of data blocks and the second input set of data blocks are output (408). For example, cryptographically processed versions of the first input set of data blocks (e.g., AC-DC) and cryptographically processed versions of the second input set of data blocks (e.g., EC-HC) may be output for storage in memory system160.
The cryptographically processed versions of the first input set of data blocks may be output for storage in memory160in a sequential order that corresponds to the locations in memory160where the corresponding ones of the first set of input data blocks were stored. The cryptographically processed versions of the second input set of data blocks may be output for storage in memory160in a sequential order that corresponds to the locations in memory160where the corresponding ones of the second set of input data blocks were stored. The cryptographically processed versions of the first and second input sets of data blocks may be output for storage in memory160in a random order that corresponds to the order (i.e., permutation) in which the first and second input sets of data blocks were processed. FIG.5is a flowchart illustrating a method of storing data blocks that were cryptographically processed in a random order. The steps illustrated inFIG.5may be performed by one or more elements of cryptographic processing system100. As a set, from a memory, and in a memory order, an input set of data blocks are received (502). For example, read buffer130may receive from memory system160an input set of data blocks (e.g., data blocks A-D). This input set of data blocks may be received in response to a read request. The data blocks of this input set may be received in the order that corresponds to the locations in which they were stored in memory160(e.g., sequentially from low memory address to high memory address or vice versa.) In a processing order, a processed set of data blocks that comprise cryptographically processed versions of the input set of data blocks are generated (504). For example, in a random order, the blocks of the input set of data blocks (e.g., A-D) may be processed by cryptographic engine111to produce cryptographically processed versions (e.g., AC-DC.) As a set, to the memory, and in the memory order, the cryptographically processed version of the input set of data blocks are stored (506). For example, the cryptographically processed versions (e.g., AC-DC) of the input data blocks may be stored in memory system160. This stored set of data blocks may be stored by memory160in response to a write request. The cryptographically processed versions (e.g., AC-DC) of the input data blocks may be stored in memory system160in the order that corresponds to the locations in which the corresponding input data blocks were stored in memory160(e.g., sequentially from low memory address to high memory address or vice versa.) FIG.6is a flowchart illustrating a method of storing data blocks in a random order that were cryptographically processed in a random order. The steps illustrated inFIG.6may be performed by one or more elements of cryptographic processing system100. As a set, from a memory, and in a first memory order, an input set of data blocks are received (602). For example, input buffer130may receive from memory system160an input set of data blocks (e.g., data blocks A-D). This input set of data blocks may be received in response to a read request. The data blocks of this input set may be received in the order that corresponds to the locations in which they were stored in memory160(e.g., sequentially from low memory address to high memory address or vice versa.) In a processing order, a processed set of data blocks that comprise cryptographically processed versions of the input set of data blocks are generated (604).
For example, in a random order, the blocks of the input set of data blocks (e.g., A-D) may be processed by cryptographic engine111to produce cryptographically processed versions (e.g., AC-DC.) As a set, to the memory, and in a second memory order, the cryptographically processed version of the input set of data blocks are stored (606). For example, the cryptographically processed versions (e.g., AC-DC) of the input data blocks may be stored in memory system160. This stored set of data blocks may be stored by memory160in response to a write request. The cryptographically processed versions (e.g., AC-DC) of the input data blocks may be stored in memory system160in an order that corresponds to the order that the cryptographically processed versions of the input set of data blocks were generated. FIG.7is a flowchart illustrating a method of randomizing the order of cryptographic processing while receiving and storing the data blocks in a memory order. The steps illustrated inFIG.7may be performed by one or more elements of cryptographic processing system100. As a first set, from a memory, and in a memory order, a first input set of data blocks are received (702). For example, input buffer130may receive, from memory system160, a first input set of data blocks (e.g., data blocks A-D). This first input set of data blocks may be received in response to a first read request (e.g., READ #1). The data blocks of this first input set may be received in the order that corresponds to the locations in which the first input set was stored in memory160(e.g., sequentially from low memory address to high memory address or vice versa.) As a second set, from the memory, and in the memory order, a second input set of data blocks are received (704). For example, input buffer130may receive, from memory system160, a second input set of data blocks (e.g., data blocks E-H). This second input set of data blocks may be received in response to a second read request (e.g., READ #2). The data blocks of this second input set may be received in the order that corresponds to the locations in which the second input set was stored in memory160(e.g., sequentially from low memory address to high memory address or vice versa.) The second input set may be received in the order that corresponds to where the second input set was stored in memory160relative to the first input set (e.g., sequentially—after the first set.) In a processing order that spans the first input set and the second input set of data blocks, a processed set of data blocks that comprise cryptographically processed versions of the first input set of data blocks and the second input set of data blocks is generated (706). For example, in a random order that includes the intermingling of blocks from both sets, the blocks of the first and second input sets of data blocks (e.g., A-H) may be processed by cryptographic engine111to produce cryptographically processed versions (e.g., AC-HC.) As a third set, to the memory, and in a third memory order, the cryptographically processed versions of the first input set of data blocks are stored (708). For example, the cryptographically processed versions (e.g., AC-DC) of the first set input data blocks may be stored in memory system160. This first set of stored data blocks may be stored by memory160in response to a first write request. 
The cryptographically processed versions (e.g., AC-DC) of the input data blocks may be stored in memory system160in an order that corresponds to the order that the input set of data blocks were stored in memory system160(e.g., sequentially from low memory address to high memory address or vice versa.). As a fourth set, to the memory, and in a fourth memory order, the cryptographically processed versions of the second input set of data blocks are stored (710). For example, the cryptographically processed versions (e.g., EC-HC) of the second set input data blocks may be stored in memory system160. This second set of stored set of data blocks may be stored by memory160in response to a second write request. The cryptographically processed versions (e.g., EC-HC) of the input data blocks may be stored in memory system160in an order that corresponds to the order that the input set of data blocks were stored in memory system160(e.g., sequentially from low memory address to high memory address or vice versa.). FIG.8is a flowchart illustrating a method of randomizing the order of cryptographic processing while storing the data blocks out of memory order. The steps illustrated inFIG.8may be performed by one or more elements of cryptographic processing system100. As a first set, from a memory, and in a memory order, a first input set of data blocks are received (802). For example, input buffer130may receive, from memory system160, a first input set of data blocks (e.g., data blocks A-D). This first input set of data blocks may be received in response to a first read request (e.g., READ #1). The data blocks of this first input set may be received in the order that corresponds to the locations in which the first input set was stored in memory160(e.g., sequentially from low memory address to high memory address or vice versa.) As a second set, from the memory, and in the memory order, a second input set of data blocks are received (804). For example, input buffer130may receive, from memory system160, a second input set of data blocks (e.g., data blocks E-H). This second input set of data blocks may be received in response to a second read request (e.g., READ #2). The data blocks of this second input set may be received in the order that corresponds to the locations in which the second input set was stored in memory160(e.g., sequentially from low memory address to high memory address or vice versa.) The second input set may be received in the order that corresponds to where the second input set was stored in memory160relative to the first input set (e.g., sequentially—after the first set.) In a processing order that spans the first input set and the second input set of data blocks, a processed set of data blocks that comprise cryptographically processed versions of the first input set of data blocks and the second input set of data blocks is generated (806). For example, in a random order that includes the intermingling of blocks from both sets, the blocks of the first and second input sets of data blocks (e.g., A-H) may be processed by cryptographic engine111to produce cryptographically processed versions (e.g., AC-HC.) As a third set, to the memory, and not in the memory order, a subset comprising cryptographically processed versions of the first input set of data blocks and the second input set of data blocks are stored (808). 
For example, a subset of the cryptographically processed versions from the first set of input data blocks and the second set of input data blocks (e.g., CC, GC, EC, and BC—as illustrated inFIG.2D) may be stored in memory system160. This subset of data blocks may be stored by memory160in response to a first write request. This subset of the cryptographically processed versions (e.g., CC, GC, EC, and BC) of the first and second sets of input data blocks may be stored in memory system160in an order that corresponds to the random order that the cryptographically processed versions of the input set of data blocks were generated. As a fourth set, to the memory, and not in the memory order, a remainder subset comprising cryptographically processed versions of the first input set of data blocks and the second input set of data blocks are stored (810). For example, an unwritten subset of the cryptographically processed versions from the first set of input data blocks and the second set of input data blocks (e.g., AC, FC, DC, and HC—as illustrated inFIG.2D) may be stored in memory system160. This unwritten subset of data blocks may be stored by memory160in response to a second write request. This subset of the cryptographically processed versions (e.g., AC, FC, DC, and HC) of the first and second sets of input data blocks may be stored in memory system160in an order that corresponds to the random order that the cryptographically processed versions of the input set of data blocks were generated. FIG.9is a method of processing out-of-order data blocks. The steps illustrated inFIG.9may be performed by one or more elements of cryptographic processing system100. As a set, in a memory and out-of-order, an input set of data blocks that has been cryptographically processed is received (902). For example, memory system160may receive, from write buffer131, a cryptographically processed set of data blocks. These data blocks (e.g., CC, GC, EC, and BC—as illustrated inFIG.2D) may be received out-of-order. Reordering information about the input set of data blocks is received (904). For example, memory system160may receive tag information from write buffer131. This tag information may relate the received (or sent) order of the first input set of data blocks to a desired order (e.g., the memory order that corresponds to the unprocessed data blocks.) In the memory, the input set of data blocks are reordered based on the reordering information (906). For example, the data blocks (e.g., CC, GC, EC, and BC—as illustrated inFIG.2D) which were received by memory160out-of-order may be reordered by software. The data blocks which were received by memory160out-of-order may be reordered in memory160into the memory order that corresponded to the unprocessed data blocks that served as input to cryptographic engine111. The methods, systems and devices described above may be implemented in computer systems, or stored by computer systems. The methods described above may also be stored on a non-transitory computer readable medium. Devices, circuits, and systems described herein may be implemented using computer-aided design tools available in the art, and embodied by computer-readable files containing software descriptions of such circuits. This includes, but is not limited to, one or more elements of cryptographic processing system100, and its components. These software descriptions may be: behavioral, register transfer, logic component, transistor, and layout geometry-level descriptions.
Moreover, the software descriptions may be stored on storage media or communicated by carrier waves. Data formats in which such descriptions may be implemented include, but are not limited to: formats supporting behavioral languages like C, formats supporting register transfer level (RTL) languages like Verilog and VHDL, formats supporting geometry description languages (such as GDSII, GDSIII, GDSIV, CIF, and MEBES), and other suitable formats and languages. Moreover, data transfers of such files on machine-readable media may be done electronically over the diverse media on the Internet or, for example, via email. Note that physical files may be implemented on machine-readable media such as: 4 mm magnetic tape, 8 mm magnetic tape, 3½ inch floppy media, CDs, DVDs, and so on. FIG.10illustrates a block diagram of a computer system. Computer system1000includes communication interface1020, processing system1030, storage system1040, and user interface1060. Processing system1030is operatively coupled to storage system1040. Storage system1040stores software1050and data1070. Processing system1030is operatively coupled to communication interface1020and user interface1060. Computer system1000may comprise a programmed general-purpose computer. Computer system1000may include a microprocessor. Computer system1000may comprise programmable or special purpose circuitry. Computer system1000may be distributed among multiple devices, processors, storage, and/or interfaces that together comprise elements1020-1070. Communication interface1020may comprise a network interface, modem, port, bus, link, transceiver, or other communication device. Communication interface1020may be distributed among multiple communication devices. Processing system1030may comprise a microprocessor, microcontroller, logic circuit, or other processing device. Processing system1030may be distributed among multiple processing devices. User interface1060may comprise a keyboard, mouse, voice recognition interface, microphone and speakers, graphical display, touch screen, or other type of user interface device. User interface1060may be distributed among multiple interface devices. Storage system1040may comprise a disk, tape, integrated circuit, RAM, ROM, EEPROM, flash memory, network storage, server, or other memory function. Storage system1040may include a computer readable medium. Storage system1040may be distributed among multiple memory devices. Processing system1030retrieves and executes software1050from storage system1040. Processing system1030may retrieve and store data1070. Processing system1030may also retrieve and store data via communication interface1020. Processing system1030may create or modify software1050or data1070to achieve a tangible result. Processing system1030may control communication interface1020or user interface1060to achieve a tangible result. Processing system1030may retrieve and execute remotely stored software via communication interface1020. Software1050and remotely stored software may comprise an operating system, utilities, drivers, networking software, and other software typically executed by a computer system. Software1050may comprise an application program, applet, firmware, or other form of machine-readable processing instructions typically executed by a computer system. When executed by processing system1030, software1050or remotely stored software may direct computer system1000to operate as described herein.
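One way software of this kind could carry out the reordering illustrated inFIG.9(steps (904) and (906)) is sketched below. This is a hedged illustration only; the function name reorder_blocks and the representation of the tag information as a list of memory-order positions are assumptions, not the actual format used by write buffer131or memory system160.

def reorder_blocks(received_blocks, tags):
    # Step (906): received_blocks[k] was produced from the unprocessed block that
    # originally occupied memory-order position tags[k]; restore that order.
    ordered = [None] * len(received_blocks)
    for block, position in zip(received_blocks, tags):
        ordered[position] = block
    return ordered

# Usage: processed blocks arrive out of order (902) along with tag information (904).
received = [b"C", b"G", b"E", b"B", b"A", b"F", b"D", b"H"]
tags = [2, 6, 4, 1, 0, 5, 3, 7]
assert reorder_blocks(received, tags) == [b"A", b"B", b"C", b"D", b"E", b"F", b"G", b"H"]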
The foregoing description of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and other modifications and variations may be possible in light of the above teachings. The embodiment was chosen and described in order to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention in various embodiments and various modifications as are suited to the particular use contemplated. It is intended that the appended claims be construed to include other alternative embodiments of the invention except insofar as limited by the prior art. | 59,107 |
11861052 | DETAILED DESCRIPTION Described herein are, among other things, techniques, devices, and systems for determining whether an untrusted device is connected to a hardware port of a computing device. Also described herein are techniques, devices, and systems for determining whether a computing device is being used in an untrusted way and/or location. An action may be taken by the computing device and/or by a remote computing system if it is determined that an untrusted device has been connected to a hardware port of the computing device, and/or if it is determined that the computing device is being used in an untrusted way and/or location. The action performed by the computing device, for example, may include sending a notification to the remote computing system, among other possible actions described herein. Regardless of the type of action taken, the action is aimed at ensuring that the security of sensitive data, such as customer data, remains uncompromised, protected, and secure. To illustrate, a computing device may be issued to a customer service agent (CSA) who handles customer queries on behalf of a service provider. The CSA may utilize the computing device to communicate electronically with customers, such as by using the computing device to establish a secure, authenticated computing session with a remote computing system. The computing device may include one or more hardware ports that are available to the CSA to connect an external device, such as headphones, to the computing device. When the CSA establishes an authenticated computing session in order to interact with customers using the computing device, the computing device, while engaged in the authenticated computing session, is configured to determine if and when an untrusted device is connected to the hardware port. An “untrusted device,” in the context of the present disclosure, may include, without limitation, a recording device (e.g., an audio recording device, video recording device, etc.), a keyboard emulator, a mouse emulator, a key logger, or the like. If and when a connection of an untrusted device is detected, an action can be taken by the computing device, such as a remedial action that notifies a remote computing system, and/or that disables the computing device or a component thereof, such as disabling a communications interface to render the computing device incapable of communicating with the remote computing system any further. In this manner, sensitive customer data that is otherwise accessible to a user via a secure, authenticated computing session remains protected by taking remedial action in response to determining a connection of an untrusted device to a hardware port of the computing device, and/or in response to determining that the computing device is otherwise being used in an untrusted way and/or location. In some implementations, the computing device is equipped with one or more port meters. An individual port meter may be disposed within (or internal to) the computing device, such as by being mounted on a printed circuit board (PCB) that is internal to the computing device, or the individual port meter may be disposed within a hardware port (e.g., a female hardware port). An individual port meter is electrically connected to a corresponding hardware port, such as a universal serial bus (USB) port, of the computing device. Through this electrical connection, the port meter is configured to measure an electrical parameter(s) associated with the hardware port.
For example, an individual port meter may be configured to measure an impedance parameter, a voltage parameter, and/or a current parameter associated with the corresponding hardware port. These types of electrical parameters will change if an external device is connected to the hardware port. Said another way, the particular value of the electrical parameter associated with the hardware port (which is measurable by the port meter) varies in response to different types of external devices being connected to the hardware port. For example, when a trusted device, such as a set of headphones, is connected to the hardware port, the electrical parameter(s) associated with hardware port resolves to a first value(s), and when an untrusted device, such as an illicit recording device, is connected to the hardware port, the electrical parameter(s) associated with the hardware port resolves to a second value(s), the second value(s) different than the first value(s). In this way, the value(s) of the electrical parameter(s) measured by the port meter is/are indicative of the type of external device that is connected to the hardware port. An operating system of the computing device receives the value(s) measured by the port meter(s), and processes (e.g., analyzes) the value(s) received from the port meter(s) to determine whether an untrusted device(s) is/are connected to the hardware port(s). If an untrusted device is connected to a hardware port, an action can be performed by the computing device, the action being aimed at protecting customer data, as described herein. Implementations of the techniques and systems described herein can improve existing technologies (e.g., data security technologies). In particular, the techniques and systems described herein allow for detecting connections of potentially malicious devices, which may be used by attackers to target customers and/or to target customer data associated with those customers and maintained by a service provider. The detection systems and techniques described herein ensure that the integrity and the security of customer data remains uncompromised, especially in a context where users are issued computing equipment that is used to perform a task(s) with respect to customers of a service provider, and where the use of that computing equipment to perform the assigned task(s) provides the users with access to sensitive customer data maintained by the backend system of the service provider. As computing devices with the described detection capabilities are deployed in the field, patterns and trends can also be identified in order to detect new types of untrusted devices that are being used by malicious actors. In this manner, connections of new types of untrusted devices to hardware ports of user computing devices can be detected, and the relevant parties can be alerted, among other possible actions that can be taken. In addition to these benefits, the security of customer data and/or resources is inherently improved by the techniques and systems described herein; namely, by detecting and thwarting potentially malicious device connections and/or detecting when a computing device is being used in an untrusted way or location before sensitive customer data can be accessed. In addition to the aforementioned benefits, computing resources, such as processing resources, memory resources, networking resources, power resources, and the like, may also be conserved by aspects of the techniques and systems described herein. 
Customer experience is also improved by the techniques and systems described herein by improving the security of customer data and/or resources, which gives customers of a service provider peace of mind that their data (e.g., phone numbers, email addresses, credit card numbers, etc.) is less likely to be compromised by a data breach. It should be appreciated that the subject matter presented herein can be implemented as a computer process, a computer-controlled apparatus, a computing system, or an article of manufacture, such as a computer-readable storage medium. While the subject matter described herein is presented in the general context of program modules that execute on one or more computing devices, those skilled in the art will recognize that other implementations can be performed in combination with other types of program modules. Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Those skilled in the art will also appreciate that aspects of the subject matter described herein can be practiced on or in conjunction with other computer system configurations beyond those described herein, including multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, handheld computers, personal digital assistants, e-readers, mobile telephone devices, tablet computing devices, special-purpose hardware devices, network appliances, and the like. The configurations described herein can be practiced in distributed computing environments, such as a service provider network, where tasks can be performed by remote computing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote memory storage devices. In the following detailed description, references are made to the accompanying drawings that form a part hereof, and that show, by way of illustration, specific configurations or examples. The drawings herein are not drawn to scale. Like numerals represent like elements throughout the several figures (which might be referred to herein as a “FIG.” or “FIGS.”). FIG.1illustrates an example system100including an example computing device102configured to determine whether an untrusted device is connected to a hardware port of the computing device102, and a remote computing system104with which the computing device102may establish a secure, authenticated computing session106, according to some configurations. The computing device102shown inFIG.1(sometimes referred to herein as a “user computing device102,” a “computer device102,” or an “electronic device102”) can be implemented as any type and/or any number of computing devices, including, without limitation, a personal computer (PC), a laptop computer, a desktop computer, a portable digital assistant (PDA), a mobile phone, a tablet computer, a set-top box, a game console, a server computer, a wearable computer (e.g., a smart watch, headset, etc.), or any other electronic device that can transmit data to, and receive data from, other devices. In an illustrative example, a user108of the computing device102may represent a customer service agent (CSA) who handles queries from customers of a service provider. The service provider may own and/or operate the remote computing system104.
The user108may be located at any suitable location (e.g., in a corporate office, in a home office, etc.) while using the computing device102to handle customer queries, such as by taking phone calls and/or video calls from customers, answering questions using an instant messaging service and/or electronic mail (e-mail) application, a social media platform, or any similar electronic messaging or communication service. As part of handling customer queries on behalf of the service provider, the user108may connect the computing device102to the remote computing system104to establish a secure, authenticated computing session106over any suitable network, such as a wide area communication network (“WAN”) (e.g., the Internet), a cellular network, an intranet, an Internet service provider (“ISP”) network, or a combination of such networks. In some implementations, the authenticated computing session106may represent an encrypted session. In some implementations, a virtual private network (VPN) is utilized to establish the authenticated computing session106between the computing device102and the remote computing system104, but any suitable type of network access technology can be utilized to establish the session106. In some embodiments, the user108and/or the computing device102and/or another hardware authentication device connected to the computing device102provides security credentials (e.g., usernames, passwords, tokens, etc.) to authenticate the session106. In some implementations, Identity and Access Management (IAM)-based access policies are used to establish the authenticated computing session106, which may involve additional checks (e.g., checks regarding roles, permissions, etc.) before allowing the computing device102to access the remote computing system104, such as to access data, including sensitive data (e.g., customer data110), to field customer queries. As used herein, “sensitive data” means data that is to be protected against unwarranted disclosure, which may be for legal, ethical, proprietary or other reasons. Examples of sensitive data include, without limitation, customer data110(e.g., personally identifiable information (PII)), intellectual property and trade secret data, operational and inventory data, and the like. The customer data110shown inFIG.1may include, without limitation, phone numbers, email addresses, credit card numbers, account numbers, purchase histories, and the like. Thus, the customer data110is sensitive in nature. In the illustrated implementation, the computing device102includes one or more processors112, memory114(e.g., computer-readable media114), and one or more communications interfaces116. In some implementations, the processor(s)112may include a central processing unit (CPU)(s), a graphics processing unit (GPU)(s), both CPU(s) and GPU(s), a microprocessor, a digital signal processor or other processing units or components known in the art. Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), complex programmable logic devices (CPLDs), etc.
Additionally, each of the processor(s)112may possess its own local memory, which also may store program modules, program data, and/or one or more operating systems. The memory114may include volatile and nonvolatile memory, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Such memory includes, but is not limited to, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disk (CD)-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, redundant array of inexpensive disks (RAID) storage systems, or any other medium which can be used to store the desired information and which can be accessed by a computing device. The memory114may be implemented as computer-readable storage media (CRSM), which may be any available physical media accessible by the processor(s)112to execute instructions stored on the memory114. In one basic implementation, CRSM may include RAM and Flash memory. In other implementations, CRSM may include, but is not limited to, ROM, EEPROM, or any other tangible medium which can be used to store the desired information and which can be accessed by the processor(s)112. The communication interface(s)116facilitates a connection to a network and/or to one or more remote computing systems, such as the remote computing system104. The communication interface(s)116may implement one or more of various wireless technologies, such as Wi-Fi, Bluetooth, radio frequency (RF), and so on. It is to be appreciated that the communication interface(s)116may additionally, or alternatively, include physical ports to facilitate a wired connection to a network, a connected peripheral device, or a plug-in network device that communicates with other wireless networks. In general, the computing device102may include logic (e.g., software, hardware, and/or firmware, etc.) that is configured to implement the techniques, functionality, and/or operations described herein. The memory114can include various modules, such as instructions, datastores, and so forth, which may be configured to execute on the processor(s)112for carrying out the techniques, functionality, and/or operations described herein. An example functional module in the form of an operating system(s)118is shown inFIG.1. The operating system(s)118may be configured to manage hardware within, and coupled to, the computing device102for the benefit of other modules. The operating system118may execute in kernel mode120(or kernel space120) of the computing device102. According to some implementations, the operating system(s)118comprises the Linux operating system. According to other implementations, the operating system(s)118comprises the Windows® operating system from Microsoft Corporation of Redmond, Washington. According to further implementations, the operating system(s)118comprises the Unix operating system or one of its variants. It should be appreciated that other operating systems can also be utilized. Various applications may be executed in user mode122(or user space122) of the computing device102, such as word processing applications, messaging applications, and the like. 
The kernel mode120and the user mode122correspond to respective protection domains—also known as rings—that protect data and functionality of the computing device102from faults and malware. Typically, a user mode, such as the user mode122, is associated with the outermost ring and the least level of privileges to access memory and functionality. This ring is often referred to as “ring 3” and includes many application processes. A kernel mode, such as the kernel mode120, is associated with an inner ring (sometimes the innermost ring, although in modern computing devices there is sometimes an additional level of privilege, a “ring 1”) and a higher level of privileges to access memory and functionality. This ring is often referred to as “ring 0” and typically includes operating system118processes. The computing device102ofFIG.1is further shown as including hardware ports124(1) to124(N), where “N” is any suitable integer. Although multiple hardware ports124are depicted inFIG.1, it is to be appreciated that the computing device102may include a single hardware port124, in some implementations. The individual hardware ports124(sometimes referred to herein as “physical ports124”) are configured to receive a connector of an external device. As the name implies, an “external device,” in this context means a device that is external to the computing device102. Accordingly, the hardware ports124may be accessible via respective orifices defined in a housing of the computing device102, for example. In some implementations the hardware ports124, or a subset of the hardware ports124, represent one or more universal serial bus (USB) ports that are each configured to receive a connector of an external USB device. In some implementations the hardware ports124, or a subset of the hardware ports124, represent one or more high-definition multimedia interface (HDMI) ports that are each configured to receive a connector of an external HDMI device. These types of hardware ports are exemplary, and other types of hardware ports that use other technologies and interfaces known to a person having ordinary skill in the art are contemplated. Example types of hardware ports124include, without limitation, USB ports (USB Type-A, USB Type-B, USB Type-C, USB 2.0, USB 3.0, USB 3.1 Gen 1, USB 3.1 Gen 2, micro USB, mini USB, etc.), HDMI ports, Ethernet ports, audio ports (e.g., a 3.5 mm audio jack), DisplayPort/mini DisplayPort, digital visual interface (DVI) ports, micro Secure Digital (SD) card readers, SD card readers, Thunderbolt 3 ports, video graphics array (VGA) ports, serial ATA (SATA) ports, or any combination thereof. The computing device102may further include port meters126(1) to126(N). In some implementations, there may be one port meter126for every hardware port124that is being monitored to detect a connection of an untrusted device to that hardware port124. In other implementations, there may be one port meter126associated with multiple hardware ports124. For example, a single port meter126may utilize a switch and/or a multiplexer to scan multiple hardware ports124in series to measure an electrical parameter(s) associated with the multiple hardware ports124. For instance, a port meter126may be configured to measure the electrical parameter(s) associated with a first hardware port124(1), and then measure the electrical parameter(s) associated with a second hardware port124(2), and so on and so forth for any suitable number of hardware ports124(1) to124(N). 
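A single port meter126that scans several hardware ports124through a switch or multiplexer, as just described, might be driven by a loop along the lines of the following sketch. The MuxPortScanner class and its select_port and read_impedance callables are hypothetical names standing in for the multiplexer control and the meter itself, not an actual driver interface.

from dataclasses import dataclass
from typing import Callable, Dict, Iterable

@dataclass
class MuxPortScanner:
    select_port: Callable[[int], None]    # assumed multiplexer control: route meter to port N
    read_impedance: Callable[[], float]   # assumed single reading from the shared port meter (ohms)

    def scan(self, port_ids: Iterable[int]) -> Dict[int, float]:
        # Measure each monitored hardware port in series, one switch setting at a time.
        readings: Dict[int, float] = {}
        for port_id in port_ids:
            self.select_port(port_id)
            readings[port_id] = self.read_impedance()
        return readings

# Usage with stand-in functions in place of real hardware access.
scanner = MuxPortScanner(select_port=lambda port_id: None, read_impedance=lambda: 32.0)
print(scanner.scan([1, 2, 3]))   # e.g., {1: 32.0, 2: 32.0, 3: 32.0}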
Individual port meters126may be disposed within (or internal to) the computing device102in that they are at least substantially enclosed by the housing of the computing device102, or an individual port meter126may be disposed within a corresponding hardware port124(e.g., a female hardware port). Furthermore, individual port meters126are electrically connected to a corresponding hardware port124and configured to measure one or more electrical parameters associated with the hardware port124. The electrical parameter(s) measured by the port meters126may include, without limitation, an impedance parameter, a voltage parameter, and/or a current parameter. The operating (or measurement) ranges of the port meters126with respect to each type of electrical parameter may vary depending on the application of the port meter126and/or the type of hardware port (e.g., USB, HDMI, etc.). In some examples, the port meter126is configured to measure an impedance parameter (sometimes referred to herein as a “resistance parameter”) within a range of about 1 ohm (Ω) to 9999.9Ω. In some examples, the port meter126is configured to measure a voltage parameter within a range of about 3.7 volts (V) to 40 V. In some examples, the port meter126is configured to measure a current parameter within a range of about 0 amperes (A) to 4 A. The port meters126may be configured to take an individual measurement at any suitable time, such as in response to an instruction (e.g., from the operating system118) or an event (e.g., a connection of an external device to the hardware port124), and/or at any suitable frequency or schedule. In some implementations, the port meters126are configured to periodically measure the electrical parameter(s) associated with a corresponding hardware port124to generate a series of values (e.g., impedance values, voltage values, and/or current values). In some implementations, this periodic measurement interval may be an interval of about 100 milliseconds (ms), meaning that the port meters126are configured to measure the electrical parameter(s) about every 100 ms. In some implementations, an individual port meter126is configured to measure (e.g., to start measuring on a periodic basis) the electrical parameter(s) in response to determining (e.g., detecting) that an external device is connected to the corresponding hardware port124. That is, the port meter126(1) may wait to measure (or refrain from measuring) the electrical parameter(s) associated with the hardware port124(1) until it is determined that an external device is connected to the hardware port124(1). In other implementations, the port meter126(1) may measure the electrical parameter(s) continually (e.g., periodically), but the port meter126(1) may take measurements at a different (e.g., lower/reduced) frequency prior to an external device being connected to the hardware port124(1), and after an external device is connected to the hardware port124(1), the frequency at which the port meter126(1) measures the electrical parameter(s) may increase (e.g., to 100 ms measurement intervals). In some implementations, the port meters126are configured to send the value(s) of the measured electrical parameter(s) to the operating system118. The value(s), or data indicative of the value(s), can be sent from the port meters126to the operating system118in real-time, such as by sending the data as the electrical parameter(s) is/are measured.
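The measurement scheduling described above, with reduced-frequency polling while the port is empty, roughly 100 ms sampling once a device is present, and each value pushed to operating system118as it is taken, could look something like the sketch below. The device_present, read_value, and push_to_os callables are assumptions standing in for the port-meter hardware and the hardware bus.

import time

def sample_port(device_present, read_value, push_to_os,
                idle_interval=1.0, active_interval=0.1, duration=10.0):
    # Poll at a reduced rate while the port is empty; once an external device is
    # detected, sample the electrical parameter about every 100 ms and push each
    # value to the operating system as it is measured.
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        if device_present():
            push_to_os(read_value())
            time.sleep(active_interval)
        else:
            time.sleep(idle_interval)

# Usage with stand-in callables in place of the port-meter hardware.
sample_port(device_present=lambda: True,
            read_value=lambda: 47.5,                  # e.g., an impedance reading in ohms
            push_to_os=lambda value: print(value),
            duration=0.3)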
Additionally, or alternatively, the data indicative of the value(s) of the electrical parameter(s) may be sent in batches (e.g., multiple sequentially-measured values at a time), or at any suitable frequency or schedule. The computing device102may include a hardware bus that connects the port meters126to the operating system118in order to send the output signals that carry the data indicative of the value(s) measured by the port meters126. The operating system118is configured to receive, via the hardware bus, the value(s) (e.g., the output signals, the data indicative of the value(s), etc.) of the electrical parameter(s) measured by the port meters126. In some implementations, the operating system118may log the values of the electrical parameter(s) it receives in memory114(e.g., in a data store) so that the values can be accessed at a later time. The operating system118, or any other suitable component of the computing device102, may send the logged values of the electrical parameter(s) to the remote computing system104. For example, the values may be sent to the remote computing system104in real-time (e.g., streamed to the remote computing system104), in batches, or at any suitable frequency or schedule. The operating system118may be further configured to process (e.g., analyze) the value(s) received from the port meter(s)126, such as to determine whether an untrusted device(s) is/are connected to a hardware port(s)124. In some examples, the remote computing system104determines whether an untrusted device(s) is/are connected to any of the hardware port(s)124based on a stream of real-time value(s) received from the computing device102. That is, the computing device102may be configured to send a value(s) of the electrical parameter(s) to the remote computing system104in real-time, and the remote computing system104may process the value(s) to determine whether an untrusted device(s) is/are connected to any of the hardware port(s)124based on the value(s), and the remote system104may send a response (e.g., an instruction) to the computing device102that informs the operating system118as to whether an untrusted device connection has been detected by the remote computing system104, or the remote system104may refrain from notifying the computing device102and may perform an action independently. Many of the examples described herein involve local processing of the electrical parameter value(s) by the operating system118, but it is to be appreciated that any of the logic of the computing device102described herein can be included in the remote computing system104for purposes of remotely processing the value(s) measured by the port meters126. The determination of an untrusted device connection based on the measured electrical parameter(s) can be made in various ways using various techniques or algorithms. For example, the operating system118may have access (e.g., in local memory114of the computing device102) to one or more predetermined ranges of values (sometimes referred to herein as “baselines”) to which the measured value(s) of the electrical parameter(s) can be compared, and, based on a result of the comparison (e.g., an amount of deviation, whether a value is within or outside of a predetermined range, etc.), the operating system118can determine whether an untrusted device(s) is/are connected to a hardware port(s)124. For example, a predetermined range of values may be associated with a trusted device, such as a set of headphones128issued by a service provider to the user108. 
The user108might use the headphones128to speak to customers over the phone. The set of headphones128may include a built-in microphone, and a cord or cable with a port connector130at one end of the cord, which is configured to be connected to a hardware port124of the computing device102. For example,FIG.1illustrates that the port connector130of the headphones128is configured to be connected to the hardware port124(1) of the computing device102, or to any of the hardware ports124for that matter. In some examples, the hardware port124(1) is a USB port and the port connector130is a USB connector. If the user108connects the port connector130of the headphones128to the hardware port124(1), the port meter126(1) may measure the electrical parameter(s) (e.g., the impedance parameter, the voltage parameter, and/or the current parameter), and may send a value(s) corresponding to the measured electrical parameter(s) to the operating system118. The operating system118may compare the received value(s) to one or more predetermined ranges of values associated with the trusted set of headphones128, and, based on the comparison, the operating system118may determine that the measured value(s) falls within the predetermined range(s) of values associated with the trusted set of headphones128. In this scenario, the operating system118determines that a trusted device is connected to the hardware port124(1). Other techniques for making this determination are contemplated, however, such as using a machine learning model(s), or comparing the measured electrical parameter value(s) to an electrical parameter value(s) previously measured by the port meter126(1) at an earlier point in time. The headphones128are shown inFIG.1as including a splitter with an additional connector132. As mentioned above, this additional connector132(e.g., a USB connector) may be used to connect an additional external device to the computing device102, such as an additional headset of a supervisor who can listen to customer calls while the supervisor's microphone is muted. Accordingly, the operating system118may have access to another predetermined range(s) of values (or baseline(s)) associated with the supervisor's headphones (which is another example of a trusted device), and if the supervisor's headphones are connected to the additional connector132, the port meter126(1) measures the electrical parameter(s) to generate a value(s) of the electrical parameter, and the operating system118receives the value(s) and compares the value(s) to the predetermined range(s) of values associated with the trusted headphones of the supervisor to determine that a trusted device is connected to the hardware port124(1). If an untrusted device is connected to a hardware port124, the operating system118may determine that such an untrusted device is connected based on the measured electrical parameter(s) value(s) falling outside of the predetermined value range(s) of the trusted device(s) known to the operating system118. For example, if an illicit recording device (e.g., an audio recording device) is connected to the additional connector132of the headphone splitter (or to the hardware port124(1) directly), the port meter126(1) measures the electrical parameter(s) to generate a value(s), and the operating system118determines that the value(s) of the electrical parameter(s) is/are not within a predetermined range(s) of values associated with a trusted device, such as the headphones128and/or the supervisor's headphones. 
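The comparison just described, in which a measured value is checked against the predetermined range(s) for one or more trusted devices (the headphones128and, via the splitter, the supervisor's headset), can be sketched as follows. The specific impedance ranges shown are invented for illustration and are not values taken from the disclosure.

# Hypothetical impedance baselines (ohms) for trusted devices; actual ranges would be
# established per device model rather than taken from the disclosure.
TRUSTED_RANGES = {
    "agent_headphones": (28.0, 36.0),
    "supervisor_headset": (44.0, 52.0),
}

def matches_trusted_device(measured_value, trusted_ranges=TRUSTED_RANGES):
    # Return the name of the trusted device whose predetermined range contains the
    # measured value, or None if no trusted baseline matches.
    for name, (low, high) in trusted_ranges.items():
        if low <= measured_value <= high:
            return name
    return None

print(matches_trusted_device(31.2))   # "agent_headphones": a trusted connection
print(matches_trusted_device(75.9))   # None: treat as a possible untrusted device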
It is to be appreciated that any hardware port124(e.g., all of the hardware ports124of the computing device102) can be configured in this way because an illicit and/or trusted device can be connected to any of the hardware ports124. In an illustrative example, the electrical parameter is an impedance parameter, and the port meter126(1) measures the impedance parameter associated with the hardware port124(1) to determine a value of the impedance parameter. The operating system118has access to a predetermined range of impedance values associated with a trusted device, such as the headphones128, and the operating system118determines that the value of the impedance parameter received from the port meter126(1) is not within that predetermined range of values to make the determination that an untrusted device (e.g., a recording device) is connected to the hardware port124(1). Additionally, or alternatively, this type of illicit recording device may be a known type of untrusted device, and the operating system118may have access to a predetermined range of values (e.g., impedance values) associated with this known type of untrusted device. In this scenario, the operating system118may determine that the untrusted recording device is connected to the hardware port124(1) if the value of the electrical parameter(s) received from the port meter126(1) is within that predetermined range of values. In this case, the operating system118may be able to determine the type of device that is connected to the hardware port124(1) because that type of untrusted device is a known type of untrusted device. Otherwise, the operating system118may deduce that an unknown type of untrusted device is connected to the hardware port124(1) if the value(s) of the measured electrical parameter(s) falls outside of a predetermined range(s) of values associated with a known trusted device. An illicit recording device (e.g., an audio recording device) is one type of untrusted device that may be determined to be connected to a hardware port124using the techniques described herein. Another type of illicit external device that may be detected is a USB keyboard emulator134, such as a USB Rubber Ducky sold by Hak5® LLC of San Francisco, CA. For example, the USB keyboard emulator134may be connected to the hardware port124(N) (e.g., a USB port), and the keyboard emulator134may execute a script to send keystrokes directly to the operating system118in an attempt to emulate a legitimate keyboard. A malicious attacker may use the keyboard emulator134in an attempt to exfiltrate sensitive data, such as the customer data110, after the user108authenticates a session106with the remote computing system104. Another type of illicit external device that may be detected is a USB mouse jiggler, which may emulate mouse movements to prevent an automatic screen lockout of the computing device102from occurring. This may allow malicious attackers to gain access to the remote computing system104when the user108walks away from the computing device102, even for a short period of time. Yet another type of illicit external device that may be detected is a key logger, which is a device that logs the keystrokes made by the user108and sends the logged keystrokes via a WiFi interface to another device, such as a device of a malicious attacker who can then see what the user108is typing. 
Some of these illicit devices may be connected to a hardware port124without a legitimate user of the computing device102even knowing that the illicit device has been connected (e.g., due to the small size of some of these devices). If the untrusted keyboard emulator134, for example, is connected to the hardware port124(N), the operating system118may determine that an untrusted device is connected to the hardware port124(N) based on the measured electrical parameter(s) value(s) associated with the hardware port124(N) falling outside of a predetermined value range(s) associated with a trusted device(s) known to the operating system118. For example, in response to determining that an external device is connected to the hardware port124(N), the port meter126(N) measures the electrical parameter(s) associated with the hardware port124(N) to generate a value(s), and the operating system118determines that the value(s) of the electrical parameter(s) is/are not within a predetermined range(s) of values associated with a trusted device, such as the headphones128and/or the supervisor's headphones. Again, in an illustrative example where the electrical parameter is an impedance parameter, the port meter126(N) measures the impedance parameter associated with the hardware port124(N) to determine a value of the impedance parameter. The operating system118has access to a predetermined range of impedance values associated with a trusted device, such as the headphones128, and the operating system118determines that the value of the impedance parameter received from the port meter126(N) is not within that predetermined range of values to make the determination that an untrusted device (e.g., the keyboard emulator134) is connected to the hardware port124(N). Additionally, or alternatively, this type of illicit keyboard emulator device may be a known type of untrusted device, and the operating system118may have access to a predetermined range of values (e.g., impedance values) associated with this known keyboard emulator134. In this scenario, the operating system118may determine that the untrusted keyboard emulator134is connected to the hardware port124(N) if the value of the electrical parameter(s) received from the port meter126(N) is within that predetermined range of values associated with the untrusted keyboard emulator134. In this case, the operating system118may be able to determine the type of device that is connected to the hardware port124(N) because that type of untrusted device is a known type of untrusted device. Otherwise, the operating system118may deduce that an unknown type of untrusted device is connected to the hardware port124(N) if the value(s) of the measured electrical parameter(s) falls outside of a predetermined range(s) of values associated with a known trusted device. Any suitable type of action aimed at protecting the customer data110can be performed (or taken) by the computing device102(e.g., by the operating system118) and/or the remote computing system104in response to determining that an untrusted device is connected to a hardware port124. For example, the computing device102may send, to the remote computing system104, a notification indicative of a connection of the untrusted device to the hardware port124. This notification, when received by the remote computing system104, may allow for a remedial action to be taken, and/or it may apprise relevant personnel about the connection event. 
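Taken together, the preceding examples suggest a three-way decision: a trusted device, a known type of untrusted device (such as the keyboard emulator134), or an unknown device whose measurement matches no baseline at all and is treated as untrusted by default. The sketch below is a simplified model of that logic; the ranges and the take_action callable are assumptions, not values or interfaces from the disclosure.

TRUSTED_RANGES = {"agent_headphones": (28.0, 36.0), "supervisor_headset": (44.0, 52.0)}
KNOWN_UNTRUSTED_RANGES = {"keyboard_emulator": (88.0, 94.0), "recording_device": (60.0, 66.0)}

def classify_connection(measured_value, trusted_ranges, untrusted_ranges):
    # Known trusted baseline: allow. Known untrusted baseline: report the type.
    # Anything else: an unknown device, treated as untrusted by default.
    for name, (low, high) in trusted_ranges.items():
        if low <= measured_value <= high:
            return ("trusted", name)
    for name, (low, high) in untrusted_ranges.items():
        if low <= measured_value <= high:
            return ("untrusted", name)
    return ("untrusted", "unknown_device")

def handle_measurement(measured_value, take_action,
                       trusted_ranges=TRUSTED_RANGES, untrusted_ranges=KNOWN_UNTRUSTED_RANGES):
    verdict, device = classify_connection(measured_value, trusted_ranges, untrusted_ranges)
    if verdict == "untrusted":
        take_action(device)   # e.g., notify the remote computing system, disable the interface
    return verdict, device

print(handle_measurement(90.5, take_action=lambda device: print("remedial action:", device)))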
In some implementations, the user108may be contacted by the service provider about the connection event to determine whether there has been a false positive detection of an untrusted device. In some implementations, the service provider may flag the connection event as a risk signal and continue monitoring the computing device102. In some implementations, the action performed in response to detecting a connection of an untrusted device to a hardware port124may be to disable the computing device102. For example, the operating system118may reboot the computing device102into a mode of operation where it cannot be used by the user108for handling customer queries. As another example, the operating system118may shut down (e.g., power off) the device102and may not allow the user108to establish an authenticated computing session106on a subsequent boot attempt. In some implementations, the action performed in response to detecting a connection of an untrusted device to a hardware port124may be to disable a component(s) of the computing device102, such as the communications interface(s)116, thereby preventing incoming and/or outgoing traffic to and/or from the computing device102. In this manner, disabling the communications interface(s)116may prevent further access to the customer data110by the computing device102. As depicted inFIG.1, the computing device102may further include one or more sensors136that is/are configured to sense one or more parameters associated with the computing device102. In some examples, the value(s) of the sensed parameter(s) may be indicative of a way in which, and/or a location at which, the computing device102is being used. To illustrate, the user108may represent a CSA who is issued the computing device102to use at a home office to work from home. The temperature inside the home office may remain relatively constant, such as within a range of about 65° Fahrenheit (F) to about 75° F. Accordingly, the sensor(s)136may represent a temperature sensor (e.g., a thermistor sensor) that is configured to sense the temperature of the environment surrounding the computing device102. The sensor136can sense the temperature of the environment and send a value(s) of the measured temperature to the operating system118via the hardware bus of the computing device102. The sensor136may be configured to sense the temperature at any suitable time, such as in response to any suitable instruction (e.g., from the operating system118) or event, and/or at any suitable frequency or schedule. In some implementations, the sensor136is configured to periodically sense the temperature to generate a series of values (e.g., temperature values). If the operating system118determines that the sensed temperature value(s) falls outside of a predetermined range of values (e.g., about 65° F. to about 75° F.), the operating system118may determine that the computing device102has likely been moved to another environment, such as outside of the user's108home office. This may be a risk signal that is treated as a supplementary signal to the detection of an untrusted device being connected to a hardware port124, or it may be used independently as a signal that indicates the computing device102is likely being used in an untrusted location. In some examples, the sensor136is configured to measure a temperature parameter within a range of about −10° Celsius (C) to 50° C. (or 14° F. to 122° F.).
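The out-of-range check on ambient temperature readings described above, used either independently or as a supplementary risk signal alongside the port-meter result, might be expressed as in the following sketch. The 65° F. to 75° F. range comes from the example above; the way the two signals are combined is an assumption.

HOME_OFFICE_RANGE_F = (65.0, 75.0)   # expected ambient range from the example above

def location_risk_signal(temperature_f, expected_range=HOME_OFFICE_RANGE_F):
    # True when the sensed ambient temperature falls outside the expected range,
    # suggesting the computing device has likely been moved to another environment.
    low, high = expected_range
    return not (low <= temperature_f <= high)

def collect_risk_signals(temperature_f, untrusted_port_detected):
    # Use the environmental reading independently, or as a supplementary signal
    # alongside a suspect port-meter result.
    signals = []
    if location_risk_signal(temperature_f):
        signals.append("possible untrusted location")
    if untrusted_port_detected:
        signals.append("untrusted device on hardware port")
    return signals

print(collect_risk_signals(82.4, untrusted_port_detected=False))
print(collect_risk_signals(82.4, untrusted_port_detected=True))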
In some implementations, the sensor136is configured to sense the temperature within the computing device102(e.g., inside the housing of the computing device102), the temperature of an electronic component (e.g., a temperature of a processor, such as a CPU), and/or the temperature of, or near, a hardware port124to help detect a connection of an untrusted device to the hardware port124. For instance, based on a connection of a device to a hardware port124, the sensor136may sense a temperature within the device102(e.g., a temperature of, or near, the hardware port124) to generate a temperature value, and the operating system118may determine that the temperature value is outside of a predetermined range of temperature values to determine that an untrusted device has been connected to the hardware port124. For instance, a connection of an untrusted device to a hardware port124may cause the internal temperature of the device102(and/or the temperature of an electronic component) to change/deviate from a baseline (e.g., change to a temperature outside of a predetermined range of temperature values). In some implementations, the sensed temperature within the device102(e.g., of, or near, the hardware port124) is used as a corroborating signal to corroborate an out-of-range electrical parameter measured by the port meter126, and the operating system118may determine, with higher confidence, that an untrusted device is connected to the hardware port124if it detects both (i) an out-of-range electrical parameter value and (ii) an out-of-range temperature parameter value. As another example, the relative humidity inside the home office of the user108may also remain relatively constant. Accordingly, the sensor(s)136may represent a humidity sensor that is configured to sense the humidity of the environment surrounding the computing device102. The sensor136can sense the humidity and send a value(s) of the measured humidity to the operating system118via the hardware bus of the computing device102. The sensor136may be configured to sense the humidity at any suitable time, such as in response to any suitable instruction (e.g., from the operating system118) or event, and/or at any suitable frequency or schedule. In some implementations, the sensor136is configured to periodically sense the humidity to generate a series of values (e.g., humidity values). If the operating system118determines that the sensed humidity value(s) falls outside of a predetermined range of values, the operating system118may determine that the computing device102has likely been moved to another environment, such as outside of the user's108home office. This too may be a risk signal that is treated as a supplementary signal to the detection of an untrusted device being connected to a hardware port124, or it may be used independently as a signal that indicates the computing device102is likely being used in an untrusted location. In some examples, the sensor136is configured to measure a humidity parameter within a range of about 20% Relative Humidity (RH) to 90% RH. In some implementations, the sensor136is configured to sense the humidity within the computing device102(e.g., inside the housing of the computing device102), and/or the humidity near a hardware port124to help detect a connection of an untrusted device to the hardware port124.
For instance, based on a connection of a device to a hardware port124, the sensor136may sense a humidity within the device102(e.g., a humidity near the hardware port124) to generate a humidity value, and the operating system118may determine that the humidity value is outside of a predetermined range of humidity values to determine that an untrusted device has been connected to the hardware port124. For instance, a connection of an untrusted device to a hardware port124may cause the internal humidity of the device102(e.g., the humidity of the air within the housing of the device102) to change/deviate from a baseline (e.g., change to a humidity outside of a predetermined range of humidity values). In some implementations, the sensed humidity within the device102(e.g., near the hardware port124) is used as a corroborating signal to corroborate an out-of-range electrical parameter measured by the port meter126, and the operating system118may determine, with higher confidence, that an untrusted device is connected to the hardware port124if it detects both (i) an out-of-range electrical parameter value and (ii) an out-of-range humidity parameter value. As yet another example, the vibration experienced by the computing device102when used in the home office of the user108may remain within threshold limits. Accordingly, the sensor(s)136may represent a vibration sensor, such as an accelerometer, that is configured to sense the vibrations of the computing device102. The sensor136can sense vibrations and send a value(s) of the measured vibrations to the operating system118via the hardware bus of the computing device102. The sensor136may be configured to sense the vibrations of the computing device102at any suitable time, such as in response to any suitable instruction (e.g., from the operating system118) or event (e.g., movement detected by the sensor136, such as an accelerometer), and/or at any suitable frequency or schedule. In some implementations, the sensor136is configured to periodically sense the vibrations (or lack thereof) to generate a series of values (e.g., vibration values). If the operating system118determines that the sensed vibration value(s) falls outside of a predetermined range of values, the operating system118may determine that the computing device102has likely been used in an untrusted way, such as taken by vehicle to another location outside of the user's108home office. This too may be a risk signal that is treated as a supplementary signal to the detection of an untrusted device being connected to a hardware port124, or it may be used independently as a signal that indicates the computing device102is likely being used in an untrusted way. In some examples, the sensor136is configured to measure a vibration parameter in units of standard gravity (g), in meters per second squared (m/s2), or any other suitable unit of measurement. In some implementations, the sensor136is configured to sense the vibrations within the computing device102(e.g., inside the housing of the computing device102), and/or vibrations of, or near, a hardware port124to help detect a connection of an untrusted device to the hardware port124.
For instance, based on a connection of a device to a hardware port124, the sensor136may sense a vibration within the device102(e.g., of, or near, the hardware port124) to generate a vibration value, and the operating system118may determine that the vibration value is outside of a predetermined range of vibration values to determine that an untrusted device has been connected to the hardware port124. In some implementations, the sensed vibration within the device102(e.g., of, or near, the hardware port124) is used as a corroborating signal to corroborate an out-of-range electrical parameter measured by the port meter126, and the operating system118may determine, with higher confidence, that an untrusted device is connected to the hardware port124if it detects both (i) an out-of-range electrical parameter value and (ii) an out-of-range vibration parameter value. In an example scenario, the user108may transport his/her computing device102to a public location, such as a coffee shop, to handle customer queries from the public location. The public location may provide a public WiFi network to connect to the remote computing system104. In this scenario, the sensor(s)136may sense one or more parameters (e.g., temperature, humidity, and/or vibration) to generate a value(s) of the sensed parameter(s), and the value(s) may be sent to the operating system118. The operating system118may determine, based on the value(s) received from the sensor(s)136, that the computing device102is being used in an untrusted way and/or location. For example, the operating system118may determine that the value(s) received from the sensor(s)136is not within a predetermined range of values associated with normal usage of the computing device102and/or a known location where the computing device102is expected to be used. In this manner, the operating system118may determine, without knowing exactly how or where the computing device102is being used, that it is likely not being used in a trusted way or at a trusted location, such as at the user's108home office, to handle customer queries on behalf of the service provider. Similar actions aimed at protecting the customer data110can be performed (or taken) by the computing device102(e.g., by the operating system118) in response to determining that the computing device102is being used in an untrusted way or location. For example, the computing device102may send, to the remote computing system104, a notification indicative of an out-of-range parameter value(s) (e.g., temperature, humidity, and/or vibration). In some implementations, the action performed in response to determining that the computing device102is being used in an untrusted way or location may be to disable the computing device102, and/or to disable a component(s) of the computing device102, such as the communications interface(s)116, thereby preventing incoming and/or outgoing traffic to and/or from the computing device102. FIG.2illustrates the remote computing system104ofFIG.1in communication with multiple user computing devices102(1) to102(P) (P being any suitable integer). The individual computing devices102shown inFIG.2may be similar to the computing device102introduced inFIG.1in that they are configured to determine whether an untrusted device is connected to a hardware port124of the computing device102, among other things described herein. FIG.2illustrates a first computing device102(1) being used by a first user108(1) in a first home200(1) of the first user108(1). 
For example, the first user108(1) may represent a first CSA that is tasked with handling customer queries from customers of a service provider that maintains and/or operates the remote computing system104. Meanwhile,FIG.2illustrates a second computing device102(2) being used by a second user108(2) in a second home200(2) of the second user108(2), and a Pth computing device102(P) being used by a Pth user108(P) in a Pth home200(P) of the Pth user108(P), P being any suitable integer. Accordingly, the remote computing system104may be in communication with multiple computing devices102(1) to102(P), such as by establishing multiple authenticated computing sessions106(1) to106(P) with the respective computing devices102(1) to102(P). In the example ofFIG.2, the first computing device102(1) is shown as sending measurements202(e.g., data including measured values) to the remote computing system104. Any of the computing devices102(1)-(P) may send measurements202in this manner. These measurements202can include electrical parameter measurements and/or sensor measurements. For example, as described above, the port meter(s)126of the first computing device102(1) is/are configured to measure one or more electrical parameters associated with the hardware port(s)124of the first computing device102(1). The values of the measured electrical parameters (e.g., impedance value(s), voltage value(s), and/or current value(s), etc.) may be sent by the first computing device102(1) to the remote computing system104as the measurements202. As another example, and as described above, the sensor(s)136of the first computing device102(1) is/are configured to measure one or more parameters such as temperature, humidity, and/or vibration associated with the computing device102(1). The values of the sensed parameters (e.g., temperature value(s), humidity value(s), and/or vibration value(s), etc.) may be sent by the first computing device102(1) to the remote computing system104as the measurements202. The remote computing system104may collect, aggregate, store, and/or process the measurements202for various purposes described herein. In an example, the measurements202can be used to determine value ranges204, such as value ranges that are associated with trusted and/or untrusted external devices that users108may connect to their computing devices102. In an illustrative example, the remote computing system104may collect multiple instances of impedance parameter values associated with company-issued headsets128that users108are connecting to their computing devices102, and that the port meters126are measuring when the headsets128are connected. Additionally, or alternatively, such value ranges204(or baselines) can be determined in other offline processes, such as by running tests that involve connecting known external devices to a computing device102and using the port meter(s)126of the computing device102to measure the electrical parameter values associated with a hardware port124when those external devices are connected to the hardware port124. In some implementations, statistics (e.g., average values) can be computed across a large data set based on collected measurements202from many different computing devices102to determine suitable value ranges204that are usable by operating systems118of the computing devices102to determine when untrusted devices are connected to a hardware port124, and/or to determine when the computing devices102are being used in an untrusted way or location.
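The statistics mentioned above, computed across measurements202collected from many computing devices102to arrive at usable value ranges204, could be as simple as a mean-and-spread calculation per device type. The sketch below assumes a plain mean plus-or-minus k standard deviations rule; the disclosure does not prescribe a particular formula.

from statistics import mean, stdev

def derive_value_range(measurements, k=3.0):
    # Turn collected values (e.g., impedance readings taken while company-issued
    # headsets are connected) into a predetermined value range (baseline).
    m, s = mean(measurements), stdev(measurements)
    return (m - k * s, m + k * s)

# Usage: aggregated impedance readings (ohms) reported by deployed devices.
headset_readings = [31.8, 32.1, 31.9, 32.4, 31.7, 32.0, 32.2]
low, high = derive_value_range(headset_readings)
print(round(low, 2), round(high, 2))   # a range usable as a trusted-device baseline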
In general, the measurements202can be collected from multiple computing devices102as users108use the devices102during normal, permissible operation or otherwise. Thus, the measurements202can be used to fingerprint, profile, and/or baseline the typical electrical parameters and/or sensed parameters exhibited over time. Outlier data can be flagged and filtered out of the data set to determine averages and other statistical parameters of the remaining (unfiltered) measurements202. In some implementations, the measurements202may be collected over a threshold time period in order to aggregate a sufficient data set, such as collecting measurements202over a threshold period of days, weeks, or months. As another example, the measurements202can be collected over time, and a sampled set of the measurements202can be selected (e.g., periodically) and used to train a machine learning model(s)206. Machine learning generally involves processing a set of examples (called “training data”) in order to train a machine learning model(s)206. A machine learning model(s)206, once trained, is a learned mechanism that can receive new data as input and estimate or predict a result as output. For example, a trained machine learning model206can comprise a classifier that is tasked with classifying unknown input (e.g., an unknown image) as one of multiple class labels (e.g., labeling the image as a cat or a dog). In some cases, a trained machine learning model206is configured to implement a multi-label classification task (e.g., labeling images as “cat,” “dog,” “duck,” “penguin,” and so on). Additionally, or alternatively, a trained machine learning model206can be trained to infer a probability, or a set of probabilities, for a classification task based on unknown data received as input. In the context of the present disclosure, the unknown input may include values of an electrical parameter(s) (e.g., impedance value(s), voltage value(s), and/or current value(s), etc.) associated with a hardware port124and measured by a port meter126of the computing device102, and the trained machine learning model(s)206may be tasked with outputting a probability of an untrusted device being connected to a hardware port124of the computing device102. In some embodiments, the probability is a variable that is normalized in the range of [0,1]. In some implementations, the trained machine learning model(s)206may output a set of probabilities (e.g., two probabilities), where one probability relates to the probability of an untrusted device being connected to a hardware port124of the computing device102, and the other probability relates to the probability of a trusted device being connected to a hardware port124of the computing device102. The probability that is output by the trained machine learning model(s)206can relate to either of these probabilities (trusted device or untrusted device) to indicate a level of trustworthiness of an external device connected to a hardware port124of the computing device102. In some implementations, the unknown input to the machine learning model(s)206may include values of a sensed parameter (e.g., temperature value(s), humidity value(s), and/or vibration value(s), etc.) sensed by the sensor(s)136of the computing device102, and the trained machine learning model(s)206may be tasked with outputting a probability of the computing device102having been used in an untrusted way and/or location. 
In some implementations, the unknown input to the machine learning model(s)206may include both: (i) values of an electrical parameter(s) associated with a hardware port124and measured by a port meter126of the computing device102and (ii) values of a sensed parameter sensed by the sensor(s)136of the computing device102. The trained machine learning model(s)206may represent a single model or an ensemble of base-level machine learning models, and may be implemented as any type of machine learning model206. For example, suitable machine learning models206for use with the techniques and systems described herein include, without limitation, neural networks, tree-based models, support vector machines (SVMs), kernel methods, random forests, splines (e.g., multivariate adaptive regression splines), hidden Markov model (HMMs), Kalman filters (or enhanced Kalman filters), Bayesian networks (or Bayesian belief networks), expectation maximization, genetic algorithms, linear regression algorithms, nonlinear regression algorithms, logistic regression-based classification models, or an ensemble thereof. An “ensemble” can comprise a collection of machine learning models206whose outputs (predictions) are combined, such as by using weighted averaging or voting. The individual machine learning models of an ensemble can differ in their expertise, and the ensemble can operate as a committee of individual machine learning models that is collectively “smarter” than any individual machine learning model of the ensemble. The training data that is used to train the machine learning model206may include various types of data. In general, training data for machine learning can include two components: features and labels. However, the training data used to train the machine learning model(s)206may be unlabeled, in some embodiments. Accordingly, the machine learning model(s)206may be trainable using any suitable learning technique, such as supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, and so on. The features included in the training data can be represented by a set of features, such as in the form of an n-dimensional feature vector of quantifiable information about an attribute of the training data. As part of the training process, weights may be set for machine learning. These weights may apply to a set of features included in the training data. In some embodiments, the weights that are set during the training process may apply to parameters that are internal to the machine learning model(s) (e.g., weights for neurons in a hidden-layer of a neural network). These internal parameters of the machine learning model(s)206may or may not map one-to-one with individual input features of the set of features. The weights can indicate the influence that any given feature or parameter has on the probability that is output by the trained machine learning model206. FIG.2depicts the remote computing system104sending data to the second computing device102(2), such as data including value ranges204(e.g., value ranges204of an electrical parameter(s), such as impedance, voltage, and/or current, value ranges204of other parameters, such as temperature, humidity, and/or vibration, etc.), and/or data including the trained machine learning model(s)206. Such data may be sent to any of the computing devices102(1)-(P) in this manner. 
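By way of illustration only, training one of the listed model types (here, a logistic regression classifier) on labeled electrical parameter measurements and querying it for a probability can be sketched roughly as follows; the feature values, the labels, and the use of the scikit-learn library are assumptions for the sketch and are not mandated by the disclosure:

    # Illustrative sketch: train a logistic-regression classifier on labeled
    # [impedance, voltage, current] measurements and output a probability, normalized
    # in the range [0, 1], that an untrusted device is connected.
    from sklearn.linear_model import LogisticRegression

    # Each row: [impedance_ohms, voltage_volts, current_amps]; label 1 = untrusted.
    X = [
        [100.2, 5.01, 0.48],  # trusted headset
        [ 99.5, 4.99, 0.51],  # trusted headset
        [101.0, 5.02, 0.50],  # trusted keyboard
        [ 45.3, 5.10, 0.92],  # keyboard emulator (untrusted)
        [ 47.8, 5.08, 0.88],  # keyboard emulator (untrusted)
        [ 44.1, 5.12, 0.95],  # illicit recording device (untrusted)
    ]
    y = [0, 0, 0, 1, 1, 1]

    model = LogisticRegression().fit(X, y)

    # Probability that a newly measured connection corresponds to an untrusted device.
    new_measurement = [[46.0, 5.09, 0.90]]
    print(model.predict_proba(new_measurement)[0][1])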
In the example ofFIG.2, the second computing device102(2) may store the data it receives from the remote computing system104in local memory114, and the operating system118may use this data in conjunction with the measured/sensed values it receives from the port meter(s)126and/or from the sensor(s)136to make a determination as to whether an untrusted device is connected to a hardware port124and/or whether the computing device102is being used in an untrusted way and/or location. For example, the operating system118of the second computing device102(2) may compare a value(s) of an electrical parameter(s) received from a port meter126to a predetermined range(s) of values204associated with a trusted device or an untrusted device to determine whether the received value(s) is/are indicative of a connection of a trusted device or an untrusted device to a hardware port124. Additionally, or alternatively, the operating system118may input a value(s) of an electrical parameter(s) received from a port meter126to the trained machine learning model(s)206to determine, based on the output of the machine learning model(s)206, whether the received value(s) is/are indicative of a connection of a trusted device or an untrusted device to a hardware port124. Accordingly, any computing device102may store data, such as value ranges204and/or a trained machine learning model(s)206, for local processing of parameter values to make determinations without reliance on the remote computing system104for making those determinations. In other implementations, some or all of the processing of parameter values may occur remotely relative to a user computing device102. For example, a computing device102may send (e.g., stream) a measured/sensed value(s) in real-time to the remote computing system104for remote processing of the measured/sensed value(s), and the remote computing system104may make a determination using a value range(s)204and/or a trained machine learning model(s)206, and send a response (e.g., an instruction) back to the computing device102based on the determination. In this manner, the computing device102may receive a response from the remote computing system104in response to sending a value(s) of a measured/sensed parameter(s) to the remote computing system104, and the response from the remote computing system104may inform the computing device102as to whether a connected external device is untrusted or trusted, and may, in some cases, cause the computing device102to perform an action, as described herein. FIG.2also illustrates an example scenario where a user108(P) connects an external device134to a hardware port124of a computing device102(P). The external device134, in the example ofFIG.2, represents a keyboard emulator (e.g., a USB Rubber Ducky). In this example scenario, the computing device102(P) determines that an external device is connected to a hardware port124, and determines, using the port meter126corresponding to the hardware port124, a value(s) of the electrical parameter(s) (e.g., an impedance parameter, a voltage parameter, and/or a current parameter, etc.) associated with the hardware port124. The computing device102(P) (e.g., the operating system118) then determines, based at least in part on the value(s) of the electrical parameter(s), that an untrusted device134is connected to the hardware port124, and performs an action based at least in part on the determining that the untrusted device134is connected to the hardware port124. 
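By way of illustration only, the remote-processing path described above can be sketched roughly as follows; the message fields and the stand-in functions are hypothetical, and in practice the measured value(s) would be streamed over the authenticated computing session106rather than passed as a local function call:

    # Illustrative sketch: the device forwards a measured value to the remote computing
    # system, which evaluates it against a value range and returns an instruction.
    def remote_evaluate(measurement, trusted_range=(90.0, 110.0)):
        """Stand-in for the remote computing system's evaluation of a measured value."""
        low, high = trusted_range
        untrusted = not (low <= measurement["impedance_ohms"] <= high)
        return {"untrusted": untrusted,
                "instruction": "disable_communications" if untrusted else "none"}

    def on_port_meter_reading(impedance_ohms):
        """Stand-in for the device-side logic that forwards a reading and reacts."""
        response = remote_evaluate({"impedance_ohms": impedance_ohms})
        if response["untrusted"]:
            print("Untrusted device detected; applying instruction:",
                  response["instruction"])

    on_port_meter_reading(47.2)   # out of range -> untrusted
    on_port_meter_reading(100.3)  # within range -> no action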
The determination208of the untrusted device134being connected to the hardware port124may involve comparing the value(s) of the electrical parameter(s) to a predetermined range(s) of values204, as described herein, or inputting the value(s) of the electrical parameter(s) to a trained machine learning model(s)206, as described herein. In some examples, the computing device102(P) may determine the type of device that is connected to the hardware port124(e.g., a USB Rubber Ducky). In other examples, the computing device102(P) may deduce that an untrusted device is connected to the hardware port124(P) without knowing what type of untrusted device is connected. In the example ofFIG.2, the action performed by the computing device102(P) is an action of sending, to the remote computing system104, a notification210indicative of a connection of the untrusted device to the hardware port124. Other actions may be performed in lieu of, or in addition to, sending the notification210, such as disabling the computing device102(P) and/or disabling a communications interface(s)116of the computing device102(P). FIG.3illustrates an example printed circuit board (PCB)300of the computing device102ofFIG.1, the PCB300having mounted thereon a plurality of port meters126(1)-(6) to measure an electrical parameter(s) associated with respective hardware ports124(1)-(6) of the computing device102. The PCB300may be disposed internal to the computing device102(e.g., within, and enclosed by, a housing of the device102), and the PCB300may represent a motherboard, a baseboard, or any other suitable computer board. The PCB300may have various electronic components of the computing device102mounted thereon, such as the processor(s)112, the memory114, and the communications interface(s)116introduced inFIG.1. The hardware ports124are mounted at a periphery of the PCB300so that, when the PCB300is disposed within the housing of the computing device102, the hardware ports124are exposed through, and made accessible to the user108via, orifices defined in the housing of the computing device102. In this way, a user108may connect external devices to the hardware ports124. In some implementations, the hardware ports124represent USB ports, HDMI ports, other types of ports, or some combination thereof. The example ofFIG.3shows a PCB300with a total of six hardware ports124(1) to124(6), but six is merely an example number of hardware ports124. The port meters126(1) to126(6) each correspond to one of the hardware ports124(1) to124(6). For example, the port meter126(1) corresponds to (or is associated with) the hardware port124(1), the port meter126(2) corresponds to (or is associated with) the hardware port124(2), and so on and so forth. The port meters126are internal to the computing device102by virtue of being mounted on the PCB300. An individual port meter126may be mounted on the PCB300adjacent a corresponding hardware port124. “Adjacent” in this context can mean “within a threshold distance from” the hardware port124. This threshold distance may be about an inch, which facilitates electrical wiring/connections between the port meter126and the corresponding hardware port124. An individual port meter126may be in the form of a computer chip, an integrated circuit (IC), or any similar electronic component. In some implementations, an individual port meter126is mounted on (e.g., embedded in) the PCB300between the PCB pins302on the PCB300and the hardware port124. 
An example of this configuration is shown inFIG.3with respect to the zoomed-in view of the portion of the hardware port124(3), the port meter126(3), and the PCB pins302corresponding to the hardware port124(3). That is, the port meter126(3) is mounted on the PCB300between the PCB pins302associated with the hardware port124(3) and the connector portion of the hardware port124(3) itself. In some implementations, the PCB pins302represent USB pins, such as Vcc, Data− (D−), Data+ (D+), and Ground (Gnd), which correspond to red, white, green, and black USB pins. Furthermore, the hardware ports124, in some implementations, represent female ports (e.g., female USB ports) that are configured to receive a male connector (e.g., a male USB connector) of an external device. The port meter126(3) is electrically connected to the hardware port124(3) and to the PCB pins302in order to measure the electrical parameter(s) associated with the hardware port124(3). For example, a voltage parameter can be measured as the voltage across the Vcc (Red) and Gnd (Black) pins302to generate a value(s) of the voltage parameter. Similar measurements can be taken to determine other electrical parameters, such as impedance, current, etc. The PCB300is also shown as having mounted thereon a plurality of sensors136(1) to136(3). The sensors136may represent a temperature sensor136(1), a humidity sensor136(2), and a vibration sensor136(3), as described herein. These sensors136are configured to sense parameters such as temperature, humidity, and vibration to generate values of the sensed parameters, which may be received and processed by the operating system118to make a determination as to whether the computing device102is being used in an untrusted way and/or location, as described herein. FIG.3also illustrates example tables304that specify predetermined value ranges204that are usable to implement the techniques described herein. For example, a first table304(1) may specify predetermined value ranges204(1) associated with trusted devices306. Accordingly, the first table304(1) includes a list of trusted devices306, such as trusted Device A306(1), trusted Device B306(2), and so on and so forth for any number of trusted devices306. An example of a trusted device306might be company-issued headphones128that are used by a user108of the computing device102(e.g., by connecting the headphones128to the computing device102) to handle customer calls for a call center. For example, a service provider may issue its employees/contractors a few different types of headphones128or headsets to use when handling customer calls. Other trusted devices might be a company-issued keyboard, mouse, etc. For each trusted device306, the first table304(1) may specify one or more predetermined value ranges204(1) of an electrical parameter(s). For example, the electrical parameter value ranges204(1) may include a predetermined range of values308of an impedance parameter, a predetermined range of values310of a voltage parameter, and/or a predetermined range of values312of a current parameter. Accordingly, the first table304(1) indicates that the trusted Device A306(1) is associated with a predetermined range of values308(1) of an impedance parameter (e.g., specified in ohms), a predetermined range of values310(1) of a voltage parameter (e.g., specified in volts), and a predetermined range of values312(1) of a current parameter (e.g., specified in amperes). 
These value ranges308(1),310(1), and312(1) inform the operating system118of a computing device102as to what values of the electrical parameter(s) to expect when the trusted Device A is connected to a hardware port124of the computing device102, and if the port meter126measures a value(s) within the predetermined value range(s)308(1),310(1), and/or312(1), the operating system118can determine that the connected external device is likely the trusted Device A306(1). Similar value ranges308,310, and312may be specified in the first table304(1) for any number of other trusted devices306. Meanwhile, the second table304(2) includes a list of untrusted devices314, such as untrusted Device A314(1), untrusted Device B314(2), and so on and so forth for any number of untrusted devices314. An example of an untrusted device314might be an illicit audio recording device known to have been connected to hardware ports124of computing devices102in the field, or keyboard emulators, such as a USB Rubber Ducky, a mouse jiggler, and the like. For each untrusted device314, the second table304(2) may specify one or more predetermined value ranges204(2) of an electrical parameter(s). For example, the electrical parameter value ranges204(2) may include a predetermined range of values308of an impedance parameter, a predetermined range of values310of a voltage parameter, and/or a predetermined range of values312of a current parameter. Accordingly, the second table304(2) indicates that the untrusted Device A314(1) is associated with a predetermined range of values308(3) of an impedance parameter (e.g., specified in ohms), a predetermined range of values310(3) of a voltage parameter (e.g., specified in volts), and a predetermined range of values312(3) of a current parameter (e.g., specified in amperes). These value ranges308(3),310(3), and312(3) inform the operating system118of a computing device102as to what values of the electrical parameter(s) to expect when the known untrusted Device A is connected to a hardware port124of the computing device102, and if the port meter126measures a value(s) within the predetermined value range(s)308(3),310(3), and/or312(3), the operating system118can determine that the connected external device is likely the known untrusted Device A314(1). Similar value ranges308,310, and312may be specified in the second table304(2) for any number of other untrusted devices314. The third table304(3) includes a list of sensors136, such as the temperature sensor136(1), the humidity sensor136(2), the vibration sensor136(3), and so on and so forth for any number of sensors136of the computing device102. For each sensor136, the third table304(3) may specify one or more predetermined value ranges204(3) of a parameter. Accordingly, the third table304(3) indicates that the temperature sensor136(1) is associated with predetermined ranges of values316(1) of a temperature parameter (e.g., specified in ° F.) and a predetermined range of values316(2) of the temperature parameter (e.g., specified in ° C.). These value ranges316(1) and316(2) inform the operating system118of a computing device102as to what values of the temperature parameter to expect when the computing device102is used in a trusted location or environment, and if the temperature sensor136(1) senses a value(s) within the predetermined value range(s)316(1) and/or316(2), the operating system118can determine that the computing device102is likely being used in a trusted location. 
Similarly, the third table304(3) indicates that the humidity sensor136(2) is associated with a predetermined range of values318of a humidity parameter (e.g., specified in % RH). This value range318informs the operating system118of a computing device102as to what values of the humidity parameter to expect when the computing device102is used in a trusted location or environment, and if the humidity sensor136(2) senses a value(s) within the predetermined value range318, the operating system118can determine that the computing device102is likely being used in a trusted location. Similarly, the third table304(3) indicates that the vibration sensor136(3) is associated with a predetermined range of values320of a vibration parameter (e.g., specified in standard gravity). This value range320informs the operating system118of a computing device102as to what values of the vibration parameter to expect when the computing device102is used in a trusted way, and if the vibration sensor136(3) senses a value(s) within the predetermined value range320, the operating system118can determine that the computing device102is likely being used in a trusted way. The processes described herein are illustrated as a collection of blocks in a logical flow graph, which represent a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order and/or in parallel to implement the processes. FIG.4is a flow diagram showing aspects of an example process400for determining that an untrusted device is connected to a hardware port124of a computing device102, and performing an action based on the determined connection of the untrusted device. The process400is described, by way of example, with reference to the previous figures. At402, a processor(s)112of a computing device102may determine that an external device is connected to a hardware port124of the computing device102. The external device, as its name implies, is external to the computing device102, and the hardware port is configured to receive a connector of the external device. In some implementations, the hardware port is a USB port (e.g., Type A, Type B, Type C, Standard, Mini, Micro, etc.). In other implementations, the hardware port is a HDMI port, or another type of female hardware port. At404, the processor(s)112may determine, using a port meter126that is internal to the computing device102and electrically connected to the hardware port124, a value(s) of an electrical parameter(s) associated with the hardware port124. The port meter126may be configured to measure any suitable type of electrical parameter(s), such as, without limitation, an impedance parameter, a voltage parameter, and/or a current parameter. In some implementations, the port meter126is a computer chip (e.g., IC) mounted adjacent the hardware port124on a PCB300(e.g., the motherboard) disposed within a housing of the computing device102. 
At406, an operating system118of the computing device102, when executed by the processor(s)112, may determine, based at least in part on the value(s) of the electrical parameter(s), that an untrusted device is connected to the hardware port124. For example, an illicit recording device, a keyboard emulator, a mouse jiggler, or a key logger may have been connected to the hardware port124, either directly or indirectly (e.g., via an additional connector132of a headphone splitter), which caused a change in the electrical parameter(s) to produce the value(s) measured by the port meter126, and, hence, the value(s) is indicative of the untrusted device having been connected to the hardware port124. In some implementations, determining that an untrusted device is connected to the hardware port124at block406may involve the computing device102sending the value(s) of the electrical parameter(s) to a remote computing system104, and the computing device102receiving a response from the remote computing system104, the response informing the computing device102that an untrusted device is connected to the hardware port124. In such an implementation, the remote computing system104may determine that the value(s) is/are outside of a predetermined range(s) of values associated with a trusted device and/or that the value(s) is/are within a predetermined range(s) of values associated with an untrusted device. In some examples, the remote computing system104may provide the value(s) as input to a trained machine learning model(s), and may generate, as output from the trained machine learning model(s), a probability that the untrusted device is connected to the hardware port124. At408, the processor(s)112may perform an action based at least in part on the determining that the untrusted device is connected to the hardware port124. The action performed at block408may include, without limitation, sending, to a remote computing system104, a notification indicative of a connection of the untrusted device to the hardware port124, disabling the computing device102, and/or disabling a component(s) (e.g., a communication interface(s)116) of the computing device102. In this manner, the process400may help detect and prevent unauthorized access to customer data110(e.g., sensitive data and/or resources of customers of a service provider) via the remote computing system104. FIG.5is a flow diagram showing aspects of another example process500for determining whether an untrusted device is connected to a hardware port124of a computing device102, and performing an action based on a determined connection of the untrusted device. The process500is described, by way of example, with reference to the previous figures. At502, a computing device102may establish, via a communications interface(s)116of the computing device102, an authenticated computing session106with a remote computing system104. The remote computing system104may maintain customer data110of a service provider. The authenticated computing session106may be established over any suitable network, such as a WAN (e.g., the Internet), a cellular network, an intranet or an ISP network or a combination of such networks. In some implementations, the authenticated computing session106may represent an encrypted, authenticated session. In some implementations, a VPN is utilized to establish the authenticated computing session106between the computing device102and the remote computing system104, but any suitable type of network access technology can be utilized to establish the session106. 
At504, a processor(s)112of the computing device102may determine that an external device is connected to a hardware port124of the computing device102. The operation(s) performed at block504may be similar to the operation(s) performed at block402of the process400. At506, the processor(s)112may determine, using a port meter126that is internal to the computing device102and electrically connected to the hardware port124, a value(s) of an electrical parameter(s) associated with the hardware port124. The operation(s) performed at block506may be similar to the operation(s) performed at block404of the process400. The determining of the value(s) of the electrical parameter(s) at block506may occur during the authenticated computing session106established at block502. At sub-block508, the port meter126may be used to periodically measure the electrical parameter(s) to generate a series of values of an individual electrical parameter. For example, a periodic measurement interval (e.g., an interval of 100 ms) may be used to measure, using the port meter126, an impedance parameter associated with the hardware port124to generate a series of first values of the impedance parameter that are spaced at 100 ms intervals. Additionally, or alternatively, the port meter126may be used to measure a voltage parameter associated with the hardware port124to generate a series of second values of the voltage parameter that are spaced at 100 ms intervals. Additionally, or alternatively, the port meter126may be used to measure a current parameter associated with the hardware port124to generate a series of third values of the current parameter that are spaced at 100 ms intervals. If the computing device102includes multiple hardware ports124, the operation(s) performed at block506and sub-block508may be repeated using additional port meters126associated with those hardware ports124, each port meter126being used to generate a series of values of an electrical parameter(s). The periodic measuring of the electrical parameter(s) at sub-block508may occur during the authenticated computing session106established at block502. At510, an operating system118of the computing device102may receive the value(s) of the electrical parameter determined at block506. For example, the operating system118may receive a series of values of an electrical parameter(s) generated at sub-block508, such as a series of first values of an impedance parameter and/or a series of second values of a voltage parameter and/or a series of third values of a current parameter. The operating system118may receive values from multiple port meters126at block510. In some implementations, the values are received (e.g., streamed) in real-time from the port meters126, sent in batches, or received in any other suitable manner. In some implementations, the operating system118receives the value(s) via a hardware bus connected to the port meter(s)126. At512, a determination may be made (e.g., by the operating system118, and based on the value(s) of the electrical parameter(s) received at block510) as to whether an untrusted device is connected to a hardware port(s)124of the computing device102. The operation(s) performed at block512may be similar to the operation(s) performed at block406of the process400. Blocks514-518illustrate examples of how the determination can be made at block512. 
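Before turning to blocks514-518, the periodic measurement of sub-block508can be sketched, by way of illustration only, roughly as follows; the read_impedance stand-in, the simulated readings, and the 100 ms interval are assumptions for the sketch:

    # Illustrative sketch: sample an electrical parameter at a fixed interval to build
    # a series of values for the operating system to evaluate.
    import random
    import time

    def read_impedance():
        """Hypothetical stand-in for a port meter reading, in ohms."""
        return random.gauss(100.0, 0.5)

    def sample_series(num_samples=5, interval_s=0.1):
        series = []
        for _ in range(num_samples):
            series.append(read_impedance())
            time.sleep(interval_s)  # 100 ms spacing between measurements
        return series

    print(sample_series())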
At514, the operating system118(or the remote computing system104) may determine that a value of the electrical parameter(s) received at block510(e.g., a value of a series of values received at block510) is not within (or is outside) a predetermined range of values204(1) associated with a trusted device306(e.g., a trusted USB device). For example, the operating system118(or the remote computing system104) may receive a value of an impedance parameter associated with a hardware port124, and by comparing the value to a predetermined range308(1) of impedance values, the operating system118(or the remote computing system104) may determine that the received value is not within (or is outside) the predetermined range308(1) of impedance values. This may be done for values of other types of electrical parameters, such as a voltage parameter and/or a current parameter, associated with the hardware port124. In some implementations, the operating system118(or the remote computing system104) looks for corroborating signals to make the determination at block512(and/or block514). For example, the operating system118(or the remote computing system104) may determine that an untrusted device is connected to a hardware port124if a first value of an impedance parameter is outside a predetermined range of impedance values and a second value of a voltage parameter is outside a predetermined range of voltage values and a third value of a current parameter is outside a predetermined range of current values. In other words, if all three electrical parameters (e.g., impedance, voltage, and current) are measuring outside of predetermined value ranges204(1) associated with trusted devices306, the determination may be made in the affirmative at block512(i.e., that an untrusted device is connected to the hardware port124). In this scenario, if any of the three electrical parameters measure within a predetermined value range204(1) of a trusted device306, the operating system118(or the remote computing system104) may not have enough confidence to make the determination in the affirmative at block512. In other implementations, other corroboration or confidence thresholds can be utilized, such as determining that an untrusted device is connected to a hardware port124if at least two out of three electrical parameters measure outside of predetermined value ranges204(1) associated with trusted devices306. In some implementations, the operating system118(or the remote computing system104) may determine, as a corroborating signal, whether an electrical parameter(s) measures outside of a predetermined value range204(1) associated with a trusted device306for longer than a threshold period of time and/or more than a threshold number of consecutive measurements. For example, if a port meter126streams a series of values of an impedance parameter to the operating system118(which may be forwarded to the remote computing system104), the operating system118(or the remote computing system104) may wait to receive a threshold number of consecutive values of the impedance parameter that are outside of a predetermined range308of values before determining that an untrusted device is connected to a hardware port124. This may allow for ignoring transient spikes of anomalous electrical parameter measurements (e.g., due to interference or the like). 
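By way of illustration only, the range comparison of block514together with the corroboration logic described above can be sketched roughly as follows; the trusted-device ranges, the two-out-of-three rule, and the consecutive-sample threshold are hypothetical values chosen for the sketch:

    # Illustrative sketch: flag an untrusted connection only when enough electrical
    # parameters fall outside the trusted-device ranges on enough consecutive
    # measurements, which ignores transient spikes.
    TRUSTED_RANGES = {  # per-parameter (low, high) ranges for a trusted device
        "impedance_ohms": (90.0, 110.0),
        "voltage_volts": (4.75, 5.25),
        "current_amps": (0.40, 0.60),
    }

    def out_of_range_count(measurement):
        count = 0
        for name, (low, high) in TRUSTED_RANGES.items():
            if not (low <= measurement[name] <= high):
                count += 1
        return count

    def untrusted_detected(series, min_out_of_range=2, consecutive_needed=3):
        consecutive = 0
        for measurement in series:
            if out_of_range_count(measurement) >= min_out_of_range:
                consecutive += 1
                if consecutive >= consecutive_needed:
                    return True
            else:
                consecutive = 0
        return False

    samples = [{"impedance_ohms": 45.0, "voltage_volts": 5.4, "current_amps": 0.9}] * 3
    print(untrusted_detected(samples))  # True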
In some embodiments, the operating system118(or the remote computing system104) may look for other signals (e.g., out-of-range temperature, humidity, and/or vibration values sensed by the sensor(s)136) to corroborate a detection of an out-of-range value of an electrical parameter received from a port meter126. It is also to be appreciated that a predetermined "range" of values, as used herein, may include a range that includes a single value, in some implementations. In other words, the determination at block514might involve determining whether a value received at block510deviates from a single, baseline value associated with a trusted device. In another example, the determination at block514might involve determining whether a value received at block510deviates from a value that was previously measured by a port meter126. That is, if the port meter126associated with a hardware port124measured, at time t1, a first value of an impedance parameter, for example, and then the port meter126subsequently measured, at time t2, a second value of the impedance parameter that is different than the first value (e.g., different by more than a threshold difference/amount), the operating system118(or the remote computing system104) may determine that an untrusted device is connected to the hardware port124. At516, the operating system118(or the remote computing system104) may determine that a value of the electrical parameter(s) received at block510(e.g., a value of a series of values received at block510) is within (or is inside) a predetermined range of values204(2) associated with a known type of untrusted device314(e.g., a known type of untrusted USB device). For example, if a type of untrusted device, such as a keyboard emulator (e.g., a USB Rubber Ducky), is known and is associated with a predetermined range(s) of values204(2) of an electrical parameter, the operating system118(or the remote computing system104) may receive a value of, say, an impedance parameter associated with a hardware port124, and by comparing the value to a predetermined range308(3) of impedance values, the operating system118(or the remote computing system104) may determine that the received value is within (or is inside) the predetermined range308(3) of impedance values associated with the known type of untrusted device314. This may be done for values of other types of electrical parameters, such as a voltage parameter and/or a current parameter, associated with the hardware port124. Again, the operating system118(or the remote computing system104) may look for corroborating signals to make the determination at block512(and/or block516). For example, the operating system118(or the remote computing system104) may determine that an untrusted device is connected to a hardware port124if a first value of an impedance parameter is within a predetermined range of impedance values and a second value of a voltage parameter is within a predetermined range of voltage values and a third value of a current parameter is within a predetermined range of current values. In other words, if all three electrical parameters (e.g., impedance, voltage, and current) are measuring within predetermined value ranges204(2) associated with a known type of untrusted device314, the determination may be made in the affirmative at block512(i.e., that an untrusted device is connected to the hardware port124). 
In this scenario, if any of the three electrical parameters measure outside a predetermined value range204(2) of an untrusted device314, the operating system118(or the remote computing system104) may not have enough confidence to make the determination in the affirmative at block512. In other implementations, other corroboration or confidence thresholds can be utilized, such as determining that an untrusted device is connected to a hardware port124if at least two out of three electrical parameters measure within predetermined value ranges204(2) associated with an untrusted device314. In some implementations, the operating system118(or the remote computing system104) may determine, as a corroborating signal, whether an electrical parameter(s) measures within a predetermined value range204(2) associated with an untrusted device314for longer than a threshold period of time and/or more than a threshold number of consecutive measurements. For example, if a port meter126streams a series of values of an impedance parameter to the operating system118(which may be forwarded to the remote computing system104), the operating system118(or the remote computing system104) may wait to receive a threshold number of consecutive values of the impedance parameter that are within a predetermined range308of values associated with an untrusted device314before determining that an untrusted device is connected to a hardware port124. This may allow for ignoring transient spikes of anomalous electrical parameter measurements (e.g., due to interference or the like). In some embodiments, the operating system118(or the remote computing system104) may look for other signals (e.g., out-of-range temperature, humidity, and/or vibration values sensed by the sensor(s)136) to corroborate a detection of an in-range value of an electrical parameter received from a port meter126. In some implementations, the determination at block516might involve determining whether a value received at block510matches a single, baseline value associated with an untrusted device. "Matching," in this context, can mean within a threshold deviation from a single, baseline value. In general, the determination at block512(and/or blocks514and/or516) may include determining a type of device that is connected to a hardware port124, if a baseline is known for a particular type of device and if the value(s) matches, or is within a value range, associated with the known type of device. At518, the operating system118(or the remote computing system104) may provide the value(s) received at block510as input to a trained machine learning model(s)206, the trained machine learning model(s)206may generate, as output therefrom, a probability that an untrusted device is connected to a hardware port(s)124of the computing device102, and the operating system118(or the remote computing system104) may determine whether the probability meets or exceeds a threshold probability to determine whether an untrusted device is connected to a hardware port(s)124of the computing device102. The trained machine learning model(s)206may be stored locally on the computing device102if the computing device102is not resource constrained. 
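By way of illustration only, the probability thresholding of block518can be sketched roughly as follows; the 0.8 threshold is an assumption, and the model argument stands in for any trained machine learning model(s)206that exposes a predict_proba-style interface (such as the logistic-regression sketch above):

    # Illustrative sketch: pass measured values to a trained model and compare the
    # output probability of an untrusted connection to a threshold.
    PROBABILITY_THRESHOLD = 0.8

    def untrusted_by_model(model, impedance_ohms, voltage_volts, current_amps):
        features = [[impedance_ohms, voltage_volts, current_amps]]
        probability = model.predict_proba(features)[0][1]  # P(untrusted)
        return probability >= PROBABILITY_THRESHOLD

    # Example usage with the model trained in the earlier sketch:
    # if untrusted_by_model(model, 46.0, 5.09, 0.90):
    #     send_notification_to_remote_system()  # hypothetical action at block 520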
In some embodiments, the trained machine learning model(s)206is stored remotely at the remote computing system104, and the computing device102sends data indicative of the value(s) received at block510to the remote computing system104, and remote computing system104inputs the value(s) to the trained machine learning model(s)206to generate an output probability, and the remote computing system104sends data back to the computing device102, in real-time, the data indicating whether an untrusted device is connected to a hardware port(s)124of the computing device102. In this regard, it is to be appreciated that any of the logic described in blocks514and/or516may be performed remotely from the computing device102, such as by the remote computing system104, in some implementations. If, at512, a determination is made (e.g., by the operating system118and/or the remote computing system104, and based on the value(s) of the electrical parameter(s) received at block510) that an untrusted device is not connected to a hardware port(s)124of the computing device102, the process500may follow the NO route from block512to block506, where additional value(s) may be determined using the port meter(s)126. If, on the other hand, a determination is made that an untrusted device (or a device that isn't what it claims to be) is connected to a hardware port(s)124of the computing device102, the process500may follow the YES route from block512to block520. At520, the processor(s)112may perform an action based at least in part on the determining that the untrusted device is connected to the hardware port124. The operation(s) performed at block520may be similar to the operation(s) performed at block408of the process400. Sub-blocks522-526illustrate example actions that may be performed at block520. At sub-block522, the action performed at block520may include sending, to a remote computing system104, a notification indicative of a connection of the untrusted device to the hardware port124. The notification can be sent in any suitable manner using any suitable type of messaging technology (e.g., email, text, output on a display, etc.) At sub-block524, the action performed at block520may include disabling the computing device102. For example, the operating system118may reboot the computing device102into a mode of operation where it cannot be used by the user108for handling customer queries. As another example, the operating system118may shut down (e.g., power off) the device102and may not allow the user108to establish an authenticated computing session106on a subsequent boot attempt. At sub-block526, the action performed at block520may include disabling a component(s) (e.g., a communication interface(s)116) of the computing device102, thereby preventing incoming and/or outgoing traffic to and/or from the computing device102. In this/these manners, the process500may help detect and prevent unauthorized access to customer data110(e.g., sensitive data and/or resources of customers of a service provider) via the remote computing system104. FIG.6is a flow diagram showing aspects of an example process600for sensing abnormal parameter value(s), such as temperature, humidity, and/or vibration, and performing an action based on the abnormal parameter value(s). The process600is described, by way of example, with reference to the previous figures. At602, a computing device102may establish, via a communications interface(s)116of the computing device102, an authenticated computing session106with a remote computing system104. 
The operation(s) performed at block602may be similar to the operation(s) performed at block502of the process500. At604, a processor(s)112of the computing device102may determine, using a sensor(s)136of the computing device102, a value(s) of a sensed parameter(s) associated with the computing device102. The sensor(s)136may include, without limitation, a temperature sensor136(1), a humidity sensor136(2), and/or a vibration sensor136(3). Accordingly, the sensed parameter(s) may include, without limitation, a temperature parameter, a humidity parameter, and/or a vibration parameter associated with the computing device102. As such, the value(s) determined at block604may be indicative of a temperature, a humidity, and/or a vibration associated with the computing device102. At sub-block606, the sensor(s)136may be used to periodically sense the parameter(s) to generate a series of values of an individual sensed parameter. For example, a periodic measurement interval (e.g., an interval of 100 ms) may be used to measure, using the temperature sensor136(1), a temperature of an environment of the computing device102to generate a series of values of the temperature parameter that are spaced at 100 ms intervals. Additionally, or alternatively, other sensors136, such as the humidity sensor136(2) and/or the vibration sensor136(3) may be used to measure respective sensed parameters to generate a respective series of values of those respective sensed parameter that are spaced at 100 ms intervals. The periodic measuring of the sensed parameter(s) at sub-block606may occur during the authenticated computing session106established at block602. At608, an operating system118of the computing device102may receive the value(s) of the sensed parameter(s) determined at block604. For example, the operating system118may receive a series of values of a sensed parameter(s) generated at sub-block606, such as a series of first values of a temperature parameter and/or a series of second values of a humidity parameter and/or a series of third values of a vibration parameter. The operating system118may receive values from multiple sensors136at block608. In some implementations, the values are received (e.g., streamed) in real-time from the sensor(s)136, sent in batches, or received in any other suitable manner. In some implementations, the operating system118receives the value(s) via a hardware bus connected to the sensor(s)136. At610, a determination may be made (e.g., by the operating system118, and based on the value(s) of the sensed parameter(s) received at block608) as to whether the sensed parameter is abnormal (e.g., relative to a baseline). Again, it is to be appreciated that the determination made at block610may include the computing device102sending the value(s) of the sensed parameter(s) to a remote computing system104, and the computing device102receiving a response from the remote computing system104, the response from the remote computing system104informing the computing device102as to whether a sensed parameter(s) is abnormal. At612, for example, the operating system118(or the remote computing system104) may determine that a value of the sensed parameter(s) received at block608(e.g., a value of a series of values received at block608) is not within (or is outside) a predetermined range of values204(3) associated with a “normal” sensed parameter. 
For example, the operating system118(or the remote computing system104) may receive a value of a temperature parameter, and by comparing the value to a predetermined range316(1)/(2) of temperature values, the operating system118(or the remote computing system104) may determine that the received value is not within (or is outside) the predetermined range316(1)/(2) of temperature values. This may be done for values of other types of sensed parameters, such as a humidity parameter and/or a vibration parameter. In some implementations, the operating system118(or the remote computing system104) looks for corroborating signals to make the determination at block610(and/or block612). For example, the operating system118(or the remote computing system104) may determine an abnormality if a first value of a temperature parameter is outside a predetermined range of temperature values and a second value of a humidity parameter is outside a predetermined range of humidity values and a third value of a vibration parameter is outside a predetermined range of vibration values, at least within some threshold timeframe. In other words, if all three sensed parameters (e.g., temperature, humidity, and vibration) are measuring outside of predetermined value ranges204(3) associated with a trusted location and/or use of the computing device102, the determination may be made in the affirmative at block610(i.e., that the computing device102is being used in an untrusted way and/or location). In this scenario, if any of the three sensed parameters measure within a predetermined value range204(3) associated with a trusted use and/or location of the computing device102, the operating system118(or the remote computing system104) may not have enough confidence to make the determination in the affirmative at block610. In other implementations, other corroboration or confidence thresholds can be utilized, such as determining that at least two out of three sensed parameters measure outside of predetermined value ranges204(3) associated with a trusted location and/or trusted use of the computing device. In some implementations, the operating system118(or the remote computing system104) may determine, as a corroborating signal, whether an Internet Protocol (IP) address associated with the computing device102has changed. This change in IP address may be indicative of the user108having moved the computing device102to another location, such as a public place with public WiFi access. In some implementations, the operating system118(or the remote computing system104) may determine, as a corroborating signal, whether a sensed parameter(s) measures outside a predetermined value range204(3) associated with a trusted use and/or location of the computing device102for longer than a threshold period of time and/or more than a threshold number of consecutive measurements. For example, if the temperature sensor136(1) streams a series of values of a temperature parameter to the operating system118(which may be forwarded to the remote computing system104), the operating system118(or the remote computing system104) may wait to receive a threshold number of consecutive values of the temperature parameter that are outside a predetermined range316(1)/(2) of values before determining that the computing device102is being used in an untrusted location. This may allow for ignoring transient spikes of anomalous parameter measurements. 
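By way of illustration only, the abnormality check of block612together with the corroborating signals described above can be sketched roughly as follows; the ranges, the two-signal rule, and the treatment of an IP address change as one corroborating signal are hypothetical choices for the sketch:

    # Illustrative sketch: declare an abnormal environment only when at least two
    # corroborating signals (out-of-range sensed parameters and/or an IP change) agree.
    NORMAL_RANGES = {
        "temperature_f": (65.0, 80.0),
        "humidity_rh": (30.0, 60.0),
        "vibration_g": (0.0, 0.02),
    }

    def abnormal_environment(sensed, ip_changed=False, min_signals=2):
        out_of_range = sum(1 for name, (low, high) in NORMAL_RANGES.items()
                           if not (low <= sensed[name] <= high))
        return (out_of_range + (1 if ip_changed else 0)) >= min_signals

    print(abnormal_environment(
        {"temperature_f": 92.0, "humidity_rh": 75.0, "vibration_g": 0.01}))  # True
    print(abnormal_environment(
        {"temperature_f": 72.0, "humidity_rh": 45.0, "vibration_g": 0.01},
        ip_changed=True))  # False: only one corroborating signal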
To illustrate, a user108may work for quite some time from his/her home, and the temperature sensor136(1) senses a series of temperature values that are fairly stable and within a predetermined range316(1)/(2) of values. Subsequently, the temperature sensor136(1) senses one or more values that are outside the predetermined range316(1)/(2) of values, which is a risk signal that the user108may have changed their environment (e.g., by moving the computing device102somewhere else). If, at610, a determination is made (e.g., by the operating system118and/or the remote computing system104, and based on the value(s) of the sensed parameter(s) received at block608) that the sensed parameter(s) is/are not abnormal, the process600may follow the NO route from block610to block604, where additional value(s) may be determined using the sensor(s)136. If, on the other hand, a determination is made that the sensed parameter(s) is/are abnormal, the process600may follow the YES route from block610to block614. At614, the processor(s)112may perform an action based at least in part on the determining that the sensed parameter(s) is/are abnormal. The operation(s) performed at block614may be similar to the operation(s) performed at block520of the process500, and the operation(s) performed at sub-blocks616-620may be similar to the operation(s) performed at sub-blocks522-526of the process500, except that the action(s) performed is based on determining that the value(s) of the sensed parameter(s) is/are abnormal (e.g., outside a predetermined range(s) of values). It is to be appreciated that the process600may be supplementary to the process500and performed in conjunction with the process500(e.g., in parallel with the process500), and that the actions performed at520and614of the respective processes500and600may be the same action (e.g., sending a notification to a remote computing system104). In this manner, the operating system may use the abnormal sensed parameter as a supplementary risk signal that something is amiss, in conjunction with detecting a connection of an untrusted external device using the process500. FIG.7shows an example computer architecture for a computer700capable of executing program components for implementing the functionality described above. The computer architecture shown inFIG.7illustrates a conventional workstation, desktop computer, laptop, tablet, network appliance, e-reader, smartphone, server computer, or other computing device, and can be utilized to execute any of the software components presented herein. The computer700includes a baseboard702, or “motherboard,” which is a printed circuit board (PCB) to which a multitude of components or devices can be connected by way of a system bus or other electrical communication paths. The baseboard702may be the same as, or similar to, the PCB300ofFIG.3. In one illustrative configuration, one or more central processing units (“CPUs”)704operate in conjunction with a chipset706. The CPUs704can be standard programmable processors that perform arithmetic and logical operations necessary for the operation of the computer700, and the CPUs704may be the same as, or similar to, the processor(s)112ofFIG.1. The CPUs704perform operations by transitioning from one discrete, physical state to the next through the manipulation of switching elements that differentiate between and change these states. 
Switching elements can generally include electronic circuits that maintain one of two binary states, such as flip-flops, and electronic circuits that provide an output state based on the logical combination of the states of one or more other switching elements, such as logic gates. These basic switching elements can be combined to create more complex logic circuits, including registers, adders-subtractors, arithmetic logic units, floating-point units, and the like. The chipset706provides an interface between the CPUs704and the remainder of the components and devices on the baseboard702. The chipset706may represent the “hardware bus” described above, and it can provide an interface to a RAM708, used as the main memory in the computer700. The chipset706can further provide an interface to a computer-readable storage medium such as a read-only memory (“ROM”)710or non-volatile RAM (“NVRAM”) for storing basic routines that help to startup the computer700and to transfer information between the various components and devices. The ROM710or NVRAM can also store other software components necessary for the operation of the computer700in accordance with the configurations described herein. The computer700can operate in a networked environment using logical connections to remote computing devices and computer systems through a network, such as the network712. The chipset706can include functionality for providing network connectivity through a NIC714, such as a gigabit Ethernet adapter. The NIC714may be the same as, or similar to, the communications interface(s)116ofFIG.1, and it is capable of connecting the computer700to other computing devices over the network712. It should be appreciated that multiple NICs714can be present in the computer700, connecting the computer to other types of networks and remote computer systems. The computer700can be connected to a mass storage device716that provides non-volatile storage for the computer. The mass storage device716can store the operating system118, programs718, and data720, to carry out the techniques and operations described in greater detail herein. The mass storage device716can be connected to the computer700through a storage controller722connected to the chipset706. The mass storage device716can consist of one or more physical storage units. The storage controller722can interface with the physical storage units through a serial attached SCSI (“SAS”) interface, a serial advanced technology attachment (“SATA”) interface, a fiber channel (“FC”) interface, or other type of interface for physically connecting and transferring data between computers and physical storage units. The computer700can store data on the mass storage device716by transforming the physical state of the physical storage units to reflect the information being stored. The specific transformation of physical state can depend on various factors, in different implementations of this description. Examples of such factors can include, but are not limited to, the technology used to implement the physical storage units, whether the mass storage device716is characterized as primary or secondary storage, and the like. 
For example, the computer700can store information to the mass storage device716by issuing instructions through the storage controller722to alter the magnetic characteristics of a particular location within a magnetic disk drive unit, the reflective or refractive characteristics of a particular location in an optical storage unit, or the electrical characteristics of a particular capacitor, transistor, or other discrete component in a solid-state storage unit. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this description. The computer700can further read information from the mass storage device716by detecting the physical states or characteristics of one or more particular locations within the physical storage units. In addition to the mass storage device716described above, the computer700can have access to other computer-readable storage media to store and retrieve information, such as program modules, data structures, or other data. It should be appreciated by those skilled in the art that computer-readable storage media is any available media that provides for the non-transitory storage of data and that can be accessed by the computer700. By way of example, and not limitation, computer-readable storage media can include volatile and non-volatile, removable and non-removable media implemented in any method or technology. Computer-readable storage media includes, but is not limited to, RAM, ROM, erasable programmable ROM (“EPROM”), electrically-erasable programmable ROM (“EEPROM”), flash memory or other solid-state memory technology, compact disc ROM (“CD-ROM”), digital versatile disk (“DVD”), high definition DVD (“HD-DVD”), BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information in a non-transitory fashion. In one configuration, the mass storage device716or other computer-readable storage media is encoded with computer-executable instructions which, when loaded into the computer700, transform the computer from a general-purpose computing system into a special-purpose computer capable of implementing the configurations described herein. These computer-executable instructions transform the computer700by specifying how the CPUs704transition between states, as described above. According to one configuration, the computer700has access to computer-readable storage media storing computer-executable instructions which, when executed by the computer700, perform the various processes described above. The computer700can also include computer-readable storage media storing executable instructions for performing any of the other computer-implemented operations described herein. Any of the computer-readable storage media depicted inFIG.7may be the same as, or similar to, the memory114ofFIG.1. The computer700can also include one or more input/output controllers724for receiving and processing input from a number of input devices, such as a keyboard, a mouse, a touchpad, a touch screen, an electronic stylus, or other type of input device. Similarly, an input/output controller724can provide output to a display, such as a computer monitor, a flat-panel display, a digital projector, a printer, or other type of output device. 
It is to be appreciated that the computer700might not include all of the components shown inFIG.7, can include other components that are not explicitly shown inFIG.7, or can utilize an architecture completely different than that shown inFIG.7. Although the subject matter presented herein has been described in language specific to computer structural features, methodological acts, and computer readable media, it is to be understood that the appended claims are not necessarily limited to the specific features, acts, or media described herein. Rather, the specific features, acts, and media are disclosed as example forms of implementing the claims. The subject matter described above is provided by way of illustration only and should not be construed as limiting. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure. Various modifications and changes can be made to the subject matter described herein without following the example configurations and applications illustrated and described, and without departing from the true spirit and scope of the following claims. | 117,646 |
11861053 | DETAILED DESCRIPTION As contemplated by this disclosure, persistent DIMMs that maintain a state of data following power down may pose a greater security risk to data compared to non-persistent DIMMs that include only volatile memory devices. Some techniques to mitigate these risks may include use of tamper resistant tape wrapped around a memory module such as a persistent DIMM. Any tampering of the memory module may be detected by visual inspection of the tamper resistant tape. For example, broken tape portions may cause color changes around the broken tape portions. However, some types of persistent DIMMs such as those including byte or block addressable types of non-volatile memory having a 3-dimensional (3-D) cross-point memory structure that includes, but is not limited to, chalcogenide phase change material (e.g., chalcogenide glass) hereinafter referred to as “3-D cross-point memory”, may have operational thermal properties that may make tamper resistant tape ineffective (e.g., it melts) and/or interfere with thermal heat mitigation. Even if tamper resistant tape could be designed to work with operational thermal properties of persistent DIMMs having chalcogenide 3-D cross-point memory, these tape techniques may be incapable of providing any type of electronic detection and signaling. Further, tamper resistant tape only provides a visual indication of tampering and does not prevent an adversary from extracting data. Other techniques may include tamper mechanisms such as tamper switches. The tamper switches may be triggered responsive to mechanical disturbances when an adversary attempts to physically tamper with a memory module. Once triggered, the tamper switch activates tamper circuitry to erase data. These tamper mechanism techniques may have limited reliability and sensitivity. For example, setting the trigger to capture relatively small mechanical disturbances may result in triggering the tamper switch during normal operation. Yet adjusting the trigger to higher levels of mechanical disturbance may increase the likelihood of an adversary defeating tamper switches. A type of exotic tamper mechanism used in military or high security government intelligence agencies may include tamper vibration sensors in a memory module. Triggering of a tamper vibration sensor causes a controlled explosion that physically shatters the memory module. An explosive shattering of a memory module may protect data but it destroys the memory module and may not be a suitable solution for most types of operations that may use persistent DIMMs. FIG.1illustrates an example system100. In some examples, as shown inFIG.1, system100includes a circuit board101(e.g., a printed circuit board). As shown inFIG.1, circuit board101may include processor sockets110-1and110-2and modules112-1to112-16. System100, for example, may be included in a computing platform that includes, but is not limited to, a server. For these examples, modules112-1to112-16may be configured as DIMMs inserted in slots (not shown) on circuit board101. Modules112-1to112-16may be configured as DIMMs in a similar form factor as DIMMs described in one or more standards promulgated by the Joint Electron Device Engineering Council (JEDEC). For example, JEDEC has described DIMM form factors associated with the JESD79-4A (DDR4) or JESD 79-5 (DDR5) standards. Modules112-1to112-16may include only persistent DIMMs or may include any combination of persistent and non-persistent DIMMs.
In one example, modules112-1to112-7may be arranged to couple with a first processor (not shown) inserted in processor socket110-1and modules112-8to112-16may be arranged to couple with a second processor (not shown) inserted in processor socket110-2. As described in more detail below, persistent memory modules included in modules112-1to112-16may be manufactured to include a combination of passive and active tamper detection elements to protect data stored in non-volatile memory devices resident on these persistent memory modules. The data may, for example, be generated by first or second processors inserted in processor sockets110-1and110-2while these processors execute an application or process an application workload. FIG.2illustrates first example views of a module200. In some examples, as shown inFIG.2, the first example views include a side view201and a side view202that depict views of two separate sides of module200. As shown inFIG.2, side view201shows a device cover210-1. Dashed lines for non-volatile memory (NVM) devices230-1to230-6indicate these memory devices are located behind (not visible) device cover210-1. Dashed lines for controller240indicate that controller240is also located behind device cover210-1. NVM devices230-1to230-6and controller240may be attached to or couple with a printed circuit board (PCB)220, a portion of which is visible at the bottom edge of module200. Side view201also shows contacts220-1that may couple with a first set of contacts included in a slot of a circuit board (e.g., circuit board101). Module200may be in a similar form factor as a DIMM described in one or more JEDEC standards such as but not limited to the JESD79-4A (DDR4) standard or the JESD 79-5 (DDR5) standard. Contacts220-1may be arranged in a similar manner as described in the JESD79-4A (DDR4) or JESD 79-5 (DDR5) standards. As shown inFIG.2, side view202shows a device cover210-2. Dashed lines for non-volatile memory (NVM) devices230-7to230-12also indicate these memory devices are located behind (not visible) device cover210-2. Dashed lines for volatile memory device250also indicate that volatile memory device250is located behind device cover210-2. NVM devices230-7to230-12and volatile memory device250may be attached to or couple with PCB220, a portion of which is visible at the bottom edge of module200. Side view202shows contacts220-2that may couple with a second set of contacts included in a slot of a circuit board (e.g., circuit board101). As mentioned above for contacts220-1, contacts220-2may be arranged in a similar manner as described in the JESD79-4A (DDR4) or JESD 79-5 (DDR5) standards. In some examples, device cover210-1and device cover210-2may serve as heat spreaders to facilitate dissipation of thermal energy generated from NVM devices230-1to230-12, controller240or volatile memory device250while module200is in operation (e.g., powered on). For these examples, device covers210-1and210-2may be a type of metal plate or other type of material capable of absorbing and dissipating at least a portion of the generated thermal energy. An example type of metal may include, but is not limited to, anodized aluminum. According to some examples, volatile memory device250may serve as a type of buffer or cache for read or write access to NVM devices230-1to230-12.
Although not shown inFIG.3, module200may include power loss imminent (PLI) circuitry (e.g., batteries and/or capacitors—not shown) to enable data stored in volatile memory device250to be moved to non-volatile memory devices as part of an expected or unexpected power down or power loss event. An ability to preserve data responsive to a PLI event may classify module200as a type of persistent memory module. As disclosed herein, reference to non-volatile memory devices such as NVM devices230-1to230-12may include one or more different non-volatile memory types that may be byte or block addressable types of non-volatile memory such as 3-D cross-point memory. Non-volatile types of memory may also include other types of byte or block addressable non-volatile memory such as, but not limited to, single or multi-level phase change memory (PCM), resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, resistive memory including a metal oxide base, an oxygen vacancy base and a conductive bridge random access memory (CB-RAM), a spintronic magnetic junction memory, a magnetic tunneling junction (MTJ) memory, a domain wall (DW) and spin orbit transfer (SOT) memory, a thyristor based memory, a magnetoresistive random access memory (MRAM) that incorporates memristor technology, spin transfer torque MRAM (STT-MRAM), or a combination of any of the above. As disclosed herein, reference to volatile memory devices such as volatile memory device250may include one or more different volatile memory types. Volatile types of memory may include, but are not limited to, random-access memory (RAM), Dynamic RAM (DRAM), double data rate synchronous dynamic RAM (DDR SDRAM), static random-access memory (SRAM), thyristor RAM (T-RAM) or zero-capacitor RAM (Z-RAM). FIG.3illustrates second example views of module200. In some examples, as shown inFIG.3, the second example views include a side view301and a side view302that depict views of two separate sides of module200with respective device covers210-1and210-2removed and flipped to show a back-side. For these examples, device cover210-1may have a character pattern360and device cover210-2may have a character pattern370. Character patterns360and370may be the same pattern or separate patterns that represent per-module unique character patterns that may be generated during manufacture and/or assembling of module200by spraying, painting, or drawing a conductive ink on a back side of respective device covers210-1and210-2. The per-module unique character patterns, for example, may include alphabetic characters, number characters or symbol characters. The characters included in the per-module unique character pattern may be arranged in a pattern that connects the characters to enable a current to flow from an input contact to an output contact. For example, from input contact362to output contact364for character pattern360or from input contact372to output contact374for character pattern370. The conductive ink may include, but is not limited to, carbon ink, a conductive polymer ink, or a metal nanoparticle ink (e.g., copper, silver or gold). In some examples, the conductive ink may be a color that matches the back side color of device covers210-1and210-2or is clear/colorless to make character patterns360and370nearly invisible. The black “pattern” shown inFIG.3is shown to more clearly depict connected characters sprayed, painted, or drawn on the back side of device covers210-1and210-2.
According to some examples, as shown inFIG.3, side view301shows that PCB220includes an input stub320and an output stub322. For these examples, input stub320is to connect to input contact362and output stub322is to connect to output contact364when device cover210-1is placed over PCB220. Device cover210-1may be placed to be in close contact with NVM devices230-1to230-6. Input stub320and output stub322may be of sufficient height to rise above the height of NVM devices230-1to230-6in order to couple with respective input contact362and output contact364. As described in more detail below, controller240includes circuitry241. Circuitry241may include circuitry or logic to determine a resistance value for character pattern360based on an applied input voltage and corresponding input current through input stub320/input contact362, character pattern360and resulting output voltage and current outputted through output contact364/output stub322. Also, as described more below, controller240may include a control register (CR)242that is used by the circuitry or logic of circuitry241to store the determined resistance value for character pattern360. In some examples, circuitry241of controller240may include additional circuits or logic such as, but not limited to, an analog to digital converter (ADC) (not shown) to convert measured resistance values into a number (digital format) that is then stored by circuitry241in CR242. For these examples, these digital values may be several bits long (e.g., 32-64 bits) and may provide fine-grained, custom-formatted values. According to some examples, as shown inFIG.3, side view302shows that PCB220includes an input stub324and an output stub326. For these examples, input stub324is to connect to input contact372and output stub326is to connect to output contact374when device cover210-2is placed over PCB220. Similar to device cover210-1, device cover210-2may be placed to be in close contact with NVM devices230-7to230-12. Input stub324and output stub326may be of sufficient height to rise above the height of NVM devices230-7to230-12in order to couple with respective input contact372and output contact374. As described more below, circuitry241of controller240may include circuitry or logic used to implement booting or power up actions of module200. These actions may include determining a resistance value for character pattern370based on a voltage and current applied through input stub324/input contact372, character pattern370and output contact374/output stub326. As described more below, CR242may also be used by the boot related circuitry or logic of circuitry241to store the determined resistance value for character pattern370. FIG.4illustrates a first example of a sub-system400to measure resistance of character patterns360and370. For this first example, sub-system400is included in module200and includes controller240, circuitry241, CR242, input stubs320,324, output stubs322,326, input contacts362,372, output contacts364,374and character patterns360,370as shown inFIG.3and described above. For these examples, sub-system400, as shown inFIG.4, also includes traces402,404(e.g., metal traces) that allow for an input voltage (Vin) and an input current (Iin) to be applied through input stub320/input contact362and then through character pattern360to result in an output voltage (Vout) and an output current (Iout) through output contact364/output stub322.
Sub-system400, as shown inFIG.4, also includes traces406,408(e.g., metal traces) that allow for a Vin and an Iin to be applied through input stub324/input contact372and then through character pattern370to result in a Vout and an Iout through output contact374/output stub326. In some examples, as shown inFIG.4, circuitry241of controller240includes control circuitry441and sense circuitry443. Sense circuitry443may include circuitry or logic to cause a Vin to be applied that causes an Iin to flow via trace402through input stub320and input contact362. Current may then flow across character pattern360and an Iout flows through output contact364and output stub322and is outputted via trace404. Sense circuitry443may measure Vout and Iout on trace404. In some examples, control circuitry441may obtain the measured Vout and Iout and determine a resistance value (Rvalue) for character pattern360based on Rvalue=Vout/Iout. Control circuitry441may then cause the determined Rvalue to be stored to CR242. For example, as described more below, CR242may have bits that can be selectively set to indicate the determined Rvalue for character pattern360. In other examples, sense circuitry443, rather than control circuitry441, may determine Rvalue and selectively set the bits of CR242to indicate the determined Rvalue. According to some examples, sense circuitry443may cause a Vin to be applied that causes an Iin to flow via trace406through input stub324and input contact372. Current may then flow across character pattern370and an Iout flows through output contact374and output stub326and is outputted via trace408. Sense circuitry443may measure Vout and Iout on trace408. In some examples, control circuitry441may obtain the measured Vout and Iout and determine an Rvalue for character pattern370based on Rvalue=Vout/Iout. Control circuitry441may then cause the determined Rvalue to be stored to CR242. In other examples, sense circuitry443, rather than control circuitry441, may determine Rvalue and cause the determined Rvalue to be stored to CR242. As mentioned briefly above, circuitry241may include an ADC. The ADC may convert the determined Rvalue into a number and cause the number to be stored to CR242. In some examples, respective Rvalues for character patterns360and370may be initially determined during manufacturing of module200. For these examples, upon a first boot or power up of module200, the Rvalues for character patterns360and370are determined and then stored to CR242as base Rvalues. As described more below, the base Rvalues for character patterns360and370may be compared to Rvalues determined following subsequent boots or power ups of module200, and tamper protocols or policies may be enacted if the comparison indicates a difference in Rvalues that is greater than a threshold amount. In other words, a difference that indicates possible tampering. The possible tampering may have included removal of device cover210-1or device cover210-2. The removal of device covers210-1or210-2may have caused at least portions of respective character patterns360or370to be altered (e.g., some of the conductive ink scraped off). In some examples, adhesive or sticky material may attach device covers210-1and210-2to the memory devices, and breaking that attachment may increase the likelihood that character patterns360or370are altered upon removal of device covers210-1or210-2. As a result of being altered, determined Rvalues for character pattern360or370may noticeably change between boots of module200. FIG.5illustrates an example register table500.
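Before turning to register table500, the resistance determination just described for sub-system400can be illustrated with a short sketch in C. None of this code is part of the disclosure: it simply assumes, for the sake of example, that sense circuitry hands control circuitry a measured Vout and Iout, that Rvalue is computed as Vout/Iout, and that an ADC-style step maps the result onto one of eight 0.01-ohm ranges; the type and function names and the 0.040-ohm floor are hypothetical.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical sensed output values (volts, amperes) handed over by sense circuitry. */
typedef struct {
    double v_out;
    double i_out;
} sensed_sample_t;

/* Rvalue = Vout / Iout, per the relationship described for sub-system 400. */
static double resistance_from_sample(sensed_sample_t s)
{
    return s.v_out / s.i_out;
}

/*
 * ADC-style step: map a measured resistance onto a small digital code, here one
 * of eight 0.01-ohm ranges. The 0.040-ohm floor is an assumed example value that
 * echoes the ranges later given for register table 500.
 */
static uint8_t resistance_to_code(double r_value)
{
    const double floor_ohms = 0.040;
    const double step_ohms = 0.010;
    if (r_value < floor_ohms)
        return 0;
    double idx = (r_value - floor_ohms) / step_ohms;
    return (idx > 7.0) ? 7u : (uint8_t)idx; /* clamp to a 3-bit field */
}

int main(void)
{
    sensed_sample_t s = { .v_out = 0.0052, .i_out = 0.10 }; /* roughly 0.052 ohms */
    printf("encoded Rvalue range: %u\n", resistance_to_code(resistance_from_sample(s)));
    return 0;
}

In this sketch it is the encoded range, rather than the raw resistance, that would be written to a control register such as CR242.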
In some examples, as shown inFIG.5, register table500may be for an 8 bit register (examples are not limited to an 8 bit register). For these examples, the 8 bit register includes a Base_Rvalue in bits [2:0]. Bits [2:0] may be selectively asserted to indicate up to 8 resistance ranges for a character pattern or patterns measured following a first boot of a module such as module200. For example, each range may cover a range of 0.01 ohms (e.g., 0.040 to 0.049, 0.050 to 0.059, etc.). The 8 bit register also includes Most_Recent_Rvalue in bits [5:3]. Bits [5:3] may be selectively asserted to indicate up to 8 resistance ranges for a character pattern or patterns measured following a most recent boot of the module. The 8 bit register also includes Tamper_Flag in bit [6]. Bit [6] may be asserted if logic and/or circuitry of a controller for the module determines that a comparison of the Base_Rvalue indicated in bits [2:0] to the Most_Recent_Rvalue indicated in bits [5:3] indicates tampering of the module. In some examples, a Tamper_Flag indication may also be saved in some portion of non-volatile memory to permanently advertise that a module has been tampered with. This permanent advertisement may be used for future forensic investigations. In other examples, the controller may cause a programmable fuse bit to be activated to indicate tampering of the memory module. As described more below, asserting bit [6] may be an initial part of tamper protocols or policies enacted based on detected tampering. The 8 bit register also includes Debug_Flag in bit [7]. As described more below, bit [7] may be asserted to disable any tamper response actions (but not detection) to allow for debugging of the module. In some examples, bit [7] may only be asserted via a tightly controlled debug interface that allows only authorized access to cause logic and/or features of the controller for the module to assert or de-assert bit [7]. In some examples, the tightly controlled debug interface may only allow or limit disabling of tamper response in relation to a pre-manufacturing life cycle of the module. FIG.6illustrates a second example of sub-system400to measure resistance of character patterns360and370. According to some examples, the second example of sub-system400is post manufacturing or at a non-first boot of module200. For example, a boot up in a computing platform deployed in a data center. For these examples, as shown inFIG.6, character pattern360includes altered portions601that include the “a”, first “t” and “r” of “pattern” being slightly altered. Also, character pattern370includes altered portions602that include the “a”, first “t” and “e” of “pattern” being slightly altered. These alterations may have resulted in some conductive ink being scraped off during removal of device covers210-1and210-2. Further alterations may have also resulted when the device covers210-1and210-2were placed back over NVM devices230-1to230-12. According to some examples, Rvalues for character patterns360and370with respective altered portions601and602as shown inFIG.6, are determined by logic and/or features of controller240as described above for sub-system400. For these examples, controller240may selectively assert bits [5:3] of CR242to record the Rvalue as a Most_Recent_Rvalue. Logic and/or features of controller240(e.g., control circuitry441implementing firmware logic) may obtain the Base_Rvalue from bits [2:0] of CR242and compare it to the Most_Recent_Rvalue.
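The layout of register table500, together with the comparison controller240is described as performing, can likewise be sketched. The following header-style C fragment is illustrative only: the macro and function names are assumptions, and treating any mismatch between the two stored 3-bit range codes as exceeding the tolerance is a simplification made to keep the example short. Keeping the response gate separate from detection mirrors the statement above that bit [7] disables tamper response actions but not detection.

#include <stdint.h>
#include <stdbool.h>

/* Bit assignments follow register table 500. */
#define CR_BASE_SHIFT 0                        /* Base_Rvalue, bits [2:0] */
#define CR_BASE_MASK (0x7u << CR_BASE_SHIFT)
#define CR_RECENT_SHIFT 3                      /* Most_Recent_Rvalue, bits [5:3] */
#define CR_RECENT_MASK (0x7u << CR_RECENT_SHIFT)
#define CR_TAMPER_FLAG (1u << 6)               /* Tamper_Flag, bit [6] */
#define CR_DEBUG_FLAG (1u << 7)                /* Debug_Flag, bit [7] */

static inline uint8_t cr_set_base(uint8_t cr, uint8_t code)
{
    return (uint8_t)((cr & ~CR_BASE_MASK) | ((code & 0x7u) << CR_BASE_SHIFT));
}

static inline uint8_t cr_set_recent(uint8_t cr, uint8_t code)
{
    return (uint8_t)((cr & ~CR_RECENT_MASK) | ((code & 0x7u) << CR_RECENT_SHIFT));
}

static inline uint8_t cr_base(uint8_t cr) { return (uint8_t)((cr & CR_BASE_MASK) >> CR_BASE_SHIFT); }
static inline uint8_t cr_recent(uint8_t cr) { return (uint8_t)((cr & CR_RECENT_MASK) >> CR_RECENT_SHIFT); }

/* Detection: a mismatch between the stored range codes stands in for "outside
 * the predetermined tolerance"; bit [6] is asserted to record it. */
static inline uint8_t cr_flag_if_mismatch(uint8_t cr)
{
    if (cr_base(cr) != cr_recent(cr))
        cr = (uint8_t)(cr | CR_TAMPER_FLAG);
    return cr;
}

/* Response actions are gated separately by the debug flag, so detection still
 * occurs while the module is being debugged. */
static inline bool cr_response_enabled(uint8_t cr)
{
    return (cr & CR_DEBUG_FLAG) == 0;
}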
For this example, the comparison will show that altered portions601and602caused a detectable change (e.g., a delta>0.01 ohms) in Rvalues since manufacturing. The logic and/or features of controller240may then assert bit [6] of CR242to indicate that tampering has been detected. The assertion of bit [6] of CR242serves as an immutable bit that indicates module200has been tampered with since manufacturing and on subsequent boots of module200tamper policies may be implemented. These tamper policies may include, but are not limited to, alerting of a tamper detection, causing all data stored to NVM devices230-1to230-12to be erased, preventing/restricting decryption of data stored to NVM devices230-1to230-12, or deactivating module200. In some examples, certain flavors of tamper detection and resistance policies may be allowed to be re-configured by a user during the user's first boot (e.g., as an opt-in mechanism). FIG.7illustrates an example logic flow700. In some examples, logic flow700may illustrate actions by logic and/or features of a controller for a persistent memory module. For these examples, logic flow700may be implemented by circuitry and/or logic of a controller for a persistent memory module such as circuitry241included in controller240of module200as mentioned above forFIGS.2-6. Also, a control register used by the circuitry and/or logic of the controller may be set or programmed as indicated in register table500mentioned above forFIG.5. The registers may be set or programmed by control circuitry441or sense circuitry443of circuitry241as shown inFIG.4or6. Examples are not limited to circuitry241included in controller240as shown inFIGS.2-4and6or to register bits indicated in register table500shown inFIG.5to implement at least portions of logic flow700. Starting at decision block705, a determination is made as to whether a module is being booted for the first time. For example, an initial boot or power up following assembly at a manufacturer. If a first boot, logic flow700moves to block710. Otherwise, logic flow700moves to block725. Moving from block705to block710, sense circuitry443of controller circuitry241senses resistance of character patterns360and370sprayed on a back side of device covers210-1and210-2(e.g., heat spreader plates) covering NVM devices230-1to230-12. Moving to block715, control circuitry441of circuitry241may assert bits [2:0] of CR242to indicate Base_Rvalues. Moving to block720, module200is powered down. In some examples, the power down may follow other operations unrelated to tamper detection. Moving from decision block705to decision block725, control circuitry441may determine whether module200has been placed in a debug mode. If in debug mode, logic flow700moves to block730. Otherwise, logic flow700moves to block745. Moving from decision block725to block730, control circuitry441may assert bit [7] of CR242to indicate that module200is in a debug mode. Moving to block735, debug operations are completed for module200and bit [7] of CR242is de-asserted to indicate that module200is no longer in a debug mode. Moving to block740, module200is powered down. Moving from decision block725to block745, sense circuitry443senses resistance of character patterns360and370and control circuitry441determines Rvalues and asserts bits [5:3] of CR242to store Most_Recent_Rvalues for patterns360and370.
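The boot-time portion of logic flow700just described (decision block705through block745) might be expressed in firmware roughly as follows. This is only a sketch under stated assumptions, not the controller's actual code: the function is written in a pure form that takes the current register value, the 3-bit range code just measured by sense circuitry, and the boot context, and returns an updated register value using the bit positions of register table500. The comparison performed at decision block750is taken up in the passage that follows the sketch.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

static uint8_t boot_record_rvalue(uint8_t cr, uint8_t measured_code,
                                  bool is_first_boot, bool debug_mode)
{
    if (is_first_boot) {
        /* Blocks 710/715: store Base_Rvalue in bits [2:0]. */
        return (uint8_t)((cr & 0xF8u) | (measured_code & 0x07u));
    }
    if (debug_mode) {
        /* Blocks 730/735: note debug mode in bit [7]; tamper responses are suppressed. */
        return (uint8_t)(cr | 0x80u);
    }
    /* Block 745: store Most_Recent_Rvalue in bits [5:3]. */
    return (uint8_t)((cr & 0xC7u) | ((measured_code & 0x07u) << 3));
}

int main(void)
{
    uint8_t cr = 0x00u;
    cr = boot_record_rvalue(cr, 2u, true, false);  /* first boot: base range code 2 */
    cr = boot_record_rvalue(cr, 3u, false, false); /* later boot: most recent code 3 */
    printf("CR242-style register after two boots: 0x%02X\n", cr); /* prints 0x1A */
    return 0;
}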
Moving to decision block750, control circuitry441of circuitry241may implement firmware to compare the Base_Rvalues maintained in bits [2:0] of CR242to the Most_Recent_Rvalues maintained in bits [5:3] to determine whether the most recent Rvalues of patterns360and370are within a predetermined tolerance (e.g., within 0.01 ohms of each other). If the compared Rvalues are within the predetermined tolerance, logic flow700moves to block755. Otherwise, logic flow700moves to block765. Moving to block755, module200continues with normal operation. In other words, no tamper detection protocols or policies are activated. Moving to block760, module200is powered down. Moving from decision block750to block765, control circuitry441may set or assert bit [6] of CR242to indicate detection of tampering of module200. Moving to block770, module200continues with following an adopted tamper detection policy. In some examples, following the adopted tamper detection policy may occur during the next boot. In any case, module200will not allow access to previously stored data maintained in NVM devices230-1to230-12when tampering is detected. Moving to block775, module200is powered down. In some examples, logic flow700moves to logic flow800(B) shown inFIG.8rather than moving back to the beginning of logic flow700(A). The movement to logic flow800is responsive to the setting of bit [6] of CR242to indicate detected tampering. FIG.8illustrates an example logic flow800. In some examples, logic flow800may illustrate actions by logic and/or features of a controller for a persistent memory module for which tampering has been detected as mentioned above for logic flow700. For these examples, similar to logic flow700, logic flow800may be implemented by circuitry and/or firmware logic of a controller for a persistent memory module such as control circuitry441of circuitry241included in controller240as mentioned above forFIGS.2-6. Starting at block805, module200is booted up. Moving to block810, control circuitry441may read bit [6] of CR242and, based on bit [6] being asserted, detects that the tamper bit has been asserted. Moving to decision block815, control circuitry441determines which policy action to implement. If an alert policy action, logic flow800moves to block825. If a deactivation policy, logic flow800moves to block820. If other policy actions, which may include any combination of alert, deactivation, restriction or other tamper-related policies, logic flow800moves to block830. Moving from decision block815to block820, control circuitry441may initiate a deactivation policy that causes module200to become inoperable. Actions may include preventing access to NVM devices230-1to230-12or preventing decryption of any encrypted data stored in NVM devices230-1to230-12. Moving from decision block815to block825, control circuitry441may cause an alert to be generated. In some examples, the alert may indicate to an operator of a computing platform in which module200may be inserted that tampering of module200has been detected. For these examples, the operator may take corrective actions such as removing all sensitive data from module200and allowing only non-sensitive data to be stored to module200. Moving from decision block815to block830, control circuitry441may initiate other policy actions that may include a combination of alerting, deactivating, restricting or other tamper-related policies for use of module200. For example, erasing at least a portion (or all) of the data stored to NVM devices230-1to230-12.
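The policy dispatch of logic flow800(blocks815through830) might be sketched as follows. The policy selector and the action stubs are assumptions made purely for illustration; the disclosure leaves the particular combination of alerting, deactivating, restricting or erasing to the adopted tamper policy, and a real implementation would act on the NVM devices and the platform rather than print messages.

#include <stdint.h>
#include <stdio.h>

#define CR_TAMPER_FLAG (1u << 6) /* bit [6] of the CR242-style register */

/* Hypothetical policy selector; which policy is adopted is implementation specific. */
typedef enum { POLICY_ALERT, POLICY_DEACTIVATE, POLICY_COMBINED } tamper_policy_t;

static void policy_alert(void) { printf("ALERT: tampering detected on this module\n"); }
static void policy_deactivate(void) { printf("Module deactivated: NVM access withheld\n"); }
static void policy_combined(void) { printf("Alerting, restricting access and erasing data\n"); }

/* Logic flow 800: on boot, read the tamper bit (block 810) and dispatch the
 * adopted policy action (decision block 815 and blocks 820/825/830). */
static void handle_boot_with_tamper_bit(uint8_t cr, tamper_policy_t policy)
{
    if ((cr & CR_TAMPER_FLAG) == 0)
        return; /* no tampering recorded: continue normal operation */
    switch (policy) {
    case POLICY_ALERT: policy_alert(); break;           /* block 825 */
    case POLICY_DEACTIVATE: policy_deactivate(); break; /* block 820 */
    case POLICY_COMBINED: policy_combined(); break;     /* block 830 */
    }
}

int main(void)
{
    uint8_t cr = 0x5Au; /* example register value with the tamper bit asserted */
    handle_boot_with_tamper_bit(cr, POLICY_ALERT);
    return 0;
}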
Moving from any of blocks820,825or830to block835, module200is powered down. In some examples, if module200is powered on or booted up again, logic flow800may be restarted. FIG.9illustrates an example block diagram for apparatus900. Although apparatus900shown inFIG.9has a limited number of elements in a certain topology, it may be appreciated that apparatus900may include more or fewer elements in alternate topologies as desired for a given implementation. According to some examples, apparatus900may be supported by circuitry920of a controller such as circuitry241of controller240for a memory module such as module200. Circuitry included in circuitry920such as control circuitry922-1or sense circuitry922-2may be arranged to execute logic or one or more firmware implemented modules, components or features of the logic. Also, "module", "component" or "feature" may include firmware stored in computer-readable or machine-readable media (e.g., non-volatile memory media maintained at or accessible to controller240), and although types of circuitry are shown inFIG.9as discrete boxes, this does not limit these types of features to being implemented by distinct hardware components (e.g., separate application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs)). According to some examples, circuitry920may include one or more ASICs or FPGAs and, in some examples, at least some of control circuitry922-1or sense circuitry922-2may be implemented as hardware elements of these ASICs or FPGAs. In some examples, as shown inFIG.9, circuitry920may include control circuitry922-1and sense circuitry922-2. For these examples, control circuitry922-1may determine, responsive to a first boot of the memory module, a first resistance value for a character pattern sprayed on a side of a heat spreader cover that faces non-volatile memory devices resident on a first side of a PCB of the memory module, wherein the character pattern is sprayed on using conductive ink. First boot905may indicate to control circuitry922-1to determine the first resistance value. Also, for these examples, sense circuitry922-2may sense the output current and voltage from the character pattern and provide the outputted current and voltage to enable control circuitry922-1to determine the first resistance value. Control circuitry922-1may store this first resistance value to a register accessible to circuitry920. Base_Rvalue930, for example, may include the first resistance value stored to the register. According to some examples, control circuitry922-1may determine, responsive to a second boot of the memory module, a second resistance value for the character pattern. Second boot910may indicate to control circuitry922-1to determine the second resistance value. For these examples, sense circuitry922-2may sense the output current and voltage from the character pattern and provide the outputted current and voltage to enable control circuitry922-1to determine the second resistance value. Control circuitry922-1may store this second resistance value to the register accessible to circuitry920. Most recent Rvalue935, for example, may include the second resistance value stored to the register. In some examples, control circuitry922-1may assert a bit of the register to indicate tampering of the memory module based on the second resistance value not matching the first resistance value within a threshold resistance value. For these examples, tamper indication940may indicate assertion of the bit.
The bit asserted to be separate from any bits used to store the first and second resistance values to the register. Various components of apparatus900may be communicatively coupled to each other by various types of communications media to coordinate operations. The coordination may involve the uni-directional or bi-directional exchange of information. For instance, the components may communicate information in the form of signals communicated over the communications media. The information can be implemented as signals allocated to various signal lines. In such allocations, each message is a signal. Further embodiments, however, may alternatively employ data messages. Such data messages may be sent across various connections. Example connections include parallel interfaces, serial interfaces, and bus interfaces. Included herein is a set of logic flows representative of example methodologies for performing novel aspects of the disclosed architecture. While, for purposes of simplicity of explanation, the one or more methodologies shown herein are shown and described as a series of acts, those skilled in the art will understand and appreciate that the methodologies are not limited by the order of acts. Some acts may, in accordance therewith, occur in a different order and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all acts illustrated in a methodology may be required for a novel implementation. A logic flow may be implemented in software, firmware, and/or hardware. In software and firmware embodiments, a logic flow may be implemented by computer executable instructions stored on at least one non-transitory computer readable medium or machine readable medium, such as an optical, magnetic or semiconductor storage. The embodiments are not limited in this context. FIG.10illustrates an example logic flow1000. Logic flow1000may be representative of some or all of the operations executed by one or more logic, features, or devices described herein, such as apparatus900. More particularly, logic flow1000may be implemented by control circuitry922-1. According to some examples, logic flow1000at block1002may determine, following a first boot of a memory module, a first resistance value for a character pattern sprayed on a side of a heat spreader cover that faces non-volatile memory devices resident on a first side of a PCB of the memory module, the character pattern sprayed on using conductive ink. For these examples, control circuitry922-1determines the first resistance value. In some examples, logic flow1000at block1004may determine, following a second boot of the memory module, a second resistance value for the character pattern. For these examples, control circuitry922-1determines the second resistance value. According to some examples, logic flow1000at block1006may assert a bit of a register accessible to circuitry of a controller resident on the first side or the second side of the PCB to indicate tampering of the memory module based on the second resistance value not matching the first resistance value within a threshold resistance value. For these examples, control circuitry922-1may assert the bit to indicate tampering of the memory module. FIG.11illustrates an example storage medium1100. In some examples, storage medium1100may be an article of manufacture. 
Storage medium1100may include any non-transitory computer readable medium or machine readable medium, such as an optical, magnetic or semiconductor storage. Storage medium1100may store various types of computer executable instructions, such as instructions to implement logic flow1000. Examples of a computer readable or machine readable storage medium may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of computer executable instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. The examples are not limited in this context. FIG.12illustrates an example computing platform1200. In some examples, as shown inFIG.12, computing platform1200may include a memory system1230, a processing component1240, other platform components1250or a communications interface1260. According to some examples, computing platform1200may be implemented in a computing device. According to some examples, memory system1230may include a controller1232and memory device(s)1234. For these examples, circuitry of controller1232may execute at least some processing operations or logic for apparatus900and may include storage media that includes storage medium1100. Also, memory device(s)1234may include similar types of volatile or non-volatile memory (not shown) that are described above for non-volatile memory devices230-1to230-12and volatile memory device250shown inFIGS.2-3. According to some examples, Processing components1240may include various hardware elements, software elements, or a combination of both. Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, management controllers, companion dice, circuits, processor circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, programmable logic devices (PLDs), digital signal processors (DSPs), FPGAs, memory units, logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. Examples of software elements may include software components, programs, applications, computer programs, application programs, device drivers, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (APIs), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given example. In some examples, other platform components1250may include common computing elements, memory units (that include system memory), chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components (e.g., digital displays), power supplies, and so forth. 
Examples of memory units or memory devices included in other platform components1250may include without limitation various types of computer readable and machine readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, an array of devices such as Redundant Array of Independent Disks (RAID) drives, solid state memory devices (e.g., USB memory), solid state drives (SSD) and any other type of storage media suitable for storing information. In some examples, communications interface1260may include logic and/or features to support a communication interface. For these examples, communications interface1260may include one or more communication interfaces that operate according to various communication protocols or standards to communicate over direct or network communication links. Direct communications may occur via use of communication protocols or standards described in one or more industry standards (including progenies and variants) such as those associated with the PCIe specification, the NVMe specification or the I3C specification. Network communications may occur via use of communication protocols or standards such those described in one or more Ethernet standards promulgated by the Institute of Electrical and Electronics Engineers (IEEE). For example, one such Ethernet standard promulgated by IEEE may include, but is not limited to, IEEE 802.3-2018, Carrier sense Multiple access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications, Published in August 2018 (hereinafter “IEEE 802.3 specification”). Network communication may also occur according to one or more OpenFlow specifications such as the OpenFlow Hardware Abstraction API Specification. Network communications may also occur according to one or more Infiniband Architecture specifications. Computing platform1200may be part of a computing device that may be, for example, user equipment, a computer, a personal computer (PC), a desktop computer, a laptop computer, a notebook computer, a netbook computer, a tablet, a smart phone, embedded electronics, a gaming console, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a main frame computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, multiprocessor systems, processor-based systems, or combination thereof. Accordingly, functions and/or specific configurations of computing platform1200described herein, may be included or omitted in various embodiments of computing platform1200, as suitably desired. The components and features of computing platform1200may be implemented using any combination of discrete circuitry, ASICs, logic gates and/or single chip architectures. Further, the features of computing platform1200may be implemented using microcontrollers, programmable logic arrays and/or microprocessors or any combination of the foregoing where suitably appropriate. 
It is noted that hardware, firmware and/or software elements may be collectively or individually referred to herein as “logic”, “circuit” or “circuitry.” It should be appreciated that the exemplary computing platform1200shown in the block diagram ofFIG.12may represent one functionally descriptive example of many potential implementations. Accordingly, division, omission or inclusion of block functions depicted in the accompanying figures does not infer that the hardware components, circuits, software and/or elements for implementing these functions would necessarily be divided, omitted, or included in embodiments. One or more aspects of at least one example may be implemented by representative instructions stored on at least one machine-readable medium which represents various logic within the processor, which when read by a machine, computing device or system causes the machine, computing device or system to fabricate logic to perform the techniques described herein. Such representations, known as “IP cores” and may be similar to IP blocks. IP cores may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. Various examples may be implemented using hardware elements, software elements, or a combination of both. In some examples, hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, PLDs, DSPs, FPGAs, memory units, logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. In some examples, software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation. Some examples may include an article of manufacture or at least one computer-readable medium. A computer-readable medium may include a non-transitory storage medium to store logic. In some examples, the non-transitory storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. In some examples, the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, API, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. 
According to some examples, a computer-readable medium may include a non-transitory storage medium to store or maintain instructions that when executed by a machine, computing device or system, cause the machine, computing device or system to perform methods and/or operations in accordance with the described examples. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The instructions may be implemented according to a predefined computer language, manner or syntax, for instructing a machine, computing device or system to perform a certain function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language. Some examples may be described using the expression “in one example” or “an example” along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the example is included in at least one example. The appearances of the phrase “in one example” in various places in the specification are not necessarily all referring to the same example. Some examples may be described using the expression “coupled” and “connected” along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, descriptions using the terms “connected” and/or “coupled” may indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled” or “coupled with”, however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. To the extent various operations or functions are described herein, they can be described or defined as software code, instructions, configuration, and/or data. The content can be directly executable (“object” or “executable” form), source code, or difference code (“delta” or “patch” code). The software content of what is described herein can be provided via an article of manufacture with the content stored thereon, or via a method of operating a communication interface to send data via the communication interface. A machine readable storage medium can cause a machine to perform the functions or operations described and includes any mechanism that stores information in a form accessible by a machine (e.g., computing device, electronic system, etc.), such as recordable/non-recordable media (e.g., read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). A communication interface includes any mechanism that interfaces to any of a hardwired, wireless, optical, etc., medium to communicate to another device, such as a memory bus interface, a processor bus interface, an Internet connection, a disk controller, etc. The communication interface can be configured by providing configuration parameters and/or sending signals to prepare the communication interface to provide a data signal describing the software content. The communication interface can be accessed via one or more commands or signals sent to the communication interface. The follow examples pertain to additional examples of technologies disclosed herein. Example 1. An example apparatus may include a controller to reside on a printed circuit board (PCB) of a memory module. 
The controller may include circuitry to determine, responsive to a first boot of the memory module, a first resistance value for a character pattern sprayed on a side of a heat spreader cover that faces non-volatile memory devices resident on a first side of the PCB, wherein the character pattern is to be sprayed on using conductive ink. The circuitry may also determine, responsive a second boot of the memory module, a second resistance value for the character pattern. The circuitry may also assert a bit of a register accessible to the circuitry to indicate tampering of the memory module based on the second resistance value not matching the first resistance value within a threshold resistance value. Example 2. The apparatus of example 1, the circuitry may also store the first resistance value to a first set of bits of the register accessible to the circuitry of the controller. The circuitry may also store the second resistance value to a second set of bits of the register, wherein the first and second set of bits do not include the bit asserted to indicate tampering of the memory module. Example 3. The apparatus of example 2, the circuitry may also convert the first resistance value to a first digital formatted number and store the first digital formatted number to the first set of bits of the register. The circuitry may also convert the second resistance value to a second digital formatted number and store the second digital formatted number to the second set of bits of the register. Example 4. The apparatus of example 1, the circuitry may also detect, responsive to a third boot of the memory module, the asserted bit of the register that indicates tampering of the memory module. The circuitry may also initiate a tamper policy that includes a policy to deactivate the memory module, a policy to generate an alert to a user of the memory module that tampering was detected, a policy that prevents decryption of encrypted data stored in the non-volatile memory devices, or a policy that erases at least a portion of data stored in the non-volatile memory devices. Example 5. The apparatus of example 1, the circuitry may also cause a tamper indication to be stored in a physical memory address of at least one of the non-volatile memory devices or cause a programmable fuse bit to be activated to indicate tampering of the memory module. Example 6. The apparatus of example 1, the character pattern may include a per-module unique character pattern sprayed on the heat spreader using the conductive ink in a pattern that connects characters to enable a current to flow through the conductive ink from an input contact on the heat spreader cover to an output contact on the heat spreader cover. Example 7. The apparatus of example 1, the conductive ink may include a carbon ink, a conductive polymer ink, or metal nanoparticle ink. Example 8. The apparatus of example 1, the memory module may be a dual in-line memory module (DIMM) that also includes second non-volatile memory devices resident on a second side of the PCB and a second heat spreader cover that has a second character pattern sprayed on a side of the second heat spreader that faces the second non-volatile memory devices, wherein the second character pattern is sprayed using conductive ink. Example 9. 
The apparatus of example 8, the circuitry to determine the first resistance value and the second resistance value may further include the circuitry to determine, responsive to the first boot of the memory module, the first resistance value based on resistance values of the character pattern and the second character pattern. The circuitry may also determine, responsive to the second boot of the memory module, the second resistance value based on resistance values of the character pattern and the second character pattern. Example 10. The apparatus of example 1, the non-volatile memory devices may include a byte or block addressable type of non-volatile memory having a 3-dimensional (3-D) cross-point memory structure that includes chalcogenide phase change material. Example 11. The apparatus of example 1, the first boot of the memory module may include an initial boot of the memory module following assembly of the memory module at a manufacturer. Example 12. An example method may include determining, following a first boot of a memory module, a first resistance value for a character pattern sprayed on a side of a heat spreader cover that faces non-volatile memory devices resident on a first side of a printed circuit board (PCB) of the memory module, wherein the character pattern is sprayed on using conductive ink. The method may also include determining, following a second boot of the memory module, a second resistance value for the character pattern. The method may also include asserting a bit of a register accessible to circuitry of a controller resident on the first side or a second side of the PCB to indicate tampering of the memory module based on the second resistance value not matching the first resistance value within a threshold resistance value. Example 13. The method of example 12 may also include storing the first resistance value to a first set of bits of the register accessible to the circuitry of the controller. The method may also include storing the second resistance value to a second set of bits of the register, wherein the first and second set of bits do not include the bit asserted to indicate tampering of the memory module. Example 14. The method of example 13, may also include converting the first resistance value to a first digital formatted number and storing the first digital formatted number to the first set of bits of the register. The method may also include converting the second resistance value to a second digital formatted number and store the second digital formatted number to the second set of bits of the register. Example 15. The method of example 12, may also include detecting, following a third boot of the memory module, the asserted bit of the register that indicates tampering of the memory module. The method may also include initiating a tamper policy that includes a policy to deactivate the memory module, a policy to generate an alert to a user of the memory module that tampering was detected, a policy that prevents decryption of encrypted data stored in the non-volatile memory devices, or a policy that erases at least a portion of data stored in the non-volatile memory devices. Example 16. The method of example 15 may also include causing a tamper indication to be stored in a physical memory address of at least one of the non-volatile memory devices or causing a programmable fuse bit to be activated to indicate tampering of the memory module. Example 17. 
The method of example 12, the character pattern may include a per-module unique character pattern sprayed on the heat spreader using the conductive ink in a pattern that connects characters to enable a current to flow through the conductive ink from an input contact on the heat spreader cover to an output contact on the heat spreader cover. Example 18. The method of example 12, the conductive ink may include a carbon ink, a conductive polymer ink, or metal nanoparticle ink. Example 19. The method of example 12, the memory module may include a dual in-line memory module (DIMM) that also includes second non-volatile memory devices resident on a second side of the PCB and a second heat spreader cover that has a second character pattern sprayed on a side of the second heat spreader that faces the second non-volatile memory devices, wherein the second character pattern is sprayed using conductive ink. Example 20. The method of example 19, determining the first resistance value and the second resistance value may include determining, following the first boot of the memory module, the first resistance value based on resistance values of the character pattern and the second character pattern. The method may also include determining, following the second boot of the memory module, the second resistance value based on resistance values of the character pattern and the second character pattern. Example 21. The method of example 12, the non-volatile memory devices may include a byte or block addressable type of non-volatile memory having a 3-dimensional (3-D) cross-point memory structure that includes chalcogenide phase change material. Example 22. The method of example 12, the first boot of the memory module may include an initial boot of the memory module following assembly of the memory module at a manufacturer. Example 23. An example at least one machine readable medium may include a plurality of instructions that in response to being executed by a system may cause the system to carry out a method according to any one of examples 12 to 22. Example 24. An example apparatus may include means for performing the methods of any one of examples 12 to 22. Example 25. An example dual in-line memory module (DIMM) may include a printed circuit board (PCB). The DIMM may also include a first non-volatile memory devices resident on a first side of the PCB. The DIMM may also include a second non-volatile memory devices resident on a second side of the PCB. The DIMM may also include a first heat spreader cover having a first character pattern sprayed on a side facing the first non-volatile memory devices. The first character pattern may be sprayed on using conductive ink. The DIMM may also include a second heat spreader cover having a second character pattern sprayed on a side facing the second non-volatile memory devices. The second character pattern may be sprayed on using conductive ink. The DIMM may also include a controller resident on the first side of the PCB. The controller may include circuitry to determine, responsive to a first boot of the DIMM, a first resistance value for the first and second character patterns. The circuitry may also determine, responsive to a second boot of the DIMM, a second resistance value for the character pattern. The circuitry may also assert a bit of a register accessible to the circuitry to indicate tampering of the DIMM based on the second resistance value not matching the first resistance value within a threshold resistance value. Example 26. 
The DIMM of example 25, may also include the circuitry to store the first resistance value to a first set of bits of the register accessible to the circuitry. The circuitry may also store the second resistance value to a second set of bits of the register, wherein the first and second set of bits do not include the bit asserted to indicate tampering of the DIMM. Example 27. The DIMM of example 26, may also include the circuitry to convert the first resistance value to a first digital formatted number and store the first digital formatted number to the first set of bits of the register. The circuitry may also convert the second resistance value to a second digital formatted number and store the second digital formatted number to the second set of bits of the register. Example 28. The DIMM of example 25, may also include the circuitry to detect, following a third boot of the DIMM, the asserted bit of the register that indicates tampering of the DIMM. The circuitry may also initiate a tamper policy that includes a policy to deactivate the DIMM, a policy to generate an alert to a user of the DIMM that tampering was detected, a policy that prevents decryption of encrypted data stored in the first or second non-volatile memory devices, or a policy that erases at least a portion of data stored in the first or second non-volatile memory devices. Example 29. The DIMM of example 28 may also include the circuitry to cause a tamper indication to be stored in a physical memory address of at least one of the non-volatile memory devices or cause a programmable fuse bit to be activated to indicate tampering of the memory module. Example 30. The DIMM of example 25, the character pattern may include a per-DIMM unique character pattern sprayed on the first and second heat spreaders using the conductive ink in separate patterns that connect characters to enable currents to flow through the conductive ink from respective input contacts on the first heat spreader cover and the second heat spreader cover to respective output contacts on the first heat spreader cover and the second heat spreader cover. Example 31. The DIMM of example 25, conductive ink may include a carbon ink, a conductive polymer ink, or metal nanoparticle ink. Example 32. The DIMM of example 25, the first and second non-volatile memory device may include byte or block addressable types of non-volatile memory having a 3-dimensional (3-D) cross-point memory structure that includes chalcogenide phase change material. Example 33. The DIMM of example 25, the first boot of the DIMM may include an initial boot of the DIMM following assembly of the DIMM at a manufacturer. It is emphasized that the Abstract of the Disclosure is provided to comply with 37 C.F.R. Section 1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single example for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed examples require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed example. 
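As a purely illustrative, non-limiting sketch of the boot-time check described in examples 1 to 3 and 12 to 14 above, the following Python-style example models how controller circuitry might compare a measured resistance of the sprayed character pattern against the value recorded at the first boot and assert a tamper bit when the two values do not match within a threshold. The register layout, the threshold value, and all names (TamperRegister, to_digital, boot_check) are assumptions made for illustration and are not taken from the disclosure; an actual controller would realize this logic in hardware or firmware rather than in Python.

    # Minimal sketch, assuming a hypothetical register layout and helper names.
    from dataclasses import dataclass

    TAMPER_BIT = 1 << 0          # bit asserted to indicate tampering (cf. example 1)
    THRESHOLD_OHMS = 5.0         # assumed threshold resistance value

    @dataclass
    class TamperRegister:
        value: int = 0

        def store_first(self, digital: int):
            # first set of bits, placed at bits 1..16 here (cf. examples 2 and 3)
            self.value |= (digital & 0xFFFF) << 1

        def store_second(self, digital: int):
            # second set of bits, placed at bits 17..32 here
            self.value |= (digital & 0xFFFF) << 17

        def assert_tamper(self):
            self.value |= TAMPER_BIT

        def tampered(self) -> bool:
            return bool(self.value & TAMPER_BIT)

    def to_digital(ohms: float) -> int:
        # convert an analog resistance reading to a digital formatted number
        return int(round(ohms * 10))

    def boot_check(reg: TamperRegister, first_boot_ohms: float, current_ohms: float):
        # compare the current boot's resistance with the first-boot value
        reg.store_first(to_digital(first_boot_ohms))
        reg.store_second(to_digital(current_ohms))
        if abs(current_ohms - first_boot_ohms) > THRESHOLD_OHMS:
            reg.assert_tamper()   # values do not match within the threshold

On a later boot, firmware could test reg.tampered() and apply one of the tamper policies of example 4, such as deactivating the module or preventing decryption of stored data.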
Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate example. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein,” respectively. Moreover, the terms “first,” “second,” “third,” and so forth, are used merely as labels, and are not intended to impose numerical requirements on their objects. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. | 63,361 |
11861054 | MODES FOR CARRYING OUT THE PREFERRED EMBODIMENTS Hereinafter, embodiments of a moving robot and a method for controlling the moving robot according to the present disclosure will be described in detail with reference to the accompanying drawings; the same reference numerals are used to designate the same/like components, and redundant description thereof will be omitted. In describing the technologies disclosed in the present disclosure, if a detailed explanation of a related known function or construction is considered to unnecessarily divert from the idea of the technologies in the present disclosure, such explanation has been omitted but would be understood by those skilled in the art. It should be noted that the attached drawings are provided to facilitate understanding of the technical idea disclosed in this specification, and should not be construed as limiting the technical idea to the attached drawings. Hereinafter, an embodiment of a moving robot (hereinafter referred to as “robot”) according to the present disclosure will be described. The robot may refer to a robot capable of autonomous traveling, a lawn-mowing moving robot, a lawn mowing robot, a lawn mowing device, or a moving robot for lawn mowing. As illustrated inFIG.1, the robot100includes a main body10provided with a handle H, a driving unit11moving the main body10, a sensing unit12sensing one or more pieces of state (or status) information of the main body10, a communication unit13communicating with a communication target element of the robot100, an output unit14displaying a control screen of the robot100, and a controller20determining position (or location) information of the main body10based on at least one of a result of sensing by the sensing unit12and a result of communication by the communication unit13, and controlling the driving unit11such that the main body10travels within a travel area. The controller20may control the driving unit11, the sensing unit12, the communication unit13, and the output unit14. The controller20may control driving of the driving unit11, the sensing unit12, the communication unit13, and the output unit14so that each of these units performs its respective function. That is, the controller20may control driving of the driving unit11, the sensing unit12, the communication unit13, and the output unit14to control operation of the robot100. The controller20may determine current position information of the main body10based on at least one of the result of sensing and the result of communication to control the driving unit11such that the main body10travels in the travel area1000, and to display status information regarding operation and control of the robot100on the control screen via the output unit14. In the robot100including the main body10, the driving unit11, the sensing unit12, the communication unit13, the output unit14, and the controller20, when an anti-theft mode designed to prevent the robot100from being stolen is set, the controller20detects a theft occurrence of the robot100based on the sensing result and the position information, and controls driving of at least one of the driving unit11, the communication unit13, or the output unit14to restrict operation of the robot100. That is, when the anti-theft mode is set, the operation of the robot100is restricted by controlling the driving of at least one of the driving unit11, the communication unit13, and the output unit14.
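Purely as an organizational sketch, and not as part of the disclosure, the units enumerated above can be pictured as cooperating components owned by the controller20. The Python class and attribute names below are hypothetical placeholders chosen only to make the relationships concrete.

    # Minimal structural sketch, assuming hypothetical class and attribute names.
    from dataclasses import dataclass

    @dataclass
    class DrivingUnit:            # moves the main body via the driving wheels
        powered: bool = True

    @dataclass
    class SensingUnit:            # grip and inclination sensing
        handle_gripped: bool = False
        inclination_deg: float = 0.0

    @dataclass
    class CommunicationUnit:      # talks to the terminal / transmission devices
        def send(self, message: dict) -> None:
            print("to communication target:", message)

    @dataclass
    class OutputUnit:             # displays the control screen
        powered: bool = True

    class Controller:
        # owns the units, determines position, and runs the anti-theft mode
        def __init__(self):
            self.driving = DrivingUnit()
            self.sensing = SensingUnit()
            self.comm = CommunicationUnit()
            self.output = OutputUnit()
            self.anti_theft_mode = False

        def restrict_operation(self) -> None:
            # cut off power to the driving and output units when theft is detected
            self.driving.powered = False
            self.output.powered = False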
As shown inFIGS.2and3, the robot100may be an autonomous traveling robot including the main body10configured to be movable so as to cut a lawn. The main body10forms an outer shape (or appearance) of the robot100and is provided with the handle H. The main body10may include one or more elements performing operation such as traveling of the robot100and lawn cutting. The main body10includes the driving unit11that may move the main body10in a desired direction and rotate the main body10. The driving unit11may include a plurality of rotatable driving wheels. Each of the driving wheels may individually rotate so that the main body10rotates in a desired direction. In detail, the driving unit11may include at least one main driving wheel11aand an auxiliary wheel11b. For example, the main body10may include two main driving wheels11a, and the two main driving wheels may be installed on a rear lower surface of the main body10. The robot100may travel by itself within a travel area1000shown inFIG.4. The robot100may perform particular operation during traveling. Here, the particular operation may be cutting a lawn in the travel area1000. The travel area1000is a target area in which the robot100is to travel and operate. A predetermined outside and outdoor area may be provided as the travel area1000. For example, a garden, a yard, or the like in which the robot100is to cut a lawn may be provided as the travel area1000. A charging apparatus500for charging the robot100with driving power may be installed in the travel area1000. The robot100may be charged with driving power by docking with the charging apparatus500installed in the travel area1000. The travel area1000may be provided as a boundary area1200that is predetermined, as shown inFIG.4. The boundary area1200corresponds to a boundary line between the travel area1000and an outside area1100, and the robot100may travel within the boundary area1200not to deviate from the outside area1100. In this case, the boundary area1200may be formed to have a closed curved shape or a closed-loop shape. Also, in this case, the boundary area1200may be defined by a wire formed to have a shape of a closed curve or a closed loop. The wire1200may be installed in an arbitrary area. The robot100may travel in the travel area1000having a closed curved shape formed by the installed wire1200. As shown inFIG.4, a transmission device200may be provided in plurality in the travel area1000. The transmission device200is a signal generation element configured to transmit a signal to determine position (or location) information of the robot100. The transmission devices200may be installed in the travel area1000in a distributed manner. The robot100may receive signals transmitted from the transmission devices200to determine a current position of the robot100based on a result of receiving the signals, or to determine position information regarding the travel area1000. In this case, a receiver of the robot100may receive the transmitted signals. The transmission devices200may be provided in a periphery of the boundary area1200of the travel area1000. Here, the robot100may determine the boundary area1200based on installed positions of the transmission devices200in the periphery of the boundary area1200. The robot100may operate according to a driving mechanism (or principle) as shown inFIG.4, and a signal may flow between devices for determining a position as shown inFIG.6. 
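Because the boundary area1200forms a closed loop, whether a given coordinate lies inside the travel area1000can be illustrated with a standard point-in-polygon (ray-casting) test. The short sketch below is illustrative only; the polygon vertices, the coordinate frame, and the function name are assumptions rather than values taken from the disclosure.

    # Minimal sketch, assuming the boundary is stored as a list of (x, y) vertices.
    def inside_travel_area(point, boundary):
        # ray-casting point-in-polygon test against the closed boundary
        x, y = point
        inside = False
        n = len(boundary)
        for i in range(n):
            x1, y1 = boundary[i]
            x2, y2 = boundary[(i + 1) % n]
            crosses = (y1 > y) != (y2 > y)
            if crosses and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
        return inside

    # Example: a rectangular travel area with the charging apparatus near the origin.
    boundary_1200 = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
    print(inside_travel_area((5.0, 5.0), boundary_1200))    # True: within travel area 1000
    print(inside_travel_area((12.0, 5.0), boundary_1200))   # False: in the outside area 1100

The same kind of containment test is what later allows the controller to decide whether the current position corresponds to the outside of the travel area.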
As shown inFIG.5, the robot100may communicate with the terminal300moving in a predetermined area, and travel by following a position of the terminal300based on data received from the terminal300. The robot100may set a virtual boundary in a predetermined area based on position information received from the terminal300or collected while the robot100is traveling by following the terminal300, and set an internal area formed by the virtual boundary as the travel area1000. When the boundary area1200and the travel area1000are set, the robot100may travel in the travel area1000not to deviate from the boundary area1200. According to cases, the terminal300may set the boundary area1200and transmit the boundary area1200to the robot100. When the terminal300changes or expands an area, the terminal300may transmit changed information to the robot100so that the robot100may travel in a new area. Also, the terminal300may display data received from the robot100on a screen to monitor operation of the robot100. The robot100or the terminal300may determine a current position by receiving position information. The robot100and the terminal300may determine a current position based on a signal for position information transmitted from the transmission device200in the travel area1000or a global positioning system (GPS) signal obtained using a GPS satellite400. The robot100and the terminal300may determine a current position by receiving signals transmitted from three transmission devices200and comparing the signals with each other. That is, three or more transmission devices200may be provided in the travel area1000. The robot100sets one certain point in the travel area1000as a reference position, and then calculates a position while the robot100is moving as a coordinate. For example, an initial starting position, that is, a position of the charging apparatus500may be set as a reference position. Alternatively, a position of one of the plurality of transmission devices200may be set as a reference position to calculate a coordinate in the travel area1000. The robot100may set an initial position of the robot100as a reference position in each operation, and then determine a position of the robot100while the robot100is traveling. With respect to the reference position, the robot100may calculate a traveling distance based on rotation times and a rotational speed of a driving wheel, a rotation direction of a main body, etc. to thereby determine a current position in the travel area1000. Even when the robot100determines a position of the robot100using the GPS satellite400, the robot100may determine the position using a certain point as a reference position. As shown inFIG.6, the robot100may determine a current position based on position information transmitted from the transmission device200or the GPS satellite400. The position information may be transmitted in the form of a GPS signal, an ultrasound signal, an infrared signal, an electromagnetic signal, or an ultra-wideband (UWB) signal. A signal transmitted from the transmission device200may preferably be a UWB signal. Accordingly, the robot100may receive the UWB signal transmitted from the transmission device200, and determine the current position based on the UWB signal. The robot100operating as described above may include the main body10, the driving unit11, the sensing unit12, the communication unit13, the output unit14, and the controller20as shown inFIG.7. 
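The passage above notes that the robot100may take a certain point, for example the position of the charging apparatus500, as a reference position and then compute its coordinate from wheel rotation counts, rotational speed, and the rotation direction of the main body. A minimal dead-reckoning sketch under those assumptions is shown below; the wheel radius, the update interface, and the class name are hypothetical.

    # Minimal dead-reckoning sketch, assuming per-interval wheel rotations and a heading input.
    import math

    WHEEL_RADIUS_M = 0.1   # assumed wheel radius

    class Odometry:
        def __init__(self, x=0.0, y=0.0, heading_rad=0.0):
            # start at the reference position, e.g. the charging apparatus 500
            self.x, self.y, self.heading = x, y, heading_rad

        def update(self, rotations: float, heading_rad: float):
            # advance the estimate by the distance rolled during one interval
            distance = rotations * 2.0 * math.pi * WHEEL_RADIUS_M
            self.heading = heading_rad         # e.g. derived from the rotation direction
            self.x += distance * math.cos(self.heading)
            self.y += distance * math.sin(self.heading)
            return self.x, self.y

    odo = Odometry()
    odo.update(rotations=5.0, heading_rad=0.0)                  # roll about 3.1 m along +x
    print(odo.update(rotations=5.0, heading_rad=math.pi / 2))   # then about 3.1 m along +y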
When the anti-theft mode is set, a robot100theft occurrence may be detected and operation of the robot100may be limited according to a result of detection. The robot100may further include at least one selected from a data unit15, an image capturing unit16, a receiver17, an audio unit18, an obstacle detection unit19, and a weeding unit30. Also, the robot100may further include a power supply unit (not shown) for supplying power to each of the driving unit11, the sensing unit12, the communication unit13, the output unit14, the data unit15, the image capturing unit16, and the receiver17, the audio unit18, the obstacle detection unit19, the controller20, and the weeding unit30. The driving unit11is a driving wheel included in a lower part of the main body10, and may be rotationally driven to move the main body10. That is, the driving unit11may be driven such that the main body10travels in the travel area1000. The driving unit11may include at least one driving motor to move the main body10so that the robot100travels. For example, the driving unit11may include a left wheel driving motor for rotating a left wheel and a right wheel driving motor for rotating a right wheel. The driving unit11may transmit information about a result of driving to the controller20, and receive a control command for operation from the controller20. The driving unit11may operate according to the control command received from the controller20. That is, the driving unit11may be controlled by the controller20. The sensing unit12may include one or more sensors that sense at least one state (or status) of the main body10. The sensing unit12may include at least one sensor that senses a posture and an operation state (or status) of the main body10. The sensing unit12may include at least one selected from an inclination sensor that detects movement of the main body10and a speed sensor that detects a driving speed of the driving unit11. The sensing unit12may further include a grip sensor that detects a grip (or gripped) state of the handle H. The inclination sensor may be a sensor that senses posture information of the main body10. When the main body10is inclined forward, backward, leftward or rightward, the inclination sensor may sense the posture information of the main body10by calculating an inclined direction and an inclination angle. A tilt sensor, an acceleration sensor, or the like may be used as the inclination sensor. In the case of the acceleration sensor, any of a gyro type sensor, an inertial type sensor, and a silicon semiconductor type sensor may be used. In addition, various sensors or devices capable of detecting movement of the main body10may be used. The speed sensor may be a sensor for sensing a driving speed of a driving wheel provided in the driving unit11. When the driving wheel rotates, the speed sensor may sense the driving speed by detecting rotation of the driving wheel. The sensing unit12may transmit information of a result of sensing to the controller20, and receive a control command for operation from the controller20. The sensing unit12may operate according to a control command received from the controller20. That is, the sensing unit12may be controlled by the controller20. The communication unit13may communicate with at least one communication target element that is to communicate with the robot100. The communication unit13may communicate with the transmission device200and the terminal200using a wireless communication method. 
The communication unit13may be connected to a predetermined network so as to communicate with an external server or the terminal300that controls the robot100. When the communication unit13communicates with the terminal300, the communication unit13may transmit a generated map to the terminal300, receive a command from the terminal300, and transmit data regarding an operation state (or status) of the robot100to the terminal300. The communication unit13may include a communication module such as wireless fidelity (Wi-Fi), wireless broadband (WiBro), or the like, as well as a short-range wireless communication module such as Zigbee, Bluetooth, or the like, to transmit and receive data. The communication unit13may transmit information about a result of the communication to the controller20, and receive a control command for operation from the controller20. The communication unit13may operate according to the control command received from the controller20. That is, the communication unit13may be controlled by the controller20. The output unit14may include at least one input element such as a button, a switch, a touch pad, etc., and an output element such as a display unit, and the like to receive a user's command and output an operation state of the robot100. For example, a command for executing the anti-theft mode may be input and a status for execution of the anti-theft mode may be output via the display unit. The output unit14may display a state of the robot100through the display unit, and display a control screen on which manipulation or an input is applied for controlling the robot100. The control screen may mean a user interface screen on which a driving state of the robot100is displayed and output, and a command for operating the robot100is input from a user. The control screen may be displayed on the display unit under the control of the controller20, and a display and an input command on the control screen may be controlled by the controller20. The output unit14may transmit information about an operation state to the controller20and receive a control command for operation from the controller20. The output unit14may operate according to a control command received from the controller20. That is, the output unit14may be controlled by the controller20. The data unit15is a storage element that stores data readable by a microprocessor, and may include a hard disk drive (HDD), a solid state disk (SSD), a silicon disk drive (SDD), a read only memory (ROM) a random access memory (RAM), CD-ROM, a magnetic tape, a floppy disk, or an optical data storage device. In the data unit15, a received signal may be stored, reference data to determine an obstacle may be stored, and obstacle information regarding a detected obstacle may be stored. In the data unit15, control data that controls operation of the robot100, data according to an operation mode of the robot100, position information collected, and information about the travel area1000and the boundary area1200may be stored. The image capturing unit16may be a camera capturing an image of a periphery of the main body10to generate image information of the travel area1000of the main body10. The image capturing unit16may capture an image of a forward direction of the main body10to detect an obstacle around the main body10and in the travel area1000. The image capturing unit16may be a digital camera, which may include an image sensor (not shown) and an image processing unit (not shown). 
The image sensor is a device that converts an optical image into an electrical signal. The image sensor includes a chip in which a plurality of photodiodes is integrated. A pixel may be an example of a photodiode. Electric charges are accumulated in the respective pixels by an image, which is formed on the chip by light that has passed through a lens, and the electric charges accumulated in the pixels are converted to an electrical signal (for example, a voltage). A charge-coupled device (CCD) sensor and a complementary metal oxide semiconductor (CMOS) sensor are well known as image sensors. In addition, the image capturing unit16may include a Digital Signal Processor (DSP) for the image processing unit to process a captured image so as to generate image information. The image capturing unit16may capture an image of a periphery of the main body10from a position where it is installed, and generate image information according to a result of image capturing. The image capturing unit16may be provided at an upper portion of a rear side of the main body10. The image capturing unit16may capture an image of a traveling direction of the main body10. That is, the image capturing unit16may capture an image of a forward direction of the main body10to travel. The image capturing unit16may capture an image around the main body10in real time to generate the image information. The image capturing unit16may transmit information about a result of image capturing to the controller20, and receive a control command for operation from the controller20. The image capturing unit16may operate according to the control command received from the controller20. That is, the image capturing unit16may be controlled by the controller20. The receiver17may include a plurality of signal sensor modules that transmits and receives the position information. The receiver17may include a position sensor module that receives the signals transmitted from the transmission device200. The position sensor module may transmit a signal to the transmission device200. When the transmission device200transmits a signal using a method selected from an ultrasound method, a UWB method, and an infrared method, the receiver17may include a sensor module that transmits and receives an ultrasound signal, a UWB signal, or an infrared signal, in correspondence with this. The receiver17may include a UWB sensor. As a reference, UWB radio technology refers to technology using a very wide frequency range of several GHz or more in baseband instead of using a radio frequency (RF) carrier. UWB wireless technology uses very narrow pulses of several nanoseconds or several picoseconds. Since pulses emitted from such a UWB sensor are several nanoseconds or several picoseconds long, the pulses have good penetrability. Thus, even when there are obstacles in a periphery of the UWB sensor, the receiver17may receive very short pulses emitted by other UWB sensors. When the robot100travels by following the terminal300, the terminal300and the robot100include the UWB sensor, respectively, thereby transmitting or receiving a UWB signal with each other through the UWB sensor. The terminal300may transmit the UWB signal to the robot100through the UWB sensor included in the terminal300. The robot100may determine a position of the terminal300based on the UWB signal received through the UWB sensor, allowing the robot100to move by following the terminal300. In this case, the terminal300operates as a transmitting side and the robot100operates as a receiving side. 
When the transmission device200includes the UWB sensor and transmits a signal, the robot100or the terminal300may receive the signal transmitted from the transmission device200through the UWB sensor included in the robot100or the terminal300. At this time, a signaling method performed by the transmission device200may be identical to or different from signaling methods performed by the robot100and the terminal300. The receiver17may include a plurality of UWB sensors. When two UWB sensors are included in the receiver17, for example, provided on left and right sides of the main body10, respectively, the two USB sensors may receive signals, respectively, and compare a plurality of received signals with each other to thereby calculate an accurate position. For example, according to a position of the robot100, the transmission device200, or the terminal300, when a distance measured by a left sensor is different from a distance measured by a right sensor, a relative position between the robot100and the transmission device200or the terminal300, and a direction of the robot100may be determined based on the measured distances. The receiver17may further include a GPS module for transmitting and receiving a GPS signal to and from the GPS satellite400. The receiver17may transmit a result of receiving a signal to the controller20, and receive a control command for operation from the controller20. The receiver17may operate according to the control command received from the controller20. That is, the receiver17may be controlled by the controller20. The audio unit (or module)18may include an output element such as a speaker to output an operation state of the robot100in the form of an audio output. The audio unit18may output an alarm when an event occurs while the robot100is operating. For example, when the power is run out, an impact or shock is applied to the robot100, or an accident occurs in the travel area1000, the audio unit18may output an alarm audio output so that the corresponding information is provided to the user. The audio unit18may transmit information regarding an operation state to the controller20and receive a control command for operation from the controller20. The audio unit18may operate according to a control command received from the controller20. That is, the audio unit18may be controlled by the controller20. The obstacle detection unit19includes a plurality of sensors to detect obstacles located in a traveling direction. The obstacle detection unit19may detect an obstacle located in a forward direction of the main body10, that is, in a traveling direction of the main body10using at least one selected from a laser sensor, an ultrasonic sensor, an infrared sensor, and a three-dimensional (3D) sensor. The obstacle detection unit19may further include a cliff detection sensor installed on a rear surface of the main body10to detect a cliff. The obstacle detection unit19may transmit information regarding a result of detection to the controller20, and receive a control command for operation from the controller20. The obstacle detection unit19may operate according to the control command received from the controller20. That is, the obstacle detection unit19may be controlled by the controller20. The weeding unit30cuts grass on the bottom while traveling. The weeding unit30is provided with a brush or blade for cutting a lawn, so as to cut the grass on the ground in a rotating manner. 
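As described above, when the left and right UWB sensors of the receiver17measure different distances to the same transmitter, a relative position and direction can be recovered from the two ranges. The following sketch assumes the two sensors sit on a known baseline symmetric about the body center; the baseline length, the geometry, and the function name are illustrative assumptions only.

    # Minimal sketch, assuming left/right UWB sensors separated by a known baseline.
    import math

    BASELINE_M = 0.3   # assumed spacing between the two UWB sensors

    def relative_position(d_left: float, d_right: float, baseline: float = BASELINE_M):
        # left sensor at (-baseline/2, 0), right sensor at (+baseline/2, 0)
        x = (d_left ** 2 - d_right ** 2) / (2.0 * baseline)
        y_sq = d_left ** 2 - (x + baseline / 2.0) ** 2
        y = math.sqrt(max(y_sq, 0.0))                 # assume the transmitter is in front
        bearing_deg = math.degrees(math.atan2(x, y))  # 0 degrees = straight ahead
        return (x, y), bearing_deg

    pos, bearing = relative_position(d_left=2.10, d_right=1.95)
    print(pos, bearing)   # transmitter ahead of and to the right of the main body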
The weeding unit30may transmit information about a result of operation to the controller20and receive a control command for operation from the controller20. The weeding unit30may operate according to the control command received from the controller20. That is, the weeding unit30may be controlled by the controller20. The controller20may include a central processing unit to control overall operation of the robot100. The controller20may determine the position information via the main body10, the driving unit11, the sensing unit12, the communication unit13, and the output unit14to control the main body10such that the main body10travels within the travel area1000, and control functions and operation of the robot100to be performed via the data unit15, the image capturing unit16, the receiver17, the audio unit18, the obstacle detection unit19, and the weeding unit30. The controller20may control input and output of data, and control the driving unit11so that the main body10travels according to settings. The controller20may independently control operation of the left wheel driving motor and the right wheel driving motor by controlling the driving unit11to thereby control the main body10to travel rotationally or in a straight line. The controller20may set the boundary area1200of the travel area1000based on position information received from the terminal300or position information determined based on the signal received from the transmission device200. The controller20may also set the boundary area1200of the travel area1000based on position information that is collected by the controller20during traveling. The controller20may set a certain area of a region formed by the set boundary area1200as the travel area1000. The controller20may set the boundary area1200in a closed loop form by connecting discontinuous position information in a line or a curve, and set an inner area within the boundary area1200as the travel area1000. When the travel area1000and the border area1200corresponding thereto are set, the controller20may control traveling of the main body10so that the main body10travels in the travel area1000without deviating from the set boundary area1200. The controller20may determine a current position based on received position information and control the driving unit11so that the determined current position is located in the travel area1000to thereby control traveling of the main body10. In addition, according to obstacle information input by at least one of the image capturing unit16, the obstacle detection unit19, and the controller20may control traveling of the main body10to avoid obstacles and travel. In this case, the controller20may modify the travel area1000by reflecting the obstacle information to pre-stored area information regarding the travel area1000. In the robot100having the configuration as shown inFIG.7, when the anti-theft mode is set, the controller20may detect a robot100theft occurrence, and control driving at least one of the driving unit11, the communication unit13, or the output unit14to restrict operation of the robot100depending on a result of detection. The robot100may perform set operation while traveling in the travel area1000. For example, the robot100may cut a lawn on the bottom of the travel area1000while traveling in the travel area1000as shown inFIG.8. In the robot100, the main body10may travel according to driving of the driving unit11. The main body10may travel as the driving unit11is driven to move the main body10. 
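The boundary-setting behavior described above, in which discontinuous position information is connected into a closed loop and the inner area is taken as the travel area1000, can be sketched as follows. The sample coordinates, the straight-line closing of the loop, and the area computation are illustrative assumptions rather than the disclosed implementation.

    # Minimal sketch: close a loop of collected position samples and report its enclosed area.
    def close_boundary(waypoints):
        # connect discontinuous position samples into a closed loop of vertices
        if len(waypoints) < 3:
            raise ValueError("need at least three positions to form a closed boundary")
        closed = list(waypoints)
        if closed[0] != closed[-1]:
            closed.append(closed[0])   # connect the last sample back to the first
        return closed

    def enclosed_area(closed):
        # shoelace formula for the area of the inner region bounded by the loop
        area = 0.0
        for (x1, y1), (x2, y2) in zip(closed, closed[1:]):
            area += x1 * y2 - x2 * y1
        return abs(area) / 2.0

    # Positions collected while the robot follows the terminal 300 around the yard (assumed values).
    samples = [(0, 0), (8, 0), (8, 6), (4, 9), (0, 6)]
    loop = close_boundary(samples)
    print(enclosed_area(loop))   # area of the interior that would become the travel area 1000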
In the robot100, the driving unit11may be driven by the controller20. Under the control of the controller20, the driving unit11may be driven by receiving driving power from the power supply unit. The driving unit11may move the main body10by driving the driving wheels. The driving unit may move the main body10by operating the driving wheels, so that the main body10travels. In the robot100, the sensing unit12may be driven by the controller20. The sensing unit12may be driven by receiving driving power from the power supply unit under the control of the controller20. The sensing unit12may include one or more sensors to sense one or more states of the main body10. The sensing unit12may include at least one of a contact sensor that senses a grip (or gripped) state of the handle H and an inclination sensor that senses posture information of the main body10. That is, in the sensing unit12, the grip state of the handle H may be sensed by the contact sensor, and an inclination (or tilt) of the main body10may be sensed by the inclination sensor. Accordingly, a result of the sensing may be at least one of the sensing the grip state of the handle H and the sensing of the inclination of the main body10. The sensing unit12may include both the contact sensor and the inclination sensor. In the robot100, the communication unit13may be driven by the controller20. The communication unit13may be driven by receiving driving power from the power supply unit under the control of the controller20. The communication unit13may communicate with the communication target element for transmitting and receiving information to and from the communication target element. Here, the communication target element may be the terminal300. The communication target element may further include the transmission device200. The communication unit13may receive information for determining the position information from the communication target element, and transmit the position information to the communication target element. The communication unit13may communicate with the communication target element in real time. In the robot100, the output unit14may be driven by the controller20. The output unit14may be driven by receiving driving power from the power supply unit under the control of the controller20. The output unit14may display the control screen, so as to display information regarding operation and control state of the robot100. For example, position information of the main body10, a control interface for controlling operation of the robot100, and the like may be displayed. In the robot100, the controller20may control each of the driving unit11, the sensing unit12, the communication unit13, and the output unit14. The controller20may control the driving unit11, the sensing unit12, the communication unit13, and the output unit14individually (or separately) by controlling driving power supply. In more detail, the controller20may control the driving power of the driving unit11, the sensing unit12, the communication unit13, and the output unit14supplied from the power supply unit to control driving of the driving unit11, the sensing unit12, the communication unit13, and the output unit14. Here, the driving control may mean controlling a function of the driving unit11, the sensing unit12, the communication unit13, and the output unit14, as well as controlling the driving itself. 
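Because the left wheel driving motor and the right wheel driving motor can be controlled independently, straight-line travel and rotation reduce to choosing a pair of wheel speeds. The helper below is a minimal, assumed sketch of that mapping; the track width, the units, and the function name are hypothetical.

    # Minimal differential-drive sketch, assuming independent left/right wheel speed commands.
    def wheel_commands(linear_mps: float, angular_rps: float, track_width_m: float = 0.35):
        # map a desired body motion to left and right wheel speeds in m/s
        left = linear_mps - angular_rps * track_width_m / 2.0
        right = linear_mps + angular_rps * track_width_m / 2.0
        return left, right

    print(wheel_commands(0.5, 0.0))   # straight line: both wheels at 0.5 m/s
    print(wheel_commands(0.0, 1.0))   # rotate in place: wheels spin in opposite directions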
The controller20may determine position information of the main body10based on at least one of a result of sensing by the sensing unit12and a result of communication by the communication unit13to control the driving unit11, so that the main body10is controlled to travel in the travel area1000based on the position information. The controller20may control operation of the robot100according to a set operation mode. Here, the operation mode is a mode related to the operation of the robot100, and may include, for example, a traveling mode, a monitoring mode, and the anti-theft mode. The controller20may control each of the driving unit11, the sensing unit12, the communication unit13, and the output unit14according to a set operation mode. That is, the controller20may control operation of the robot100to perform the set mode by controlling the driving unit11, the sensing unit12, the communication unit13, and the output unit14, respectively. When the anti-theft mode designed to prevent the robot100from being stolen is set, the controller20may detect a robot100theft occurrence based on the sensing result and the position information, and control at least one of the driving unit11, the communication unit13, or the output unit14to restrict operation of the robot100according to a detection result. That is, the anti-theft mode may be a mode for detecting robot100theft and limiting operation of the robot100when the robot100theft occurs. Accordingly, in the anti-theft mode, the controller20may detect the robot100theft occurrence based on the sensing result and the location information, and restrict the operation of the robot100by controlling the driving of one or more of the driving unit11, the communication unit13, and the output unit14when the robot100theft occurs. An example of how the controller20detects the robot100theft occurrence in the anti-theft mode will be described with reference toFIG.8. As illustrated inFIG.8, when the main body10, which should be located in the travel area1000, is no longer in the travel area1000, the controller20may determine that robot100theft has occurred, and identify the robot100theft occurrence. That is, the controller20may detect the robot100theft when the robot100is moved to the outside of the travel area1000by an external force of a person who is not the owner of the robot100. The controller20may compare the sensing result with predetermined determination criteria (or reference) and the position information to determine whether the main body10is deviated from the travel area1000, so as to detect the robot100theft occurrence. In more detail, the controller20may identify the robot100theft occurrence based on a result of comparing at least one of the results of sensing the main body10status with the determination criteria, and the position information of the current position of the main body10. Here, the sensing results may be sensing a grip (or gripped) state of the handle H and an inclination of the main body10. In addition, the determination criteria may be a reference for at least one of a grip state of the handle H and an inclination of the main body10, for example, whether the handle H is gripped or whether the main body10is inclined more than a predetermined inclination.
Accordingly, the controller20may detect the robot100theft occurrence based on a result of comparing a sensing result of the grip state of the handle H with the determination criteria and the location information, or based on a result of comparing a sensing result of the inclination of the main body10with the determination criteria and the location information. The controller20may detect the theft occurrence when the sensing result corresponds to the determination criteria, and the position information corresponds to the outside of the travel area1000(or non-travel area1000). That is, when at least one of the sensing results, either the grip state of the handle H or the inclination of the main body10, corresponds to the determination criteria, and when the position information corresponds to the outside of the travel area1000, the controller20may detect the theft occurrence. For instance, the controller20may detect the theft occurrence when the handle H is gripped and the main body10is deviated from the travel area1000. Referring toFIGS.9and10, the theft occurrence may be detected when the handle H is gripped by a person who is not the owner of the robot100, and the main body10is lifted from the ground as shown inFIG.10and is then moved to the outside of the travel area1000as shown inFIG.8. Alternatively, the controller20may detect the theft occurrence when the main body10is tilted more than a predetermined inclination and the main body10is deviated from the travel area1000. In detail, the robot100theft may be detected when the main body10in a state as shown inFIG.9is lifted from the ground by more than a predetermined inclination θ set as the predetermined determination criteria as shown inFIG.10, and is then moved to the outside of the travel area1000. As such, while the controller20detects the theft occurrence when the sensing result corresponds to the determination criteria and the position information corresponds to the outside of the travel area1000, the controller20may instead detect a malfunction or an error in the sensing unit12when the sensing result corresponds to the determination criteria but the position information falls within the travel area1000. In other words, when the sensing result corresponds to the determination criteria, but the main body10is not deviated from the travel area1000, the controller20determines that the sensing unit12is not working properly, since the gripped state of the handle H or the inclination of the main body10is mistakenly or wrongly sensed by the sensing unit12. A process of detecting the theft occurrence by the controller20is illustrated inFIG.11. When the anti-theft mode is set, the controller20may proceed to detect the theft occurrence (P0), and receive a result of sensing the main body10from the sensing unit12. The sensing unit12may sense at least one of a grip(ped) state of the handle H (P1a) and an inclination of the main body10(P1b), and transmit the sensing result to the controller20. Then the controller20compares the sensing result with the determination criteria to determine whether the grip state of the handle H (P1a) and/or the inclination of the main body10(P1b) corresponds to the determination criteria (P2). When at least one of the grip state of the handle H (P1a) and the inclination of the main body10(P1b) corresponds to the determination criteria, the controller20may determine whether the current position of the main body10is deviated from the travel area1000(P3) to detect the theft occurrence (P4or P4′).
When the main body10is deviated from the travel area1000, the controller20may determine that the robot100theft has occurred, thereby detecting the theft occurrence (P4). That is, the controller20may detect the theft occurrence (P4) when at least one of the grip state of the handle H (P1a) and the inclination of the main body10(P1b) corresponds to the determination criteria, and the position information corresponds to the outside of the travel area1000. When the main body10is not deviated from the travel area1000, the controller20may detect a malfunction or an error in the sensing unit12(P4′) and determine that the sensing unit12is not working properly. In more detail, the controller20may detect the error in the sensing unit12(P4′) when at least one of the grip state of the handle H (P1a) and the inclination of the main body10(P1b) corresponds to the determination criteria, and the position information falls within the travel area1000. As such, when the controller20detects the theft occurrence, the controller20may control driving of the driving unit11, the communication unit13, and the output unit14to restrict operation of the robot100, respectively. In other words, when the theft occurrence is detected, the controller20controls the driving of the driving unit11, the communication unit13, and the output unit14, respectively, so as to prevent the robot100from being operated by a person who stole the robot100. When the controller20detects the theft occurrence, the controller20may cut off power supplied to the driving unit11and the output unit14to prevent driving of the driving unit11and the output unit14. That is, when the theft occurrence is sensed, the controller20blocks driving of the driving unit11and the output unit14, so as to prevent the robot100from being manipulated, operated, or used by the person who stole the robot100. The controller20may cut off the driving power supplied to the driving unit11and the output unit14from the power supply unit, so as to prevent driving of the driving unit11and the output unit14. In more detail, the controller20cuts off the driving power of the driving unit11moving the main body10and the output unit14displaying the control screen for controlling the robot100to prevent the robot100from being manipulated by the person who stole the robot100and from being used by that person in the first place. When the robot100theft is detected, the controller20may also transmit information about the theft occurrence to the communication target element via the communication unit13. That is, when the robot100theft is detected, the controller20controls the communication unit13to transmit the information of the theft occurrence to the communication target element, allowing the corresponding theft information to be transmitted to the communication target element. In case the theft occurrence is detected, the controller20generates information regarding at least one of a location at which the theft occurred and a time at which the theft occurred, and transmits the generated information to the communication target element via the communication unit13. When the theft occurrence is detected, the controller20may output a notification audio output via the audio unit18. That is, the controller20may control the audio unit18to output the notification audio output to notify a situation of the robot100theft occurrence when the theft occurrence is detected.
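The detection flow of FIG.11described above, with sensing at P1a/P1b, the comparison against the determination criteria at P2, the position check at P3, and the theft or sensor-error outcome at P4/P4′, can be condensed into a short sketch. The tilt threshold, the in-area test, and the function names are assumptions chosen only for illustration.

    # Minimal sketch of the FIG. 11 flow, assuming a containment helper and a tilt threshold.
    INCLINATION_THRESHOLD_DEG = 30.0   # assumed determination criterion for inclination

    def detect_theft(handle_gripped, inclination_deg, position, boundary, inside_travel_area):
        # returns 'theft', 'sensor_error', or 'normal' for one detection cycle
        criteria_met = handle_gripped or inclination_deg > INCLINATION_THRESHOLD_DEG  # P2
        if not criteria_met:
            return "normal"
        if not inside_travel_area(position, boundary):   # P3: outside the travel area 1000?
            return "theft"                               # P4
        return "sensor_error"                            # P4': criteria met but still inside

    # Example, reusing the inside_travel_area and boundary_1200 sketches shown earlier:
    # detect_theft(True, 45.0, (12.0, 5.0), boundary_1200, inside_travel_area) -> 'theft'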
Here, the controller20may control the audio unit18to output the notification audio output according to a preset output reference. The controller20, after detecting the theft occurrence, may control the output unit14to display an input screen for requesting an input of a preset usage code. The power supplied to the driving unit11and the output unit14may be cut off depending on the code entered through the input screen. The usage code may mean a code for identifying an (authorized) user of the robot100in the event of the theft occurrence, the code may be a PIN CODE, a PASSWORD, or the like. The usage code may also mean a code for reactivating the robot100, a user authentication code for the robot100, and a code for unlocking the robot100. The usage code may be set by the user of the robot100in advance. The usage code may be a combination of any numbers, letters, and symbols created by the user of the robot100. The input screen IS for requesting an input of the usage code may be displayed on the output unit14, allowing the user to input the usage code on the input screen IS as shown inFIG.13. The input screen IS may be displayed on the output unit14after the theft occurrence is detected, and before the power supplied to the driving unit11and the output unit14is cut off. In other words, the input screen IS may be a screen for determining on whether a user (or person) is the authorized user of the robot100before restricting the operation of the robot100. The controller20may cut off the power supplied to the driving unit11and the output unit14according to a code input on the input screen IS. When the input code matches with the preset usage code, the controller20may determine that the theft occurrence is cleared or resolved, and maintain the power supplied to the driving unit11and the output unit14. That is, when the usage code is correctly entered into the input screen IS, the controller20may determine that the robot100is operated by the authorized user and the theft occurrence is cleared as it is not stolen, thereby maintaining the power supplied to the driving unit11and the output unit14. When the input code does not match with the preset usage code, the controller20may cut off the power supplied to the driving unit11and the output unit14. In other words, when the usage code is incorrectly entered into the input screen, the controller20may determine that the robot100is manipulated by a person who stole the robot100(or an unauthorized user), so that the power supplied to the driving unit11and the output unit14may be cut off to limit the operation of the robot100. The controller20that displays the input screen IS for requesting an input of the usage code via the output unit14may display the input screen IS for a predetermined number of input times. The controller20may display the input screen IS by the number of input times until the usage code matching with the preset usage code is entered. In other words, the controller20may repeat a usage code input request by the number of input times until the usage code is entered correctly. Here, the number of input times may be set by the user, for example, five times. If the usage code is entered incorrectly more than the predetermined number of input times, the controller20may cut off the power supplied to the driving unit11and the output unit14. In more detail, if the wrong usage code is entered more than the number of input times, the controller20determines that an unauthorized user attempts to manipulate the robot100. 
Then the power supplied to the driving unit11and the output unit14may be cut off. As such, the controller20that detects the theft occurrence and restricts the operation of the robot100may track the position information of the robot100until the theft occurrence is cleared after the theft occurrence is detected, and transmit the position information to the communication target element according to a predetermined transmission period via the communication unit13. That is, the controller20may keep tracking of the position information of the robot100until the theft occurrence is cleared after detecting the theft occurrence, and transmit the position information to the communication target element via the communication unit13. By doing so, a theft or stolen path can be tracked as the robot100keeps providing its position information in a stolen state. A process of restricting operation of the robot100by the controller20will be described with reference toFIG.12. When the theft occurrence is detected (P4), the controller20controls the output unit14to display the input screen IS (P5). Here, the controller20may transmit information about the theft occurrence to the communication target element via the communication unit13. In addition, the controller20may output a notification audio output for the theft occurrence via the audio unit18. When a usage code is entered (P6) into the input screen IS, the controller20compares the usage code entered with the preset usage code (P7), and determines whether the robot100is stolen to remove the theft occurrence. After comparing the input usage code with the preset usage code (P7), the controller20may determine that the robot100theft occurrence is cleared (P8) when the input usage code matches with the preset usage code. In other words, when the usage code is entered correctly, the controller20may determine that the robot100is manipulated by the authorized user, and the theft occurrence is cleared (P8). Thus, the power supplied to the driving unit11and the output unit14may be maintained. When the input code does not match with the preset usage code, after comparing the input code with the usage code (P7), the controller20may cut off the power supplied to the driving unit11and the output unit14(P9). That is, when the usage code is entered incorrectly, the controller20may determine that an unauthorized user (person who stole the robot100) attempts to manipulate the robot100, then restricts the operation of the robot100by cutting off the power supplied to the driving unit11and the output unit14(P9). In this case, the controller20may generate information regarding at least one of a location in which the theft is occurred and time at which the theft is occurred, and transmit the generated information to the communication target element via the communication unit13(P10). In other words, when the usage code is entered incorrectly, the controller20determines that the robot100is manipulated by the unauthorized user, and transmit theft occurrence information to the communication target element via the communication unit13(P10), allowing a situation of the robot100theft occurrence to be transmitted to the communication target element. 
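The FIG.12sequence described above, in which the input screen is displayed, the entered code is compared with the preset usage code, power is maintained on a match, and power is cut off and the theft is reported once the attempts are exhausted, can be sketched as follows. The attempt limit, the report payload, and all callable names are illustrative assumptions.

    # Minimal sketch of the FIG. 12 flow, assuming hypothetical helpers for input, power, and reporting.
    import time

    MAX_ATTEMPTS = 5   # assumed predetermined number of input times

    def handle_theft_detection(preset_code, read_code_from_input_screen,
                               cut_power, send_to_terminal, current_position):
        # request the usage code; clear the theft or restrict operation and report it
        for _ in range(MAX_ATTEMPTS):
            entered = read_code_from_input_screen()   # P5/P6: display the screen, read a code
            if entered == preset_code:                # P7/P8: match, theft occurrence cleared
                return "cleared"
        cut_power()                                   # P9: cut power to driving/output units
        send_to_terminal({                            # P10: transmit theft occurrence information
            "event": "theft_detected",
            "position": current_position,
            "time": time.time(),
        })
        return "restricted"

    # Example wiring with stand-in callables:
    # handle_theft_detection("1234", lambda: "0000", lambda: None, print, (12.0, 5.0)) -> 'restricted'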
As such, when the theft incident is detected, the controller20may cut off the power supplied to the driving unit11and the output unit14, and transmit information of the detected theft occurrence to the communication target element via the communication unit13, thereby restricting the operation of the robot100and providing the information of the detected theft occurrence. The robot100as described above may be implemented by using a method for controlling a moving robot (hereinafter referred to as “control method”) to be described hereinafter. The control method is a method for controlling the moving robot100as shown inFIGS.1to3, which may be applied to the robot100. It may also be applied to robots other than the robot100. The control method may be for controlling the robot100that includes the main body10provided with the handle H, the driving unit11moving the main body10, the sensing unit12sensing at least one piece of state information of the main body10, the communication unit13communicating with a communication target element of the robot100, the output unit14displaying a control screen of the robot100, and the controller20determining position information of the main body10based on at least one of a result of sensing by the sensing unit12and a result of communication by the communication unit13and controlling the driving unit11to control traveling of the main body10, so that the main body10travels in the travel area1000. The control method may be a method of performing an anti-theft mode to prevent the robot100from being stolen, and may be a control method performed by the controller20. As illustrated inFIG.14, the control method may include detecting a robot100theft occurrence based on the sensing result and the position information (S10), displaying an input screen on the output unit14for requesting an input of a preset usage code (S20), and controlling driving of the driving unit11and the output unit14depending on the usage code entered into the input screen (S30). In other words, the robot100may perform the anti-theft mode in the order of detecting (S10), displaying (S20), and controlling (S30). The detecting step S10may be a step in which the controller20detects the theft occurrence based on the sensing result and the position information after the anti-theft mode is set. In the detecting step S10, the theft occurrence may be detected by comparing a result of sensing the grip (or gripped) state of the handle H and sensing an inclined (or tilted) state of the main body10with predetermined determination criteria to determine whether at least one of the grip state of the handle H and the inclined state of the main body10corresponds to the determination criteria. In the detecting step S10, when at least one of the gripped state of the handle H and the inclined state of the main body10corresponds to the determination criteria, it is determined whether the current position of the main body10is deviated from the travel area1000to identify the theft occurrence according to a determination result. In the detecting step S10, when at least one of the gripped state of the handle H and the inclined state of the main body10corresponds to the determination criteria and the position information corresponds to the outside of the travel area1000, the robot100is determined to be stolen, allowing the theft occurrence to be identified.
In the detecting step S10, when at least one of the gripped state of the handle H and the inclined state of the main body 10 corresponds to the determination criteria but the position information corresponds to a position inside the travel area 1000, a malfunction or an error in the sensing unit 12 may be detected instead. The displaying step S20 may be a step in which the controller 20 displays the input screen on the output unit 14 when the theft occurrence is detected at the detecting step S10. In the displaying step S20, the input screen may be displayed on the output unit 14 to request an input of the usage code. The input screen may be displayed for a predetermined number of input times; that is, the request for the usage code may be repeated, and the input screen may remain displayed on the output unit 14, until the usage code is entered correctly or the number of input times is exhausted. The controlling step S30 may be a step in which the controller 20 controls driving of the driving unit 11 and the output unit 14 according to the usage code entered into the input screen displayed at the displaying step S20. In the controlling step S30, the entered usage code is compared with the preset usage code to determine whether the robot 100 is stolen and whether the theft occurrence is to be cleared. When the entered code matches the preset usage code, it is determined that the robot 100 is not stolen, the theft occurrence is cleared, and the power supplied to the driving unit 11 and the output unit 14 may be maintained. When the entered code does not match the preset usage code, it is determined that an unauthorized person (a person who stole the robot 100) is attempting to manipulate the robot 100, and the power supplied to the driving unit 11 and the output unit 14 may be cut off. The control method that includes the detecting (S10), the displaying (S20), and the controlling (S30) can be implemented as computer-readable code on a program-recorded medium. The computer-readable medium may include all types of recording devices that store data readable by a computer system. Examples of such computer-readable media include a hard disk drive (HDD), a solid state disk (SSD), a silicon disk drive (SDD), ROM, RAM, CD-ROM, magnetic tape, a floppy disk, an optical data storage element, and the like. The computer-readable medium may also be implemented in the form of a carrier wave (e.g., transmission over the Internet). In addition, the computer may also include the controller 20. The above-described embodiments of the moving robot and the method for controlling the moving robot according to the present disclosure may be applied and implemented with respect to a control element for a moving robot, a moving robot system, a control system of a moving robot, a method for controlling a moving robot, a method for monitoring an area of the moving robot, a control method of monitoring an area of the moving robot, and the like. In particular, the above-described embodiments may be usefully applied and implemented with respect to a lawn mowing robot, a control system of a lawn mowing robot, a method for detecting theft of a lawn mowing robot, a method for preventing theft of a lawn mowing robot, and the like. 
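The displaying step S20 and controlling step S30 described above amount to a bounded retry loop around the usage-code comparison. The sketch below is one way to express that behaviour; the attempt limit and the canned codes are invented for the example, and the disclosure leaves open exactly when within the allowed attempts power is cut.

```python
def anti_theft_input_loop(read_code, preset_code: str, max_attempts: int = 3) -> str:
    """S20/S30 sketch: present the input screen up to max_attempts times; a correct
    code clears the theft occurrence, otherwise power to the driving and output
    units is cut off. `read_code` stands in for the input screen IS."""
    for _ in range(max_attempts):
        if read_code() == preset_code:
            return "cleared"        # power maintained
    return "power_cut"              # operation of the robot restricted

codes = iter(["1111", "9999", "4821"])          # two wrong entries, then the correct one
print(anti_theft_input_loop(lambda: next(codes), preset_code="4821"))   # -> cleared
```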
However, the technology disclosed in this specification is not limited thereto, and may be implemented in any moving robot, control element for a moving robot, moving robot system, method for controlling a moving robot, or the like to which the technical idea of the above-described technology may be applied. While the present disclosure has been particularly shown and described with reference to embodiments thereof and to the accompanying drawings, it will be understood by one of ordinary skill in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present disclosure as defined by the following claims. Therefore, the scope of the present disclosure should not be limited to the described embodiments, but should be determined by the appended claims, and all changes that are equal or equivalent to the claims fall within the scope of the present disclosure. | 56,284 |
11861055 | DETAILED DESCRIPTION Hereinafter, the embodiment of the invention will be described in detail with reference to the drawings.FIG.1is a diagram illustrating an outline of an overall configuration of a virtual reality system according to an embodiment of the invention,FIG.2is an enlarged view illustrating a video display apparatus and a controller of the virtual reality system in an enlarged manner,FIG.3is a block diagram illustrating a configuration of a computer of the video display apparatus of the virtual reality system,FIG.4is a functional block diagram illustrating a configuration of functional blocks of the video display apparatus of the virtual reality system,FIG.5is a diagram illustrating a display example of the video display apparatus of the virtual reality system,FIG.6is a diagram illustrating a state in which a user moves from an origin position to a current position in the virtual reality system,FIG.7is a diagram illustrating a state in which display progresses from the origin position to the current position and a video changes in the virtual reality system, andFIG.8is a diagram illustrating a state in which the user performs a predetermined motion at the current position in the virtual reality system. Note that in the present embodiment, each direction is based on a direction illustrated in the drawing. The outline of the virtual reality system1of the invention will be described with reference toFIG.1. The virtual reality system1of the present embodiment includes a video display apparatus10, a controller30, and a sign40, and can provide a display of a virtual reality video100. A display of virtual reality can be a display of virtual reality in a three-dimensional space. As illustrated inFIG.2, the video display apparatus10is worn on a head U′ of a user U using the virtual reality system1and can display the video100in a field. The video display apparatus10includes a main body10′ capable of displaying the video100in the field, and a fixing portion10″ for fixing the main body10′ to the head U′ of the user U. The fixing portion10″ is in a shape of a band or a string, and a length of the fixing portion10″ is adjustable so as to be compatible with various sizes of the head U′ of the user U. Note that the video display apparatus10may be a head mount display or a headset. Further, the video display apparatus10may have a glass type (glasses type). The video display apparatus10has a general configuration as a computer, and includes a central processing unit (CPU)10b, a storage device (memory)10c, an input device10d, a display device (liquid crystal display)10e, etc. mutually connected via a bus10aas illustrated inFIG.3. A function of the storage device10cis performed by a storage unit11described later, and the storage device10cfunctions as a computer-readable storage medium. A function of the input device10dis performed by an input unit12described later, and a function of the display device10eis performed by a display unit13described later. As illustrated inFIG.4, the video display apparatus10has respective functional units of the storage unit11, the input unit12, the display unit13, an origin position setting unit14, a current position recognition unit15, a progress direction setting unit16, a distance calculation unit17, a motion detector18, a threshold value setting unit19, a determination unit20, and a control unit21. The storage unit11can store the video100in various fields displayed by the display unit13. 
The input unit12can input various information, data, and signals from the controller30, a sensor described later, etc. As illustrated inFIG.5, the display unit13can read and display the video100of the field from the storage unit11. The display of the video100of the field on the display unit13may include a predetermined display object13a, more specifically, the display object13aindicating a progress direction A. The display object13amay be, for example, an arrow, and may include an animal such as a bird, or various characters. The progress direction A indicated by the display object13amay be, for example, a direction indicated by the arrow, a direction in which an animal or a character is headed, etc. The origin position setting unit14can set an original position X in virtual reality, more specifically in the video100of the field. Referring to setting of the origin position X, the origin position X can be input and set as the origin position X in display of the video100in the field of virtual reality by operating the controller20as necessary while the user U is located at a predetermined position in the real world such as a floor, a ground, or a rug. The origin position X can be input by a signal from the controller30via the input unit12. The current position recognition unit15can detect and recognize a current position Y of the user U. The current position recognition unit15can recognize the current position Y of the user U by input of detection data or image data from a position detector15asuch as a position detection center, a camera, etc. for detecting the position of the user U provided on the floor, ceiling and wall, etc. The current position recognition unit15may have an image analysis function for analyzing the image data input from the camera to recognize the position of the user U. Input of the detection data or the image data from the position detector15asuch as the position detection center, the camera, etc. can be performed via the input unit12. As illustrated inFIG.6, the progress direction setting unit16can calculate a direction of a vector of the current position Y recognized by the current position recognition unit15with respect to the origin position X set by the origin position setting unit14. Then, the progress direction setting unit16can set the progress direction A in the video100of the field displayed on the display unit13according to the calculated vector direction. Likewise, as illustrated inFIG.6, the distance calculation unit17can calculate a distance B from the origin position X set by the origin position setting unit14to the current position Y recognized by the current position recognition unit15. Here, according to the distance B from the origin position X to the current position Y calculated by the distance calculation unit17, it is possible to change the display state of the display object13aon the display unit13. More specifically, according to the distance B from the origin position X to the current position Y calculated by the distance calculation unit17, as illustrated inFIG.7, it is possible to change the position of the display object13aon the display unit13. For example, when the distance B from the origin position X to the current position Y is relatively long, the display object13acan be displayed at a position relatively distant from the origin position X. When the distance B from the origin position X to the current position Y is short, the display object13acan be displayed at a position relatively close to the origin position X. 
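The progress direction setting unit 16 and the distance calculation unit 17 described above reduce to elementary vector arithmetic on the floor plane: the progress direction A is the direction of the vector from the origin position X to the current position Y, and the distance B is that vector's length. A minimal sketch, with coordinates in arbitrary floor-plane units chosen for the example:

```python
import math

def progress_direction_and_distance(origin, current):
    """Direction A (radians in the floor plane) and distance B from the origin
    position X (`origin`) to the current position Y (`current`)."""
    dx = current[0] - origin[0]
    dy = current[1] - origin[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)

direction_a, distance_b = progress_direction_and_distance((0.0, 0.0), (0.6, 0.8))
print(round(math.degrees(direction_a), 1), round(distance_b, 2))   # 53.1 (degrees), 1.0
```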
In addition, likewise, as illustrated inFIG.7, according to the distance B from the origin position X to the current position Y calculated by the distance calculation unit17, it is possible to change the size of the display object13aon the display unit13(the size of the display object13aofFIG.7is smaller than the size of the display object13aofFIG.5). For example, when the distance B from the origin position X to the current position Y is relatively long, the display object13acan be displayed relatively small, and when the distance B from the origin position X to the current position Y is short, the display object13acan be displayed relatively large. The motion detector18can detect a predetermined motion of the user U. More specifically, the motion detector18can detect acceleration in the motion of the user U. The motion detector18can detect acceleration by input from an acceleration sensor18a. Input from the acceleration sensor18acan be performed via the input unit12. For example, the acceleration sensor18acan be provided in the video display apparatus10. The threshold value setting unit19can set a predetermined value, more specifically a threshold value of predetermined acceleration. For setting the threshold value by the threshold value setting unit19, for example, a threshold value stored in the storage unit11in advance can be set by reading an ON signal of a power source of the virtual reality system1as a kick signal. The determination unit20can determine a magnitude relationship between the threshold value set by the threshold value setting unit19and the acceleration detected by the motion detector18. As illustrated inFIG.8, the control unit21can control the change of the video100so that when a predetermined motion of the user U is detected by the motion detector18, the video100of the field is progressed in the progress direction A set by the progress direction setting unit16according to the distance B calculated by the distance calculation unit17(FIG.8illustrates a state in which the user U lifts a leg at the current position Y). That is, the control unit21can control the change of the video100when the determination unit20determines that the detected acceleration is larger than the threshold value. Here, the threshold value of the acceleration is set to a value larger than the acceleration of normal movement of the user U from the origin position X to the current position Y. By setting the threshold value of the acceleration to a value larger than usual, the motion of the user U detected by the motion detector18can be distinguished from a normal motion of movement of the user U from the origin position X to the current position Y. As illustrated inFIG.8, the motion of the user U can be a motion of obtaining predetermined acceleration such as a jump, a dash, or a waving, in addition to a predetermined motion of lifting the leg in the vertical direction. That is, the video100of the field by the display unit13is in a stopped state (state ofFIG.5) in normal movement of the user U from the origin position X to the current position Y (state ofFIG.6), and only when the user U performs a predetermined motion (state ofFIG.8), and the determination unit20determines that the detected acceleration is larger than the threshold value, the video100in the stopped state (state ofFIG.5) can be set to the video100(state ofFIG.7) in a state of being changed to progress in the progress direction A by the distance B by the control unit21. 
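The interplay of the motion detector 18, the threshold value setting unit 19, the determination unit 20, and the control unit 21 described above is essentially a single comparison: the video advances only when the detected acceleration exceeds a threshold deliberately set above the acceleration of ordinary walking. The numeric values below are invented for illustration and are not taken from the disclosure.

```python
def should_advance_video(acceleration: float, threshold: float) -> bool:
    """Determination sketch: advance the video by distance B along direction A
    only when the detected acceleration exceeds the threshold."""
    return acceleration > threshold

THRESHOLD = 5.0          # set above ordinary walking acceleration (assumed value, m/s^2)
print(should_advance_video(1.5, THRESHOLD))    # False: normal movement, video stays stopped
print(should_advance_video(12.0, THRESHOLD))   # True: jump/dash-like motion, video progresses
```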
That is, the user U can intend the change of the video100in the field by a motion of the user U, and can avoid a feeling of getting drunk in advance. Note that the control unit21can perform a control operation to change the speed of the change of the video100in the field in accordance with the acceleration and/or an angular velocity in the motion of the user detected by the motion detector19, and cause the display unit13to perform display. For example, when the distance B from the origin position X to the current position Y is relatively long, the speed of the change of the video100in the field can be set to be relatively large. Further, when the distance B from the origin position X to the current position Y is short, the speed of the change of the video100in the field can be set to be relatively small. In this way, even when the distance B from the origin position X to the current position Y is different, it is possible to make a video change time for progressing in the progress direction A of the video100constant, and it is possible to reduce stress on the user U. The controller30is provided with, for example, a button-shaped operation unit32allowing the user U to perform a desired operation on a main body31. That is, in the controller30, the user U can operate the operation unit32as necessary to transmit predetermined command information. More specifically, the controller30is configured to be communicable with the video display apparatus10and can transmit command information to the input unit11of the video display apparatus10. The command information includes information for starting the display of virtual reality, information for selecting a predetermined video100from videos100in a plurality of fields stored in the storage unit11, etc. The sign40can indicate the origin position X in the real world. The sign40can be provided on the floor, the ground, or the rug on which the user U rides in the real world. The sign40may have a three-dimensional shape, the three-dimensional shape may have a protruding shape protruding toward the user U, and the protruding shape may have a shape protruding upward. By standing on or near the sign40and operating the controller30as necessary, the user U can input and set the position of the sign40(or a position near the sign40), that is, an origin position X of the real world, and match the origin position X of the real world with the origin position X of the virtual reality, that is, the origin position X in the display of the video100in the field of virtual reality. Here, each functional unit of the video display apparatus10described above can function by executing a predetermined program200. That is, the program200can cause the computer of the virtual reality system1to function as the storage unit11, the input unit12, the display unit13, the origin setting unit14, the current position recognition unit15, the progress direction setting unit16, the distance calculation unit17, the motion detector18, the threshold value setting unit19, the determination unit20, and the control unit21. The program200is stored in the storage device10cof the video display apparatus10. Next, a description will be given of a method of displaying the video100by the virtual reality system1of the invention based on a flowchart ofFIG.9. 
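The constant-time transition described above, in which a longer distance B produces a faster change of the video 100 so that the transition always takes roughly the same time, can be written as a one-line scaling rule. The two-second transition time below is an assumed value, not one given in the disclosure.

```python
def video_change_speed(distance_b: float, transition_time_s: float = 2.0) -> float:
    """Speed of the video change so that progressing by B always takes the same time."""
    return distance_b / transition_time_s

for b in (0.5, 1.0, 3.0):
    print(f"B = {b} m -> speed = {video_change_speed(b):.2f} m/s")   # longer distance, faster change
```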
That is, first, instep S10, the user U turns ON the power source of the virtual reality system1and operates the operation unit32of the controller30as necessary to transmit the command information such as information for starting the display of the virtual reality, information for selecting a predetermined video100from videos100in a plurality of fields stored in the storage unit11, etc., and the input unit12inputs the command information. In addition, the threshold value setting unit19sets the threshold value of the acceleration. Subsequently, in step S20, the user U provides the sign40. The sign40is provided on the floor, the ground, or the rug on which the user U rides in the real world. The sign40indicates the origin position X in the real world. Subsequently, in step S30, the user U stands on or near the sign40and operates the controller30as necessary. In this way, the origin position setting unit14inputs and sets the position of the sign40(or the position near the sign40) as the origin position X of the real world, and matches the origin position X of the real world with the origin position X of the virtual reality, that is, the origin position X in the display of the video100in the field of the virtual reality field. Subsequently, in step S40, the user U moves as necessary from the origin position X, and the current position recognition unit15detects and recognizes the current position Y of the user U. Subsequently, in step S50, the progress direction setting unit16calculates the direction of the vector of the current position Y recognized by the current position recognition unit15with respect to the origin position X set by the origin position setting unit14, and sets the progress direction A in the video100in the field displayed on the display unit13in accordance with the calculated direction of the vector. Subsequently, in step S60, the distance calculation unit17calculates the distance B from the origin position X set by the origin position setting unit14to the current position Y recognized by the current position recognition unit15, and calculates the magnitude of the vector in the progress direction A. Subsequently, in step S70, the user U performs a predetermined motion, and the motion detector18detects the predetermined motion of the user U as acceleration. Subsequently, in step S80, the determination unit20determines a magnitude relationship between the threshold value set by the threshold value setting unit19and the acceleration detected by the motion detector18, and when it is determined that the detected acceleration is larger than the threshold value, the process proceeds to step S90, and the control unit21controls the change of the video100so that the video100in the field progresses in the progress direction A set by the progress direction setting unit16according to the distance B calculated by the distance calculation unit17. In addition, the control unit21performs a control operation to change the display state of the display object13aon the display unit13according to the distance B from the origin position X to the current position Y calculated by the distance calculation unit17, and causes the display unit13to perform display. Further, the control unit21performs a control operation to change the speed of change of the video100in the field according to the distance B from the origin position X to the current position Y calculated by the distance calculation unit17, and causes the display unit13to perform display. 
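Putting steps S40 through S90 together, one cycle of the method amounts to: recognize the current position Y, derive the progress direction A and distance B from the origin position X, and advance the video only if the user's deliberate motion exceeds the threshold. The following end-to-end sketch reuses the pieces sketched above; all values are illustrative.

```python
import math

def anti_sickness_step(origin, current, acceleration, threshold):
    """One pass over steps S40-S90 of the flowchart (illustrative only)."""
    dx, dy = current[0] - origin[0], current[1] - origin[1]
    direction_a = math.atan2(dy, dx)                 # S50: set progress direction A
    distance_b = math.hypot(dx, dy)                  # S60: calculate distance B
    if acceleration > threshold:                     # S70/S80: detect and compare the motion
        return {"advance": True, "direction_rad": direction_a, "distance": distance_b}  # S90
    return {"advance": False}                        # video remains stopped

print(anti_sickness_step((0, 0), (1.0, 1.0), acceleration=9.0, threshold=5.0))
print(anti_sickness_step((0, 0), (1.0, 1.0), acceleration=1.0, threshold=5.0))
```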
As described above, according to the present embodiment, since the control unit21controls the change of the video100to cause the video100to progress in the progress direction A set by the progress direction setting unit16, the progress direction A can be set in the direction from the origin position X. In addition, when the control unit21controls the change of the video100to cause the video100to progress in the progress direction A set by the progress direction setting unit16upon detecting a predetermined motion of the user U by the motion detector18, it is possible to change the video100of the virtual reality according to the predetermined motion of the user U. In addition, since the display of the video100includes the display object13aindicating the progress direction A, the progress direction A can be confirmed by the display object13a. Further, since the display state of the display object13ais changed according to the distance B from the origin position X to the current position Y, the user U can subjectively grasp the distance B from the origin position X to the current position Y. Furthermore, since the virtual reality system1has the sign indicating the origin position X, the user U can confirm the origin position X in the real world. Note that it is natural that the invention is not limited to the above-described embodiment, and can be variously modified and applied. That is, for example, the change of the video by the control unit21includes a change of the video that causes the video to progress in the progress direction set by the progress direction setting unit16and a change of the video other than the change, and the change of the video and the change of the video other than the change may be performed according to the command information input by the input unit12and transmitted from the controller30. That is, the operation of the controller30is normally performed by the user U manually operating the operation unit32, and thus may be a motion distinguished from movement from the origin position X to the current position Y performed by the user U walking with a foot, which is preferable. In addition, in the above-described embodiment, the control unit21is included in the video display apparatus10. However, the control unit21may be included in the controller30, or included in a plurality of devices such as both the video display apparatus10and the controller30. Further, even though the motion detector18detects acceleration by input from the acceleration sensor18a, the motion detector18may detect various motions such as a hand waving motion, a hip shaking motion, and kicking up of a leg, and the change of the video100may be controlled by the control unit21. That is, the motion detector18can produce a desired effect when a predetermined motion of the user U is detected. Note that in such a case, predetermined sensors are provided on an arm, a waist, and a leg so that each motion can be detected. Furthermore, in the above-described embodiment, the motion detector18detects acceleration by input from the acceleration sensor18a, the threshold value setting unit19sets a predetermined threshold value of the acceleration, and the determination unit20determines a magnitude relationship between the threshold value set by the threshold value setting unit19and the acceleration detected by the motion detector18. 
However, by providing a speed sensor or an angular velocity sensor instead of the acceleration sensor 18a, the motion detector 18 may detect the speed or the angular velocity by input from the speed sensor or the angular velocity sensor, the threshold value setting unit 19 may set a predetermined threshold value of the speed or the angular velocity, and the determination unit 20 may determine a magnitude relationship between the threshold value set by the threshold value setting unit 19 and the speed or the angular velocity detected by the motion detector 18. In addition, the acceleration sensor 18a, the speed sensor, and the angular velocity sensor may be appropriately combined and used. That is, the motion detector 18 may detect at least one of the speed, the acceleration, and the angular velocity in the motion of the user U, and the control unit 21 may control the change of the video 100 when the determination unit 20 determines that at least one of the detected speed, acceleration, and angular velocity exceeds the corresponding predetermined value. The angular velocity sensor may be, for example, a gyro sensor.

REFERENCE SIGNS LIST

A Progress direction
B Distance
U User
U′ Head
X Origin position
Y Current position
1 Virtual reality system
10 Video display apparatus
10′ Main body
10″ Fixing portion
10a Bus
10b Central processing unit
10c Storage device
10d Input device
10e Display device
11 Storage unit
12 Input unit
13 Display unit
13a Display object
14 Origin position setting unit
15 Current position recognition unit
15a Position detector
16 Progress direction setting unit
17 Distance calculation unit
18 Motion detector
19 Threshold value setting unit
20 Determination unit
21 Control unit
30 Controller
31 Main body
32 Operation unit
40 Sign
100 Video
200 Program | 22,423 |
11861056 | DESCRIPTION Various examples of electronic systems and techniques for using such systems in relation to various CGR technologies are described. A physical environment (or real environment) refers to a physical world that people can sense and/or interact with without aid of electronic systems. Physical environments, such as a physical park, include physical articles (or physical objects or real objects), such as physical trees, physical buildings, and physical people. People can directly sense and/or interact with the physical environment, such as through sight, touch, hearing, taste, and smell. In contrast, a CGR environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic system. In CGR, a subset of a person's physical motions, or representations thereof, are tracked, and, in response, one or more characteristics of one or more virtual objects simulated in the CGR environment are adjusted in a manner that comports with at least one law of physics. For example, a CGR system may detect a person's head turning and, in response, adjust graphical content and an acoustic field presented to the person in a manner similar to how such views and sounds would change in a physical environment. In some situations (e.g., for accessibility reasons), adjustments to characteristic(s) of virtual object(s) in a CGR environment may be made in response to representations of physical motions (e.g., vocal commands). A person may sense and/or interact with a CGR object using any one of their senses, including sight, sound, touch, taste, and smell. For example, a person may sense and/or interact with audio objects that create a three-dimensional (3D) or spatial audio environment that provides the perception of point audio sources in 3D space. In another example, audio objects may enable audio transparency, which selectively incorporates ambient sounds from the physical environment with or without computer-generated audio. In some CGR environments, a person may sense and/or interact only with audio objects. Examples of CGR include virtual reality and mixed reality. A virtual reality (VR) environment (or virtual environment) refers to a simulated environment that is designed to be based entirely on computer-generated sensory inputs for one or more senses. A VR environment comprises a plurality of virtual objects with which a person may sense and/or interact. For example, computer-generated imagery of trees, buildings, and avatars representing people are examples of virtual objects. A person may sense and/or interact with virtual objects in the VR environment through a simulation of the person's presence within the computer-generated environment, and/or through a simulation of a subset of the person's physical movements within the computer-generated environment. In contrast to a VR environment, which is designed to be based entirely on computer-generated sensory inputs, a mixed reality (MR) environment refers to a simulated environment that is designed to incorporate sensory inputs from the physical environment, or a representation thereof, in addition to including computer-generated sensory inputs (e.g., virtual objects). On a virtuality continuum, an MR environment is anywhere between, but not including, a wholly physical environment at one end and a VR environment at the other end. In some MR environments, computer-generated sensory inputs may respond to changes in sensory inputs from the physical environment. 
Also, some electronic systems for presenting an MR environment may track location and/or orientation with respect to the physical environment to enable virtual objects to interact with real objects (that is, physical articles from the physical environment or representations thereof). For example, a system may account for movements so that a virtual tree appears stationary with respect to the physical ground. Examples of MR include augmented reality and augmented virtuality. An augmented reality (AR) environment refers to a simulated environment in which one or more virtual objects are superimposed over a physical environment, or a representation thereof. For example, an electronic system for presenting an AR environment may have a transparent or translucent display through which a person may directly view the physical environment. The system may be configured to present virtual objects on the transparent or translucent display, so that a person, using the system, perceives the virtual objects superimposed over the physical environment. Alternatively, a system may have an opaque display and one or more imaging sensors that capture images or video of the physical environment, which are representations of the physical environment. The system composites the images or video with virtual objects, and presents the composition on the opaque display. A person, using the system, indirectly views the physical environment by way of the images or video of the physical environment, and perceives the virtual objects superimposed over the physical environment. As used herein, a video of the physical environment shown on an opaque display is called “pass-through video,” meaning a system uses one or more image sensor(s) to capture images of the physical environment, and uses those images in presenting the AR environment on the opaque display. Further alternatively, a system may have a projection system that projects virtual objects into the physical environment, for example, as a hologram or on a physical surface, so that a person, using the system, perceives the virtual objects superimposed over the physical environment. An AR environment also refers to a simulated environment in which a representation of a physical environment is transformed by computer-generated sensory information. For example, in providing pass-through video, a system may transform one or more sensor images to impose a select perspective (e.g., viewpoint) different than the perspective captured by the imaging sensors. As another example, a representation of a physical environment may be transformed by graphically modifying (e.g., enlarging) portions thereof, such that the modified portion may be representative but not photorealistic versions of the originally captured images. As a further example, a representation of a physical environment may be transformed by graphically eliminating or obfuscating portions thereof. An augmented virtuality (AV) environment refers to a simulated environment in which a virtual or computer generated environment incorporates one or more sensory inputs from the physical environment. The sensory inputs may be representations of one or more characteristics of the physical environment. For example, an AV park may have virtual trees and virtual buildings, but people with faces photorealistically reproduced from images taken of physical people. As another example, a virtual object may adopt a shape or color of a physical article imaged by one or more imaging sensors. 
As a further example, a virtual object may adopt shadows consistent with the position of the sun in the physical environment. There are many different types of electronic systems that enable a person to sense and/or interact with various CGR environments. Examples include head mounted systems, projection-based systems, heads-up displays (HUDs), vehicle windshields having integrated display capability, windows having integrated display capability, displays formed as lenses designed to be placed on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers. A head mounted system may have one or more speaker(s) and an integrated opaque display. Alternatively, a head mounted system may be configured to accept an external opaque display (e.g., a smartphone). The head mounted system may incorporate one or more imaging sensors to capture images or video of the physical environment, and/or one or more microphones to capture audio of the physical environment. Rather than an opaque display, a head mounted system may have a transparent or translucent display. The transparent or translucent display may have a medium through which light representative of images is directed to a person's eyes. The display may utilize digital light projection, OLEDs, LEDs, uLEDs, liquid crystal on silicon, laser scanning light source, or any combination of these technologies. The medium may be an optical waveguide, a hologram medium, an optical combiner, an optical reflector, or any combination thereof. In one example, the transparent or translucent display may be configured to become opaque selectively. Projection-based systems may employ retinal projection technology that projects graphical images onto a person's retina. Projection systems also may be configured to project virtual objects into the physical environment, for example, as a hologram or on a physical surface. FIG.1AandFIG.1Bdepict exemplary system100for use in various CGR technologies. In some examples, as illustrated inFIG.1A, system100includes device100a. Device100aincludes various components, such as processor(s)102, RF circuitry(ies)104, memory(ies)106, image sensor(s)108, orientation sensor(s)110, microphone(s)112, location sensor(s)116, speaker(s)118, display(s)120, and touch-sensitive surface(s)122. These components optionally communicate over communication bus(es)150of device100a. In some examples, elements of system100are implemented in a base station device (e.g., a computing device, such as a remote server, mobile device, or laptop) and other elements of the system100are implemented in a head-mounted display (HMD) device designed to be worn by the user, where the HMD device is in communication with the base station device. In some examples, device100ais implemented in a base station device or a HMD device. As illustrated inFIG.1B, in some examples, system100includes two (or more) devices in communication, such as through a wired connection or a wireless connection. First device100b(e.g., a base station device) includes processor(s)102, RF circuitry(ies)104, and memory(ies)106. These components optionally communicate over communication bus(es)150of device100b. 
Second device100c(e.g., a HMD) includes various components, such as processor(s)102, RF circuitry(ies)104, memory(ies)106, image sensor(s)108, orientation sensor(s)110, microphone(s)112, location sensor(s)116, speaker(s)118, display(s)120, and touch-sensitive surface(s)122. These components optionally communicate over communication bus(es)150of device100c. In some examples, system100is a mobile device. In some examples, system100is an HMD device. In some examples, system100is a wearable HUD device. System100includes processor(s)102and memory(ies)106. Processor(s)102include one or more general processors, one or more graphics processors, and/or one or more digital signal processors. In some examples, memory(ies)106are one or more non-transitory computer-readable storage mediums (e.g., flash memory, random access memory) that store computer-readable instructions configured to be executed by processor(s)102to perform the techniques described below. System100includes RF circuitry(ies)104. RF circuitry(ies)104optionally include circuitry for communicating with electronic devices, networks, such as the Internet, intranets, and/or a wireless network, such as cellular networks and wireless local area networks (LANs). RF circuitry(ies)104optionally includes circuitry for communicating using near-field communication and/or short-range communication, such as Bluetooth®. System100includes display(s)120. In some examples, display(s)120include a first display (e.g., a left eye display panel) and a second display (e.g., a right eye display panel), each display for displaying images to a respective eye of the user. Corresponding images are simultaneously displayed on the first display and the second display. Optionally, the corresponding images include the same virtual objects and/or representations of the same physical objects from different viewpoints, resulting in a parallax effect that provides a user with the illusion of depth of the objects on the displays. In some examples, display(s)120include a single display. Corresponding images are simultaneously displayed on a first area and a second area of the single display for each eye of the user. Optionally, the corresponding images include the same virtual objects and/or representations of the same physical objects from different viewpoints, resulting in a parallax effect that provides a user with the illusion of depth of the objects on the single display. In some examples, system100includes touch-sensitive surface(s)122for receiving user inputs, such as tap inputs and swipe inputs. In some examples, display(s)120and touch-sensitive surface(s)122form touch-sensitive display(s). System100includes image sensor(s)108. Image sensors(s)108optionally include one or more visible light image sensor, such as charged coupled device (CCD) sensors, and/or complementary metal-oxide-semiconductor (CMOS) sensors operable to obtain images of physical objects from the real environment. Image sensor(s) also optionally include one or more infrared (IR) sensor(s), such as a passive IR sensor or an active IR sensor, for detecting infrared light from the real environment. For example, an active IR sensor includes an IR emitter, such as an IR dot emitter, for emitting infrared light into the real environment. Image sensor(s)108also optionally include one or more event camera(s) configured to capture movement of physical objects in the real environment. 
Image sensor(s)108also optionally include one or more depth sensor(s) configured to detect the distance of physical objects from system100. In some examples, system100uses CCD sensors, event cameras, and depth sensors in combination to detect the physical environment around system100. In some examples, image sensor(s)108include a first image sensor and a second image sensor. The first image sensor and the second image sensor are optionally configured to capture images of physical objects in the real environment from two distinct perspectives. In some examples, system100uses image sensor(s)108to receive user inputs, such as hand gestures. In some examples, system100uses image sensor(s)108to detect the position and orientation of system100and/or display(s)120in the real environment. For example, system100uses image sensor(s)108to track the position and orientation of display(s)120relative to one or more fixed objects in the real environment. In some examples, system100includes microphones(s)112. System100uses microphone(s)112to detect sound from the user and/or the real environment of the user. In some examples, microphone(s)112includes an array of microphones (including a plurality of microphones) that optionally operate in tandem, such as to identify ambient noise or to locate the source of sound in space of the real environment. System100includes orientation sensor(s)110for detecting orientation and/or movement of system100and/or display(s)120. For example, system100uses orientation sensor(s)110to track changes in the position and/or orientation of system100and/or display(s)120, such as with respect to physical objects in the real environment. Orientation sensor(s)110optionally include one or more gyroscopes and/or one or more accelerometers. Various aspects of the present disclosure are directed to systems and techniques that provide functionality for controlling representations of virtual objects within a CGR environment. In particular, aspects of the present disclosure are directed to systems and techniques that provide functionality for controlling a representation of a virtual object based on a use context associated with a location of the virtual object within the CGR environment. The systems and techniques described herein allow for a representation of a virtual object to be adapted to the particular use context associated with the location within the CGR environment. FIGS.2A-2Eillustrate exemplary techniques for controlling a representation of a virtual object of a CGR environment based on a use context associated with a location of the virtual object within the CGR environment in accordance with aspects of the present disclosure. In particular,FIG.2Aillustrates user202and electronic device200. In some embodiments, electronic device200may be a wearable electronic device (e.g., an HMD). Examples of a wearable electronic device are described herein, such as with respect to electronic device100adescribed above with reference toFIGS.1A and1B. As shown inFIG.2A, user202wears electronic device200, which is configured to enable user202to perceive CGR environment290. As described above, CGR environment290may include physical objects, or representations thereof, and virtual objects, with virtual objects superimposed upon the physical objects (e.g., in AR implementations), or physical objects superimposed upon the virtual objects (e.g., in AV implementations) to present a coherent CGR environment to user202. 
In some embodiments, CGR environment290may be a wholly virtual environment (e.g., in VR implementations), in which every object within CGR environment290is a virtual object. Whether entirely or partially virtual implementations, in the example illustrated inFIG.2A, virtual object210may be a representation of a presentation application (e.g., an application configured to facilitate multimedia presentations) and may be presented to user202within CGR environment290. In embodiments, virtual object210may be located at any location within CGR environment290. In the particular example illustrated inFIGS.2A-2E, CGR environment290may include at least location220,222,224,226, and228. As will be appreciated, these locations are described for illustration purposes and not intended to be limiting in any way. That is, any other location within CGR environment290may be applicable to the features and functionalities described herein. In aspects, location220may correspond to a location on a representation of an electronic device within CGR environment290. For example, location220may correspond to a location (e.g., a display, a screen, a surface or case of an electronic device) on display240. Display240may be, for example, a display of a computer, laptop, tablet, phone, display, projector display, etc. Display240may be an actual physical device (e.g., a physical object) or may be a virtual representation of a display (e.g., a virtual object) within CGR environment290. Location222may correspond to a location on a vertical plane of CGR environment290(e.g., a predominantly vertical plane such as a structure that is a vertical plane, a wall, a surface that corresponds to a wall-like structure such as a side of building, bedroom wall, fence, a vertical or auxiliary vertical plane, etc.). In the particular example illustrated inFIG.2A, location222corresponds to a location on a wall of CGR environment290. Location224, and/or location228, may correspond to a location on a horizontal plane of CGR environment290(e.g., a predominantly horizontal plane such as a structure that is a horizontal plane, a desktop, table, countertop, shelf, floor, an elevated horizontal plane such as a horizontal plane that is above another horizontal plane within the CGR environment, a horizontal plane that is not elevated, etc.). In the particular example illustrated inFIG.2A, locations224and228correspond to locations on desktop242, which may be a physical or virtual object. Location226may correspond to a location on a horizontal plane of CGR environment290, but of a different type than locations224and/or228. For example, location226may be a location on a predominantly horizontal plane such as a structure that is a horizontal plane, a floor, a sidewalk, grass, lawn, a surface that one or more people are standing on, a non-elevated horizontal plane such as a horizontal plane that is below another horizontal plane within the CGR, etc. In the particular example illustrated inFIG.2A, location226corresponds to a locations on the floor of CGR environment290. As shown inFIG.2A, virtual object210may be displayed at location220(e.g., by electronic device200). In some embodiments, a location within CGR environment290(e.g., location220) may be associated with or otherwise correspond to at least one use context of a plurality of use contexts. In embodiments, a use context may be related to a type of surface (e.g., a desk, a wall, a computer screen, a floor, etc.) or the type of material of the surface (e.g., sand, grass, concrete, carpet, etc.) 
that the virtual object will be placed on, and/or may be related to a manner in which the virtual object will be used (e.g., manipulated, interacted with) or displayed (e.g., presented) in the CGR environment. In aspects, location220may be associated with a first use context. For example, as described above, location220may be a location on display240. Display240may be a representation of an electronic device. In this case, the first use context associated with location220may be the type of surface or object of location220, which is an electronic device. Thus, in this case, the first use context may be satisfied when a determination is made that location220is a location on a representation of an electronic device. In other embodiments, the first use context associated with location220may be the manner in which virtual object210will be used when in location220. For example, it may be determined that, at location220, which is an electronic device, virtual object220will be used as an application for multimedia presentations on display240. In this case, it may be determined that virtual object is to be represented as a two-dimensional (2D) window based on the manner in which virtual object will be used. It is noted that, as used herein, a representation of a virtual object may include the content, size, functionality, user interface objects, form, shape, design, graphical presentation of the virtual object within the CGR environment, etc. For example, a virtual object may be represented as a 2D object (e.g., an application icon, an application window, an image, a user interface of an application, etc.). In other examples, the virtual object may be represented as a 3D object within the CGR environment. In some embodiments, a first representation of a virtual object may be a 3D object including particular content, and a second, different representation of the virtual object may be a 3D object including different content from the particular content in the first representation. In some embodiments, a representation of a virtual object within the CGR environment may include audio characteristics. For example, one representation may include particular sounds, noises, spoken words, etc., and a second representation may include different sounds, noises, spoken words, etc. In some cases, the representation of a virtual object may also include the level of sound, in which one representation of a virtual object may include one level of sound, and a different representation may include a higher or lower level of sound. In accordance with the above, when virtual object210is located, at least partially on location220, whether by being moved or dragged to location220or by being displayed on location220, virtual object210is displayed as a 2D window on display240, (e.g., by electronic device200) based on a determination that location220is associated with a use context that is satisfied by a determination that location220is on display240, display240being an electronic device. In some embodiments, virtual object210may be configured such that user202may interact with virtual object210. Interaction with virtual object220may be via input sensors, as described above, configured to detect a user input to interact with virtual objects of CGR environment290. 
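The determination described above, in which location 220 satisfies a use context because it lies on a representation of an electronic device and virtual object 210 is therefore shown as a 2D window, can be sketched as a simple predicate plus a representation record. The field names and the sound-level value below are illustrative simplifications of the representation attributes listed above (content, size, form, audio), not an API from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Representation:
    form: str            # e.g., "2d_window" or a named 3D object
    content: str         # what the application shows in this form
    sound_level: float   # representations may also differ in audio characteristics

def representation_for_display_location(on_electronic_device_display: bool) -> Representation:
    """If the location satisfies the 'on an electronic device' use context, present the
    presentation application as a 2D window; otherwise fall back to a transitional form."""
    if on_electronic_device_display:
        return Representation("2d_window", "multimedia presentation editor", sound_level=0.5)
    return Representation("transitional", "drag indicator", sound_level=0.0)

print(representation_for_display_location(True).form)   # 2d_window, as at location 220
```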
In some embodiments, the input sensors may include a mouse, a stylus, touch-sensitive surfaces, image-sensors (e.g., to perform hand-tracking), etc., which may be configured to allow user202to grab, move, drag, click, select and/or otherwise select virtual object210. As such, in embodiments, a request to move virtual object210to a location within CGR environment290may be received. In the example shown inFIG.2A, a request to move virtual object210from location220to another location within CGR environment290may include user202grabbing or otherwise selecting virtual object210for moving from location220, and may cause virtual object210to depart location220. In some embodiments, as soon as virtual object210is removed from a location (e.g., location220), the current representation of virtual object210may change. For example, as soon as virtual object210is removed from location220, the current representation of virtual object210as a 2D window of a multimedia presentation application may be changed to another representation. In some implementations, the current representation of virtual object210may be changed to some transitional representation, which may not be associated with a particular use context, but rather may be a default representation indicating that virtual object210is transitioning from one location to another. In other implementations, the current representation of virtual object210may not be changed when virtual object210is removed from a location but, instead, the current representation of virtual object210may remain unchanged until the virtual object is positioned in another location which is determined to be associated with a use context for which a different representation of virtual object210may be determined to be displayed. In this case, the current representation of virtual object210may be maintained during transit of virtual object210from the current location to the new location. FIG.2Bshows an example of virtual object210displayed (e.g., by electronic device200) on location224. In this example, in response to the request to move the virtual object to location224, at least one use context corresponding to location224may be determined. For example, location224may correspond to a location on desktop242. In this case, it may be determined that location224is associated with a use context that is satisfied by the type of location of location224(e.g., the type of surface, the air), location224being a location on desktop242(e.g., a location on a horizontal plane). In alternative or additional embodiments, location224on desktop242may be determined to be a location in which virtual object210may be used, e.g., by user202, to make notes regarding a multimedia presentation. In either case, whether because location224is a location on a desktop or because location224is a location in which the virtual object may be used to make annotations to a multimedia presentation, virtual object210may be represented as a 3D object (e.g., a notepad, notebook, book, or any other 3D representation) configured to facilitate a user annotating and/or making notes on the multimedia presentation. Although not illustrated, virtual object210may be moved from location224on desktop242to location228also on desktop242. In embodiments, the representation (e.g., the 3D virtual notepad) of virtual object210may remain the same on location228as in location224, as both locations may be associated with the same use context. 
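The move-request handling just described leaves two implementation choices open: switch to a transitional representation the moment the object leaves its location, or keep the current representation until the destination's use context is resolved. The sketch below models both; the strings naming use contexts and representations are invented for the example.

```python
def move_virtual_object(current_repr: str, destination_context: str,
                        context_to_repr: dict, use_transitional: bool = True) -> list:
    """Return the sequence of representations shown while moving a virtual object.

    With use_transitional=True the object first takes a default 'transitional' form
    in transit; otherwise the current representation is maintained until the
    destination use context determines the new one."""
    in_transit = "transitional" if use_transitional else current_repr
    final = context_to_repr.get(destination_context, current_repr)
    return [in_transit, final]

contexts = {"elevated_horizontal_plane": "3d_notepad", "vertical_plane": "large_wall_window"}
print(move_virtual_object("2d_window", "elevated_horizontal_plane", contexts))
# ['transitional', '3d_notepad'] -- e.g., dragging from display 240 to desktop 242
```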
Alternatively, although both locations224and228are on desktop242(e.g., the same type of surface), the representation of the virtual object when on location228may be different than the representation when on location224. For example, the representation of virtual object when on location228may be of a different size (e.g., smaller or larger) or may be differently oriented, than the representation when on location224because location228may be determined to not be able to accommodate the size and/or orientation of the representation of virtual object210when on location224. In some embodiments, different locations within the same type of surface (e.g., different locations on desktop242, on wall222, etc.) may be configured for different use contexts. For example, a particular location on desktop242may be configured with a use context in which the representation of virtual object210may be on a particular language, and another location on desktop242may be configured with a use context in which the representation of virtual object210may be on a different language. FIG.2Cshows an example of virtual object210displayed on location222. For example, a request to move virtual object210to location222may be received. The request may include a request to move virtual object210from any other location within CGR environment290(e.g., location220, location224, location226, etc.). In response to the request, virtual object210may be moved to location222, and a representation of virtual object210to be displayed at location222may be determined. In this example, in response to the request to move the virtual object to location222, at least one use context corresponding to location222may be determined. For example, location222may correspond to a location on a vertical plane (e.g., a wall) of CGR environment290. In this case, it may be determined that location222is associated with a use context that is satisfied by the type of location of location222(e.g., the type of surface), location222being a location on a wall of CGR environment290. In alternative or additional embodiments, location222on a wall of CGR environment290may be determined to be a location in which virtual object210may be used to present a multimedia presentation. In either case, whether because location222is a location on a wall or because location222is a location in which the virtual object may be used to present the multimedia presentation, virtual object210may be represented (e.g., displayed by electronic device200) as a large window object configured to facilitate presenting the multimedia presentation. For example, the large window object may be a 2D window, or a 3D representation of a large monitor, displayed as fixed against the wall. In some embodiments, the size of the large window object against the wall may be determined based on the distance of the wall against which the large window object is displayed relative to the location of user202within CGR environment290. In some embodiments, the content (e.g., the information and/or arrangement of information) of the representation of virtual object210on location222may be different than the content in the representations of virtual object210at other locations. For example, while at location224, the 3D notepad used as the representation of virtual object210may include information arranged in a specific arrangement within the 3D notepad. 
While at location222, the large window display against the wall used as the representation of virtual object210may include different information, which may be arranged in a different arrangement, within the large window display. FIG.2Dshows an example of virtual object210displayed (e.g., by electronic device200) on location226. For example, a request to move virtual object210to location226may be received. The request may include a request to move virtual object210from any other location within CGR environment290(e.g., location220, location222, location224, location228, etc.). In response to the request, virtual object210may be moved to location226and a representation of virtual object210to be displayed (e.g., by electronic device200) at location226may be determined. In this example, in response to the request to move virtual object210to location226, at least one use context corresponding to location226may be determined. For example, location226may correspond to a location on a horizontal plane (e.g., the floor) of CGR environment290. It is noted that, in this example, location226corresponds to a location on a horizontal plane that is of a different type than the horizontal plane corresponding to location224, which is a location on desktop242. In this case, it may be determined that location226is associated with a use context that is satisfied by the type of location of location226(e.g., the type of surface), location226being a location on the floor of CGR environment290. In alternative or additional embodiments, location226on the floor of CGR environment290may be determined to be a location in which virtual object210may be used to at least partially immersively (e.g., from a first-person-view mode) present a multimedia presentation. In either case, whether because location226is a location on the floor or because location226is a location in which the virtual object may be used to at least partially immersively present a multimedia presentation, virtual object210may be represented as a 3D podium placed on, or near, location226, the podium configured to facilitate user202presenting the multimedia presentation from the podium. In some embodiments, the representation of virtual object210at location226may include content212related to the multimedia presentation (e.g., notes, annotations, presentation content, etc.), and may be presented on top of the podium where user202may perceive content212. FIG.2Eshows an example of virtual object210being displayed (e.g., by electronic device200) in fully-immersive mode. In some embodiments, a particular location may be associated with a fully-immersive use context. For example, a location, such as location226on the floor of CGR environment290, may be associated with a use context in which the presentation is to be presented as a fully immersive experience. In response to the request to move virtual object210to location226, virtual object210may be moved to location226, and a fully-immersive representation of virtual object210may be displayed. In this case, displaying virtual object210as a fully-immersive representation may include displaying the entire CGR environment290as a virtual auditorium configured to present the multimedia application. In some embodiments, a representation of virtual object210associated with a particular use context may be displayed without having to move the virtual object to a particular location. For example, with reference back toFIG.2A, in some embodiments, an affordance214may be presented within CGR environment290.
Affordance214may be a virtual object (e.g., a button, an affordance, a user-interface element, an interactive element, etc.) configured to allow interaction by a user (e.g., user202). Affordance214may correspond to at least one use context. In some embodiments, affordance214may also be associated with virtual object210(e.g., associated with the particular application of virtual object210such as multimedia presentation, calculator, weather, etc.). When user202selects affordance214for virtual object210, the use context corresponding to affordance214may be considered to be satisfied and may cause the associated representation (e.g., the representation of virtual object210associated with the use context) to be displayed. For example, where affordance214corresponds to the use context associated with location224(e.g., desktop), as shown inFIG.2B, a representation of virtual object210, as a 3D notepad, may be displayed by electronic device200. In some cases, the representation of virtual object210may be displayed at the location associated with the use context (e.g., without having to move virtual object210from its current location to the location corresponding with the use context associated with affordance214), or may be displayed at whichever location virtual object210is currently being displayed. In some embodiments, displaying the representation of virtual object210at the location associated with the use context corresponding to affordance214may include moving virtual object210from its current location to the location associated with the use context. In these cases, the moving of virtual object210to the location associated with the use context may be animated. In another example, where affordance214corresponds to a fully-immersive use context, as shown inFIG.2E, a representation of virtual object210as a fully-immersive experience may be displayed by electronic device200in response to user202selecting affordance214. In some embodiments, affordance214may include a plurality of affordances, each affordance in the plurality of affordances corresponding to a particular use context. In these embodiments, each affordance in the plurality of affordances may be a selectable affordance that, when selected, may cause the corresponding use context to be considered satisfied and may cause the associated representation (e.g., the representation of virtual object210associated with the satisfied use context) to be displayed in accordance with the foregoing. It is noted that although the present disclosure describes embodiments in which a virtual object is displayed on a single location within the CGR environment at a time, this is done for illustrative purposes and should not be construed as limiting in any way. Indeed, in some embodiments, separate and, in some cases, different representations of the same virtual object may be displayed at more than one location within the CGR environment concurrently. In embodiments, the separate representations at the different locations may all be different (e.g., may include different information or may have different shapes and/or forms as described above), or some of the representations at the different locations may be the same while other representations at other locations may be different.
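A brief, hedged sketch of the affordance-driven satisfaction of a use context described above follows, reusing the hypothetical SurfaceKind, UseContext, and Representation types from the earlier sketch; the Affordance and VirtualObject types are likewise assumptions, not types from the disclosure.

```swift
// Sketch of affordance-driven use-context satisfaction, reusing the hypothetical
// SurfaceKind, UseContext, and Representation types from the earlier sketch.
struct Affordance {
    let context: UseContext                 // the use context this affordance corresponds to
    let movesObjectToContextLocation: Bool  // whether selection also relocates the object
}

struct VirtualObject {
    var surface: SurfaceKind
    var shown: Representation
}

/// Selecting the affordance treats its use context as satisfied: the associated
/// representation is displayed, and the object is optionally moved (with animation,
/// in a fuller implementation) to the location tied to that use context.
func select(_ affordance: Affordance, on object: inout VirtualObject) {
    object.shown = affordance.context.representation
    if affordance.movesObjectToContextLocation {
        object.surface = affordance.context.surface
    }
}
```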
In some embodiments, a change to the configuration of the virtual object (e.g., a change to an application associated with the virtual object) may trigger a change to all the representations at all the locations or may trigger a change to some representations at some locations but not all representations at all locations. In some cases, a change to a representation at one location within the CGR environment (e.g., a change caused in response to user interaction and/or caused by a change in the associated application) may trigger at least one change to at least one representation of the virtual object at another location(s) within the CGR environment. FIGS.3A-3Cillustrate an example of functionality for controlling a representation of a virtual object based on a use context associated with a location within the CGR environment in accordance with aspects of the present disclosure. In particular,FIG.3Aillustrates user202wearing electronic device200, which may be configured to allow user202to view CGR environment290. In some embodiments, electronic device200may be similar to electronic device100adescribed above with reference toFIGS.1A and1B. CGR environment290includes display340, which may be a physical display or a virtual representation of a display. In any case, a representation of virtual object310may be displayed on location320(e.g., by electronic device200), which is a location on display340. In the example illustrated inFIG.3A, virtual object310may be a calculator application. In this case, location320may be determined to correspond to at least one use context (e.g., a type of location, surface, material, etc., and/or a type of use of the virtual object at the location). For example, location320may be determined to be a location on an electronic device (e.g., a physical device or a computer-generated simulation of a physical device) of CGR environment290. In this case, it may be determined that location320is associated with a use context that is satisfied by the type of location of location320(e.g., the type of surface), location320being a location on an electronic device. Based on the determination that location320is a location on an electronic device, virtual object310may be displayed as a 2D window or widget of the calculator application on display340(e.g., by electronic device200). Thus, as will be appreciated, the representation of virtual object310at location320is based on the use context corresponding to location320. FIG.3Bshows user202interacting with virtual object310at location320. The interaction of user202with virtual object310at location320may include a request to move virtual object310to another location (e.g., location324).FIG.3Cshows virtual object310having been moved to location324in response to the request to move virtual object310. In this example, at least one use context associated with location324may be determined. For example, location324is a location on desktop342. In this case, it may be determined that location324is associated with a use context that is satisfied by the type of location of location324(e.g., the type of surface), location324being a location on desktop342(e.g., a location on a horizontal plane). 
In alternative or additional embodiments, location324on desktop342may be determined to be a location in which virtual object310(e.g., a calculator application) may be used, e.g., by user202, to manipulate the calculator application in such a manner as to make entries into the calculator application as in a real-world physical calculator, for example by using the hands of user202or virtual representations thereof. In either case, whether because location324is a location on a desktop or because location324is a location in which the virtual object may be used to make entries into the calculator using a user's hand or virtual representations thereof, virtual object310may be represented as a 3D object (e.g., a 3D representation of a physical calculator) configured to facilitate a user making entries into the calculator application. FIGS.4A-4Cillustrate another example of a representation of a virtual object of a CGR environment based on a use context associated with a location of the virtual object within the CGR environment in accordance with aspects of the present disclosure. In particular,FIG.4Aillustrates user202wearing electronic device200, which is configured to allow user202to view CGR environment290. As mentioned above, in some embodiments, electronic device200may be similar to electronic device100adescribed above with reference toFIGS.1A and1B. CGR environment290includes display440. As described above, display440may be a physical display or a virtual representation of a display. A representation of virtual object410may be displayed by electronic device200on location420, which is a location on display440. In the example illustrated inFIG.4A, virtual object410may be an application for presenting an interactive and/or animated robot. It will be appreciated that the description of an animated robot herein is for illustrative purposes only and should not be construed as limiting in any way. Indeed, the techniques herein are applicable to any application that may be represented as a virtual object within a CGR environment. In this example, location420may be determined to be a location on a representation of an electronic device (e.g., a representation of a display of a physical computer). Based on the determination that location420is a location on a representation of an electronic device, virtual object410may be displayed (e.g., by electronic device200) as a 2D window or widget on display440. FIG.4Bshows virtual object410having been moved to location424. In aspects, virtual object410may be moved to location424in response to a request by a user (e.g., a user interacting with virtual object410to drag or otherwise cause virtual object410to move to location424). In this example, at least one use context associated with location424may be determined. For example, location424is a location on desktop442. In this case, it may be determined that location424is associated with a use context that is satisfied by the type of location of location424(e.g., the type of surface), location424being a location on desktop442(e.g., a location on a horizontal plane). Based on the use context corresponding to location424, virtual object410may be represented (e.g., displayed by electronic device200) as a 3D object (e.g., a 3D representation of an animated robot). In embodiments, the representation of virtual object410when at location424may include different functionality than the representation of the virtual object when at location420.
For example, the animated 3D robot on desktop442may be configured to move around desktop442in more than one axis. In addition or in the alternative, the animated 3D robot on desktop442may be able to rotate about its own axis. Additionally, or alternatively, the animated 3D robot on desktop442may be configured to be of a larger size than when at location420. FIG.4Cshows virtual object410having been moved to location426. In aspects, virtual object410may be moved to location426in response to a request by a user (e.g., a user interacting with virtual object410to drag or otherwise cause virtual object410to move to location426). In this example, at least one use context associated with location426may be determined. For example, location426is a location on the floor of CGR environment290. It is noted that, in this example, location426corresponds to a location on a horizontal plane that is of a different type than the horizontal plane corresponding to location424, which is a location on desktop442. In this case, it may be determined that location426is associated with a use context that is satisfied by the type of location of location426(e.g., the type of surface), location426being a location on the floor of CGR environment290. Based on the use context corresponding to location426, virtual object410may be represented (e.g., displayed by electronic device200) as a 3D object (e.g., a 3D representation of an animated robot) on the floor of CGR environment290. In embodiments, the representation of the virtual object when at location426may be different than when at location424. For example, the animated 3D robot on the floor of CGR environment290may be larger than the animated 3D robot at location424on desktop442. In addition, the animated 3D robot on the floor of CGR environment290may be configured to move at a faster rate than the animated 3D robot at location424on desktop442. In some embodiments, some locations within CGR environment290may not be associated with a use context for particular applications or may be prohibited locations with respect to a virtual object associated with a particular application. For example, location422may be a location on a vertical plane (e.g., a wall) of CGR environment290. In this example, location422may not have an associated use context. If user202attempts to move virtual object410to location422, the move may not be allowed, as, e.g., a 3D robot may not be able to navigate on a vertical surface. Alternatively, a default representation of the virtual object may be displayed (e.g., a 2D image or a 2D application window). FIG.5is a flow diagram illustrating method500for controlling a representation of a virtual object of a CGR environment based on a use context associated with a location of the virtual object within the CGR environment. In some embodiments, method500may be performed by system100or a portion of system100. In some embodiments, method500may be performed by one or more external systems and/or devices. In some embodiments, method500may be performed by system100(or a portion of system100) in conjunction with one or more external systems and/or devices. At block502, the system displays, via a display of an electronic device (e.g., a wearable electronic device, an HMD device, etc.), a first representation of a virtual object at a first location within a CGR environment.
For example, a first representation of a virtual object may be displayed via a first display (e.g., a left eye display panel) or second display (e.g., a second eye display panel) of an electronic device. In embodiments, the first location may correspond to a first use context of a plurality of use contexts. In embodiments, the plurality of use contexts may include a use context related to a type of surface (e.g., a desk, a wall, a computer screen, a floor, etc.) and/or the type of material (e.g., sand, grass, concrete, carpet, etc.), that the virtual object will be placed on, and/or a use context that corresponds to how the virtual object will be used (e.g., manipulated, interacted with) or displayed (e.g., presented) in the first location of the CGR environment. In some embodiments, the system may be a part of the electronic device, or the electronic device may be a portion of the system. In some embodiments, when the representation of the virtual object is displayed at the first location, the representation of the virtual object may be displayed on a first type of surface (e.g., a desk, a wall, a computer screen, a floor, etc.) and the representation of the virtual object may be displayed based on the first location (e.g., the type of surface that corresponds to the first location). In some embodiments, one or more of the plurality of use contexts may be predefined. For example, one or more of the plurality of use contexts may be predefined based on a particular application corresponding to the virtual object. In some embodiments, a first application may have a first number of predefined use contexts, and a second application may have a second number of predefined use contexts that is different from the first number of predefined use contexts. In some embodiments, the second application may have a different use context than the first application, or vice-versa. At block504, the system receives a request to move the first representation, within the CGR environment, to a second location that is different from the first location. In some embodiments, the request may be received or detected by the system, based on detecting movement of the first representation from the first location to the second location. In some embodiments, one or more user inputs may be detected and, in response to detecting these user inputs, the system may receive the request to move the representation to the second location. In some embodiments, the request to move the first representation from the first location to a second location may be received based on one or more determinations by an outside application, where based on the one or more determinations, the request to move the first representation from the first location to the second location is received. At block506, in response to receiving the request and in accordance with a determination that the second location corresponds to a second use context (e.g., the second use context being different from the first use context) of the plurality of use contexts, the system displays, via the display of the electronic device, at the second location, near the second location, and/or on a surface corresponding to the second location, a second representation of the virtual object based on the second use context, and/or based on one or more applications associated with the virtual object. In embodiments, the second representation may be different from the first representation. 
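As one way to picture the per-application predefined use contexts noted above (where a first application may have a different number of predefined use contexts than a second application), the following hedged sketch reuses the hypothetical types from the earlier sketches; the application names and the registered contexts are assumptions only.

```swift
// Sketch of per-application predefined use contexts, reusing the hypothetical
// SurfaceKind, Representation, and UseContext types from the earlier sketches.
let predefinedUseContexts: [String: [UseContext]] = [
    "presentation": [
        UseContext(surface: .desktop, representation: .object3D("notepad")),
        UseContext(surface: .wall, representation: .window2D(width: 3.0, height: 2.0)),
        UseContext(surface: .floor, representation: .object3D("podium")),
    ],
    "calculator": [
        UseContext(surface: .deviceDisplay, representation: .window2D(width: 0.2, height: 0.3)),
        UseContext(surface: .desktop, representation: .object3D("calculator")),
    ],
]
// The presentation application predefines three use contexts; the calculator only two.
```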
For example, the second representation may have a different size, shape, user interface objects, functionality, audio characteristics, surface materials, etc., and/or may be configured with one or more different and/or additional operations than the first representation. In some embodiments, the second use context of the plurality of use contexts may include a use context that is satisfied when a determination is made that the second location corresponds to a location (e.g., a display, screen, a surface or case of an electronic device) on an electronic device (e.g., a computer, laptop, tablet, phone, display, projector display). In some embodiments, in accordance with the determination that the second location corresponds to the second use context of the plurality of use contexts, as a part of displaying the second representation of the virtual object based on the second use context, the system displays, within the CGR environment, a 2D representation of the virtual object on the electronic device. In some embodiments, the second representation of the virtual object may be the 2D representation on the electronic device. In some embodiments, the second representation may be moved (e.g., dragged off the display of the electronic device) to a location in the virtual environment that corresponds to a physical surface in a physical environment. In some embodiments, the 2D application may be manipulated as being a 3D application on the electronic device. In some embodiments, the second use context of the plurality of use contexts may include a use context that is satisfied when a determination is made that the second location corresponds to a location on an electronic device (e.g., a computer, laptop, tablet, phone, display, projector display). In these embodiments, in accordance with a determination that the second location corresponds to the second use context of the plurality of use contexts, displaying the second representation of the virtual object based on the second use context may include displaying, within the CGR environment, a 3D representation on the electronic device. In some embodiments, the representation may change depending on the type (e.g., display (e.g., monitor), tablet, personal computer, laptop) of the electronic device. In some embodiments, the second use context of the plurality of use contexts may include a use context that is satisfied when a determination is made that the second location corresponds to a location on a vertical plane (e.g., a wall, a surface that corresponds to a wall-like structure, a side of a building, a bedroom wall, a fence, etc.). In some embodiments, in accordance with the determination that the second location corresponds to the second use context of the plurality of use contexts, as a part of displaying the second representation of the virtual object based on the second use context, the system displays a 2D representation on the vertical plane (e.g., on the wall) within the CGR environment. In some embodiments, the second representation of the virtual object may be the 2D representation on the vertical plane. In some embodiments, the 2D representation displayed on the vertical plane (e.g., on the wall) within the CGR environment may be bigger, may have more visual content, and/or may include one or more additional (or different) user interface objects than a 2D representation displayed on the electronic device.
In some embodiments, the representation may change depending on the type (e.g., side of building, bedroom wall, fence) of vertical plane and/or one or more characteristics of a vertical plane (e.g., virtual or physical), such as size, shape (e.g., circular, rectangular), material (e.g., brick, wood, metal), texture (e.g., rough, abrasive), color, opacity, etc. In some embodiments, the size of the second representation may be based on a distance between the display of the electronic device and the vertical plane within the CGR environment. In some embodiments, the 2D representation may be smaller when the vertical plane is closer to the display of the electronic device and larger when the vertical plane is farther away from the display of the electronic device. In some embodiments, the size of the 2D representation may be maintained as the user moves farther away or closer to the 2D representation after the 2D representation is initially displayed. In some embodiments, the size of the 2D representation may be changed as the user moves farther away or closer to the 2D representation after the 2D representation is initially displayed. In some embodiments, the size of the 2D representation may be based on whether the distance is in a certain category (e.g., categories of distance (e.g., far away, close, average distance), where each category of distances corresponds to a different size representation (e.g., extra-large, small, medium)). In some embodiments, the second use context of the plurality of use contexts includes a use context that is satisfied when a determination is made that the second location corresponds to a location on a horizontal plane (e.g., a desktop, table, countertop, shelf, floor, an elevated horizontal plane, a horizontal plane that is above another horizontal plane, a horizontal plane that is not elevated, etc.) within the CGR environment. In some embodiments, in accordance with the determination that the second location corresponds to the second use context of the plurality of use contexts, as a part of displaying the second representation of the virtual object based on the second use context, the system may display a 3D representation on the horizontal plane within the CGR environment. In some embodiments, the second representation of the virtual object may be the 3D representation on the horizontal plane. In some embodiments, the representation may change depending on the type (e.g., a desktop, table, countertop, shelf) of a horizontal plane and/or one or more characteristics of the horizontal plane (e.g., virtual or physical), such as size, shape (e.g., circular, rectangular), material (e.g., brick, wood, metal), texture (e.g., rough, abrasive), color, opacity, etc. In some embodiments, in accordance with a determination that the horizontal plane is a horizontal plane of a first type, the 3D representation may be a representation of a first size. In some embodiments, in accordance with a determination that the horizontal plane is a horizontal plane of a second type, the 3D representation may be a representation of a second size that is different from (e.g., greater than) the first size.
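The distance-category sizing of the wall-mounted 2D representation described above might be sketched as follows; the thresholds and widths are illustrative assumptions, not values from the disclosure.

```swift
// Minimal sketch of distance-based sizing for a 2D representation on a vertical plane,
// using the distance-category idea described above; all numbers are illustrative.
enum DistanceCategory { case close, average, far }

func category(forDistance meters: Double) -> DistanceCategory {
    switch meters {
    case ..<1.5:    return .close
    case 1.5..<4.0: return .average
    default:        return .far
    }
}

/// Width (in meters) of the 2D window for a given viewer-to-wall distance.
func windowWidth(forDistance meters: Double) -> Double {
    switch category(forDistance: meters) {
    case .close:   return 1.0   // small
    case .average: return 2.0   // medium
    case .far:     return 3.5   // extra-large
    }
}
```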
In embodiments, the first and second types of horizontal planes may be selected from types of horizontal planes that may include, for example, a predominantly horizontal plane, a structure that is a horizontal plane, a floor, a sidewalk, grass, lawn, a surface on which one or more people are standing, a non-elevated horizontal plane, a horizontal plane that is below another horizontal plane within the CGR environment, etc. In some embodiments, the 3D representation displayed on the horizontal plane of the first type (e.g., desktop, table, countertop, shelf) within the CGR environment may be bigger, may have more visual content, and/or may include one or more additional (or different) user interface objects than a 3D representation displayed on the horizontal plane of the second type (e.g., floor, sidewalk, grass, lawn, a surface that one or more people are standing on). In some embodiments, the second use context of the plurality of use contexts may include a use context that is satisfied when maximized view criteria are satisfied. For example, maximized view criteria may be satisfied when a user interface element (e.g., a button, an affordance, and/or any other interactive element) is selected, based on a room where the application may be running, based on the second location (e.g., a location where the virtual object is moved to or dropped), based on a place on a body part (e.g., a place on a hand) of a user of the device that corresponds to the maximized view criteria being satisfied, based on a gesture, etc. In these embodiments, as a part of displaying the second representation of the virtual object based on the second use context, the system displays a plurality of representations of virtual objects on a plurality of planes within the CGR environment. In some embodiments, displaying a plurality of representations of virtual objects on a plurality of planes within the CGR environment may include changing one or more aspects of the physical environment and/or CGR environment to create a fully or partially immersive experience. For example, a room (e.g., physical or virtual) within the CGR environment may be turned into a virtual auditorium when the application is a presentation application, may be turned into a virtual sports venue (e.g., football stadium) when the application is a sports viewing application (e.g., fantasy sports application, live sports application), may be turned into a virtual store when shopping on a shopping application, etc. In some embodiments, the maximized view may be displayed via a companion application (e.g., fantasy sports application, live sports application, shopping application, presentation application, etc.). In some embodiments, the companion application may correspond to the virtual object and/or may be a companion application to an application that corresponds to the virtual object. In some embodiments, the selectable virtual object that corresponds to a maximized view affordance may be displayed (e.g., a selectable virtual object that is displayed concurrently with a representation, such as the first representation, of the virtual object). In some embodiments, the maximized view criteria may include a criterion that is satisfied when the selectable virtual object corresponding to a maximized view affordance is selected (e.g., a tap or swipe on the virtual object). In some embodiments, the determination may be made that the second location corresponds to the second use context of the plurality of use contexts.
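A minimal sketch of the maximized-view check described above follows, under the assumption that any one of several triggers satisfies the criteria; the trigger names and scene names are illustrative, and the hypothetical Representation type from the earlier sketch is reused.

```swift
// Sketch of the maximized-view criteria: any one trigger satisfies them, and the scene
// chosen for the immersive representation depends on the object's application.
struct MaximizedViewCriteria {
    var affordanceSelected = false            // maximized-view affordance tapped or swiped
    var droppedOnImmersiveLocation = false    // object dropped on a location tied to immersion
    var immersiveGestureDetected = false      // e.g., a dedicated gesture
}

func isMaximizedViewSatisfied(_ criteria: MaximizedViewCriteria) -> Bool {
    criteria.affordanceSelected || criteria.droppedOnImmersiveLocation || criteria.immersiveGestureDetected
}

/// Chooses an immersive scene for the object's application when the criteria are met,
/// e.g., an auditorium for a presentation or a stadium for a sports-viewing application.
func immersiveScene(forApplication app: String) -> Representation {
    switch app {
    case "presentation": return .fullyImmersive(scene: "auditorium")
    case "sports":       return .fullyImmersive(scene: "stadium")
    case "shopping":     return .fullyImmersive(scene: "store")
    default:             return .fullyImmersive(scene: "default room")
    }
}
```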
In some embodiments, the first representation may include first visual content (e.g., representations of text, buttons, audio/video, user interface elements, etc.). In some embodiments, the second representation may not include the first visual content. In some embodiments, the determination may be made that the second location corresponds to the second use context of the plurality of use contexts. In some embodiments, the first representation may include third visual content that is displayed at a third size. In some embodiments, the second representation may include the third visual content that is displayed at a fourth size that is different from (e.g., larger or smaller representations of text, buttons, audio/video, user interface elements, etc.) the third size. In some embodiments, the determination may be made that the second location corresponds to the second use context of the plurality of use contexts. In some embodiments, the first representation may include a first selectable object (e.g., one or more selectable user interface elements). In some embodiments, the second representation may not include the first selectable object. In some embodiments, the determination may be made that the second location corresponds to the second use context of the plurality of use contexts. In some embodiments, the first representation is a fourth size. In some embodiments, the second representation is a fifth size that is different from (e.g., larger or smaller) the fourth size. In some embodiments, as a part of displaying the second representation of the virtual object based on the second use context, the system may transition display of the first representation to display of the second representation when the first representation is within a predetermined distance (e.g., a distance that is near the second location, when the first representation reaches the second location) from the second location. In some embodiments, when the first representation is moved from the first location, display of the first representation is maintained until the first representation reaches or is within a certain distance of the second location. In some embodiments, in accordance with a determination that the second location corresponds to a fourth use context of the plurality of use contexts, wherein the fourth use context is satisfied when the second location corresponds to a prohibited location (e.g., a location prohibited by an application to which the virtual object corresponds and/or by one or more other applications and/or systems), the system forgoes displaying, within the CGR environment, a representation of the virtual object based on the fourth use context. In some embodiments, even when the second location corresponds to a location that would, but for the prohibition, satisfy another use context (e.g., the second use context), the first representation may continue to remain displayed because display of a representation different from the first representation is prohibited and/or display of the representation that corresponds to that use context is prohibited.
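Pulling together the location-based determinations described above, including the prohibited-location case, a non-authoritative sketch of the dispatch could look like the following. PlacementOutcome and the helper function are assumptions that reuse the hypothetical types from the earlier sketches; the disclosed method is not limited to this structure.

```swift
// Sketch of dispatching a move request: a prohibited location keeps the current
// representation (optionally with an indication), a matched use context yields its
// representation, and an unmatched location keeps the current representation.
enum PlacementOutcome {
    case display(Representation)                        // representation matched to the satisfied use context
    case keepCurrent(showProhibitedIndication: Bool)    // forgo displaying a new representation
}

func handleMoveRequest(to newSurface: SurfaceKind,
                       contexts: [UseContext],
                       prohibitedSurfaces: Set<SurfaceKind> = []) -> PlacementOutcome {
    if prohibitedSurfaces.contains(newSurface) {
        // Prohibited location: keep the current representation; an indication may be shown.
        return .keepCurrent(showProhibitedIndication: true)
    }
    if let matched = contexts.first(where: { $0.surface == newSurface }) {
        // A use context associated with the location is satisfied: display its representation.
        return .display(matched.representation)
    }
    // No use context associated with the location: keep the current representation.
    return .keepCurrent(showProhibitedIndication: false)
}
```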
In some embodiments, in accordance with the determination that the second location corresponds to the fourth use context of the plurality of use contexts, the system may display, within the CGR environment, an indication (e.g., a message or symbol that is displayed to note that a representation that corresponds to the fourth use context cannot be displayed or is prohibited) that the second location is a prohibited location (e.g., a location prohibited by an application to which the virtual object corresponds and/or by one or more other applications and/or systems). At block508, in response to receiving the request and in accordance with a determination that the second location corresponds to a third use context (e.g., the third use context is different from the first use context and the second use context) of the plurality of use contexts, the system may display, via the display of the electronic device, at the second location (e.g., on a surface corresponding to the second location), a third representation of the virtual object based on the third use context (and/or based on one or more applications associated with the virtual object), where the third representation is different from the first representation and the second representation. Aspects of the present disclosure are directed to systems and techniques that provide functionality for controlling concurrent display of representations of a virtual object within a CGR environment. In embodiments, controlling the concurrent display of representations of a virtual object may include displaying a first representation on a first surface (e.g., a physical or virtual surface) of the CGR environment, and displaying a second representation on a second surface of the CGR environment different from the first surface. In embodiments, controls may be provided for requesting a display of the second representation of the virtual object concurrently with the first representation of the virtual object. FIGS.6A-6Cillustrate exemplary techniques for controlling concurrent display of representations of a virtual object within a CGR environment in accordance with aspects of the present disclosure. In particular,FIG.6Aillustrates user202wearing electronic device200, which is configured to allow user202to view CGR environment290. As mentioned above, in some embodiments, electronic device200may be similar to electronic device100adescribed above with reference toFIGS.1A and1B. As illustrated inFIG.6A, CGR environment290includes display640. As described above, display640may be a physical display or a virtual representation of a display. A first representation620of virtual object610may be displayed by electronic device200at a first surface of the CGR environment. For example, first representation620of virtual object610may be displayed on display640. In the example illustrated inFIG.6A, first representation620is a 2D representation displayed on display640. In embodiments, first representation620may be displayed on any surface (e.g., physical or virtual) within CGR environment290. First representation620may include various graphical elements associated with the virtual object. For example, as illustrated, virtual object610is associated with a calculator application and includes various graphical elements associated with a calculator application. It will be appreciated that exemplifying virtual object610using a calculator application is done for illustrative purposes, and it is not intended to be limiting in any way.
Therefore, virtual object610may be associated with any other type of application (e.g., calendar, multimedia application, presentation, etc.). In some embodiments, a control may be provided for requesting a display of a second representation of virtual object610. A user (e.g., user202) may request the concurrent display, and the request may be received by electronic device200. The request to display a second representation of virtual object610may include a request to display the second representation of virtual object610concurrently with first representation620. The control for requesting concurrent display may include any technique for providing a selection (e.g., by user202). For example, in some embodiments, the control for requesting concurrent display may include affordance611presented within CGR environment290. In some embodiments, affordance611may be provided within first representation620or may be provided outside first representation620. In some embodiments, affordance611may be a virtual object (e.g., a button, an affordance, a user-interface element, an interactive element, etc.) displayed within CGR environment290and configured to allow interaction by a user (e.g., user202). In other embodiments, affordance611may be a graphical element displayed on a physical display (e.g., rather than a virtual element). In embodiments, the control for requesting concurrent display may include a gesture of moving or dragging virtual object610out of display640. For example, user202may perform a gesture (e.g., using an appendage, an input sensor, etc.) in which virtual object610may be dragged or moved out of display640. This dragging gesture may be determined to be a request to display the second representation of virtual object610concurrently with first representation620. In some embodiments, user202may drag virtual object610out of display640and may continue dragging virtual object610to a location within CGR environment290where the second representation of virtual object610is to be displayed. In some embodiments, the second representation of virtual object610may be displayed within CGR environment290in response to receiving the request to concurrently display representations of virtual object610. In embodiments, the request to concurrently display representations of virtual object610may cause an animation in which the second representation of virtual object610comes out (e.g., pops out) of first representation620. This is illustrated inFIG.6B. FIG.6Cillustrates second representation621of virtual object610displayed within CGR environment290in response to receiving the request to concurrently display representations of virtual object610. In embodiments, second representation621may be displayed on any surface (e.g., physical or virtual) within CGR environment290. In embodiments, second representation621may be separate and/or different from first representation620. For example, as shown inFIG.6C, first representation620may be a 2D representation of virtual object610displayed on display640, and second representation621may be a 3D representation of virtual object610displayed outside of display640, on a second and different surface of CGR environment290.
In some embodiments, a 2D representation of an object (e.g., an object within a particular application or a particular type of application (e.g., a calculator application or a keynote presentation application, a presentation application, a media or entertainment application, a productivity application)) may be displayed concurrently with a 3D representation of the object. In some embodiments, the 3D representation may be displayed with or without a 3D representation of the particular application or the particular type of application. In some embodiments, first representation620and second representation621, although associated with the same virtual object, may provide different or the same functionalities. For example, first representation620and second representation621may share a common set of UI elements. In this example, first representation620may be a 2D representation of an application (e.g., a calculator) that includes a particular set of UI elements for user interaction with the application. Second representation621may be a 3D representation of an application (e.g., a calculator) that includes the same particular set of UI elements for user interaction as first representation620. In some embodiments, however, first representation620and second representation621may have different sets of UI elements. For example, first representation620may include a particular set of UI elements, while second representation621may include a different set of UI elements. In embodiments, one set of UI elements in the different sets of UI elements may include at least one UI element that is not included in the other set of UI elements. In other embodiments, the different sets of UI elements have no UI elements in common. As will be appreciated, by providing different functionalities, the concurrent display of representations of a virtual object provides an improved system, as the system may be configured to adapt a representation of a virtual object with functionality dependent on the type of representation (e.g., a 2D representation or a 3D representation). In some embodiments, one representation of the virtual object may be a virtual representation, while another representation of the virtual object may not be a virtual representation. For example, display640may be a physical display, and first representation620may be a graphical representation of virtual object610displayed on physical display640. In this case, first representation620may not be a virtual representation in that first representation620is actually displayed in the real-world on the physical display and is perceived by user202via the transparent or translucent display of electronic device200. In this example, second representation621may be a virtual representation of virtual object610in that second representation621is not actually displayed in the real-world on a physical display, but it is rather displayed on the display of electronic device200and is superimposed over the real-world physical display. In this manner, a user may be provided with the ability to request display of a 3D representation of a virtual object by interacting with controls provided in a 2D representation of the same virtual object. In some embodiments, first representation620and second representation621may both be virtual representations. In embodiments, modifications to one representation of the virtual object may selectively cause modifications to another representation of the virtual object. 
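One way to picture the shared and differing UI element sets of concurrently displayed representations, as described above, is the following sketch; UIElement and the sample element identifiers are hypothetical and purely illustrative.

```swift
// Sketch of concurrently displayed 2D and 3D representations whose UI element sets
// may overlap or differ; UIElement and the example identifiers are assumptions.
struct UIElement: Hashable { let identifier: String }

struct ConcurrentRepresentations {
    var flat: Set<UIElement>        // UI elements of the 2D representation on the display
    var volumetric: Set<UIElement>  // UI elements of the 3D representation on another surface
}

let calculator = ConcurrentRepresentations(
    flat: [UIElement(identifier: "digits"), UIElement(identifier: "equals")],
    volumetric: [UIElement(identifier: "digits"), UIElement(identifier: "equals"),
                 UIElement(identifier: "paper-tape")]   // present only in the 3D representation
)
let shared = calculator.flat.intersection(calculator.volumetric)  // elements common to both
let only3D = calculator.volumetric.subtracting(calculator.flat)   // e.g., "paper-tape"
```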
For example, while first representation620and second representation621are concurrently displayed, a request to modify first representation620may be received. In embodiments, a request may be received (e.g., from user202) to modify first representation620, for example, to modify the size, the UI elements, the shape, the theme, etc. In embodiments, the request (e.g., user input) to modify first representation620may cause a corresponding modification to second representation621(e.g., size, UI elements, shape, theme, etc.). In aspects, both first representation620and second representation621may be modified in accordance with the request to modify. In some embodiments, every time a modification to first representation620is requested, a corresponding modification is made to second representation621. In other embodiments, a first request to modify first representation620may cause a corresponding modification to second representation621. However, a second request to modify first representation620may not cause a corresponding modification to second representation621. In this case, a modification to second representation621is forgone when receiving the second request to modify first representation620. It is noted that although the foregoing discussion describes selectively modifying second representation621based on a request to modify first representation620, this is done for illustrative purposes and not by way of limitation. Thus, the same techniques may be used to selectively modify first representation620based on a request to modify second representation621. FIG.7is a flow diagram illustrating method700for controlling a concurrent display of representations of a virtual object within a CGR environment. In some embodiments, method700may be performed by system100or a portion of system100. In some embodiments, method700may be performed by one or more external systems and/or devices. In some embodiments, method700may be performed by system100(or a portion of system100) in conjunction with one or more external systems and/or devices. At block702, the system displays, via a display of an electronic device (e.g., a wearable electronic device, an HMD device, etc.), a 2D representation of a virtual object at a first surface (and/or location) of a CGR environment. For example, a first representation of a virtual object may be displayed via a first display (e.g., a left eye display panel) or second display (e.g., a second eye display panel) of an electronic device on a representation of a display within the CGR environment. In some embodiments, the first surface may be a virtual surface within the CGR environment. For example, the first surface may be a virtual representation of a physical display. In other embodiments, the first surface may be a real-world physical surface of the CGR environment. For example, the first surface may be a surface of a physical display. The 2D representation of the virtual object may be a virtual representation (e.g., a virtual representation superimposed over the first surface via a translucent display of the electronic device) or may be a real-world graphical representation (e.g., a real-world graphical representation displayed on a real-world physical display). In some embodiments, the 2D representation of the virtual object may include a set of UI elements for user interaction with the virtual object.
In embodiments, the 2D representation of the virtual object may also include at least one control for requesting concurrent display of a second representation of the virtual object. At block704, the system receives a request to display a 3D representation of the virtual object concurrently with the 2D representation. In embodiments, the request to concurrently display may include a user input. The request may be input by a user using a control element (e.g., a button, an affordance, a user-interface element, an interactive element, etc.) displayed along with the 2D representation (e.g., within the 2D representation or outside the 2D representation). For example, the user may select the control element, and the selection may cause a request for a concurrent display to be received by the system. In some embodiments, the request to concurrently display the 2D representation and the 3D representation may include a gesture to move or drag the 2D representation out of, or from, the first surface. For example, user202may grab, click, and/or otherwise select (e.g., using an appendage, an input device, an input sensor, etc.) the 2D representation displayed at the first surface and may move or drag the 2D representation away from the first surface. In some aspects, the dragging gesture may be determined to be the request for concurrent display. In embodiments, the request to display a 3D representation of the virtual object concurrently with the 2D representation may cause an animation to be played in which the 3D representation is configured to come out (or pop out) of the 2D representation. In embodiments, the animation may include a sound that may be played during the animation. At block706, in response to the request for concurrent display, the system concurrently displays, via the display of the electronic device, the 2D representation at the first surface and the 3D representation at a second surface of the CGR environment. In embodiments, the second surface may be different from the first surface. In embodiments, the second surface may be a virtual surface or may be a real-world physical surface within the CGR environment. For example, the second surface may be a physical, real-world surface of a desk, or may be a virtual representation of a surface of a physical desk. In embodiments, the second surface at which the 3D representation may be displayed may be determined by user input. For example, a user may drag the 2D representation out from the first surface and continue dragging to the second surface. In this manner, the 3D representation may be displayed on whichever surface within the CGR environment the dragging gesture stops. In other implementations, for example, where a control element in the 2D representation is used to request the concurrent display, the second surface may be predetermined. In some implementations, the user may, prior to requesting concurrent display, indicate a surface at which the 3D representation is to be displayed. For example, a user may first indicate (e.g., via a user input (e.g., user input detected using input sensors that may include a mouse, a stylus, touch-sensitive surfaces, image-sensors (e.g., to perform hand-tracking), etc.)) a surface within the CGR environment other than the first surface. Upon requesting concurrent display, the 3D representation may be displayed at the surface indicated by the user. In some embodiments, the 3D representation of the virtual object may include a set of UI elements for user interaction.
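A hedged sketch of how the concurrent-display request of blocks 704 and 706 might be interpreted, and how the second surface might be chosen, follows; the request cases, the default surface, and the reuse of the hypothetical SurfaceKind type are assumptions rather than a description of the disclosed method.

```swift
// Sketch of interpreting a concurrent-display request and choosing the second surface:
// a drag gesture determines the surface by where the drag stops, while a control-element
// selection falls back to a user-indicated or predetermined surface.
enum ConcurrentDisplayRequest {
    case controlElementSelected                // affordance selected within or near the 2D representation
    case dragGesture(droppedOn: SurfaceKind)   // 2D representation dragged off the first surface
}

func secondSurface(for request: ConcurrentDisplayRequest,
                   userIndicated: SurfaceKind?,
                   predetermined: SurfaceKind = .desktop) -> SurfaceKind {
    switch request {
    case .dragGesture(let droppedOn):
        return droppedOn                         // wherever the dragging gesture stops
    case .controlElementSelected:
        return userIndicated ?? predetermined    // previously indicated surface, else a default
    }
}
```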
In embodiments, the set of UI elements of the 3D representation may be different than the set of UI elements of the 2D representation. For example, one set of UI elements may include UI elements that are not included in the other set of UI elements. Various aspects of the present disclosure are directed to systems and techniques that provide functionality for controlling a representation of a virtual object based on characteristics of an input mechanism. In embodiments, a representation of a virtual object may be based on a characteristic of the input mechanism (e.g., movement direction, distance, gesture type, etc. of the input mechanism) with respect to the virtual object. For example, in embodiments, a representation of a virtual object may be modified or maintained depending on whether an input mechanism associated with the virtual object is within a predetermined distance from a first representation of the virtual object. In other embodiments, for example, a representation of a virtual object may be modified or maintained depending on whether an input mechanism associated with the virtual object is determined to be moving towards or away from a first representation of the virtual object. In yet other embodiments, for example, a representation of a virtual object may be modified or maintained depending on whether a gesture associated with an input mechanism is determined to indicate a potential for interaction by a user with a first representation of the virtual object. As will be appreciated, the functionality provided by the systems and techniques described herein provide for an advantageous system in which representations of virtual objects may be adapted to characteristics of input mechanisms, thereby providing an improved user interface. FIGS.8A and8Billustrate exemplary techniques for controlling a representation of a virtual object within a CGR environment based on characteristics of an input mechanism in accordance with aspects of the present disclosure. In particular,FIG.8Aillustrates CGR environment890, including input mechanism800and virtual object810. In embodiments, CGR environment890may be presented to a user (e.g., user202) wearing an electronic device (e.g., electronic device200) configured to allow user202to view CGR environment890. As mentioned above, in some embodiments, electronic device200may be similar to electronic device100adescribed above with reference toFIGS.1A and1B. As shown inFIG.8A, first representation810of a virtual object may be displayed by electronic device200. In embodiments, first representation810may be a 3D representation of the virtual object, and the virtual object may be associated with a particular application. For example, as illustrated inFIG.8A, first representation810may be associated with a calculator application. It will be appreciated that exemplifying first representation810, and other representations of a virtual object, using a particular application (e.g., a calculator application) is done for illustrative purposes, and it is not intended to be limiting in any way. Therefore, first representation810may be associated with any type of application (e.g., calendar, multimedia application, presentation, etc.). In embodiments, first representation810may be configured to facilitate non-direct interaction between a user and first representation810. 
As used herein, non-direct interaction may refer to a user interaction with a representation of a virtual object that does not directly manipulate elements of the representation of the virtual object. A non-limiting example of a non-direct interaction may be a user perceiving information provided by a user interface (UI) element of the representation of the virtual object without direct manipulation of the UI element by the user. In contrast, direct interaction, as used herein, may refer to a user interaction with a representation of a virtual object in which UI elements of the representation of the virtual object may be directly manipulated by the user. For example, the user may push a button, may interact with an interactive element, may click a selectable item and/or an affordance, etc. First representation810may include UI elements811and815. In embodiments, UI element815may represent at least one UI element configured to provide (e.g., output) information associated with the virtual object represented by first representation810. For example, UI element815may be a display of first representation810. As such, UI element815may be configured for non-direct interaction such that a user may perceive the output without directly manipulating UI element815. UI element811may represent at least one UI element that may be configurable to a configuration that facilitates user interaction (e.g., direct interaction or non-direct interaction). For example, UI element811may be a button, an affordance, a user-interface element, an interactive element, etc., and/or any combination thereof. When UI element811is configured to facilitate direct interaction, a user may select, click, and/or otherwise manipulate UI element811. In some embodiments, UI element811may be configured to facilitate non-direct interaction by displaying UI element811as a 3D element. In this case, the user may perceive UI element811as a 3D element. In embodiments, input mechanism800may include a mechanism configured to facilitate interaction with the representations of the virtual object. For example, input mechanism800may include a mechanism for a user (e.g., user202) to manipulate at least one element of a representation of the virtual object or to perceive data provided by an element of the representation of the virtual object. In embodiments, input mechanism800may include a representation of an appendage of the user (e.g., a finger, hand, leg, foot, etc.), a user's gaze (e.g., head gaze, eye gaze, etc.), an input device (e.g., a mouse, a stylus, etc.) (e.g., that is different from the electronic device, that is in operative communication with the electronic device, that is physically connected to (e.g., or a part of) the electronic device), etc. In embodiments, the representation of an appendage of the user may include a virtual representation of the appendage and/or may include data representing characteristics of the appendage (e.g., location, orientation, distance to a particular point, etc.) within the CGR environment. In aspects, input mechanism800may be detected using input sensors (e.g., touch-sensitive surfaces, image-sensors, etc.) configured to perform hand-tracking, head gaze-tracking, eye gaze-tracking, finger-tracking, etc. As shown inFIG.8A, input mechanism800may include a user's appendage (e.g., a finger).
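The kinds of input mechanisms listed above might be abstracted as in the following sketch; the InputMechanism cases and the reduction of a gaze to its origin are assumptions, and only the simd module and its SIMD3 type are real.

```swift
// Hypothetical abstraction over input mechanisms (appendage tracking, gaze, input devices).
import simd

enum InputMechanism {
    case appendage(position: SIMD3<Float>)                    // e.g., a tracked fingertip
    case gaze(origin: SIMD3<Float>, direction: SIMD3<Float>)  // head- or eye-gaze ray
    case device(position: SIMD3<Float>)                       // e.g., a stylus or mouse-driven cursor
}

/// A single point used for proximity checks against a representation's location
/// (simplified here: a gaze is reduced to its origin).
func referencePoint(of mechanism: InputMechanism) -> SIMD3<Float> {
    switch mechanism {
    case .appendage(let position): return position
    case .device(let position):    return position
    case .gaze(let origin, _):     return origin
    }
}
```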
As shown inFIG.8A, and discussed above, first representation810may be displayed within CGR environment890, and first representation810may be configured to facilitate non-direct interaction by a user rather than direct interaction (e.g., by providing UI elements811and815configured for non-direct interaction). As also shown inFIG.8A, input mechanism800may be at a current location that is distance831from first representation810. In some embodiments, a predetermined distance830from first representation810may be provided, although in some implementations, predetermined distance830may not be shown within CGR environment890. Predetermined distance830may be configured to operate as a threshold, such that when the current location of the input mechanism is not within predetermined distance830from first representation810, the displaying of first representation810may be maintained. For example, as distance831may be determined to be greater than predetermined distance830, the current location of input mechanism800may be determined not to be within a predetermined distance830from first representation810. In embodiments, whether the displaying of first representation810may be modified or maintained may be based on a characteristic of input mechanism800. In some embodiments, the characteristic of input mechanism800may include a movement direction, a distance to a representation of the virtual object, a gesture type, etc. In accordance with the determination that the current location of input mechanism800is not within predetermined distance830from first representation810, the displaying of first representation810may be maintained without displaying another representation of the virtual object. Conversely, as will be discussed below, and as illustrated in the example shown inFIG.8B, in accordance with a determination that the current location of input mechanism800is within predetermined distance830from first representation810, the displaying of first representation810may be modified, and a second representation of the virtual object may be displayed. In aspects, the second representation of the virtual object may be different from first representation810. In some embodiments, the determination of whether the location of input mechanism800is within predetermined distance830from first representation810may be performed in response to detecting a movement of input mechanism800. In these cases, if no movement of input mechanism800is detected, the determination of whether the location of input mechanism800is within predetermined distance830from first representation810may not be performed. In some embodiments, the determination of whether the location of input mechanism800is within predetermined distance830from first representation810may be performed when a detected movement is determined to be towards first representation810. In these cases, if the movement of input mechanism800is determined to be away from first representation810, the determination of whether the location of input mechanism800is within predetermined distance830from first representation810may not be performed even though a movement of input mechanism800may be detected. In some implementations, first representation810may be initially displayed within CGR environment890in response to a determination that input mechanism800is not within predetermined distance830from a location at which first representation810is to be displayed. 
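By way of illustration only, the following Python sketch captures the threshold behavior described above, in which the first representation is maintained while the input mechanism remains beyond the predetermined distance and is otherwise replaced by the second representation. The identifiers (e.g., Point3D, select_representation, predetermined_distance) are hypothetical and are not taken from any disclosed implementation.

```python
from dataclasses import dataclass
import math

@dataclass
class Point3D:
    x: float
    y: float
    z: float

def distance(a: Point3D, b: Point3D) -> float:
    # Euclidean distance between the input mechanism and the representation.
    return math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))

def select_representation(input_location: Point3D,
                          representation_location: Point3D,
                          predetermined_distance: float) -> str:
    # Beyond the predetermined distance: keep the first (non-direct) representation.
    # Within the predetermined distance: switch to the second (direct) representation.
    if distance(input_location, representation_location) > predetermined_distance:
        return "first_representation"
    return "second_representation"

# Example corresponding to FIG. 8A: the finger is still outside the threshold.
print(select_representation(Point3D(0.6, 0.2, 0.0), Point3D(0.0, 0.0, 0.0), 0.3))
```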
For example, a determination may be made to initially display a representation of a virtual object at a first location within CGR environment890. In this example, the first representation of the virtual object may be configured for non-direct interaction. Further, in this example, CGR environment890may not include any representation of the virtual object at the first location, although in some cases at least one other representation of the virtual object may be displayed at another location within CGR environment890. In response to the determination to initially display a representation of the virtual object at the first location within CGR environment890, a determination may be made as to whether the current location of input mechanism800is within predetermined distance830from the first location or not. If it is determined that the current location of input mechanism800is not within predetermined distance830from the first location, the first representation (e.g., first representation810) may be displayed at the first location. In some embodiments, if it is determined that the current location of input mechanism800is within predetermined distance830from the first location, a second representation (e.g., second representation820described below) configured for direct interaction may be displayed at the first location. As shown inFIG.8B, input mechanism800may be moved (e.g., in direction833) from a previous location (e.g., as illustrated inFIG.8A) to a current location with a distance832to first representation810. The movement from the previous location to the current location may be detected (e.g., using input sensors as described above). In response to detecting the movement of input mechanism800from the previous location to the current location, a determination may be made as to whether the current location of input mechanism800to first representation810may be within predetermined distance830or not. For example, distance832from the current location of input mechanism800to the first representation810may be compared against predetermined distance830. In accordance with a determination that the distance832is greater than predetermined distance830, the current location of input mechanism800may be determined to not be within predetermined distance830from first representation810. Conversely, in accordance with a determination that the distance832is not greater than predetermined distance830, the current location of input mechanism800may be determined to be within predetermined distance830from first representation810. In embodiments, in accordance with a determination that the current location of input mechanism800is within predetermined distance830from first representation810, the displaying of first representation810may be modified. In embodiments, modifying the displaying of first representation810may include ceasing to display first representation810and displaying second representation820, where second representation820may be different from first representation810. In some embodiments, second representation820may be displayed at the same location and/or on the same surface where first representation810was displayed. In embodiments, second representation820may be configured for direct interaction between the user (e.g., user202) and second representation820(e.g., elements of second representation820). 
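The paragraph above also conditions the proximity determination on whether movement of the input mechanism is detected and on whether that movement is toward the first representation. A minimal sketch of that gating logic follows, assuming simple tuple coordinates; the function name and threshold value are illustrative only.

```python
import math

def should_modify(previous, current, target, predetermined_distance):
    # No detected movement: the proximity determination is not performed.
    if previous == current:
        return False
    # Movement away from the representation: the determination is skipped.
    if math.dist(current, target) >= math.dist(previous, target):
        return False
    # Movement toward the representation: perform the threshold comparison.
    return math.dist(current, target) <= predetermined_distance

previous = (0.8, 0.0, 0.0)
current = (0.25, 0.0, 0.0)
representation = (0.0, 0.0, 0.0)
print(should_modify(previous, current, representation, 0.3))  # True: moved inside threshold
```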
For example, whereas first representation810includes UI element811, as shown inFIG.8A, configured for non-direct interaction (e.g., UI elements displayed as protruding 3D UI elements), second representation820may include UI element821configured for direct interaction. In this example, UI element821may include at least one UI element displayed as flat buttons, or as 2D elements, where the flat buttons may not protrude from second representation820. As will be appreciated, a flat 2D UI element (e.g., a 2D button) displayed upon a physical table (e.g., on the same plane as the physical table) may be more apt to provide physical feedback when a user manipulates the 2D element. For example, as the user manipulates the 2D element, the user receives the feedback provided by the physical table upon which the virtual 2D element is displayed. In addition, displaying second representation820configured for direct interaction may also encourage the user (e.g., user202) to interact with second representation820. In some embodiments, modifying first representation810, which may include displaying second representation820, may include animating the modification. For example, one of the differences between first representation810and second representation820may be that UI element811of first representation810is displayed as protruding 3D UI elements and UI element821of second representation820is displayed as flat 2D UI elements. In this example, the modification of first representation810may include animating the UI elements such that the protruding 3D UI elements of first representation810are presented as receding into the flat 2D UI elements of second representation820. In embodiments, the animation may also include a sound that may be played while the animation is occurring. In another embodiment, modifying the first representation of the virtual object may include moving the first representation to a location closer to the user (e.g., user202). For example, based on the characteristic of the input mechanism800(e.g., the current location of input mechanism800is within a predetermined distance (e.g., predetermined distance830) from the current location of the first representation (e.g., first representation810)), a second representation of the virtual object may be displayed. In embodiments, the second representation of the virtual object may be the same as the first representation but in a location that is closer to the user than the current location of the first representation. In some embodiments, the second representation displayed at the new location may be a representation different from the first representation, for example, in accordance with the above description. In further embodiments, the characteristic of the input mechanism on which the determination to modify or maintain the first representation810may be based may include a determination of whether the direction of the movement of input mechanism800is toward or away from first representation810. For example, as shown inFIG.8B, input mechanism800may be moved in direction833, which is a direction toward first representation810. In this case, in accordance with the determination that the direction of the movement of input mechanism800is toward first representation810, the displaying of first representation810may be modified and a second representation (e.g., second representation820configured to facilitate direct interaction by a user) of the virtual object may be displayed.
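As a non-limiting sketch of the animation described above, the generator below interpolates a button's protrusion depth toward zero so that a protruding 3D UI element appears to recede into a flat 2D UI element. The duration, frame rate, and identifiers are assumptions made for illustration; a renderer and any accompanying sound playback are only noted in comments.

```python
def animate_protrusion(start_depth, end_depth=0.0, duration_s=0.25, fps=60):
    """Yield per-frame protrusion depths for a 3D button receding to flat."""
    frames = max(1, int(duration_s * fps))
    for i in range(frames + 1):
        t = i / frames                      # normalized progress, 0..1
        yield start_depth + (end_depth - start_depth) * t

depth = None
for depth in animate_protrusion(0.01):      # a 1 cm protrusion receding to 0
    pass   # a renderer would redraw the UI element at this depth each frame
print("final protrusion depth:", depth)     # 0.0 once the animation completes
```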
Conversely, in accordance with the determination that the direction of the movement of input mechanism800is away from first representation810, the displaying of first representation810may be maintained without displaying another representation (e.g., second representation820) of the virtual object. In aspects, the second representation of the virtual object may be different than first representation810. In yet further embodiments, the characteristic of the input mechanism on which the determination to modify or maintain the first representation810may be based may include a determination of whether a particular type of gesture has been made by input mechanism800. In aspects, the particular type of gesture may be a gesture that may indicate a potential for direct user interaction. For example, as shown inFIG.8B, input mechanism800may be a pointing hand. In embodiments, a pointing hand may be considered a type of gesture that indicates a potential for user interaction. As will be appreciated, a user desiring to interact with a virtual object, such as a virtual object represented with UI elements for user input, using a finger may do so by forming his or her hand into a pointing hand with the finger pointing out. In this sense, the pointing hand may indicate that the user intends or desires to interact with the virtual object. As such, when a determination is made that input mechanism800has made a gesture that indicates a potential for user interaction (e.g., pointing hand, grabbing hand, etc.), a determination may be made to modify a current representation configured for non-direct interaction (e.g., first representation810) into a representation configured for direct interaction (e.g., second representation820). In aspects, the modification of the current representation configured for non-direct interaction into a representation configured for direct interaction may be in accordance with the foregoing description. In another example, a determination to maintain the displaying of first representation810configured for non-direct interaction may be based on a gesture that does not indicate a potential for user interaction. For example, a gesture may be detected that may include the user (e.g., user202) crossing his or her arms, and/or leaning back. In this case, the gesture may be considered a type of gesture that does not indicate a potential for user interaction. As such, when a determination is made that the user has crossed his or her arms, and/or has leaned back, a determination may be made to maintain a current representation configured for non-direct interaction (e.g., first representation810) without displaying a representation configured for direct interaction (e.g., second representation820). In some embodiments, detecting a gesture that does not indicate a potential for user interaction may cause a determination to modify a current representation configured for direct interaction (e.g., second representation820) into a representation configured for non-direct interaction (e.g., first representation810). It is noted that although the foregoing examples, and the examples that follow, may be focused on a description of modifications of a representation of a virtual object configured for non-direct interaction into a representation of the virtual object configured for direct interaction, this is done for illustrative purposes and not intended to be limiting in any way.
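Purely for illustration, the gesture-based decision described above can be sketched as a mapping from a detected gesture type to the representation mode to display. The gesture labels and function name are hypothetical; an actual gesture classifier is outside the scope of this sketch.

```python
# Gestures that indicate a potential for user interaction vs. those that do not.
INTERACTION_GESTURES = {"pointing_hand", "grabbing_hand"}
NON_INTERACTION_GESTURES = {"arms_crossed", "leaning_back"}

def representation_for_gesture(gesture: str, current_mode: str) -> str:
    """Return the representation mode to display given the detected gesture."""
    if gesture in INTERACTION_GESTURES:
        return "direct"        # e.g., switch to second representation 820
    if gesture in NON_INTERACTION_GESTURES:
        return "non_direct"    # e.g., maintain or revert to first representation 810
    return current_mode        # unrecognized gesture: keep the current display

print(representation_for_gesture("pointing_hand", "non_direct"))  # direct
print(representation_for_gesture("arms_crossed", "direct"))       # non_direct
```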
In some embodiments, a representation of a virtual object configured for direct interaction may be modified into a representation of the virtual object configured for non-direct interaction based on characteristics of the input mechanism. For example, in some implementations, a display of a representation configured for direct interaction (e.g., second representation820described above) may be modified to display a representation configured for non-direct interaction (e.g., first representation810described above) based on a detected movement of an input mechanism, based on a characteristic of the input mechanism (e.g., in accordance with a determination that the location of the input mechanism is not within a predetermined distance from the representation configured for direct interaction (e.g., second representation820)). As such, the present disclosure provides techniques for selectively and dynamically configuring a representation of a virtual object for enhanced interaction (e.g., direct or non-direct) based on the characteristics of the input mechanism. Thus, the representation of the virtual object may be configured for direct or non-direct interaction when it is more advantageous based on the characteristics of the input mechanism. Additionally, although the foregoing discussion describes second representation820as configured for direct interaction with flat 2D UI elements, it will be appreciated that this is done for illustrative purposes and not by way of limitation. As will be appreciated, a representation of a virtual object may be configured for direct interaction by other methods (e.g., orientation, size, angle, shape, color, brightness, language, location, distance, direction, etc.). For example, in embodiments, based on a characteristic of the input mechanism (e.g., in accordance with a determination that the current location of an input mechanism is within a predetermined distance from a first representation of a virtual object), the displaying of the first representation may be modified, and the modification may include displaying a second representation different from the first representation. In these embodiments, the second representation may include a different orientation, size, angle, shape, color, brightness, language, location, distance, direction, etc. from the first representation, where the modification may be configured to allow, encourage, enable, and/or otherwise facilitate direct interaction with the second representation of the virtual object. Some of these embodiments will be described in further detail below. FIGS.9A and9Billustrate another example of techniques for controlling a representation of a virtual object within a CGR environment based on characteristics of an input mechanism in accordance with aspects of the present disclosure. As shown inFIG.9A, first representation910of a virtual object may be displayed via a display of electronic device200. In embodiments, first representation910may be a 3D representation of the virtual object, and the virtual object may be associated with a particular application (e.g., calendar, multimedia application, presentation, etc.), as discussed above. In the example illustrated inFIG.9A, first representation910may be associated with a calculator application. In embodiments, first representation910may be configured to facilitate non-direct interaction with the associated virtual object. For example, first representation910may include UI elements911and915.
In embodiments, UI element915may represent at least one UI element configured to provide (e.g., output) information associated with the virtual object represented by first representation910. For example, UI element915may be a display (e.g., a virtual display) of first representation910. In this case, first representation910may be configured to facilitate non-direct interaction by a user by being displayed at an orientation that facilitates non-direct interaction by the user (e.g., user202) with UI element915. For example, first representation910may be displayed at an orientation that includes angle912. In embodiments, angle912may be an angle that is configured to place first representation910at an orientation that enables the user to see, hear, or otherwise perceive, UI element915. In this manner, angle912facilitates non-direct interaction by the user with UI element915. In embodiments, angle912may be measured with respect to a surface (e.g., surface916) on which first representation910is displayed. In embodiments, the orientation at which first representation910may be displayed may be determined based on the location of the user. For example, the user's gaze (e.g., head gaze and/or eye gaze) may be determined (e.g., by detecting the location of the user's head and/or eyes and then determining the user's gaze), and the determined user's gaze may then be used to determine an orientation at which to display first representation910such that UI elements configured for non-direct interaction (e.g., UI element915) are facing the user's gaze. In embodiments, UI element911of first representation910may be configured for non-direct interaction. In this case, UI element911may be displayed as protruding buttons, or as 3D elements, where the protruding buttons may protrude (or pop out) from first representation910. In this manner, UI element911, as shown inFIG.9A, is not configured for direct interaction. As shown inFIG.9A, and as discussed above, first representation910may be configured to facilitate non-direct interaction by a user rather than direct interaction (e.g., by providing protruding 3D UI element911and by orienting first representation910at angle912). As also shown inFIG.9A, input mechanism800may be at a current location that is distance931from first representation910. In some embodiments, predetermined distance930from first representation910may be provided. In embodiments, in accordance with a determination that the current location of input mechanism800is not within predetermined distance930from first representation910, the displaying of first representation910may be maintained. For example, first representation910configured for non-direct interaction may continue to be displayed without displaying another representation of the virtual object and/or without making changes to first representation910. Conversely, as will be discussed below, in accordance with a determination that the current location of input mechanism800is within predetermined distance930from first representation910, the displaying of first representation910may be modified and a second representation of the virtual object may be displayed. In aspects, the second representation of the virtual object may be different than first representation910. As shown inFIG.9B, input mechanism800may be moved, e.g., in direction933, from a previous location (e.g., as illustrated inFIG.9A) to a current location with a distance932to first representation910.
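The orientation determination described above can be illustrated with a deliberately simplified geometric sketch that chooses a tilt angle so the output UI element faces the user's eyes. The reduction to a two-dimensional rise-over-run calculation, as well as the names and example values, are assumptions made only for illustration.

```python
import math

def tilt_angle_deg(user_eye_height: float, surface_height: float,
                   horizontal_distance: float) -> float:
    """Angle between the surface and the representation so its face points
    toward the user's eyes (a simplified stand-in for angle 912)."""
    rise = user_eye_height - surface_height
    return math.degrees(math.atan2(rise, horizontal_distance))

# A seated user whose eyes are ~0.5 m above the table surface and ~0.8 m away:
print(round(tilt_angle_deg(1.2, 0.7, 0.8), 1), "degrees")
```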
The movement from the previous location to the current location may be detected (e.g., using input sensors as described above). In response to detecting the movement of input mechanism800from the previous location to the current location, a determination may be made as to whether the current location of input mechanism800to first representation910may be within predetermined distance930or not. For example, distance932from the current location of input mechanism800to the first representation910may be compared against predetermined distance930. In accordance with a determination that the distance932is greater than predetermined distance930, the current location of input mechanism800may be determined to not be within predetermined distance930from first representation910. Conversely, in accordance with a determination that the distance932is not greater than predetermined distance930, the current location of input mechanism800may be determined to be within predetermined distance930from first representation910. In embodiments, in accordance with a determination that the current location of input mechanism800is within predetermined distance930from first representation910, the displaying of first representation910may be modified. In embodiments, modifying the displaying of first representation910may include ceasing to display first representation910and displaying second representation920, where second representation920may be different from first representation910. In some embodiments, second representation920may be displayed at the same location and/or on the same surface where first representation910was displayed. In embodiments, second representation920may be configured to facilitate direct interaction by the user (e.g., user202) with the associated virtual object. For example, whereas first representation910is displayed at an orientation with angle912, which facilitates the user being able to perceive (e.g., see, hear, etc.) information provided by UI element915(e.g., non-direct interaction), second representation920may be displayed at an orientation that facilitates the user directly interacting (e.g., directly manipulating, selecting, clicking, dragging, and/or otherwise selecting) with UI elements of second representation920(e.g., UI element921). For example, second representation920may be displayed within CGR environment890at an orientation that is longitudinal with surface916. As such, second representation920may be displayed as lying flat on surface916. As will be appreciated, a flat surface may be easier to interact with than an angled surface. As such, by modifying the representation of the virtual object from an angled orientation to a flat orientation, or vice-versa, the representation of the virtual object is selectively adapted for enhanced direct-interaction based on the characteristics of the input mechanism. In some embodiments, second representation920may be displayed at an orientation having a non-zero angle with respect to surface916that is different from angle912. In addition, whereas first representation910includes UI element911, as shown inFIG.9A, configured for non-direct interaction (e.g., UI elements displayed as protruding 3D UI elements, where the protruding 3D UI elements may protrude (or pop out) from first representation910), second representation920may include UI element921configured for direct interaction, as previously described. 
For example, UI element921may include at least one UI element displayed as flat 2D UI elements displayed upon a physical object, which facilitates physical feedback as the user manipulates the 2D UI elements. In some embodiments, modifying first representation910, which may include displaying second representation920, may include animating the modification. For example, the modification of first representation910may include animating a change in orientation of first representation910such that first representation910is displayed as moving from the current orientation (e.g., angled at angle912) to the orientation of second representation920(e.g., flat on surface916). In addition, or in the alternative, the modification of first representation910may include animating the UI elements such that the protruding 3D UI elements of first representation910are presented as receding into the flat 2D UI elements of second representation920. In embodiments, the animation may also include a sound that may be played while the animation is occurring. FIGS.10A and10Billustrate another example of techniques for controlling a representation of a virtual object within a CGR environment based on characteristics of an input mechanism in accordance with aspects of the present disclosure. In particular,FIGS.10A and10Billustrate an example in which a representation of a virtual object is modified based on characteristics of an input mechanism, and in which the modification includes adding UI elements for user interaction and changing the size of the representation. As shown inFIG.10A, first representation1010of a virtual object may be displayed via a display of electronic device200. In embodiments, first representation1010may be a 3D representation of the virtual object, and the virtual object may be associated with a particular application (e.g., calendar, multimedia application, presentation, etc.), as discussed above. In the example illustrated inFIG.10A, first representation1010may be associated with a calculator application. In embodiments, first representation1010may be configured to facilitate non-direct interaction with the associated virtual object. For example, first representation1010may include UI element1012. UI element1012may represent at least one UI element configured to provide (e.g., output) information associated with the virtual object represented by first representation1010. For example, UI element1012may be a display (e.g., a virtual display) of first representation1010. In some embodiments, first representation1010may have a size. In some embodiments, first representation1010may not include any UI elements configured for user input (e.g., a button, an affordance, a user-interface element, an interactive element, etc.). As shown inFIG.10A, and as discussed above, first representation1010may be displayed within CGR environment890, and first representation1010may be configured to facilitate non-direct interaction by a user rather than direct interaction. As also shown inFIG.10A, input mechanism800may be at a current location that is distance1031from first representation1010. In some embodiments, predetermined distance1030from first representation1010may be provided. In embodiments, in accordance with a determination that the current location of input mechanism800is not within predetermined distance1030from first representation1010, the displaying of first representation1010may be maintained.
For example, first representation1010configured for non-direct interaction may continue to be displayed without displaying another representation of the virtual object and/or without making changes to first representation1010. Conversely, as will be discussed below, in accordance with a determination that the current location of input mechanism800is within predetermined distance1030from first representation1010, the displaying of first representation1010may be modified and a second representation of the virtual object may be displayed. In aspects, the second representation of the virtual object may be different than first representation1010. As shown inFIG.10B, input mechanism800may be moved from a previous location (e.g., as illustrated inFIG.10A) to a current location. The movement from the previous location to the current location may be detected (e.g., using input sensors as described above). In response to detecting the movement of input mechanism800from the previous location to the current location, a determination may be made as to whether the current location of input mechanism800to first representation1010may be within predetermined distance1030or not. In accordance with a determination that the current location of input mechanism800is within predetermined distance1030from first representation1010, the displaying of first representation1010may be modified. In embodiments, modifying the displaying of first representation1010may include ceasing to display first representation1010and displaying second representation1020, where second representation1020may be different from first representation1010. In some embodiments, second representation1020may be displayed at the same location and/or on the same surface where first representation1010was displayed. In embodiments, second representation1020may be configured to facilitate direct interaction by the user (e.g., user202) with the associated virtual object. For example, whereas first representation1010may not include any UI elements configured for user input, second representation1020may include UI element1021configured for user interaction, as previously described. For example, UI element1021may include at least one UI element displayed as flat 2D UI elements. In addition, second representation1020may be displayed having a size that is different than the size of first representation1010. For example, second representation1020may be displayed with a size larger than the size of first representation1010. In some embodiments, second representation1020may be displayed with a size smaller than the size of first representation1010. As previously described, in some embodiments, modifying first representation1010, which may include displaying second representation1020, may include animating the modification. For example, the modification of first representation1010may include animating a change in size of first representation1010such that first representation1010is displayed as growing or shrinking, as appropriate, from the current size to the size of second representation1020. In addition, or in the alternative, the modification of first representation1010may include animating the UI elements such that the protruding 3D UI elements of first representation1010are presented as receding into the flat 2D UI elements of second representation1020. In embodiments, the animation may also include a sound that may be played while the animation is occurring.
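As a hedged sketch of the modification described above for the FIG.10A/10B example, the code below derives a direct-interaction representation from a display-only one by adding input UI elements and changing the size. The field names and the 1.5x scale factor are assumptions made for illustration, not disclosed values.

```python
from dataclasses import dataclass, field, replace
from typing import List

@dataclass(frozen=True)
class RepresentationSpec:
    size: float                                      # rendered size (arbitrary units)
    output_elements: List[str] = field(default_factory=list)
    input_elements: List[str] = field(default_factory=list)

# First representation: a display-only element and no input UI elements.
first_rep = RepresentationSpec(size=1.0, output_elements=["display"])

def to_direct_interaction(rep: RepresentationSpec) -> RepresentationSpec:
    """Second representation: same output, plus flat 2D input elements,
    displayed at a different (here larger) size."""
    return replace(rep, size=rep.size * 1.5, input_elements=["flat_2d_buttons"])

second_rep = to_direct_interaction(first_rep)
print(second_rep)
```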
FIGS.11A and11Billustrate another example of techniques for controlling a representation of a virtual object within a CGR environment based on characteristics of an input mechanism in accordance with aspects of the present disclosure. In particular,FIGS.11A and11Billustrate an example in which a representation of a virtual object is modified based on characteristics of an input mechanism (e.g., a user's gaze). As shown inFIG.11A, first representation1110of a virtual object may be displayed via a display of electronic device200. In embodiments, first representation1110may be a representation of the virtual object, and the virtual object may be associated with a particular application (e.g., calendar, multimedia application, presentation, etc.) as discussed above. In the example illustrated inFIG.11A, first representation1110may be associated with a calendar application. In embodiments, first representation1110may have a size and may be displayed at location1152. In embodiments, first representation1110may not be configured for user interaction, whether direct or non-direct interaction. For example, the size of first representation1110may be a small size, and the small size may not enable a user to perceive any information from or interact with any UI elements of first representation1110. In some embodiments, first representation1110may not include any UI elements. As shown inFIG.11A, a gaze1150of user202, wearing electronic device200, may be detected. In aspects, detected gaze1150can be a head gaze (e.g., the direction in which the user's head is facing), an eye gaze (e.g., the direction in which the user's eyes are looking), a combination thereof, etc. Gaze1150of user202may be determined to be focused, placed, or otherwise directed to location1151, which may be different from location1152where first representation1110is displayed. In aspects, in accordance with the determination that gaze1150is directed to a location that is different than the location of first representation1110, the displaying of first representation1110, at the current location and having the size, may be maintained without displaying another representation of the virtual object and/or without making any changes to first representation1110. FIG.11Bshows that gaze1150of user202has changed to a different direction than the direction directed to location1151. In embodiments, the change in gaze may be detected (e.g., via input sensors). In response to the detected change in the user's gaze, a determination of the direction of the new direction of the gaze may be made. For example, it may be determined that the new direction of gaze1150may be directed to location1152. Location1152may be the location at which first representation1110is being displayed. In embodiments, in accordance with a determination that gaze1150is directed to a location that is the same as the location of first representation1110, the displaying of first representation1110may be modified. In some embodiments, determining to modify the displaying of first representation1110in accordance with a determination that gaze1150is directed to a location that is the same as the location of first representation1110may include a determination that the gaze1150has remained directed to the location that is the same as the location of first representation1110for at least a predetermined period of time. 
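The gaze test described above, i.e., determining whether the detected gaze is directed to the location at which the first representation is displayed, can be illustrated with a simple angular comparison. The angular tolerance, the vector math, and the identifiers are simplifying assumptions for this sketch only.

```python
import math

def is_gaze_on_location(gaze_origin, gaze_direction, target, tolerance_deg=5.0):
    """True if the angle between the gaze direction and the direction from the
    gaze origin to the target location is within the tolerance."""
    to_target = [t - o for t, o in zip(target, gaze_origin)]
    dot = sum(g * t for g, t in zip(gaze_direction, to_target))
    norm = math.hypot(*gaze_direction) * math.hypot(*to_target)
    if norm == 0.0:
        return False
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= tolerance_deg

eyes = (0.0, 1.6, 0.0)
gaze_direction = (0.0, -0.1, 1.0)           # roughly straight ahead, slightly down
representation_location = (0.0, 1.5, 1.0)   # e.g., where the first representation sits
print(is_gaze_on_location(eyes, gaze_direction, representation_location))
```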
When it is determined that gaze1150has remained directed to the location that is the same as the location of first representation1110for a period of time that is less than the predetermined period of time (e.g., the direction of gaze1150is moved to a different direction before the predetermined period of time expires), the displaying of first representation1110may not be modified, but instead may be maintained without displaying another representation of the virtual object and/or without making any changes to first representation1110. When it is determined that gaze1150has remained directed to the location that is the same as the location of first representation1110for a period of time that is at least the same as the predetermined period of time (e.g., the direction of gaze1150does not move to a different direction before the predetermined period of time expires), the displaying of first representation1110may be modified. In embodiments, modifying the displaying of first representation1110may include ceasing to display first representation1110and displaying second representation1120, where second representation1120may be different from first representation1110. In some embodiments, second representation1120may be displayed at the same location and/or on the same surface where first representation1110was displayed. In embodiments, second representation1120may be different from first representation1110, and second representation1120may be configured to facilitate interaction by the user (e.g., user202). For example, second representation1120may be configured to include UI elements1112. UI elements1112may include at least one UI element configured for user interaction, such as a display. In some embodiments, second representation1120may alternatively or additionally have a size different than the size of first representation1110. For example, second representation1120may have a size that is larger or smaller than the size of first representation1110. In embodiments, the size of second representation1120may be based on a distance between the location of second representation1120(e.g., location1152) and the location of the user's head and/or eyes (e.g., location1153). In some embodiments, second representation1120may be configured for non-direct interaction, but may not be configured for direct-interaction. For example, second representation1120may not include any UI elements configured for direct interaction with a user (e.g., a button, an affordance, a user-interface element, an interactive element, etc.). In this case, the techniques described above with respect toFIGS.8A,8B,9A,9B,10A, and10Bmay be used to selectively modify second representation1120into a configuration for direct interaction based on a characteristic of an input mechanism (e.g., a representation of an appendage, a mouse, a stylus, etc.) in accordance with the disclosure herein. In this manner, a representation of a virtual object may be selectively and dynamically modified from a non-interaction configuration to a non-direct interaction configuration based on a characteristic of an input mechanism (e.g., a user's gaze), and then may be further modified from the non-direct interaction configuration to a direct interaction configuration based on another characteristic of the input mechanism or based on a characteristic of another input mechanism (e.g., a representation of an appendage, input device, etc.).
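Purely as an illustration of the dwell-time requirement described above, the following sketch only triggers the modification once the gaze has remained on the representation's location for at least a predetermined period. The 0.5 s threshold, the use of injected timestamps, and the class name are assumptions.

```python
class GazeDwellTrigger:
    def __init__(self, dwell_seconds: float = 0.5):
        self.dwell_seconds = dwell_seconds
        self._gaze_start = None   # time when the gaze first landed on the target

    def update(self, gaze_on_target: bool, now: float) -> bool:
        """Return True once the gaze has stayed on the target long enough."""
        if not gaze_on_target:
            self._gaze_start = None          # gaze moved away: reset the timer
            return False
        if self._gaze_start is None:
            self._gaze_start = now
        return (now - self._gaze_start) >= self.dwell_seconds

trigger = GazeDwellTrigger(dwell_seconds=0.5)
samples = [(0.0, True), (0.2, True), (0.4, False), (0.6, True), (1.2, True)]
for t, on_target in samples:
    if trigger.update(on_target, t):
        print(f"modify representation at t={t}s")   # fires only at t=1.2
```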
FIGS.12A and12Billustrate another example in which a representation of a virtual object within a CGR environment is modified based on a user's gaze. In particular,FIG.12Ashows user202wearing electronic device200configured to allow user202to view CGR environment890. As shown inFIG.12A, first representation1210of a virtual object may be displayed via a display of electronic device200at location1251and with a particular size. In aspects, location1251may be on a wall of CGR environment890. In embodiments, first representation1210may be a representation of the virtual object, and the virtual object may be associated with a particular application (e.g., calendar, multimedia application, presentation, etc.) as discussed above. In the example illustrated inFIG.12A, first representation1210may be associated with a calendar application. In embodiments, first representation1210may not be configured for user interaction, whether direct or non-direct interaction. For example, the size of first representation1210may be a small size, and the small size may not enable a user to perceive any information from or interact with any UI elements of first representation1210. As shown inFIG.12A, a gaze1250of user202may be determined to be directed to location1252, which may be different from location1251where first representation1210is displayed. In aspects, in accordance with the determination that gaze1250is directed to a location that is different than the location of first representation1210, the displaying of first representation1210may be maintained without displaying another representation of the virtual object and/or without making any changes to first representation1210. FIG.12Bshows that gaze1250of user202has changed to a different direction than the direction directed to location1252. In embodiments, the change in gaze may be detected (e.g., via input sensors). In response to the detected change in the user's gaze, a determination of the direction of the new direction of the gaze may be made. For example, it may be determined that the new direction of gaze1250may be directed to location1251. Location1251is the location at which first representation1210is being displayed. In embodiments, in accordance with a determination that gaze1250is directed to a location that is the same as the location of first representation1210, the displaying of first representation1210may be modified. For example, first representation1210may cease to be displayed, and second representation1220may be displayed, where second representation1220may be different from first representation1210. In some embodiments, second representation1220may be displayed at the same location and/or on the same surface where first representation1210was displayed. In embodiments, second representation1220may be configured to include UI elements1221. UI elements1221may include at least one UI element configured for user interaction, such as a display. In some embodiments, second representation1220may alternatively or additionally have a size different than the size of first representation1210. For example, second representation1220may have a size that is larger or smaller than the size of first representation1210. In embodiments, the size of second representation1220may be based on a distance between the location of second representation1220(e.g., location1251) and the location of the user's head and/or eyes. In some embodiments, second representation1220may be configured for non-direct interaction, but may not be configured for direct-interaction.
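The passage above notes that the size of the second representation may be based on its distance from the user's head or eyes. One possible reading, sketched below only as an assumption about the intent, is to keep a roughly constant angular size; the 20-degree target and function name are illustrative.

```python
import math

def size_for_distance(distance_m: float, desired_angular_size_deg: float = 20.0) -> float:
    """Physical width (meters) that subtends the desired visual angle."""
    return 2.0 * distance_m * math.tan(math.radians(desired_angular_size_deg) / 2.0)

for d in (0.5, 1.0, 2.5):   # e.g., a wall-mounted calendar viewed from various distances
    print(f"{d} m away -> {size_for_distance(d):.2f} m wide")
```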
For example, second representation1220may not include any UI elements configured for direct interaction with a user (e.g., a button, an affordance, a user-interface element, an interactive element, etc.). In some embodiments, determining to modify the displaying of first representation1210in accordance with a determination that gaze1250is directed to a location that is the same as the location of first representation1210may include a determination that the gaze1250has remained directed to the location that is the same as the location of first representation1210for at least a predetermined period of time, as described with reference toFIGS.11A and11B. As previously described, in embodiments, modifying the first representation, which may include displaying the second representation, may include animating the modification. For example, the modification of the first representation may include animating a change in size of the first representation such that the first representation is displayed as growing or shrinking, as appropriate, from the current size to the size of the second representation. In addition, or in the alternative, the modification of the first representation may include animating the UI elements of the first representation such that the UI elements are presented as receding into the first representation. In embodiments, the animation may also include a sound that may be played while the animation is occurring. It is noted that, in embodiments, the implementations of the techniques described herein may include any combination of the features and functionalities described above. For example, a representation of a virtual object may be modified to have any one of, and/or any combination of, a different size, different UI elements, different types of UI elements (e.g., flat UI elements, protruding UI elements, etc.), a different orientation, a different location, a different shape, a different brightness, etc. FIG.13is a flow diagram illustrating method1300for controlling a representation of a virtual object within a CGR environment based on characteristics of an input mechanism. In some embodiments, method1300may be performed by system100or a portion of system100. In some embodiments, method1300may be performed by one or more external systems and/or devices. In some embodiments, method1300may be performed by system100(or a portion of system100) in conjunction with one or more external systems and/or devices. At block1302, the system displays, via a display of an electronic device (e.g., a wearable electronic device, an HMD device, etc.), a first representation of a virtual object within a CGR environment. For example, a first representation of a virtual object may be displayed via a first display (e.g., a left eye display panel) or second display (e.g., a second eye display panel) of an electronic device on a representation of a display within the CGR environment. In embodiments, the first representation of the virtual object may be a virtual representation (e.g., a virtual representation superimposed over a first surface of the CGR environment via a translucent display of the electronic device). In embodiments, the first representation of the virtual object may be configured to facilitate non-direct interaction with the virtual object.
For example, the first representation of the virtual object may include at least one UI element of UI elements configured for non-direct interaction such that a user may perceive and interact with the UI elements without directly manipulating the UI elements (e.g., a UI element configured for output). In embodiments, the first representation of the virtual object may include at least one UI element of UI elements that may be configurable to facilitate non-direct interaction, but are not configured for direct interaction (e.g., the UI elements may be displayed as protruding 3D UI elements). For example, the UI elements may include a button, an affordance, a user-interface element, an interactive element, etc., and/or any combination thereof. When the UI elements are configured to facilitate direct interaction, a user may select, click, and/or otherwise manipulate the UI elements. In embodiments, a movement of an input mechanism may be detected. The input mechanism may include a mechanism configured to facilitate interaction with the virtual object. For example, the input mechanism may include a mechanism for a user to manipulate at least one element of the representation of the virtual object, or to perceive data provided by the virtual object. In embodiments, the input mechanism may include a representation of an appendage of the user (e.g., a finger, hand, leg, foot, etc.), a user's gaze (e.g., head gaze, eye gaze, etc.), an input device (e.g., a mouse, a stylus, etc.), etc. In embodiments, the representation of an appendage of the user may include a virtual representation of the appendage and/or may include data representing characteristics of the appendage (e.g., location, orientation, distance to a particular point, etc.) within the CGR environment. In aspects, using input sensors (e.g., touch-sensitive surfaces, image-sensors, etc.) configured to perform hand-tracking, head gaze-tracking, eye gaze-tracking, finger-tracking, etc., a movement of the input mechanism may be detected. For example, the input mechanism may move from a previous location to a current location. In embodiments, in response to the detected movement of the input mechanism, a determination may be made as to whether the current location of the input mechanism is within the predetermined distance from the first representation or not. However, when no movement of the input mechanism is detected, the determination of whether the current location of the input mechanism is within the predetermined distance from the first representation or not may not be performed. In some embodiments, the determination of whether the current location of the input mechanism is within the predetermined distance from the first representation or not may be performed when a detected movement is determined to be towards the first representation. In these cases, if the movement of the input mechanism is determined to be away from the first representation, the determination of whether the current location of the input mechanism is within the predetermined distance from the first representation or not may not be performed even though a movement of the input mechanism may be detected. At block1304, in accordance with a determination that the current location of the input mechanism is within a predetermined distance from the first representation of the virtual object, the system displays, via the display of the electronic device, a second representation of the virtual object within the CGR environment.
In embodiments, the second representation of the virtual object may be different from the first representation of the virtual object. In embodiments, in response to displaying the second representation of the virtual object, the first representation may cease to be displayed. In some embodiments, the second representation may be displayed at the same location and/or on the same surface where the first representation was displayed. In embodiments, the second representation may be configured to facilitate direct interaction by a user with the associated virtual object. For example, the second representation may include at least one UI element of UI elements configured for direct interaction. In embodiments, the UI elements may include at least one UI element displayed as a flat 2D UI element displayed upon a physical object. In embodiments, the UI elements may include any one of and/or any combination of a button, an affordance, a user-interface element, an interactive element, etc. In some embodiments, the second representation may have a size that is different than the size of the first representation. For example, the size of the second representation may be greater than the size of the first representation. In embodiments, the second representation may include a portion of the first representation, and the portion of the first representation included in the second representation may be larger than the size of the same portion in the first representation. In some embodiments, the second representation of the virtual object may be displayed at a location that is different than the current location of the first representation. In embodiments, the location at which the second representation of the virtual object may be displayed may be a location that is closer to the user than the current location of the first representation. In some embodiments, the second representation displayed at the new location may be the same representation as the first representation. In some embodiments, the first representation may be a 3D representation of the virtual object, and the second representation may be a 2D representation of the virtual object. In embodiments, the second representation may include at least a portion of the virtual object that is not displayed in the first representation of the virtual object. As described above, one aspect of the present technology is the gathering and use of data available from various sources to provide specialized resource management of low-power devices with additive displays (e.g., HMD devices with additive displays) to conserve battery life for users and to provide specialized content to users of the low-power devices. The present disclosure contemplates that, in some instances, this gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, twitter IDs, home addresses, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information. The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to conserve battery life of a user's low-power device. 
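For illustration only, the following end-to-end sketch strings together the flow described for blocks 1302 and 1304: the first representation is displayed, and on each detected movement of the input mechanism the system switches to the second representation once the input mechanism comes within the predetermined distance. All identifiers and example coordinates are hypothetical.

```python
import math

def run_method_1300(movement_events, representation_location, predetermined_distance):
    current = "first_representation"          # block 1302: non-direct, e.g. 3D
    print("display", current)
    for previous_location, new_location in movement_events:   # detected movements
        d = math.dist(new_location, representation_location)
        if current == "first_representation" and d <= predetermined_distance:
            current = "second_representation" # block 1304: direct, e.g. flat 2D
            print("display", current)
    return current

events = [((0.9, 0.0, 0.0), (0.6, 0.0, 0.0)),
          ((0.6, 0.0, 0.0), (0.2, 0.0, 0.0))]
run_method_1300(events, (0.0, 0.0, 0.0), 0.3)
```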
Accordingly, for example, the use of such personal information data enables the system to properly manage resources to conserve battery life for the low-power devices. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used to provide insights into a user's general wellness or may be used as positive feedback to individuals using technology to pursue wellness goals. The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for keeping personal information data private and secure. Such policies should be easily accessible by users and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence different privacy practices should be maintained for different personal data types in each country. Despite the foregoing, the present disclosure also contemplates examples in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of managing resources for low-powered devices, the present technology can be configured to allow users to select to "opt in" or "opt out" of participation in the collection of personal information data during registration for services or anytime thereafter. In another example, users can select not to provide eye tracking data, such as pupil location, pupil dilation, and/or blink rate for specialized resource management. In yet another example, users can select to limit the length of time the eye-tracking data is maintained or entirely prohibit the development of a baseline eye tracking profile. In addition to providing "opt in" and "opt out" options, the present disclosure contemplates providing notifications relating to the access or use of personal information.
For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app. Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods. Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed examples, the present disclosure also contemplates that the various examples can also be implemented without the need for accessing such personal information data. That is, the various examples of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, resources of low-powered devices can be managed and content (e.g., status updates and/or objects) can be selected and delivered to users by inferring preferences based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available to the system controlling the low-power device, or publicly available information. | 145,647 |
11861058 | DESCRIPTION OF EMBODIMENTS Embodiments of the present invention will be described with reference to the drawings. 1st Embodiment The overall system configuration used to provide a livestreaming service in the first embodiment will now be described with reference toFIG.1. The livestreaming service of the present embodiment is a livestreaming service that allows a distributor to become an avatar in a VR space and distribute a live broadcast program. In this livestreaming service, viewers can also participate in the program as viewer avatars in the same VR space as the distributor. The livestreaming service is provided using a server1, a distributor terminal3, and viewer terminals5connected to the network. There are five viewer terminals5inFIG.1, but the number of viewer terminals5is larger in reality and any number of viewer terminals5can participate. The server1receives livestreaming video in VR space from the distributor terminal3via the network, and distributes the livestreaming video to viewer terminals5. Specifically, the server1receives motion data of the distributor avatar from the distributor terminal3and distributes the motion data to viewer terminals5. The viewer terminals5reflect the received motion data in the distributor avatar rendered in VR space. When VR space is rendered and displayed on a viewer terminal5, the viewer terminal5has the model data needed to render the VR space. For example, a viewer terminal5may receive model data such as an avatar in VR space from the server1or may store the model data in advance. When a viewer participates in the program, the server1receives motion data of the viewer avatar from the viewer terminal5and distributes the motion data to the distributor terminal3and to other viewer terminals5. The distributor terminal3and the other viewer terminals5reflect the received motion data in the viewer avatar rendered in the VR space. The model data for the viewer avatar may be received from the server1when the viewer participates, or may be stored in advance by the distributor terminal3and the viewer terminals5. Viewers who participate in the program can cheer the distributor via viewer avatars. The distributor avatar responds with a reaction to a viewer avatar that has cheered. When the distributor avatar reacts to cheering, the server1makes a motion toward a viewer avatar that cheered a motion performed by the distributor avatar different from a motion toward a viewer avatar that did not cheer the motion performed by the distributor avatar. Specifically, the server1distributes motion data in which some of the motion by the distributor avatar is changed to a reaction motion to the viewer terminal5of a viewer who cheered, and normal motion data is delivered to the viewer terminal5of a viewer who has not cheered. In other words, when the distributor avatar reacts, the motion of the distributor avatar is different for viewers that cheer and viewers that do not cheer. The distributor terminal3is a terminal used by the distributor for livestreaming. The distributor terminal3can be, for example, a personal computer connected to an HMD. As shown inFIG.2, the distributor wears an HMD100and holds a controller101in both hands to control the distributor avatar. The HMD100detects movement of the distributor's head. Head movement detected by the HMD100is reflected in the distributor avatar. The distributor can move his or her head and look around the VR space. The HMD100renders the VR space in the direction the distributor is facing. 
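By way of illustration only, the fan-out just described, in which the server relays the distributor avatar's motion data to every viewer terminal and relays each participating viewer avatar's motion data to the distributor terminal and the other viewer terminals, might be sketched as follows. The sketch is in Python, and every name in it (MotionFrame, RelayServer, and so on) is an invented stand-in rather than anything defined by this description.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A single packet of avatar motion data (a simplified pose plus optional voice),
# standing in for the motion data that the server relays between terminals.
@dataclass
class MotionFrame:
    avatar_id: str            # e.g. "distributor" or "viewer_A"
    timestamp: float          # seconds since the start of the livestream
    pose: Dict[str, tuple]    # joint name -> (x, y, z) position, greatly simplified
    voice_chunk: bytes = b""

class RelayServer:
    """Minimal stand-in for the fan-out role of the server's distribution unit."""

    def __init__(self) -> None:
        # terminal_id -> callback that delivers a frame to that terminal
        self.terminals: Dict[str, Callable[[MotionFrame], None]] = {}

    def register(self, terminal_id: str, deliver: Callable[[MotionFrame], None]) -> None:
        self.terminals[terminal_id] = deliver

    def relay(self, sender_id: str, frame: MotionFrame) -> None:
        # Motion data received from one terminal is distributed to every other
        # terminal, so all participants see the same avatars move in the shared VR space.
        for terminal_id, deliver in self.terminals.items():
            if terminal_id != sender_id:
                deliver(frame)

if __name__ == "__main__":
    server = RelayServer()
    received: Dict[str, List[MotionFrame]] = {"distributor": [], "viewer_A": [], "viewer_B": []}
    for tid in received:
        server.register(tid, received[tid].append)

    # The distributor terminal sends a motion frame; both viewer terminals receive it.
    server.relay("distributor", MotionFrame("distributor", 0.0, {"head": (0.0, 1.6, 0.0)}))
    print(len(received["viewer_A"]), len(received["viewer_B"]))  # 1 1
```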
The HMD100imparts parallax to the right-eye image and the left-eye image so that the distributor can see the VR space in three dimensions. The controllers101detect the movement of the distributor's hands. The hand movements detected by the controllers101are reflected in the distributor avatar. The distributor terminal3sends motion data that reflects detected movement by the distributor in the distributor avatar to the server1. This motion data is distributed to viewer terminals5by the server1. The distributor terminal3also receives motion data from viewer avatars who are participating in the distributor's program from the server1. The distributor terminal3reflects the received motion data in the viewer avatars rendered in the VR space. The controllers101include a control means such as buttons. When the distributor operates these buttons, the distributor avatar can be made to perform predetermined movements. One of the operations related to the present embodiment can be, for example, an operation for indicating the timing for responding with a reaction to cheering by a viewer. When a viewer avatar cheers the distributor avatar and the distributor has indicated the timing for replying with a reaction, the distributor terminal3sends a reaction signal to the server1. The server1makes the reaction motion to the viewer avatar who cheered different from the motion to the viewer avatars that did not cheer, and causes the distributor avatar to react to the cheering. A viewer terminal5is a terminal used by a viewer to watch a livestream. Like the distributor terminal3, the viewer terminal5can be a personal computer connected to an HMD. The viewer can participate in the distributor's program. When the viewer performs an operation to participate in the program, a viewer avatar that the viewer can control appears in the VR space along with the distributor avatar. As shown inFIG.2, the viewer also wears an HMD100and holds controllers101in both hands to control the viewer avatar. The viewpoint of a viewer who participates in a program is the viewpoint of the viewer avatar in the VR space. In other words, the HMD100renders the VR space from the perspective of the viewer avatar. The viewer can cheer the distributor by operating a controller101. When the viewer cheers for the distributor, the viewer terminal5sends a cheering signal to the server1. The configuration of the server1, the distributor terminal3, and a viewer terminal5in the present embodiment will now be described with reference toFIG.3. The server1shown inFIG.3includes a distribution unit11, a reaction unit12, a cheering detection unit13, and a reaction data storage unit14. The distribution unit11sends and receives data necessary for the livestreaming service. Specifically, the distribution unit11receives voice data from the distributor and motion data for the distributor avatar from the distributor terminal3, and distributes the voice data and the motion data to viewer terminals5. The distribution unit11also receives motion data for a viewer avatar from the viewer terminal5of a viewer who is participating in the program, and distributes the motion data to the distributor terminal3and to viewer terminals5. The distribution unit11may receive and distribute the voice data from a viewer. The distribution unit11may also receive and distribute a comment (text information) inputted by a viewer. The reaction unit12replaces normal motion sent to a viewer terminal5of a cheering viewer with reaction motion based on a predetermined timing. 
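A rough sketch of the reaction unit's replacement step, in which the terminals of cheering viewers receive a reaction motion retrieved from the reaction data storage while all other terminals continue to receive the normal motion, is given below. The frame representation and function names are assumptions made for the example, not details taken from this description.

```python
from typing import Dict, Iterable, Set

# Motion data is represented here as a plain string label; a real system would
# carry a full set of joint transforms for the distributor avatar instead.
NORMAL_MOTION = "normal_motion"

# Stand-in for the reaction data storage unit: stored reaction motions keyed by type.
REACTION_STORAGE: Dict[str, str] = {
    "wink": "turn_to_viewer_and_wink",
    "approach": "walk_toward_viewer",
}

def motions_to_send(normal_motion: str,
                    viewer_ids: Iterable[str],
                    cheering_viewers: Set[str],
                    reaction_active: bool,
                    reaction_type: str = "wink") -> Dict[str, str]:
    """Decide, per viewer terminal, which distributor-avatar motion to distribute.

    While a reaction is active, viewers who cheered receive an individualized
    reaction motion retrieved from storage; everyone else keeps receiving the
    normal motion based on the distributor's actual movement.
    """
    plan: Dict[str, str] = {}
    for viewer_id in viewer_ids:
        if reaction_active and viewer_id in cheering_viewers:
            plan[viewer_id] = REACTION_STORAGE[reaction_type]
        else:
            plan[viewer_id] = normal_motion
    return plan

if __name__ == "__main__":
    plan = motions_to_send(NORMAL_MOTION,
                           viewer_ids=["A", "B", "C"],
                           cheering_viewers={"A"},
                           reaction_active=True)
    print(plan)  # A gets the reaction motion, B and C get the normal motion
```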
Normal motion is motion based on movement by the distributor detected by the HMD100and the controllers101. Reaction motion is reaction motion stored in the reaction data storage unit14and will be described later. The reaction unit12retrieves reaction motion from the reaction data storage unit14and sends it to the viewer terminal5of the cheering viewer as an individualized reaction motion to the cheering viewer avatar. An example of a reaction performed by the distributor avatar is a motion in which the distributor avatar turns toward the viewer avatar and winks. Because the VR space is rendered from the viewpoint of the viewer avatar, the viewer can feel that the distributor avatar has turned toward him or her in response to cheering. When the distributor avatar looks at the viewer avatar, an effect such as a beam of light or hearts may be displayed in the direction of the distributor avatar's gaze. Other examples of reactions include turning the body of the distributor avatar toward the viewer avatar or having the distributor avatar draw closer to the viewer avatar. The reaction may change depending on the amount of cheering by the viewer (for example, the number of gifts sent). The reaction motion is selected from the motion data stored in the reaction data storage unit14described later. During a reaction motion, the distributor avatar may turn away from viewer avatars that are not cheering. The reaction unit12determines the timing for the distributor avatar to react based on a reaction signal received from the distributor terminal3. Alternatively, the reaction unit12may set a predetermined time after detecting a cheer as the reaction timing irrespective of the reaction signal. The reaction unit12may also determine the reaction timing based on movement by the distributor avatar. For example, the reaction timing may be set when the distributor avatar has raised both hands. The distributor may also send a reaction motion. Here, the reaction motion received from the distributor terminal3is sent to the viewer terminal5of the viewer who cheered but the reaction motion is replaced with a normal motion and sent to the viewer terminals5of viewers who have not cheered. The cheering detection unit13detects a viewer who has cheered the distributor, and instructs the reaction unit12to perform a reaction motion toward the viewer avatar of the viewer. The cheering detection unit13may detect cheering by a viewer by receiving a cheering signal from the viewer terminal5. The cheering detection unit13may also detect cheering for the distributor based on movement by the viewer avatar. For example, the cheering detection unit13may detect that a viewer has cheered the distributor when the viewer avatar waves his or her arms at the distributor avatar. Alternatively, the cheering detection unit13may detect that a viewer has cheered the distributor when cheering by the viewer is audibly detected. The reaction data storage unit14stores reaction motion data. The reaction data storage unit14may store a variety of reaction motions. The distributor terminal3includes a VR function unit31and a reaction indicating unit32. The VR function unit31has functions required for livestreaming in a VR space, such as rendering the VR space and reflecting movement by the distributor in the distributor avatar in the VR space. 
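The cheering detection and reaction-timing rules described above (an explicit cheering signal, a detected arm wave, or audible cheering; and a timing taken from the distributor's reaction signal or from a predetermined delay after the cheer) could look roughly like the following sketch. The thresholds and function names are invented for illustration and are not taken from this description.

```python
from typing import List, Optional

def detects_cheer(cheer_signal_received: bool,
                  wrist_heights: List[float],
                  mic_level_db: float,
                  wave_threshold_m: float = 0.3,
                  loudness_threshold_db: float = 70.0) -> bool:
    """Return True if the viewer is considered to be cheering the distributor.

    Three illustrative heuristics, any of which suffices:
      1. an explicit cheering signal (e.g. a gift was sent or a button was pressed),
      2. an arm-wave gesture, detected here as large up/down wrist motion,
      3. audible cheering picked up by the microphone.
    """
    if cheer_signal_received:
        return True
    if wrist_heights and (max(wrist_heights) - min(wrist_heights)) > wave_threshold_m:
        return True
    return mic_level_db > loudness_threshold_db

def reaction_time(cheer_time: float,
                  reaction_signal_time: Optional[float],
                  fixed_delay_s: float = 2.0) -> float:
    """Decide when the distributor avatar should reply with a reaction.

    If the distributor indicated the timing with a reaction signal, use that
    moment; otherwise fall back to a predetermined delay after the cheer.
    """
    return reaction_signal_time if reaction_signal_time is not None else cheer_time + fixed_delay_s

if __name__ == "__main__":
    print(detects_cheer(False, wrist_heights=[1.0, 1.5, 1.1, 1.6], mic_level_db=40.0))  # True (arm wave)
    print(reaction_time(cheer_time=12.0, reaction_signal_time=None))                    # 14.0
```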
For example, the VR function unit31determines a motion to be performed by the distributor avatar based on movement of the distributor's head detected by the HMD100and the movement of the distributor's hands detected by the controllers101, and sends this motion data to the server1. The reaction indicating unit32receives a reaction operation from the distributor and sends a reaction signal to the server1. For example, when a button on a controller101has been operated, a reaction signal is sent to the server1. When the server1determines when to reply with a reaction to cheering, the distributor terminal3does not need to receive a reaction operation from the distributor. A viewer terminal5includes a VR function unit51and a cheering indicating unit52. The VR function unit51has functions required for viewing a livestream in a VR space, such as rendering the VR space and reflecting movement by the viewer in the viewer avatar in the VR space. The cheering indicating unit52receives a cheering operation from the viewer and sends a cheering signal to the server1. The cheering indicating unit52causes the viewer avatar to make a cheering motion when needed. The viewer can cheer, for example, by sending a gift or waving a hand. When the viewer sends a gift, a cheering signal is sent from the cheering indicating unit52to the server1. A cheering operation is assigned to a button on a controller101. When the viewer operates this button, a cheering signal may be sent and the viewer avatar may make a cheering motion. Each unit in the server1, the distributor terminal3, and the viewer terminal5may be configured using a computer provided with an arithmetic processing unit and a storage device, and the processing of each unit may be executed by a program. This program can be stored in a storage device provided in the server1, the distributor terminal3, or the viewer terminal5. The program can be recorded on a recording medium such as a magnetic disk, an optical disk or a semiconductor memory, or can be provided over a network. Next, the start of livestreaming and participation of viewers in the livestream will be described with reference to the sequence chart inFIG.4. When the distributor uses the distributor terminal3to perform an operation that starts a livestream, the distributor terminal3notifies the server1that a livestream has started (step S11). When the livestreaming has started, the distributor terminal3detects operations and movements performed by the distributor and sends motion data for controlling the distributor avatar to the server1(step S12). The server1distributes the motion data to the viewer terminals5A-5C watching the livestream. During livestreaming, motion data is continuously sent from the distributor terminal3to the server1and motion data is continuously distributed from the server1to the viewer terminals5A-5C. When viewer A performs an operation to participate in the program using the viewer terminal5A, the viewer terminal5A notifies the server1that he or she will participate in the program (step S13). When viewer A is participating in the program, viewer terminal5A detects operations and movements performed by the viewer and sends motion data for controlling the viewer avatar to the server1(step S14). The server1distributes the motion data to the distributor terminal3and the other viewer terminals5B-5C. 
While participating in the program, motion data is continuously sent from viewer terminal5A to the server1and motion data is continuously distributed from the server1to the distributor terminal3and the other viewer terminals5B-5C. When viewer B performs an operation in the same manner as viewer A to participate in the program using viewer terminal5B, viewer terminal5B notifies the server1that the viewer will participate in the program (step S15), and starts sending motion data for controlling the viewer avatar of viewer B to the server1(step S16). When viewer C performs an operation to participate in the program using viewer terminal5C, the viewer terminal5C notifies the server1that the viewer will participate in the program (step S17), and starts sending motion data for controlling the viewer avatar of viewer C to the server1(step S18). FIG.5is a diagram showing the VR space in which the viewers are participating. In the VR space, as shown in the figure, only the distributor avatar300is on the stage and the viewer avatars500A-500E participating in the program are below the stage. The viewers can see not only the distributor avatar300but also the viewer avatars500A-500E of the viewers A to E who are participating in the program. The distributor can see the VR space from the vantage point of the distributor avatar300. Viewers A-E can see the VR space from the vantage points of their viewer avatars500A-500E. In other words, the distributor terminal3renders the VR space based on the position and direction of the face of the distributor avatar300, and the viewer terminals5render the VR space based on the position and direction of the face of each of the viewer avatars500A-500E. InFIG.5, viewers A-E see the distributor avatar300from different positions and at different angles. The vantage point of viewers who are not participating in the program can be, for example, the position of a virtual camera placed by the distributor in a certain position in the VR space. Next, the reaction of the distributor to cheering by a viewer will be described with reference to the sequence chart inFIG.6. When viewer A operates viewer terminal5A to cheer the distributor, viewer terminal5A sends a cheering signal to the server1, and the server1notifies the distributor terminal3that viewer A has cheered (step S21). Here, it is assumed that viewers B and C have not cheered the distributor. When the distributor has confirmed that viewer A has cheered him or her, he or she operates the distributor terminal3to send a reaction signal to the server1(step S22). The distributor performs a reaction operation to send a reaction signal, but the distributor himself or herself does not have to react. The distributor terminal3sends a normal motion based on movement by the distributor to the server1(step S23). When the reaction timing is determined by the server1, the process of sending a reaction signal in step S22is unnecessary. When the server1receives the reaction signal, it replaces the normal motion with a reaction motion and sends the reaction motion to viewer terminal5A that has sent a cheering signal (step S24), and sends the normal motion to the viewer terminals5B-5C that have not sent a cheering signal (steps S25, S26). When a plurality of viewer terminals5have sent a cheering signal, a personal reaction motion is sent to each of the viewer terminals5that sent a cheering signal. FIG.7is a diagram showing the VR space in which the distributor reacts to viewers. 
In this figure, it is assumed that viewers A and E (viewer avatars500A and500E) have cheered. The distributor avatar300reacts individually to the cheering viewer avatars500A and500E at the timing for the distributor avatar300to reply with a reaction to cheering. Specifically, when the distributor avatar300reacts, the server1sends a personal reaction motion for viewer avatar500A to the viewer terminal5A of viewer avatar500A, and sends a personal reaction motion for viewer avatar500E to the viewer terminal5E of viewer avatar500E. The same normal motion is sent to viewer terminals5B,5C, and5D of viewer avatars500B,500C, and500D. In the VR space as seen by viewer A, distributor avatar300A has approached viewer avatar500A. In the VR space as seen by viewer E, distributor avatar300E has approached viewer avatar500E. In the VR space as seen by viewers B, C and D, the distributor avatar300is center stage. In other words, when the distributor avatar reacts, viewer A sees distributor avatar300A making a personal reaction motion toward viewer avatar500A, viewer E sees distributor avatar300E making a personal reaction motion toward viewer avatar500E, and viewers B, C and D see the distributor avatar300making the same normal motion. When the reaction by the distributor avatar has ended, the same normal motions by the distributor avatar are distributed to the viewer terminals5A-5E. The reaction motion may be a motion such as turning toward each of the viewer avatars500A,500E without changing the position of the distributor avatar300. In this case as well, a personal reaction motion is sent to the viewer avatars500A,500E that have cheered. When the distributor avatar300reacts to cheering, by making the personal reaction motion by the distributor avatar300A,300E toward the viewer avatars500A,500E that cheered different from the motion toward viewer avatars500B,500C,500D that did not cheer, viewers A and E can each get a reaction from the distributor avatar. 2nd Embodiment In the second embodiment, the viewer terminals perform the processing for replacing the normal motion of the distributor avatar with a reaction motion. The overall system configuration is similar to the configuration in the first embodiment. The configuration of the server1, the distributor terminal3, and the viewer terminals5used in the livestreaming service of the second embodiment will now be described with reference toFIG.8. The server1includes a distribution unit11. The distribution unit11sends and receives data necessary for the livestreaming service in a manner similar to the distribution unit11in the server1of the first embodiment. The distributor terminal3includes a VR function unit31. The VR function unit31has functions necessary for livestreaming in a VR space that are similar to those in the VR function unit31of the distributor terminal3in the first embodiment. The viewer terminals5include a VR function unit51, a cheering indicating unit52, a reaction unit53, and a reaction data storage unit54. The VR function unit51has functions necessary for livestreaming in a VR space that are similar to those in the VR function unit51of the viewer terminals5in the first embodiment. The VR function unit51also replaces the normal motion performed by the distributor avatar received from the server1with a reaction motion based on an instruction from the reaction unit53when rendering the VR space. The cheering indicating unit52receives a cheering operation from the viewer and notifies the reaction unit53. 
The cheering indicating unit52also causes the viewer avatar to make a cheering motion if needed. This differs from the first embodiment in that a cheering signal is not sent to the server1. Note that the cheering indicating unit52may send a cheering signal to the server1as in the first embodiment. When the cheering indicating unit52inputs a cheering operation, the reaction unit53replaces the motion performed by the distributor avatar received from the server1with a reaction motion on a predetermined timing. The predetermined timing is determined by the reaction unit53. The predetermined timing can be, for example, the time at which the cheering motion performed by the distributor avatar ends. In order to determine the reaction timing, a reaction signal may be received from the server1or the distributor terminal3. The reaction data storage unit54stores reaction motion data in the same manner as the reaction data storage unit14of the server1in the first embodiment. Next, the processing performed by the viewer terminal5will be explained with reference to the flowchart inFIG.9. When the cheering indicating unit52inputs a cheering operation, the viewer avatar is made to cheer the distributor avatar and the reaction unit53is notified that the distributor has been cheered (step S31). The distributor may also be informed that the viewer has cheered. The reaction unit53determines whether or not the reaction timing has occurred (step S32). When the reaction timing has occurred, the reaction unit53replaces the motion performed by the distributor avatar received from the server1with a reaction motion stored in the reaction data storage unit14(step S33). The reaction to cheering by the viewer will now be explained with reference to the sequence chart inFIG.10. The viewer terminal5A inputs a cheering operation from the viewer A (step S41). The distributor terminal3sends a normal motion to the server1(step S42), and the server1distributes the normal motion to the viewer terminals5A-5C (steps S43to S45). When the reaction timing occurs, the viewer terminal5A replaces the normal motion received by the server1with a reaction motion (step S46). In this way, the viewer terminal5A can detect cheering and replace the motion performed by the distributor avatar with a reaction motion in the viewer terminal5A itself so that the viewer can get a reaction from the distributor avatar. 3rd Embodiment In the third embodiment, the distributor terminal performs the processing for replacing the normal motion performed by the distributor avatar with a reaction motion. In the third embodiment, the distributor terminal3directly performs livestreaming to the viewer terminals5without going through a server1. Livestreaming may of course be performed via a server1as well. Note that livestreaming may also be performed in the second embodiment without going through a server1. The configuration of distributor terminal3and the viewer terminals5used in the livestreaming service of the third embodiment will now be described with reference toFIG.11. InFIG.11, there is a single viewer terminal5, but the distributor terminal3performs livestreaming to a plurality of viewer terminals5in reality. The distributor terminal3includes a VR function unit31, a reaction indicating unit32, a reaction unit33, a cheering detection unit34, and a reaction data storage unit35. 
The VR function unit31has functions necessary for livestreaming in a VR space that are similar to those in the VR function unit31of the distributor terminal3in the first embodiment. The reaction indicating unit32receives the reaction operation from the distributor and notifies the reaction unit33. The reaction unit33replaces the normal motion sent to the viewer terminal5of a viewer that has cheered with the reaction motion in response to an instruction from the reaction indicating unit32. The cheering detection unit34detects the viewer who has cheered the distributor, and notifies the reaction unit33of the viewer who cheered the distributor. The reaction data storage unit35stores reaction motion data in the same manner as the reaction data storage unit14in the server1of the first embodiment. The processing performed by the distributor terminal3will now be explained with reference to the flowchart inFIG.12. When the cheering detection unit34detects cheering by a viewer, it notifies the reaction unit33(step S51), and the reaction unit33waits for an instruction from the reaction indicating unit32(step S52). When the reaction indicating unit32has received a reaction operation and has notified the reaction unit33, the reaction unit33sends a reaction motion stored in the reaction data storage unit35to the viewer terminal5of the viewer who cheered (step S53), and sends a normal motion to the viewer terminals5of viewers who did not cheer (step S54). The reaction to cheering by a viewer will now be explained with reference to the sequence chart inFIG.13. When viewer A operates viewer terminal5A to cheer the distributor, viewer terminal5A sends a cheering signal to the distributor terminal3(step S61). When the distributor confirms that viewer A has cheered him or her, he or she inputs a reaction operation to the distributor terminal3(step S62). The distributor terminal3sends a reaction motion to viewer terminal5A that sent the cheering signal (step S63), and sends a normal motion to the other viewer terminals5B,5C (step S64, S65). Real-time distribution was performed in each of the embodiments described above. However, the present invention is not limited to real-time distribution. In each of the embodiments described above, a program in VR space that was distributed in the past or that has been prepared for distribution (“time-shifted distribution”) can be used. In time-shifted distribution, data for VR space rendering including motion data for the distributor avatar is stored in a server or some other configuration. The server distributes the data (for example, distributor voice data, distributor avatar motion data, etc.) needed to play the program in VR space to a viewer terminal in response to a request from the viewer terminal. The viewer terminal renders the VR space based on data that has been received and plays the program. In time-shifted distribution, operations such as pause, rewind, and fast forward can be performed. The data for a time-shifted program is stored in advance, but the distributor avatar can be made to react to cheers from the viewer by applying any of the embodiments described above. 
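For the second embodiment, and for time-shifted playback that reuses it, the viewer terminal itself swaps the received normal motion for a locally stored reaction motion once cheering has been detected and the reaction timing arrives. A minimal sketch of that client-side decision, assuming a simple fixed-delay timing rule and an invented frame format, follows.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DistributorFrame:
    timestamp: float
    motion: str   # normal motion received from the server or from stored program data

# Stand-in for the viewer terminal's local reaction data storage.
LOCAL_REACTION_MOTION = "turn_to_this_viewer_and_wink"

def render_motion(frame: DistributorFrame,
                  cheer_time: Optional[float],
                  reaction_delay_s: float = 1.5,
                  reaction_length_s: float = 2.0) -> str:
    """Choose the motion this viewer terminal actually renders for the distributor avatar.

    If the viewer cheered, then for a short window starting a fixed delay after
    the cheer, the received normal motion is replaced with the stored reaction
    motion. Outside that window the received motion is rendered unchanged.
    """
    if cheer_time is None:
        return frame.motion
    start = cheer_time + reaction_delay_s
    if start <= frame.timestamp < start + reaction_length_s:
        return LOCAL_REACTION_MOTION
    return frame.motion

if __name__ == "__main__":
    frames: List[DistributorFrame] = [DistributorFrame(t, "sing_and_dance") for t in (0.0, 1.0, 2.0, 3.0, 4.0)]
    # The viewer cheered at t = 0.5 s, so the frames falling in the reaction window show the reaction.
    print([render_motion(f, cheer_time=0.5) for f in frames])
```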
When reaction motion data is stored in a viewer terminal as described in the second embodiment, as soon as the viewer terminal detects a cheering operation performed by the viewer or once a predetermined amount of time has elapsed, reaction motion data for the distributor avatar stored in advance in the viewer terminal is retrieved, and the motion data for the distributor avatar received from the server is replaced with this reaction motion data. In this way, the viewer can see a performance in which the distributor responds to cheering by the viewer. Segments may be established in which motions performed by the distributor avatar cannot be replaced. For example, motion replacement can be prohibited while the distributor avatar is singing so that reaction motions occur during interludes. In time-shifted distribution, the reaction motions performed by the distributor avatar may be directed at the virtual camera rendered by the viewer terminal in VR space when the viewer is not participating in the program. When the first embodiment and the third embodiment described above are applied to time-shifted distribution, the server or distributor terminal storing the reaction motion data detects cheering by a viewer, replaces motion performed by the distributor avatar with a reaction motion, and sends the reaction motion only to the viewer terminal. In the embodiments described above, when cheering of the distributor avatar by viewer avatars is detected and the distributor avatar reacts to the cheering, a reaction motion is performed for the viewer avatars that cheered and a normal motion, not a reaction motion, is performed for viewer avatars that do not cheer. Because reaction motions are performed individually for the cheering viewer avatars, viewers more reliably get a reaction from the distributor. In each of the embodiments described above, motion data, appearance data, and born-digital data, etc. in a VR space, including replaced data, is sent from a server1or a distributor terminal3to the viewer terminals5and the viewer terminals5receive and render the data. However, in both real-time distribution and time-shifted distribution, the server1or the distributor terminal3may render the VR space. For example, in the first embodiment where the server1replaces motions, information such as the position and direction of the virtual camera may be sent from the distributor terminal3or a viewer terminal5to the server1, the VR space may be rendered by the server1to generate images based on information from the virtual camera, and the generated image may be sent to a viewer terminal5. In both of these configurations, a viewer terminal5where cheering has been detected can view the rendered images after the motion has been replaced with a reaction motion. In the third embodiment, the distributor terminal3may render the VR space and send images as well. REFERENCE SIGNS LIST 1: Server11: Distribution unit12: Reaction unit13: Cheering detection unit14: Reaction data storage unit3: Distributor terminal31: VR function unit32: Reaction indicating unit33: Reaction unit34: Cheering detection unit35: Reaction data storage unit5: Viewer terminal51: VR function unit52: Cheering indicating unit53: Reaction unit54: Reaction data storage unit100: HMD101: Controller | 30,743 |
11861059 | DETAILED DESCRIPTION The present system may be configured to generate and/or modify three-dimensional scenes comprising animated characters based on individual asynchronous motion capture recordings. The present system may enable one or more users to record and/or create virtual reality content by asynchronously recording individual characters via motion capture. The individual recordings may be combined into a compiled virtual reality scene having animated characters that are animated based on the motion capture information. The one or more users may individually record the motion, sound, and/or actions to be manifested by individual ones of the characters by initiating recording for a given character and performing the motion, sound, and/or other actions. The motion, sound, and/or actions to be manifested by individual ones of the characters within the compiled virtual reality scene may be characterized by motion capture information recorded by one or more sensors and/or other components of the computing device and/or the system. The motion capture information may be recorded individually and/or asynchronously such that an aggregation of the individual characters recorded (e.g., the compiled virtual reality scene) reflects the multiple characters acting, performing, and/or interacting contemporaneously within the same virtual reality scene (i.e., the compiled virtual reality scene). FIG.1illustrates a system10configured to generate and/or modify three-dimensional scenes comprising animated characters based on individual asynchronous motion capture recordings, in accordance with one or more implementations. System10may include one or more sensors18configured to capture the motion, the sound, and/or the other actions made by the one or more users. Capturing the motion of the one or more users may include capturing the physical movement and/or muscle articulation of at least a portion of the user's body (e.g., arms, legs, torso, head, knees, elbows, hands, feet, eyes, mouth, etc.). Capturing the motion of one or more users may include capturing the body position, movement and muscle articulation for large scale body poses and motions, and/or movement and muscle articulation for small scale things (e.g., eye direction, squinting, and/or other small scale movement and/or articulation). System10may include one or more displays that present content to one or more users. The content may include three-dimensional content, two-dimensional content, and/or other content. For example, the content may include one or more of virtual reality content, augmented reality content, mixed-media content, and/or other three-dimensional and/or two-dimensional content. Presentation of the virtual reality content via a display16may simulate presence of a user within a virtual space that is fixed relative to physical space. System10may include one or more physical processors20configured by machine-readable instructions15. System10may be configured to receive selection of a first character to virtually embody within the virtual space. Virtually embodying the first character may enable a first user to record the motion, the sound, and/or other actions to be made by the first character within the compiled virtual reality scene. System10may receive a first request to capture the motion, the sound, and/or other actions for the first character. 
System10may record first motion capture information characterizing the motion, the sound, and/or other actions made by the first user as the first user virtually embodies the first character. The first motion capture information may be captured in a manner such that the actions of the first user may be manifested by the first character within the compiled virtual reality scene. System10may be configured to receive selection of a second character to virtually embody. The second character may be separate and distinct from the first character. Virtually embodying the second character may enable the first user or another user to record one or more of the motion, the sound, and/or other actions to be made by the second character within the compiled virtual reality scene. System10may receive a second request to capture the motion, the sound, and/or other actions for the second character. The system may be configured to record second motion capture information that characterizes the motion, the sound, and/or other actions made by the first user or other user as the first user or the other user virtually embodies the second character. The second motion capture information may be captured in a manner such that the actions of the first user or the other user may be manifested by the second character contemporaneously with the actions of the first user manifested by the first character within the compiled virtual reality scene. System10may be configured to generate the compiled virtual reality scene including animation of the first character, the second character, and/or other characters such that the first character, the second character, and/or other characters appear animated within the compiled virtual reality scene contemporaneously. The compiled virtual reality scene may include one or more of a clip, show, movie, short film, and/or virtual reality experience recorded and/or generated based on motion capture of one or more users. By way of non-limiting example, motion capture may include tracking the motion, physical movements, and/or muscle articulations of one or more users. Motion capture may include one or more of body tracking, physical location tracking, facial tracking, eye tracking, hand tracking, foot tracking, elbow tracking, knee tracking, and/or any type of tracking that may enable recording and/or capture of users' motions, physical movements, muscle articulations, expressions, postures, reflexes, and/or other motions and/or movements. The compiled virtual reality scene may include animations of one or more characters, virtual objects, virtual scenery, virtual scenery objects, and/or other virtual items. The animations may be based on the motion capture of the one or more users while virtually embodying individual ones of the characters included in the compiled virtual reality scene. In some implementations, the animations may be based on user inputs received via one or more input methods (e.g., controlled based inputs, and/or other inputs). By way of non-limiting example, one or more users may individually record the characters (e.g., one at a time per user) that appear within the same compiled virtual reality scene contemporaneously. When recording the actions to be manifested by a character within the compiled virtual reality scene, system10may be configured to present an editing scene to the user. 
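Before turning to the editing scene, the record-then-compile flow just outlined, in which motion capture information is recorded per character, possibly at different times and by different users, and the recordings are then aggregated so the characters animate contemporaneously, could be modeled with data structures along the following lines. This is a schematic Python sketch; all of the names are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CaptureFrame:
    t: float                  # seconds from the start of the scene's shared timeline
    pose: Dict[str, tuple]    # joint/feature name -> value (head, hands, eyes, mouth, ...)
    audio: bytes = b""        # sound made by the user while embodying the character

@dataclass
class CharacterRecording:
    character_id: str         # e.g. "first_character", "second_character"
    recorded_by: str          # user who virtually embodied the character
    frames: List[CaptureFrame] = field(default_factory=list)

@dataclass
class CompiledScene:
    """Aggregation of individually recorded characters on one shared timeline."""
    recordings: Dict[str, CharacterRecording] = field(default_factory=dict)

    def add(self, rec: CharacterRecording) -> None:
        # Recordings may be added one at a time, possibly days apart and by different
        # users; compiling only requires that timestamps share the same scene clock.
        self.recordings[rec.character_id] = rec

    def poses_at(self, t: float) -> Dict[str, Dict[str, tuple]]:
        """All characters' poses at scene time t, i.e. what a viewer would see."""
        out = {}
        for cid, rec in self.recordings.items():
            earlier = [f for f in rec.frames if f.t <= t]
            if earlier:
                out[cid] = earlier[-1].pose
        return out

if __name__ == "__main__":
    scene = CompiledScene()
    scene.add(CharacterRecording("first_character", "user_1",
                                 [CaptureFrame(0.0, {"head": (0, 1.6, 0)}),
                                  CaptureFrame(1.0, {"head": (0, 1.7, 0)})]))
    scene.add(CharacterRecording("second_character", "user_2",
                                 [CaptureFrame(0.0, {"head": (1, 1.5, 0)})]))
    print(scene.poses_at(1.0))   # both characters appear animated contemporaneously
```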
The editing scene may include the manifestations of one or more users previous motions, sounds, and/or actions by one or more characters such that user may be able to interact with the previously recorded character(s) while recording the motions, sounds, and/or actions to be manifested by another character. In some implementations, the editing scene may include a recording input option to initiate recording of user's actions to be manifested by the character the user selected. As such, the user recording their actions for a given character may be able to interact and/or react with previously recorded characters contemporaneously as will be reflected in the compiled virtual reality scene. As used herein, “virtual reality” may refer to what is traditionally considered virtual reality as well as augmented reality and/or other similar concepts. In some implementations, “virtual reality” may refer to a form of virtual reality/augmented reality hybrid and/or include an aspect and/or ability to view content in an augmented reality way. For example, creators may generate traditional virtual reality content but use augmented reality cameras to keep the user's peripheral vision open so they can keep an eye on the physical world around them. In some implementations, system10may comprise one or more of a user interface14(which may include a display16and/or other components as described herein), sensors18, a processor20, electronic storage30, and/or other components. In some implementations, one or more components of system10may be included in a single computing device12. In some implementations, computing device12may be associated with the user. For example, computing device12may be owned by the user, carried by the user, operated by the user, and/or associated with the user in other ways. Computing device12may include communication lines, or ports to enable the exchange of information with a network, and/or other computing platforms. Computing device may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to computing device12. Computing device12may include, for example, a cellular telephone, a smartphone, a laptop, a tablet computer, a desktop computer, a television set-top box, smart TV, a gaming console, a virtual reality headset, and/or other devices. In some implementations, individual components of system10(e.g., display16, sensors18) may be coupled to (e.g., wired to, configured to wirelessly communicate with) computing device12without being included in computing device12. In some implementations, server40may be configured to communicate with computing device12via a client computing device. In some implementations, computing device12may include one or more components (e.g., hardware and/or software) configured to facilitate recording of the user motions, sounds, and/or other actions for use by system10. The user motions may include physical movement and/or muscle articulation of at least a portion of the user's body (e.g., arms, legs, torso, head, knees, elbows, hands, feet, eyes, mouth, etc.). Recording user motions may account for body position, movement and muscle articulation for large scale body poses, and/or movement and muscle articulation for small scale things (e.g., eye direction, squinting, and/or other small scale movement and/or articulation). 
This may include, for example, recording user movements, muscle articulations, positions, gestures, actions, noises, dialogue, and/or other motions, sounds, and/or actions. The one or more components configured to facilitate recording of user motions, sounds, and/or other actions may include, for example, sensors18. In some implementations, the one or more components configured to facilitate recording of user motions, sounds, and/or other actions may include, for example, one or more user input controllers (e.g., special controllers for puppeteering, etc.). System10may include one or more processor(s)20configured by machine-readable instructions21to execute computer program components. The processor may be configured to execute computer program components22-28. The computer program components may be configured to enable an expert and/or user to interface with the system and/or provide other functionality attributed herein to the computing devices, the sensors, the electronic storage, and/or the processor. The computer program components may include one or more of: a display component22, a selection component24, a motion capture component26, a scene generation component28, and/or other components. Display component22may be configured to cause the display(s)16to present the virtual reality content to one or more users. Presenting virtual reality content to the one or more users may simulate the user(s)' presence within a virtual space. The virtual reality content may include one or more of an editing scene, the compiled virtual reality scene, and/or other virtual reality content. The display component may be configured to provide and/or transmit the virtual reality content for presentation over the network to one or more computing devices for viewing, recording, editing, and/or otherwise creating and/or sharing the compiled virtual reality scene. In some implementations, the virtual reality content may include an editing scene. The editing scene may be an editing version of the compiled scene that is presented to the one or more users while recording motion capture information for one or more characters. In some implementations, a user may be able to change the timing, physical placement, scale, and/or other attributes of the motion capture information and/or the compiled virtual reality scene via the editing scene. The display component may be configured to generate, provide, and/or transmit information for providing the virtual reality space and/or virtual reality content to users via the one or more computing device(s). The presented virtual reality content may correspond to one or more of a view direction of the user, a physical position of the user, a virtual position of the user within the virtual space, and/or other information. In some implementations, the display may be included in a virtual reality headset worn by the user. It should be noted that the description of the display provided herein is not intended to be limiting. Rather, the description of the display is intended to include future evolutions of virtual reality display technology (which may not even be display based, for example). For example, the display may include cameras and/or systems for augmented reality, and/or other augmented reality components, light field imaging devices that project an image onto the back of a user's retina (e.g., near-eye light field displays, etc.)
virtual reality technology that utilizes contact lenses, virtual reality technology that communicates directly with the brain, and/or other display technology. Views of the virtual space may correspond to a location in the virtual space (e.g., a location in a scene). The location may have a topography, express contemporaneous interaction between one or more characters and/or a user, and/or include one or more objects positioned within the topography that are capable of locomotion within the topography. In some implementations, the topography may be a 3-dimensional topography. The topography may include dimensions of the space, and/or surface features of a surface or objects that are “native” to the space. In some instances, the topography may describe a surface (e.g., a ground surface) that runs through at least a substantial portion of the space. In some instances, the topography may describe a volume with one or more bodies positioned therein (e.g., a simulation of gravity-deprived space with one or more celestial bodies positioned therein). The views of the virtual space may be presented to the user such that a user may move through the virtual space and interact with the virtual space as the user would move through and interact with a corresponding physical space. For example, a user may walk and/or run through the virtual space, sit down, stand up, stop and observe an object in the virtual space, look up/down/left/right/etc., lean to look around an object in the virtual space, and/or other movements and/or interactions. The above description of the views of the virtual space is not intended to be limiting. The virtual space may be expressed in a more limited, or richer, manner. For example, in some implementations, views determined for the virtual space may be selected from a limited set of graphics depicting an event in a given place within the virtual space. In some implementations, views determined for the virtual space may include additional content (e.g., text, audio, pre-stored video content, and/or other content) that describe, augment, and/or overlay particulars of the current, previous, and/or future state of the place. System10may include user interface14. User interface14may include display16, one or more input controls, (not illustrated) and/or other components. Display16may be configured to present the virtual space and/or the virtual reality content to the user. Presentation of the virtual reality content via display16may simulate the presence of a user within a virtual space. The virtual space may be fixed relative to physical space. System10may include multiple displays16, and/or be configured to communicate with one or more servers, computer devices, and/or displays associated with other users. The one or more display(s) may be configured to present options for recording the motion and/or the sound for one or more of the characters within the virtual space. The options may include one or more of: a start/stop option for recording motion capture information; character selection options from which a user may select one or more characters to include in the compiled virtual reality scene; scene selection options from which a user may select one or more virtual scenery themes, virtual scenery objects, and/or virtual items; and/or other options. The display may be controlled by processor20to present, select, record, and/or otherwise generate the virtual reality content. 
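The options described for the display (a start/stop option, character selection options, and scene selection options) imply a small amount of recording-session state on the capture side. The sketch below illustrates one plausible shape for that state; the class name, the error handling, and the fake sensor sample are assumptions, not details taken from this description.

```python
import time
from typing import Dict, List, Optional

class RecordingSession:
    """Tracks which character is being recorded and collects frames while recording."""

    def __init__(self) -> None:
        self.scenery_theme: Optional[str] = None
        self.embodied_character: Optional[str] = None
        self.recording: bool = False
        self.frames: List[Dict] = []

    # --- options corresponding to the display's selection controls ---------------
    def select_scenery(self, theme: str) -> None:
        self.scenery_theme = theme

    def select_character(self, character_id: str) -> None:
        self.embodied_character = character_id

    # --- start/stop option --------------------------------------------------------
    def start(self) -> None:
        if self.embodied_character is None:
            raise RuntimeError("select a character to virtually embody before recording")
        self.recording = True
        self.frames = []

    def stop(self) -> List[Dict]:
        self.recording = False
        return self.frames

    def on_sensor_sample(self, sample: Dict) -> None:
        # Called for every sensor output sample; samples are kept only while recording.
        if self.recording:
            self.frames.append({"t": time.monotonic(), **sample})

if __name__ == "__main__":
    session = RecordingSession()
    session.select_scenery("desert")
    session.select_character("alien_avatar")
    session.start()
    session.on_sensor_sample({"head": (0.0, 1.6, 0.0), "left_hand": (-0.3, 1.2, 0.2)})
    captured = session.stop()
    print(session.scenery_theme, session.embodied_character, len(captured))
```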
FIG.2depicts computing device12illustrated as a virtual reality headset that is worn on the head of a user200. The virtual reality content may be presented to the user in a virtual space via a display included in the headset. The virtual reality headset may be configured such that a perception of a three dimensional space is created by two stereoscopic movies, one generated for each eye, that are each being rendered in real time and then displayed. The convergence of these two movies in real time—one image to each eye (along with how those views are reactive to viewer head rotation and position in space)—may create a specific kind of immersive 3D effect and/or a sensation of presence in a virtual world. Presenting the virtual reality content to the user in the virtual space may include presenting one or more views of the virtual space to the user. Users may participate in the virtual space by interacting with content presented to the user in the virtual space. The content presented to the user may include a virtual space having one or more virtual events, characters, objects, and/or settings; an editing scene; a compiled virtual reality scene; and/or other content. In some implementations, the virtual reality content may be similarly presented to the user via one or more screens, projection devices, three dimensional image generation devices, light field imaging devices that project an image onto the back of a user's retina, virtual reality technology that utilizes contact lenses, virtual reality technology that communicates directly with (e.g., transmits signals to and/or receives signals from) the brain, and/or other devices configured to display the virtual reality content to the user. FIG.3illustrates a server40configured to communicate with computing device via a network, in accordance with one or more implementations. In some implementations, server40may be configured to provide the virtual space by hosting the virtual reality content over a network, such as the Internet. Server40may include electronic storage, one or more processors, communication components, and/or other components. Server40may include communication lines, or ports to enable the exchange of information with a network and/or other computing devices. Server40may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to server40. For example, server40may be implemented by a cloud of computing devices operating together as server40. Server40may be configured to execute computer readable instructions to perform one or more functionalities attributed to the computing device(s)12, and/or one or more other functions. By way of non-limiting example, server40may include one or more processors configured by machine-readable instructions to host, generate, transmit, provide, and/or facilitate presentation of virtual reality content to the computing device(s)12; provide an editing scene to the computing device(s)12; receive motion capture information from one or more of the computing device(s)12; generate the compiled virtual reality scene (e.g., based on the motion capture information received from the one or more computing device(s)); and/or otherwise facilitate animation of characters based on motion capture and/or generation of a compiled virtual reality scene based on individual asynchronous motion capture recordings. 
For example, server40may be configured such that one or more users record the motion, the sound, and/or the actions for one or more characters for a compiled virtual reality scene individually. By way of non-limiting example, the motion capture for individual ones of the multiple characters that are to appear animated in a compiled virtual reality scene may be recorded asynchronously by different users, via different computing devices, and/or at different physical locations. External resources300may include sources of information that are outside of system10, external entities participating with system10, and/or other resources. In some implementations, some or all of the functionality attributed herein to external resources300may be provided by resources included in system10. Returning toFIG.1, sensors18may be configured to generate output signals conveying information related to motion, sound, and/or other actions made by one or more users in physical space. Sensors18may, for example, record user movements, muscle articulations, positions, gestures, actions, noises, dialogue, and/or other motions, sounds, and/or actions. Sensors18may be configured to capture the motion and/or the sound made by the one or more users. The motion(s) captured via sensors18may include physical movement and/or muscle articulation of at least a portion of the user's body (e.g., arms, legs, torso, head, knees, elbows, hands, feet, eyes, mouth, etc.). Recording user motions may account for body position, movement and muscle articulation for large scale body poses, and/or movement and muscle articulation for small scale things (e.g., eye direction, squinting, and/or other small scale movement and/or articulation). In some implementations, the sensors may include one or more cameras and/or other optical sensors, inertial sensors, mechanical motion sensors, magnetic sensors, depth sensors, microphones, gyrosensor, accelerometer, laser position sensor, pressure sensors, volumetric sensors, voxel recordings/sensors, and/or other sensors. FIG.4illustrates an editing scene400presented to a user including options402for virtual scenery themes selectable by the user. Editing scene400may be presented by a display the same as or similar to display16. The user may interact with the editing scene and/or select one or more options within the editing scene via one or more input controls404. Editing scene400may represent the virtual space in which the user's presence is simulated. The editing scene400may be virtual reality content presented to the user via a display (e.g., a head mounted display and/or other display). Editing scene400may present options402for virtual scenery themes. The user may be able to select a virtual scenery theme406to apply to the virtual space for the compiled virtual reality scene. Responsive to the user selecting the virtual scenery theme406to apply to the virtual space, the virtual scenery theme406may be applied to the virtual space. By way of non-limiting example, virtual scenery theme406may include a desert such that applying virtual scenery theme406includes presenting a desert virtual reality content such that the user's presence within the desert is simulated. Returning toFIG.1, selection component24may be configured to receive selection of one or more characters to virtually embody within the virtual space. Selection of the one or more characters to virtually embody may be input by the one or more users via user interface14. 
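Returning to the sensor output signals listed earlier in this passage, each capture frame is plausibly fused from several of those signals (HMD pose, controller positions, eye direction, microphone level, and so on). The following sketch shows one assumed way such a frame could be assembled; the field names and the single fusion step are illustrative only and are not taken from this description.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class FusedCaptureFrame:
    head_position: Vec3
    head_rotation: Vec3            # e.g. yaw/pitch/roll from an inertial sensor or gyrosensor
    left_hand: Optional[Vec3]
    right_hand: Optional[Vec3]
    eye_direction: Optional[Vec3]  # small-scale articulation such as gaze direction
    audio_level_db: float          # from the microphone, for captured speech or other sound

def fuse_sensor_outputs(signals: Dict[str, object]) -> FusedCaptureFrame:
    """Combine one tick of raw sensor output signals into a single capture frame.

    `signals` maps a sensor name to its latest reading; missing sensors simply
    leave the corresponding field empty, since not every device has every sensor.
    """
    return FusedCaptureFrame(
        head_position=signals.get("hmd_position", (0.0, 0.0, 0.0)),
        head_rotation=signals.get("hmd_rotation", (0.0, 0.0, 0.0)),
        left_hand=signals.get("left_controller"),
        right_hand=signals.get("right_controller"),
        eye_direction=signals.get("eye_tracker"),
        audio_level_db=float(signals.get("microphone_db", 0.0)),
    )

if __name__ == "__main__":
    frame = fuse_sensor_outputs({
        "hmd_position": (0.0, 1.6, 0.0),
        "hmd_rotation": (10.0, 0.0, 0.0),
        "left_controller": (-0.3, 1.2, 0.2),
        "microphone_db": 62.0,
    })
    print(frame)
```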
Virtually embodying a character within the virtual space may include experiencing the virtual space as the character. Experiencing the virtual space as the character may include, for example, viewing the virtual space from the perspective of the character, moving within the virtual space as the character, interacting with the virtual space and/or other characters within the virtual space as the character, controlling the character's actions within the virtual space by performing the actions in real world physical space, and/or otherwise embodying the character within the virtual space. Virtually embodying a character may enable a user to record the motion and/or the sound made by the character within the compiled virtual reality scene. In some implementations, selection of a first character to virtually embody within the virtual space may be received by selection component24. Responsive to receiving selection of the first character by the selection component24, the system may enable the first user to record the motion and/or the sound to be made by the first character within the compiled virtual reality scene. Selection of the first character may be received by selection component24from one or more input controls associated with user interface14. Selection component24may be configured to receive selection of a second character for a user to virtually embody. The second character may be separate and distinct from the first character. Virtually embodying the second character may enable a first user or another user to record one or more of the motion, the sound, and/or other actions to be made by the second character within the compiled virtual reality scene. In some implementations, selection of the second character may be received by selection component24after first motion capture information characterizing the motion, the sound, and/or the other actions made by the first user as the first user virtually embodies the first character is recorded. In some implementations, selection of the second character is received from a second display for a second computing device associated with a second user. The second motion capture information may characterize the motion, sound, and/or other actions made by the second user. The second user may be a different user than the first user. A first computing device associated with a first user may be separate and distinct from a second display for a second computing device associated with a second user. In some implementations, selection component24may be configured to receive selection of a third character for a user to virtually embody. The third character may be separate and/or distinct from the first character, the second character, and/or other characters within the editing scene and/or the compiled virtual reality scene. Virtually embodying the third character may enable the first user or a different user to record one or more of the motion, the sound, and/or other actions to be made by the third character within the compiled virtual reality scene. By way of non-limiting example, the actions manifested within the compiled virtual reality scene by one, two, or all three of the first character, the second character, and/or third character may correspond to the actions of one user, two different users, three different users, and/or any number of different users. 
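Because the second or third character can be selected and recorded later, possibly by a different user on a different computing device, each new take is effectively captured against the playback of the takes that already exist in the editing scene. A compact sketch of that loop is shown below, with invented names and a placeholder capture function standing in for the interactive recording step.

```python
from typing import Callable, Dict, List

Frame = Dict[str, object]          # one time-stamped capture frame (pose, audio, ...)
Recording = List[Frame]

def record_character(character_id: str,
                     prior_recordings: Dict[str, Recording],
                     capture_take: Callable[[Dict[str, Recording]], Recording]) -> Recording:
    """Record one character while the previously recorded characters play back.

    `capture_take` stands in for the interactive part: it renders the prior
    recordings in the editing scene, lets the user virtually embody
    `character_id`, and returns the motion capture information for this take.
    """
    print(f"recording {character_id} against {len(prior_recordings)} earlier character(s)")
    return capture_take(prior_recordings)

if __name__ == "__main__":
    scene: Dict[str, Recording] = {}

    # First take: user 1 embodies the first character; nothing to play back yet.
    scene["first_character"] = record_character(
        "first_character", dict(scene),
        lambda prior: [{"t": 0.0, "head": (0.0, 1.6, 0.0)}])

    # Second take: possibly a different user on a different device embodies the
    # second character and can react to the first character's playback.
    scene["second_character"] = record_character(
        "second_character", dict(scene),
        lambda prior: [{"t": 0.0, "head": (1.0, 1.5, 0.0), "reacting_to": list(prior)}])

    print(sorted(scene))   # both characters now animate contemporaneously when compiled
```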
In some implementations, the first computing device and the second computing device may be physically located in different locations such that information indicating portions of the compiled virtual reality scene must be transmitted over a network to one or more server(s)40, one or more computing device(s)12, and/or external resources300. In some implementations, selection component24may be configured to individually receive selection of any number of characters for a user and/or multiple users to virtually embody individually and/or together. Motion capture information may be recorded for any number of characters such that the compiled virtual reality scene may include any number of animated characters (e.g., two or more, three or more, four or more, five or more, “n” or more, etc.) manifesting the motions, sounds, and/or other actions of the user and/or multiple users. One or more users may create the virtual space depicted in the compiled virtual reality scene by selecting, placing, and/or modifying virtual reality content items within the virtual space. The virtual reality content items may include one or more characters, virtual objects, virtual scenery themes, virtual scenery items, and/or other virtual reality content items. The characters may include virtual characters within a virtual space, virtual objects, virtual creatures, and/or other characters. For example, the characters may include avatars and/or any other character(s). A compiled virtual reality scene may include multiple animated characters (e.g., two or more, three or more, four or more, five or more, “n” or more, etc.). The characters may be animated, partially animated, reflections of the users, live action, and/or other types of characters. The characters may be animated based on motion capture information corresponding to the motions, sounds, and/or actions made by one or more users. The one or more users may place and/or select avatars in the virtual space and initiate recording of their motions and/or sounds for individual avatars that are to be manifested by the individual avatars in the compiled scene. The individual motion capture recordings of the one or more users may correspond to individual avatars. The individual motion capture recordings may be recorded asynchronously (e.g., one-at-a-time, etc.). Some of the individual motion capture recordings may not be captured asynchronously. The motion capture recordings for some of the characters to be animated within a compiled virtual reality scene may be captured together, and/or other motion capture recordings for other characters to be animated contemporaneously within the same compiled virtual reality scene may be captured asynchronously. By way of non-limiting example, two or more of the individual motion capture recordings for one or more characters to be animated within a compiled virtual reality scene may be captured together and/or at the same time. Another individual motion capture recording for another character may be captured at a different time and/or separately. In some implementations, selection component24may be configured to receive selection of one or more characters, virtual objects, virtual scenery themes, virtual scenery items, and/or characters for placement within the editing scene. The arrangement of one or more virtual objects, virtual scenery themes, virtual scenery items, and/or characters within the editing scene may be reflected in one or more segments of the compiled virtual reality scene.
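The distinction drawn above between motion capture recordings that are captured together and recordings that are captured asynchronously could, as one hypothetical reading, be modeled by tagging each per-character recording with a capture-session identifier: recordings sharing a session were captured at the same time, while recordings from different sessions were captured separately. The sketch below is illustrative only; the names Take, session_id, and group_by_session are invented for this example.

    from collections import defaultdict
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Take:
        """One motion capture recording for one character."""
        character_id: str
        session_id: str  # takes sharing a session were captured together
        user_id: str
        device_id: str

    def group_by_session(takes: List[Take]):
        """Group takes so captured-together recordings can be told apart from asynchronous ones."""
        sessions = defaultdict(list)
        for take in takes:
            sessions[take.session_id].append(take.character_id)
        return dict(sessions)

    takes = [
        Take("alien", "session-1", "user-A", "headset-1"),
        Take("robot", "session-1", "user-B", "headset-2"),   # captured together with "alien"
        Take("wizard", "session-2", "user-A", "headset-1"),  # captured asynchronously, at another time
    ]
    print(group_by_session(takes))  # {'session-1': ['alien', 'robot'], 'session-2': ['wizard']}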
In some implementations, responsive to one or more characters being placed within the editing scene, the selection component24may be configured to receive selection of an individual character the user chooses to virtually embody. FIG.5illustrates an editing scene500presented to a user including options for characters502a user may place within editing scene500, in accordance with one or more implementations. Editing scene500may be presented by a display the same as or similar to display16. The user may interact with the editing scene and/or select one or more options within the editing scene via one or more input controls504. Editing scene500may represent the virtual space in which the user's presence is simulated. Editing scene500may be virtual reality content presented to the user via a display (e.g., a head mounted display and/or other display). Editing scene500may present options for characters502that may be placed within the editing scene. The user may be able to select one or more characters to place within the editing scene and/or to be reflected in the compiled virtual reality scene. Responsive to the user selecting avatar506, the user may be able to place avatar506within the editing scene. By way of non-limiting example, avatar506may include an alien. FIG.6illustrates an editing scene600presented to user601responsive to the user selecting virtual object602, in accordance with one or more implementations. Editing scene600may be presented by display603(e.g., the same as or similar to display16). User601may interact with editing scene600and/or control the placement of virtual object602via one or more input controls604. Editing scene600may represent the virtual space in which the user's601presence is simulated. Editing scene600may be virtual reality content presented to user601via display603(e.g., a head mounted display and/or other display). Editing scene600may enable user601to move object602to a user-dictated location within editing scene600. By way of non-limiting example, object602may include a prop to be presented within the compiled virtual reality scene. FIG.7Aillustrates editing scene700in which user701selects character702to virtually embody, in accordance with one or more implementations.FIG.7Billustrates editing scene700wherein user701is virtually embodying character702, in accordance with one or more implementations. Editing scene700may be presented via display703the same as or similar to display16. User701may select character702to virtually embody via one or more input controls704. Editing scene700may represent the virtual space in which the user's presence is simulated. Editing scene700may include virtual reality content presented to the user via display703(e.g., a head mounted display and/or other display). Editing scene700may include other character(s)706in addition to character702from which user701may select to virtually embody. As illustrated inFIG.7B, responsive to user701selecting character702to virtually embody, user701may be able to experience the virtual space as character702. Editing scene700may include a recording option (e.g., “REC” button, etc.)707that may be selected by user701to initiate recording of motion capture information reflecting the motion, sound, and/or other actions made by user701to be manifested by character702within a compiled virtual reality scene. Returning toFIG.1, selection component24may be configured to receive selection of a facial expression for one or more of the characters.
As such, the motion capture information may include one or more facial expressions for the characters. The facial expressions may be timed and/or selected to correspond with the motions, the sound, and/or other actions performed by the user. In some implementations, selection component24may be configured to receive selection of a first facial expression for the first character. The first motion capture information may include the first facial expression for the first character. As such, one or more of the actions of the first user may be manifested in the compiled virtual reality scene by the first character with the first facial expression. In some implementations, other expressions, motions, and/or actions for one or more other portions of the character may be selected by one or more users and/or received by selection component24. The other expressions, motions, and/or actions for other portions of one or more of the characters may be manifested in the compiled virtual reality scene. The other expressions, motions, and/or other actions may be selected for one or more of the body, hands, feet, eyes, face, mouth, toes, and/or other portions of one or more of the characters. In some implementations, the motion capture information may include motion capture information recorded by a user as the user virtually embodies one or more of the characters, and/or motion capture information selected via one or more user controls and/or the editing scene (e.g., separate from the users' virtual embodiment of the character). The motion capture information selected via the editing scene may include selected expressions, motions, and/or actions for one or more portions of individual characters received by selection component24. By way of non-limiting example, the user may be able to select one or more of other expressions, motions, and/or actions for one or more portions of one or more of the characters by using their thumbs and/or fingers to control user input controls to make selections. FIG.8illustrates editing scene800wherein a user is presented with facial expression options802for character804, in accordance with one or more implementations. Responsive to the user selecting facial expression801, facial expression801may be reflected by character804while character804manifests the actions of the user within the compiled virtual reality scene. In some implementations, a virtual reflective surface806may be presented within the virtual space such that the user is able to preview and/or view the facial expression selected. By way of non-limiting example, the user may select facial expression801to be reflected by character804for the portion of the scene prior to requesting to capture the motion and/or the sound for character804based on the motion and/or sound made by the user while virtually embodying character804. User input controls (not pictured) may be used to make one or more selections within editing scene800. By way of non-limiting example, the user may be able to select one or more of the facial expression options802by using their thumbs and/or fingers to control the user input controls and select one or more of the facial expression options802. Motion capture component26may be configured to receive requests to capture the motion, the sound, and/or other actions for one or more characters. The requests may be to capture the motion, the sound, and/or other actions to be manifested within the compiled virtual reality scene by the one or more characters.
A request to capture the motion, sound, and/or other action for a character may initiate recording of the motion, sound, and/or other actions of a user to be manifested by the character within the compiled virtual reality scene. Motion capture component26may be configured to record motion capture information characterizing the motion, the sound, and/or other actions made by a user as the user virtually embodies a character. Motion capture component26may record the motion capture information based on the output signals generated by the sensors. In some implementations, motion capture component26may receive a first request to capture the motion, the sound, and/or other actions for the first character; a second capture request to capture the motion, the sound, and/or other actions for the second character, a third request to capture motion, the sound, and/or other actions for the third character; and/or another request to capture the motion, sound, and/or other actions for other characters. Motion capture component26may be configured to record first motion capture information characterizing the motion, the sound, and/or other actions made by the first user as the first user virtually embodies the first character. The first motion capture information may be captured in a manner such that the actions of the first user may be manifested by the first character within the compiled virtual reality scene. Motion capture component26may be configured to receive a second request to capture the motion, the sound, and/or other actions for the second character. By way of non-limiting example, the second request may be received after the first motion capture information is recorded. In some implementations, the second request to capture the motion, the sound, and/or other action for the second character may be responsive to receiving selection of the second character for the user to virtually embody. Motion capture component26may be configured to record second motion capture information. The second motion capture information may characterize the motion, the sound, and/or the other actions made by the first user or another user as the first user or the other user virtually embodies the second character. The second motion capture information may be captured in a manner such that the actions of the first user or the other user are manifested by the second character within the compiled virtual reality scene. The actions of the first user or the other user may be manifested by the second character within the compiled virtual reality scene contemporaneously with the actions of the first user manifested by the first character within the compiled virtual reality scene. By way of non-limiting example, the first character, the second character, and/or other characters may appear to interact with each other and/or react to occurrences (e.g., actions by one or more characters, and/or other occurrences within the virtual space) performed within the same compiled virtual reality scene. In some implementations, the motion capture component26may be configured to capture sound and/or motion for a given character separately. For example, sound and/or voice information may be added to animated characters after their motion has been recorded. By way of non-limiting example, a first user may record the motion for the first character and/or a second user may (e.g., asynchronously) record the sound for the first character. 
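One possible way to picture the request/record flow attributed to motion capture component26is sketched below in Python. It is not the disclosed implementation: the class names, the sensor read( ) interface, and the frame format are assumptions made for illustration.

    import itertools

    class MotionCaptureComponent:
        """Illustrative sketch: records motion capture information for a character
        in response to a capture request, based on sensor output signals."""

        def __init__(self, sensor):
            self.sensor = sensor      # object exposing read() -> (pose, audio) or None
            self.recordings = {}      # character_id -> list of (timestamp, pose, audio)

        def capture(self, character_id, clock):
            """Record sensor output for the given character until the sensor stream ends."""
            frames = []
            while True:
                reading = self.sensor.read()
                if reading is None:   # user stopped recording
                    break
                pose, audio = reading
                frames.append((clock(), pose, audio))
            self.recordings[character_id] = frames
            return frames

    class _FakeSensor:
        """Stand-in sensor that yields two frames and then stops."""
        def __init__(self):
            self._frames = [({"head": (0, 1.7, 0)}, [0.0]), ({"head": (0, 1.71, 0)}, [0.1])]
        def read(self):
            return self._frames.pop(0) if self._frames else None

    component = MotionCaptureComponent(_FakeSensor())
    clock = itertools.count().__next__   # frame counter standing in for a real clock
    first_take = component.capture("first_character", clock)   # the "first request"

A second request for another character would simply trigger another call to capture( ) at a later time, which is one way of reading the asynchronous, per-character recording described above.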
In some implementations, responsive to motion capture component26receiving the second request to capture the motion, the sound, and/or other actions for the second character, the editing scene may be presented to the first user or the other user. The editing scene may include a manifestation of the first user's actions by the first character presented contemporaneously to the first user or the other user while the second motion capture information is being recorded. One or more display(s)16and/or computing device(s)12may be configured to present the editing scene including one or more manifestations of one or more users' actions by one or more characters based on previously recorded motion capture information while recording subsequent motion capture information characterizing the motion and/or the sound made by the users as the users virtually embody one or more characters. FIG.9illustrates an editing scene900depicting a manifestation of a user's previous actions by first character902, in accordance with one or more implementations. The manifestation of a user's previous actions by first character902, based on previously recorded motion capture information, may be presented to user904while user904is virtually embodying a second character906(not illustrated) and/or recording second motion capture information that characterizes the motion and the sound made by user904to be manifested by second character906in the compiled virtual reality scene contemporaneously with first character902. User904may be able to see, react to, and/or otherwise interact with previously recorded first character902while recording second character906that user904is virtually embodying. User904may be able to view previously recorded first character902via a display908. User input controls910may be used to make one or more selections within editing scene900. Returning toFIG.1, the same user that virtually embodied the first character and/or initiated recording of the first motion capture information for the first character based on their actions, may be the user that virtually embodies the second character and/or initiates recording of the second motion capture information for the second character based on their actions. In some implementations, the user that virtually embodied the first character and/or initiated recording of the first motion capture information for the first character based on their actions, may be a different user than the other user that virtually embodies the second character and/or initiates recording of motion capture information for the second character based on their actions. In some implementations, a third request to capture the motion, the sound, and/or other actions for the third character may be received by motion capture component26. Motion capture component26may be configured to record third motion capture information. The third motion capture information may characterize the motion, the sound, and/or other actions made by the first user or the different user as the first user or the different user virtually embodies the third character. The third motion capture information may be captured in a manner such that the actions of the first user or the different user are manifested by the third character contemporaneously with the actions of the first user manifested by the first character, the actions of the first user or the other user manifested by the second character, and/or the actions of another user manifested by another character within the compiled virtual reality scene.
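The presentation of a previously recorded character while a subsequent character is being recorded, described above with reference toFIG.9, might be sketched as a loop that, at each frame time, looks up the earlier character's recorded pose and appends a new frame for the character currently embodied. Everything in the sketch (function names, the frame format, the fixed frame rate) is a hypothetical illustration rather than the disclosed implementation.

    def pose_at(recording, t):
        """Most recent recorded pose at or before scene time t (None before the first frame)."""
        best = None
        for frame_time, pose in recording:
            if frame_time <= t:
                best = pose
        return best

    def record_with_playback(previous_recordings, live_poses, frame_dt=1.0 / 30.0):
        """Capture a new character while replaying previously recorded characters.

        previous_recordings: {character_id: [(scene_time, pose), ...]}
        live_poses: iterable of poses for the character currently being embodied.
        """
        new_recording = []
        for i, live_pose in enumerate(live_poses):
            t = i * frame_dt
            # Poses shown to the performer so they can react to the earlier take.
            ghosts = {cid: pose_at(rec, t) for cid, rec in previous_recordings.items()}
            new_recording.append((t, live_pose))
            # A real system would render `ghosts` and `live_pose` in the editing scene here.
        return new_recording

    previous = {"first_character": [(0.0, "wave"), (0.5, "point")]}
    second_take = record_with_playback(previous, ["nod", "nod", "laugh"])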
In some implementations, first motion capture information may characterize the motion or the sound made by the first user, the second motion capture information may characterize the motion and/or the sound made by the second user, and/or the third motion capture information may characterize the motion and/or the sound made by the first user. In some implementations, the first motion capture information may characterize the motion or the sound made by the first user, the second motion capture information may characterize the motion and/or the sound made by a second user, and/or the third motion capture information may characterize the motion and/or the sound made by a third user. The first user, the second user, and/or the third user may be different. In some implementations, the first user, the second user, and/or the third user may be associated with different computing devices located at different physical locations. In some implementations, the motion, sound, and/or other actions of three or more users may be captured and/or manifested by any number of characters within the compiled virtual reality scene. By way of non-limiting example, recording of the first motion capture information may take place in one part of the world and/or be uploaded for sharing (e.g., via the cloud/server, etc.). Continuing the non-limiting example, the second motion capture information may be recorded based on the motions, sounds, and/or actions of another user, who reacts to the first motion capture information. The second motion capture information may be recorded in response to the other user obtaining the recording of the first motion capture information via the cloud/server. The second motion capture information may be shared via the cloud/server, etc. Scene generation component28may be configured to generate the compiled virtual reality scene. The compiled virtual reality scene may include animation of the first character, the second character, the third character, and/or other characters. The compiled virtual reality scene may be generated such that the first character, the second character, the third character, and/or other characters appear animated within the compiled virtual reality scene contemporaneously. As such, the motion, sound, and/or other actions for the different characters within a given compiled virtual reality scene may be recorded asynchronously, but still appear animated within the compiled virtual reality scene contemporaneously (e.g., according to the same timeline, etc.). As such, asynchronously recorded characters may appear to interact with each other and/or react to each other, even though the characters' actions may be recorded independently, separately, one at a time, with different computing devices, by different users, and/or at different physical locations. In some implementations, a user may be able to select when (e.g., a point in time, a time period, etc.), within the compiled virtual reality scene, their actions will be manifested by a given character. As such, returning to selection component24, selection of a start time within a timeline of the compiled scene may be received. The start time within the timeline of the compiled scene may indicate when the first character should start manifesting the actions of the first user within the compiled virtual reality scene during playback of the compiled virtual reality scene.
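A compiled scene that plays asynchronously recorded takes on one shared timeline, with each take beginning at a user-selected start time, could look roughly like the sketch below. The CompiledScene class and its method names are invented for illustration and are not the disclosed scene generation component28.

    class CompiledScene:
        """Illustrative sketch: asynchronously recorded takes placed on one shared timeline."""

        def __init__(self):
            self.tracks = []   # list of (character_id, start_time, [(t, pose), ...])

        def add_take(self, character_id, recording, start_time=0.0):
            """Place a recording so it starts manifesting at start_time during playback."""
            self.tracks.append((character_id, start_time, recording))

        def poses_at(self, scene_time):
            """Pose of every character at a given scene time (None before its start time)."""
            poses = {}
            for character_id, start, recording in self.tracks:
                local_t = scene_time - start
                pose = None
                for frame_time, frame_pose in recording:
                    if frame_time <= local_t:
                        pose = frame_pose
                poses[character_id] = pose
            return poses

    scene = CompiledScene()
    scene.add_take("first_character", [(0.0, "wave"), (1.0, "point")], start_time=0.0)
    scene.add_take("second_character", [(0.0, "nod")], start_time=0.5)  # recorded later, enters later
    print(scene.poses_at(1.0))   # both characters appear animated contemporaneously

Because every take is indexed against the same scene timeline, characters recorded at different times, on different devices, or in different places can still appear to interact contemporaneously during playback.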
As such, the timing of the second character's reactions, interactions, motion, sound, and/or other actions may be dictated by one or more users (e.g., the users performing the motion capture, a director organizing multiple users performing the motion capture for different characters within the compiled virtual reality scene, and/or other users). In some implementations, a user may be able to change the timing, physical placement, scale, and/or other attributes of the motion capture information, the compiled virtual reality scene, and/or one or more characters and/or objects within the editing scene and/or the compiled virtual reality scene. Selection component24may be configured to receive selection of one or more changes to one or more portions of the motion capture information and/or the compiled virtual reality scene from one or more users. The compiled virtual reality scene and/or portions of the compiled virtual reality scene may be transmitted to one or more computing devices associated with one or more users for viewing, adding one or more characters, editing one or more characters, adding and/or editing the virtual objects, adding and/or editing a virtual scenery theme, and/or for other reasons. In some implementations, one or more users may be able to asynchronously create and/or define differential two-dimensional portions, three-dimensional and/or 360-degree portions, and/or media content items from the compiled virtual reality scene. The created and/or defined portions may be transmitted to one or more computing devices for viewing, sharing, editing, and/or for other reasons. In some implementations, an editor of the compiled virtual reality scene may control and/or dictate which users may add to, contribute to, edit, view, and/or otherwise interact with the compiled virtual reality scene. By way of non-limiting example, the editor may assign one or more users to perform the motion capture recordings for one or more characters to be animated within the compiled virtual reality scene. In some implementations, the first user may record and/or indicate companion instructions and/or directions for subsequent users regarding what they should do and/or record in the scene. The compiled virtual reality scene may be shared, transmitted, hosted online, and/or otherwise communicated to one or more computing device(s)12for viewing by one or more users via one or more display(s)16. The presence of one or more users viewing the compiled virtual reality scene may be simulated within the compiled virtual reality scene. As such, for example, the one or more users may be able to look around, move around, walk through, run through, and/or otherwise view and/or interact with the compiled virtual reality scene. FIG.10illustrates a compiled virtual reality scene1000, in accordance with one or more implementations. Compiled virtual reality scene1000may include animation of the first character902, second character906, and/or other characters. First character902, second character906, and/or other characters may appear animated within compiled virtual reality scene1000contemporaneously. In some implementations, the motion, sound, and/or other actions of first character902may be based on first motion capture information. The motion, sound, and/or other actions of second character906may be based on second motion capture information.
In some implementations, the first motion capture information and the second motion capture information may be recorded and/or captured asynchronously, be recorded by different computing devices located at one or more physical locations, characterize the motion and/or sound made by the same user, and/or characterize the motion and/or sound made by different users. Returning toFIG.1, sensors18may be configured to generate output signals conveying information related to motion, sounds, view direction, location, and/or other actions of the user and/or other information. The view direction of the user may correspond to a physical direction toward which a gaze of the user is directed, an orientation of one or more parts of the user's body (e.g., the user's head may be tilted, the user may be leaning over), a position of a user within the virtual space, and/or other directional information. The information related to motion, sounds, and/or other actions of the user may include any motion capture information that may be captured in accordance with existing and/or future methods. These examples are not intended to be limiting. In some implementations, sensors18may include one or more of a GPS sensor, a gyroscope, an accelerometer, an altimeter, a compass, a camera-based sensor, a magnetic sensor, an optical sensor, an infrared sensor, a motion tracking sensor, an inertial sensor, a CCB sensor, an eye tracking sensor, a facial tracking sensor, a body tracking sensor, and/or other sensors. User interface14may be configured to provide an interface between system10and the user through which the user may provide information to and receive information from system10. This enables data, cues, results, and/or instructions and any other communicable items, collectively referred to as “information,” to be communicated between the user and system10. By way of a non-limiting example, user interface14may be configured to display the virtual reality content to the user. Examples of interface devices suitable for inclusion in user interface14include one or more controllers, joysticks, a track pad, a touch screen, a keypad, touch-sensitive and/or physical buttons, switches, a keyboard, knobs, levers, a display (e.g., display16), speakers, a microphone, an indicator light, a printer, and/or other interface devices. In some implementations, user interface14includes a plurality of separate interfaces (e.g., multiple displays16). In some implementations, user interface14includes at least one interface that is provided integrally with processor20. In some implementations, user interface14may be included in computing device12(e.g., a desktop computer, a laptop computer, a tablet computer, a smartphone, a virtual reality headset, etc.) associated with an individual user. In some implementations, user interface14may be included in a first computing device (e.g., a virtual reality headset) that is located remotely from a second computing device (e.g., server40shown inFIG.3). It is to be understood that other communication techniques, either hard-wired or wireless, are also contemplated by the present disclosure as user interface14. For example, the present disclosure contemplates that user interface14may be integrated with a removable storage interface provided by electronic storage30. In this example, information may be loaded into system10from removable storage (e.g., a smart card, a flash drive, a removable disk) that enables the user to customize the implementation of system10.
Other exemplary input devices and techniques adapted for use with system10as user interface14include, but are not limited to, an RS-232 port, RF link, an IR link, modem (telephone, cable or other), a USB port, Thunderbolt, a Bluetooth connection, and/or other input devices and/or techniques. In short, any technique for communicating information with system10is contemplated by the present disclosure as user interface14. Display16may be configured to present the virtual reality content to the user. Display16may be configured to present the virtual reality content to the user such that the presented virtual reality content corresponds to a view direction of the user. Display16may be controlled by processor20to present the virtual reality content to the user such that the presented virtual reality content corresponds to a view direction, location, and/or physical position of the user. Display16may include one or more screens, projection devices, three dimensional image generation devices, light field imaging devices that project an image onto the back of a user's retina, virtual reality technology that utilizes contact lenses, virtual reality technology that communicates directly with (e.g., transmitting signals to and/or receiving signals from) the brain, and/or other devices configured to display the virtual reality content to the user. The one or more screens and/or other devices may be electronically and/or physically coupled, and/or may be separate from each other. As described above, in some implementations, display16may be included in a virtual reality headset worn by the user. In some implementations, display16may be a single screen and/or multiple screens included in a computing device12(e.g., a cellular telephone, a smartphone, a laptop, a tablet computer, a desktop computer, a television set-top box/television, smart TV, a gaming system, a virtual reality headset, and/or other devices). In some implementations, display16may include a plurality of screens physically arranged about a user such that when a user looks in different directions, the plurality of screens presents individual portions (e.g., that correspond to specific view directions and/or fields of view) of the virtual reality content to the user on individual screens. Processor20may be configured to provide information processing capabilities in system10. Processor20may communicate wirelessly with user interface14, sensors18, electronic storage30, external resources not shown inFIG.1, and/or other components of system10. In some implementations, processor20may communicate with user interface14, sensors18, electronic storage30, external resources not shown inFIG.1, and/or other components of system10via wires. In some implementations, processor20may be remotely located (e.g., within server40shown inFIG.3) relative to user interface14, sensors18, electronic storage30, external resources not shown inFIG.1, and/or other components of system10. Processor20may be configured to execute computer program components. The computer program components may be configured to enable an expert and/or user to interface with system10and/or provide other functionality attributed herein to user interface14, sensors18, electronic storage30, and/or processor20. The computer program components may include a display component22, a selection component24, a motion capture component26, a scene generation component28, and/or other components. 
Processor20may comprise one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Although processor20is shown inFIG.1as a single entity, this is for illustrative purposes only. In some implementations, processor20may comprise a plurality of processing units. These processing units may be physically located within the same device (e.g., a server, a desktop computer, a laptop computer, a tablet computer, a smartphone, a virtual reality headset, and/or other computing devices), or processor20may represent processing functionality of a plurality of devices operating in coordination (e.g., a plurality of servers, a server and a computing device12). Processor20may be configured to execute components22,24,26, and/or28by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on processor20. It should be appreciated that although components22,24,26, and28are illustrated inFIG.1as being co-located within a single processing unit, in implementations in which processor20comprises multiple processing units, one or more of components22,24,26, and/or28may be located remotely from the other components (e.g., such as within server40shown inFIG.3). The description of the functionality provided by the different components22,24,26, and/or28described herein is for illustrative purposes, and is not intended to be limiting, as any of components22,24,26, and/or28may provide more or less functionality than is described. For example, one or more of components22,24,26, and/or28may be eliminated, and some or all of its functionality may be provided by other components22,24,26, and/or28. As another example, processor20may be configured to execute one or more additional components that may perform some or all of the functionality attributed below to one of components22,24,26, and/or28. In some implementations, one or more of components22,24,26, and/or28may be executed by a processor incorporated in a remotely located server, and/or other components of system10. Electronic storage30may comprise electronic storage media that electronically stores information. The electronic storage media of the electronic storage may include one or both of storage that is provided integrally (i.e., substantially non-removable) with the respective device and/or removable storage that is removably connectable to the respective device. Removable storage may include for example, a port or a drive. A port may include a USB port, a firewire port, and/or other port. A drive may include a disk drive and/or other drive. Electronic storage may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. The electronic storage may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). Electronic storage may store files, software algorithms, information determined by processor(s)20, and/or other information that enables the respective devices to function as described herein. 
FIG.11illustrates a method1100for animating characters based on motion capture, and/or for playing back individual asynchronous motion capture recordings as a compiled virtual reality scene, in accordance with one or more implementations. The operations of method1100presented below are intended to be illustrative. In some implementations, method1100may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method1100are respectively illustrated inFIG.11and described below is not intended to be limiting. In some implementations, method1100may be implemented by one or more computing devices, and/or in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method1100in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method1100. At an operation1102, output signals may be generated. The output signals may convey information related to motion, sound, and/or other actions made by one or more users in physical space. The sensors may be configured to capture the motion, the sound, and/or other actions made by the one or more users. In some implementations, operation1102may be performed by one or more sensors that are the same as or similar to sensors18(shown inFIG.1and described herein). At an operation1104, virtual reality content may be presented to one or more users. The virtual reality content may be presented via one or more displays. Presentation of the virtual reality content via a display may simulate the presence of a user within a virtual space that is fixed relative to physical space. The one or more displays may be configured to present options for recording the motion, the sound, and/or other actions for one or more of the characters within the virtual space. In some implementations, operation1104may be performed by a display that is the same as or similar to display16(shown inFIG.1and described herein). At an operation1106, selection of a first character to virtually embody within the virtual space may be received. Virtually embodying the first character may enable a first user to record the motion, the sound, and/or other actions to be made by the first character within the compiled virtual reality scene. Operation1106may be performed by a selection component that is the same as or similar to selection component24(shown inFIG.1and described herein). At an operation1108, a first request to capture the motion, the sound, and/or other actions for the first character may be received. In some implementations, operation1108may be performed by a motion capture component that is the same as or similar to motion capture component26(shown inFIG.1and described herein). At an operation1110, first motion capture information may be recorded. The first motion capture information may characterize the motion, the sound, and/or other actions made by the first user as the first user virtually embodies the first character.
The first motion capture information may be captured in a manner such that the actions of the first user are manifested by the first character within the compiled virtual reality scene. Operation1110may be performed by a motion capture component that is the same as or similar to motion capture component26(shown inFIG.1and described herein). At an operation1112, selection of a second character to virtually embody may be received. The second character may be separate and distinct from the first character. Virtually embodying the second character may enable the first user or another user to record one or more of the motion, the sound, and/or other actions to be made by the second character within the compiled virtual reality scene. Operation1112may be performed by a selection component the same as or similar to selection component24(shown inFIG.1and described herein). At an operation1114, a second request to capture the motion, the sound, and/or other actions for the second character may be received. Operation1114may be performed by a motion capture component the same as or similar to motion capture component26(shown inFIG.1and described herein). At an operation1116, second motion capture information may be recorded. The second motion capture information may characterize the motion, the sound, and/or other actions made by the first user or other user as the first user or the other user virtually embodies the second character. The second motion capture information may be captured in a manner such that the actions of the first user or the other user may be manifested by the second character contemporaneously with the actions of the first user manifested by the first character within the compiled virtual reality scene. Operation1116may be performed by a motion capture component the same as or similar to motion capture component26(shown inFIG.1and described herein). At an operation1118, the compiled virtual reality scene may be generated. The compiled virtual reality scene may include animation of the first character, the second character, and/or other characters such that the first character and the second character appear animated within the compiled virtual reality scene contemporaneously. Operation1118may be performed by a scene generation component the same as or similar to scene generation component28(shown inFIG.1and described herein). Although the present technology has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the technology is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present technology contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation. As another example, the present disclosure contemplates that technological advances in display technology such as light field imaging on the back of a retina, contact lens displays, and/or a display configured to communicate with (e.g., transmit signals to and/or receive signals from) a user's brain fall within the scope of this disclosure. | 69,033 |
11861060 | MODES FOR CARRYING OUT THE INVENTION Hereafter, modes for carrying out the present invention will be described with reference to the drawings. In each of the drawings, the dimensions and scale of each portion may appropriately differ from actual dimensions and scale. Furthermore, since the embodiments to be described below are preferred specific examples of the present invention, various types of technically preferable limits are given. However, the scope of the present invention is not limited to these modes unless otherwise specified in the following description. A. Embodiment An embodiment of the present invention will be described below. 1. Overview of Head Mounted Display An overview of a Head Mounted Display1(hereafter a “HMD1”) according to the embodiment will be described below with reference toFIGS.1to12. 1.1. Configuration of Head Mounted Display and Usage Thereof First, the configuration of the HMD1and usage thereof will be described with reference toFIGS.1and2. FIG.1is an exploded perspective view for an example configuration of the HMD1according to the embodiment.FIG.2is an explanatory diagram for an example usage concept of the HMD1according to the embodiment. As shown inFIG.1, the HMD1includes a terminal apparatus10and wearable equipment90. The terminal apparatus10(an example of “an information processing apparatus”) includes a display12. In the embodiment, an example case is assumed in which a smartphone is employed for the terminal apparatus10. However, the terminal apparatus10may be dedicated to a display apparatus for the HMD1. As shown inFIG.2, the wearable equipment90is a component for wearing the HMD1on a user U's head. As shown inFIG.1, the wearable equipment90includes: a pair of temples91L and91R for wearing the HMD1on the user U's head; a mounting space92for mounting the terminal apparatus10on the HMD1; and a pair of openings92L and92R. The openings92L and92R are provided at positions that correspond to those of the user U's eyes when the user U wears the HMD1on the head. There may be provided lenses at portions of the openings92L and92R. When the user U wears the HMD1on the head, the user U is able to view with the left eye the display12through the opening92L or a lens provided in the opening92L, the display12being included in the terminal apparatus10and the terminal apparatus10being inserted in the mounting space92. The user U is able to view with the right eye the display12through the opening92R or a lens provided in the opening92R, the display12being included in the terminal apparatus10and the terminal apparatus10being inserted in the mounting space92. As shown inFIG.2, the user U wearing the HMD1on the head is able to change orientation of the HMD1by changing orientation of the head of the user U. For the sake of clarity, a coordinate system fixed to the HMD1, which is referred to as an “apparatus coordinate system ΣS.” will be used. The “apparatus coordinate system ΣS” refers to a three-axis orthogonal coordinate system that has an XS-axis, a YS-axis and a ZS-axis orthogonal to one another and has the origin at a predetermined position of the HMD1, for example. In the embodiment, as shown inFIG.2, an example case is assumed in which when the user U wears the HMD1, the apparatus coordinate system ΣSis set as follows. When viewed by the user U, a +XSdirection represents a direction that is in front of the user U. When viewed by the user U, a +YSdirection represents a direction that is on the left. 
When viewed by the user U, a +ZSdirection represents an upward direction. As shown inFIG.2, the user U wearing the HMD1on the head is able to change the orientation of the HMD1by changing the orientation of the head such that the HMD1rotates in the rotational direction around the XS-axis, that is, a roll direction QX. Likewise, the user U is able to change the orientation of the HMD1by changing the orientation of the head such that the HMD1rotates in the rotational direction around the YS-axis, that is, a pitch direction QY. The user U is able to change the orientation of the HMD1by changing the orientation of the head such that the HMD1rotates in the rotational direction around the ZS-axis, that is, a yaw direction QZ. In other words, the user U wearing the HMD1on the head is able to change the orientation of the HMD1by changing the orientation of the head such that the HMD1rotates in a desired rotational direction that is obtained by combining some or all of the roll direction QX, the pitch direction QYand the yaw direction QZ, that is, the rotational direction QWaround a desired rotational axis WS. In the following description, an apparatus coordinate system ΣSfixed to the HMD1at a reference time t0will be referred to as a “reference apparatus coordinate system ΣS0.” In the embodiment, the orientation of the HMD1at time t after the reference time t0will be described as an orientation that is obtained by rotating the HMD1at the reference time t0by an angle θWin the rotational direction QWaround the rotational axis WS. In other words, in the embodiment, the apparatus coordinate system ΣSat time t after the reference time t0will be described as a coordinate system having axes that are obtained by rotating each axis of the reference apparatus coordinate system ΣS0by the angle θWaround the rotational axis WS. The terminal apparatus10captures an image of a virtual space SP-V with a virtual camera CM that is present in the virtual space SP-V. The terminal apparatus10causes the display12to display a display image GH representative of a result of an image captured by the virtual camera CM. 1.2 Virtual Space and Virtual Camera The virtual space SP-V and the virtual camera CM will be described with reference toFIGS.3to6. FIG.3is an explanatory diagram for the virtual space SP-V. In the embodiment, as shown inFIG.3, an example case is assumed in which the following are provided in the virtual space SP-V: environment components Ev composing the virtual space SP-V, such as a virtual ground, mountains, trees or the like; a virtual character V; optional objects CB representative of options; an enter button Bt (an example of a “predetermined object”) for selecting one or more options selected by the user from among the options; a virtual message board Bd for displaying information on the options; and the virtual camera CM for capturing an image of the virtual space SP-V. In the embodiment, an example case is assumed in which K optional objects CB[1] to CB[K] for representing K options are provided in the virtual space SP-V (K represents a natural number satisfying K≥2). In the following description, the kth optional object CB from among the K optional objects CB[1] to CB[K] is referred to as an “optional object CB[k]” (k represents a natural number satisfying 1≤k≤K). In the embodiment, as shown inFIG.3, an example case is assumed in which the virtual camera CM is composed of a left-eye virtual camera CM-L and a right-eye virtual camera CM-R.
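The description above of the orientation of the HMD1at time t as a rotation of the reference apparatus coordinate system ΣS0by the angle θWaround the rotational axis WScorresponds to an ordinary axis-angle rotation. The sketch below applies Rodrigues' rotation formula to the reference axes; it illustrates only the underlying mathematics, and the function names and the example yaw angle are assumptions, not part of the embodiment.

    import math

    def rotate(v, axis, angle):
        """Rotate vector v by `angle` radians around the unit vector `axis`
        (Rodrigues' rotation formula)."""
        ax, ay, az = axis
        vx, vy, vz = v
        c, s = math.cos(angle), math.sin(angle)
        dot = ax * vx + ay * vy + az * vz
        cross = (ay * vz - az * vy, az * vx - ax * vz, ax * vy - ay * vx)
        return tuple(v_i * c + cr_i * s + a_i * dot * (1 - c)
                     for v_i, cr_i, a_i in zip(v, cross, axis))

    # Axes of the reference apparatus coordinate system at the reference time t0.
    XS0, YS0, ZS0 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

    # Example: the head turns 30 degrees in the yaw direction (around the ZS-axis).
    W_S = (0, 0, 1)            # rotational axis WS (unit vector)
    theta_W = math.radians(30)
    XS_t = rotate(XS0, W_S, theta_W)   # each axis of the apparatus coordinate system at time t
    YS_t = rotate(YS0, W_S, theta_W)
    ZS_t = rotate(ZS0, W_S, theta_W)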
For the sake of clarity, as shown inFIG.3, a coordinate system fixed to the virtual space SP-V, which is referred to as “a virtual space coordinate system ΣV”, will be used. Here, the “virtual space coordinate system ΣV” refers to a three-axis orthogonal coordinate system that has an XV-axis, a YV-axis, and a ZV-axis orthogonal to one another and has the origin at a predetermined position in the virtual space SP-V, for example. FIG.4is an explanatory diagram for the virtual camera CM in the virtual space SP-V.FIG.4shows an exemplary case in which the virtual space SP-V is viewed in planar view from the +ZVdirection.FIG.4also shows an exemplary case in which the virtual camera CM captures an image of the character V in a direction that is in front of the character V. In the following description, as shown inFIG.4, a “position PC” will be defined as follows: the position PC indicates the midpoint between a position PC-L of the virtual camera CM-L in the virtual space SP-V and a position PC-R of the virtual camera CM-R in the virtual space SP-V. Furthermore, in the following description, as shown inFIG.4, a “virtual straight line LC-L” will be defined as follows: the virtual straight line LC-L represents a virtual straight line that intersects with the position PC-L and extends in an optical axis direction of the virtual camera CM-L. Likewise, a “virtual straight line LC-R” will be defined as follows: the virtual straight line LC-R represents a virtual straight line that intersects with the position PC-R and extends in an optical axis direction of the virtual camera CM-R. Furthermore, in the following description, a “virtual straight line LC” (an example of a “virtual line”) will be defined as follows: the virtual straight line LC represents a virtual straight line that intersects with the position PC. The virtual straight line LC extends in a direction indicated by the sum of a unit vector representative of the optical axis direction of the virtual camera CM-L and a unit vector representative of the optical axis direction of the virtual camera CM-R. In the embodiment, an example case is presumed in which the virtual camera CM is present at the position PC, and the optical axis of the virtual camera CM is the virtual straight line LC. Furthermore, in the embodiment, an example case is assumed in which the direction in which the virtual straight line LC-L extends is the same as that of the virtual straight line LC-R. For this reason, in the embodiment, the direction in which the virtual straight line LC extends is the same as each of the directions in which the virtual straight line LC-L extends and the virtual straight line LC-R extends. FIG.5is a drawing of an example display image GH representative of a result of an image of the virtual space SP-V captured by the virtual camera CM. InFIG.5, a case is assumed in which the virtual camera CM captures an image of the character V in a direction that is in front of the character V, as shown inFIG.4. As shown inFIG.5, the display12displays, on a left-eye viewing area12-L that is viewed through the opening92L, a result of an image captured by the virtual camera CM-L, e.g., a character image GV-L representative of a result of an image of the character V captured by the virtual camera CM-L. Likewise, the display12displays, on a right-eye viewing area12-R that is viewed through the opening92R, an image captured by the virtual camera CM-R, e.g., a character image GV-R representative of a result of an image of the character V captured by the virtual camera CM-R.
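The position PC and the direction of the virtual straight line LC follow directly from the two per-eye cameras as defined above: PC is the midpoint of the positions PC-L and PC-R, and LC points along the sum of the two unit optical-axis vectors. The sketch below illustrates this computation; the function names and example coordinates are hypothetical.

    import math

    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    def virtual_camera(pc_l, pc_r, axis_l, axis_r):
        """Return the position PC and the unit direction of the virtual straight line LC.

        pc_l, pc_r: positions of the left-eye and right-eye virtual cameras (PC-L, PC-R).
        axis_l, axis_r: optical-axis directions of the two cameras.
        """
        pc = tuple((a + b) / 2.0 for a, b in zip(pc_l, pc_r))   # midpoint of PC-L and PC-R
        lc_dir = normalize(tuple(a + b for a, b in
                                 zip(normalize(axis_l), normalize(axis_r))))
        return pc, lc_dir

    # Example: both cameras look in the +XV direction, separated along the YV-axis.
    pc, lc_dir = virtual_camera((0.0, 0.03, 1.6), (0.0, -0.03, 1.6), (1, 0, 0), (1, 0, 0))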
In other words, the user U is able to view the character image GV-L with the left eye and view the character image GV-R with the right eye. For this reason, as will be described later with reference toFIG.7and other drawings, the user U is able to view, on the display12, a visible image GS in which virtual objects, such as the character V and the like in the virtual space SP-V, are represented as three-dimensional objects. The “three-dimensional object” is simply required to be an object that is disposed in the virtual three-dimensional space. For example, the “three-dimensional object” may be a three-dimensional object that is disposed in the virtual three-dimensional space, may be a two-dimensional object that is disposed in the virtual three-dimensional space, or may be a one-dimensional object that is disposed in the virtual three-dimensional space. The “virtual object” may be an object or region in which a display mode of color, pattern or the like differs from the surroundings, in the virtual space SP-V. For the sake of clarity, as shown inFIG.6, a coordinate system fixed to the virtual camera CM in the virtual space SP-V, which is referred to as “a camera coordinate system ΣC,” will be used. Here, the camera coordinate system ΣCrefers to a three-axis orthogonal coordinate system that has an XC-axis, a YC-axis and a ZC-axis orthogonal to one another and has the origin at the position PC where the virtual camera CM exists in the virtual space SP-V, for example. In the embodiment, an example case is assumed in which when the user U wears the HMD1, the camera coordinate system ΣCis set as follows. When viewed by the user U, a +XCdirection represents a direction that is in front of the user U. When viewed by the user U, a +YCdirection represents a direction that is to the left. When viewed by the user U, a +ZCdirection represents an upward direction. In other words, in the embodiment, an example case is assumed in which when viewed by the user U wearing the HMD1, the XC-axis is the same direction as the XS-axis, the YC-axis is the same direction as the YS-axis, and the ZC-axis is the same direction as the ZS-axis. Furthermore, in the embodiment, an example case is assumed in which the XC-axis corresponds to the virtual straight line LC. In other words, in the embodiment, an example case is assumed in which the virtual straight line LC extends in the direction that is in front of the user U wearing the HMD1. As shown inFIG.6, the virtual camera CM is rotatable in a desired rotational direction that is obtained by combining some or all of a roll direction QCXrepresentative of the rotational direction around the XC-axis, a pitch direction QCYrepresentative of the rotational direction around the YC-axis, and a yaw direction QCZrepresentative of the rotational direction around the ZC-axis. In the embodiment, an example case is given in which when the HMD1rotates in the rotational direction QWaround the rotational axis WSby an angle θW, the virtual camera CM rotates by an angle θCin the rotational direction QCWaround the rotational axis WC. Here, the rotational axis WCcorresponds to, for example, a straight line that intersects the position PC. Specifically, the rotational axis WCrepresents a straight line in which the component of a unit vector representative of the direction of the rotational axis WSin the apparatus coordinate system ΣSis the same as that of a unit vector representative of the direction of the rotational axis WCin the camera coordinate system ΣC.
Furthermore, the angle θCis equal to the angle θW, for example. In the following description, the camera coordinate system ΣCat the reference time t0will be referred to as a reference camera coordinate system ΣC0. In other words, the camera coordinate system ΣCat time t will be described as a coordinate system that has coordinate axes obtained by rotating each coordinate axis of the reference camera coordinate system ΣC0by the angle θCaround the rotational axis WC. 1.3. Images Displayed on Display The visible image GS displayed on the display12will be described below with reference toFIGS.7to12. FIGS.7to10show examples of change in the visible image GS displayed on the display12from time t1to time t5, time t1coming after the reference time t0. Among these drawings,FIG.7shows an example of a visible image GS displayed on the display12at time t1.FIG.8shows an example of a visible image GS displayed on the display12at time t2after time t1.FIG.9shows an example of a visible image GS displayed on the display12at time t4after time t2.FIG.10shows an example of a visible image GS displayed on the display12at time t5after time t4.FIG.11shows an example of a visible image GS displayed on the display12for a period from time tb1to time tb5. In the embodiment, an example case is assumed in which time tb1is the same as the time t5. In the following description, any time in the period from time tb1to time tb5is referred to as time tb. For the sake of clarity, change in the visible image GS in the period from time t1to time tb5, shown inFIGS.7to11, is occasionally referred to as “screen-change examples.” In the screen-change examples, a case is assumed in which six optional objects CB[1] to CB[6] exist in the virtual space SP-V (that is, a case of “K=6”). In the following description, a virtual straight line LC at time t may on occasion be described as virtual straight line LC[t]. As shown inFIGS.7to11, in the embodiment, the visible image GS includes some or all of the following: the character V viewed, as a three-dimensional object disposed in the virtual space SP-V, by the user U wearing the HMD1; the optional objects CB[1] to CB[K] viewed, as three-dimensional objects disposed in the virtual space SP-V, by the user U; the message board Bd viewed, as a three-dimensional object disposed in the virtual space SP-V, by the user U; the enter button Bt viewed, as a three-dimensional object disposed in the virtual space SP-V, by the user U; and the environment components Ev viewed, as objects disposed in the virtual space SP-V, by the user U. In the embodiment, for the sake of clarity, a case is assumed in which, in the virtual space SP-V, positions of the optional objects CB[1] to CB[K], the message board Bd, and the enter button Bt remain unchanged. However, the present invention is not limited to such an aspect. In the virtual space SP-V, the positions of the optional objects CB[1] to CB[K], the message board Bd, and the enter button Bt may change. For example, the optional objects CB[1] to CB[K], the message board Bd, and the enter button Bt each may be disposed at a constant position all the time when viewed in the camera coordinate system ΣC. That is, the positions of each of the optional objects CB[1] to CB[K], the message board Bd, and the enter button Bt in the virtual space coordinate system ΣVmay change according to a change in orientation of the virtual camera CM. In screen-change examples shown inFIGS.7to11, a case is assumed as follows. A virtual straight line LC[t1] intersects with the character V at time t1.
A virtual straight line LC[t2] intersects with an optional object CB[2] at time t2. A virtual straight line LC[t3] intersects with an optional object CB[3] at time t3. A virtual straight line LC[t4] intersects with an optional object CB[6] at time t4. A virtual straight line LC[t5] intersects with the enter button Bt at time t5. After that, a virtual straight line LC[tb] continues to intersect with the enter button Bt until time tb5. The user U wearing the HMD1identifies one or more optional objects CB from among the K options displayed in the K optional objects CB[1] to CB[K], and after that selects the identified one or more optional objects CB, thereby enabling selection of one or more options corresponding to the one or more optional objects CB. Specifically, the user U wearing the HMD1first operates the orientation of the HMD1such that an optional object CB[k] and a virtual straight line LC intersect each other (an example of a "predetermined positional relationship") in the virtual space SP-V, thereby enabling the optional object CB[k] to be identified as an optional object subject. However, the present invention is not limited to such an aspect. The user U wearing the HMD1may operate the orientation of the HMD1to identify the optional object CB[k] as the optional object subject such that a distance between the optional object CB[k] and the virtual straight line LC is less than or equal to a predetermined distance (another example of a "predetermined positional relationship") in the virtual space SP-V. In the embodiment, a case is assumed in which the following (a-i) and (a-ii) are displayed in the visible image GS in different display modes: (a-i) an optional object CB that is identified as an optional object subject; and (a-ii) an optional object CB that is not yet identified as an optional object subject. Specifically, in the embodiment, an example case is assumed in which the optional object CB which is identified as the optional object subject and the optional object CB which is not yet identified as the optional object subject are displayed in the visible image GS in different colors. However, the present invention is not limited to such an aspect. The optional object CB which is identified as the optional object subject and the optional object CB which is not yet identified as the optional object subject may be displayed in different shapes, may be displayed in different sizes, may be displayed in different brightness levels, may be displayed in different transparency levels, or may be displayed in different patterns. In the screen-change examples shown inFIGS.7to11, as described above, the virtual straight line LC intersects with each of the optional objects CB[2], CB[3] and CB[6]. For this reason, in the screen-change examples, these optional objects CB[2], CB[3] and CB[6] are each identified as an optional object subject. In the embodiment, the user U wearing the HMD1identifies one or more optional objects CB from among the optional objects CB[1] to CB[K], and after that the user U operates the orientation of the HMD1such that the enter button Bt and the virtual straight line LC intersect each other for a predetermined time length ΔT1(an example of a "predetermined time length") in the virtual space SP-V, thereby enabling selection of the one or more optional objects CB identified as the optional object subjects.
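By way of non-limiting illustration, the following listing sketches the identify-then-confirm interaction described above: an intersection of the virtual straight line LC with an optional object toggles its identification, and the identified optional objects are selected once LC has stayed on the enter button Bt for the time length ΔT1. The class, the concrete value assigned to ΔT1, the per-frame update interface, and the "enter_button" identifier are assumptions made only for this sketch and are not taken from the embodiment.

# Illustrative sketch only (not the embodiment's implementation): identification
# of optional objects CB by intersection with the virtual straight line LC, and
# selection once LC has dwelled on the enter button Bt for the time length ΔT1.
# DwellSelector, DELTA_T1, and the "enter_button" identifier are assumptions.

DELTA_T1 = 2.0  # assumed dwell time in seconds; the embodiment leaves ΔT1 unspecified


class DwellSelector:
    def __init__(self, option_ids):
        self.option_ids = set(option_ids)  # ids of the optional objects CB[1] to CB[K]
        self.identified = set()            # optional objects currently identified
        self.enter_dwell = 0.0             # time LC has stayed on the enter button
        self.prev_hit = None               # object hit by LC in the previous frame

    def update(self, hit_object, dt):
        """hit_object: id of the object LC intersects in this frame, or None."""
        new_intersection = hit_object is not None and hit_object != self.prev_hit
        self.prev_hit = hit_object

        if hit_object in self.option_ids:
            self.enter_dwell = 0.0
            if new_intersection:
                # A newly started intersection toggles the identification of the
                # object, mirroring the behaviour described later for steps
                # S112, S114 and S118.
                self.identified ^= {hit_object}
            return None

        if hit_object == "enter_button":
            self.enter_dwell += dt
            if self.enter_dwell >= DELTA_T1 and self.identified:
                selected, self.identified = self.identified, set()
                self.enter_dwell = 0.0
                return selected            # the selection is decided
            return None

        self.enter_dwell = 0.0             # resetting here is also an assumption
        return None

    def progress(self):
        # Fraction of ΔT1 already elapsed; could drive a gauge-style display.
        return min(self.enter_dwell / DELTA_T1, 1.0)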
In the following description, the period having the time length ΔT1during which the enter button Bt and the virtual straight line LC intersect each other, at the end of which one or more optional objects CB identified as the optional object subjects are selected, will be referred to as a "selection-decided period" (an example of a "first period"). The screen-change examples shown inFIGS.7to11show an exemplary case in which the period from time tb1to time tb5corresponds to the selection-decided period. In the embodiment, as shown inFIG.11, in the selection-decided period, a gauge image GB is displayed, as a virtual object disposed in the virtual space SP-V, on the visible image GS. FIG.12is an explanatory diagram for an example of a gauge image GB according to the modification. In the modification, as shown inFIG.12, a case is assumed in which the display mode of the gauge image GB changes over time. Specifically, in the modification, an example case is assumed in which the gauge image GB includes at least one of the following: an image GB1representative of a time length from the current time to the end time of the selection-decided period; and an image GB2representative of a time length from the start of the selection-decided period to the current time. Furthermore, in the embodiment, an example case is assumed in which the ratio of the image GB1to the gauge image GB decreases over time and the ratio of the image GB2to the gauge image GB increases over time. For example, in the example ofFIG.12, the entire gauge image GB is filled with the image GB1at time tb1at which the selection-decided period starts. Subsequently, the ratio of the image GB2to the gauge image GB increases as the time progresses from time tb2to time tb3, and from time tb3to time tb4. After that, the entire gauge image GB is filled with the image GB2at time tb5at which the selection-decided period ends. For this reason, in the example ofFIG.12, the user U wearing the HMD1is able to visually acknowledge the remaining time of the selection-decided period from the gauge image GB. In the embodiment, the gauge image GB is displayed at the same time when the selection-decided period starts. However, the present invention is not limited to such an aspect. For example, the gauge image GB may be displayed after a certain time has elapsed from the start time of the selection-decided period. In the embodiment, as shown inFIG.11, the gauge image GB is displayed at a position where at least a portion of the gauge image GB and at least a portion of the enter button Bt overlap when viewed by the virtual camera CM. However, the present invention is not limited to such an aspect. For example, the gauge image GB may be displayed at a freely selected position on the visible image GS. As described above, in the examples shown inFIGS.7to11, in the period from time t1to time t5, the virtual straight line LC intersects with each of the optional objects CB[2], CB[3] and CB[6]. After that, in the selection-decided period having the time length ΔT1from time tb1to time tb5, the virtual straight line LC intersects with the enter button Bt. For this reason, in the screen-change examples shown inFIGS.7to11, the user U wearing the HMD1is able to identify the optional objects CB[2], CB[3] and CB[6] as the optional object subjects in the period from time t1to time t5. After that, the user U is able to select the optional objects CB[2], CB[3] and CB[6] in the period from time tb1to time tb5.
2. Configuration of Terminal Apparatus The configuration of the terminal apparatus10will be described below with reference toFIGS.13and14. FIG.13is a block diagram for an example of a configuration of the terminal apparatus10. As shown inFIG.13, the terminal apparatus10includes: the display12that displays an image; a controller11that controls each component of the terminal apparatus10and executes display processing for displaying the display image GH on the display12; an operator13that receives an input operation carried out by the user U of the terminal apparatus10; an orientation information generator14that detects a change in orientation of the terminal apparatus10and outputs orientation information B representative of a detection result; and a storage15that stores therein various information including a control program PRG for the terminal apparatus10. In the embodiment, for example, a three-axis angular velocity sensor1002(seeFIG.14) is employed as the orientation information generator14. Specifically, the orientation information generator14includes an X-axis angular velocity sensor that detects a change in orientation in the roll direction QXper unit time, a Y-axis angular velocity sensor that detects a change in orientation in the pitch direction QYper unit time, and a Z-axis angular velocity sensor that detects a change in orientation in the yaw direction QZper unit time. The orientation information generator14periodically outputs the orientation information B representative of detection results obtained by the X-axis angular velocity sensor, the Y-axis angular velocity sensor and the Z-axis angular velocity sensor. The controller11includes a display controller111, an orientation information acquirer112, an identifier113, and a selector114. The orientation information acquirer112(an example of an "acquirer") acquires the orientation information B output from the orientation information generator14. The display controller111controls, based on the orientation information B acquired by the orientation information acquirer112, the orientation of the virtual camera CM in the virtual space SP-V. The display controller111generates image information DS indicative of a result of an image captured by the virtual camera CM, and supplies the image information DS to the display12, to cause the display12to display the display image GH. The identifier113identifies an optional object CB that intersects with the virtual straight line LC, as the optional object subject. The selector114selects the optional object CB identified by the identifier113as the optional object subject. FIG.14shows an example of a hardware configuration diagram for the terminal apparatus10. As shown inFIG.14, the terminal apparatus10includes: a processor1000that controls each component of the terminal apparatus10; a memory1001that stores therein various information; the angular velocity sensor1002that detects a change in orientation of the terminal apparatus10and outputs the orientation information B indicative of a detection result; a display apparatus1003that displays various images; and an input apparatus1004that accepts an input operation carried out by the user U of the terminal apparatus10. In the embodiment, the terminal apparatus10is described as an "information processing apparatus." However, the present invention is not limited to such an aspect. The processor1000provided in the terminal apparatus10may be the "information processing apparatus." The memory1001is a non-transitory recording medium.
For example, the memory1001includes either or both of the following: a volatile memory, such as Random Access Memory (RAM) or the like, which serves as a working area for the processor1000; and a non-volatile memory, such as an Electrically Erasable Programmable Read-Only Memory (EEPROM) or the like, which is used for storing various information, such as the control program PRG or the like of the terminal apparatus10. The memory1001serves as the storage15. In the embodiment, the memory1001is exemplified as a "recording medium" in which the control program PRG is recorded. However, the present invention is not limited to such an aspect. The "recording medium" in which the control program PRG is recorded may be a storage provided in an external apparatus existing outside the terminal apparatus10. For example, the "recording medium" on which the control program PRG is recorded may be a storage that is provided outside the terminal apparatus10. The storage may be provided in a distribution server apparatus that has the control program PRG and distributes the control program PRG. The processor1000is, for example, a Central Processing Unit (CPU). The processor1000executes the control program PRG stored in the memory1001, and operates according to the control program PRG, to serve as the controller11. As described above, the angular velocity sensor1002includes the X-axis angular velocity sensor, the Y-axis angular velocity sensor, and the Z-axis angular velocity sensor. The angular velocity sensor1002serves as the orientation information generator14. The display apparatus1003and the input apparatus1004are both constituted by a touch panel, for example. The display apparatus1003serves as the display12and the input apparatus1004serves as the operator13. The display apparatus1003and the input apparatus1004may be configured separately from each other. The input apparatus1004may be configured by one or more components including some or all of a touch panel, operation buttons, a keyboard, a joystick, and a pointing device, such as a mouse. It is of note that the processor1000may be configured to include additional hardware, such as a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP) or a Field Programmable Gate Array (FPGA) or the like, in addition to the CPU or in place of the CPU. In this case, some or all of the functionality of the controller11realized by the processor1000may be realized by other hardware, such as a DSP or the like. The processor1000may be configured to further include some or all of the following: one or a plurality of CPUs; and one or a plurality of hardware elements. For example, when the processor1000is configured to include a plurality of CPUs, some or all of the features of the controller11may be realized by collaborative operation carried out by the plurality of CPUs in accordance with the control program PRG. 3. Operation of Terminal Apparatus An example operation of the terminal apparatus10will be described below with reference toFIGS.15and16. FIGS.15and16are each a flowchart showing an example operation of the terminal apparatus10when the terminal apparatus10executes display processing for displaying a display image GH on the display12. In the embodiment, an example case is assumed in which when the user U inputs a predetermined starting operation for starting the display processing with the operator13, the terminal apparatus10starts the display processing.
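Before turning to the flowchart, and by way of non-limiting illustration, the following listing sketches one way in which the orientation information B periodically output by the three-axis angular velocity sensor1002might be accumulated into the change dB in orientation from the reference orientation, expressed as a rotation axis WS and an angle θW, as used at step S106 of the processing described next. The quaternion representation, the fixed sample period dt, the use of the numpy library, and all identifiers are assumptions made only for this sketch.

# Illustrative sketch only (not the embodiment's implementation): accumulating the
# periodically output angular velocities (the orientation information B) into the
# change dB in orientation from the reference orientation, expressed as a rotation
# axis WS (unit vector) and an angle theta_W. The quaternion representation, the
# fixed sample period dt, and the use of numpy are assumptions of this sketch.
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def integrate_gyro(samples, dt):
    """samples: iterable of (wx, wy, wz) in rad/s about the XS-, YS- and ZS-axes."""
    q = np.array([1.0, 0.0, 0.0, 0.0])           # identity = reference orientation
    for wx, wy, wz in samples:
        omega = np.array([wx, wy, wz])
        rate = np.linalg.norm(omega)
        if rate > 0.0:
            angle = rate * dt                    # rotation during this sample
            axis = omega / rate
            dq = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))
            q = quat_mul(q, dq)                  # compose in the apparatus frame
            q /= np.linalg.norm(q)
    # Convert the accumulated quaternion back to axis-angle form (WS, theta_W).
    theta_w = 2.0 * np.arccos(np.clip(q[0], -1.0, 1.0))
    s = np.sqrt(max(1.0 - float(q[0]) ** 2, 1e-12))
    axis_ws = q[1:] / s if theta_w > 1e-9 else np.array([1.0, 0.0, 0.0])
    return axis_ws, theta_w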
As shown inFIG.15, when the display processing is started, the display controller111executes an initialization processing (S100). Specifically, in the initialization processing at step S100, the display controller111disposes optional objects CB, an enter button Bt, a message board Bd, a character V and environment components Ev and the like in the virtual space SP-V to the predetermined positions in the virtual space SP-V or to random positions in the virtual space SP-V. Furthermore, in the initialization processing at step S100, the display controller111sets an orientation of the virtual camera CM in the virtual space SP-V to the predetermined initial orientation. Subsequently, the display controller111determines time at which the initialization processing has been completed as the reference time t0, and determines the orientation of the HMD1at the reference time t0as a “reference orientation” (S102). At step S102, the display controller111determines the apparatus coordinate system ΣSat the reference time t0as the reference apparatus coordinate system ΣS0. Furthermore, at step S102, the display controller111sets the camera coordinate system ΣCsuch that the direction of each coordinate axis of the camera coordinate system ΣCis the same as that of each coordinate axis of the apparatus coordinate system ΣSwhen viewed by the user U wearing the HMD1. At step S102, the display controller111determines the camera coordinate system ΣCat the reference time t0as the reference camera coordinate system ΣC0. In the following description, a virtual straight line LC at the reference time t0will be referred to as a reference straight line LC0. Subsequently, the orientation information acquirer112acquires the orientation information B from the orientation information generator14(S104). Then, the display controller111calculates, based on the orientation information B acquired by the orientation information acquirer112at step S104, a change dB in orientation from the reference orientation of the HMD1(S106). In the embodiment, for example, the change dB in orientation obtained by the display controller111is described by the following: the rotational axis WSviewed by the reference apparatus coordinate system ΣS0; and the angle θWaround the rotational axis WS. In other words, in the embodiment, when the HMD1rotates by the angle θWaround the rotational axis WSas viewed by the reference apparatus coordinate system ΣS0, the change dB in orientation includes a direction vector representative of the rotational axis WSin the reference apparatus coordinate system ΣS0and the angle θW. However, the change dB in orientation may be described by any other expression method. For example, the change dB in orientation may be described by an orientation conversion matrix indicating a change in orientation from the reference apparatus coordinate system ΣS0to the apparatus coordinate system ΣS, or may be described by quaternions indicative of a change in orientation from the reference apparatus coordinate system ΣS0to the apparatus coordinate system ΣS. Subsequently, the display controller111determines, based on the change dB in orientation calculated at step S106, the orientation of the virtual camera CM in the virtual space SP-V (S108). Specifically, at step S108, first, the display controller111sets the rotational axis WC, based on the rotational axis WSindicating the change dB in orientation calculated at step S106. 
Furthermore, the display controller111sets the angle θC, based on the angle θWindicating the change dB in the orientation. Subsequently, the display controller111sets the camera coordinate system ΣCas a coordinate system obtained by rotating the reference camera coordinate system ΣC0around the rotational axis WCby the angle θC, to determine the orientation of the virtual camera CM. In other words, at step S108, the display controller111sets the virtual straight line LC as a straight line obtained by rotating the reference straight line LC0around the rotational axis WCby the angle θC. For example, when the HMD1rotates by the angle θWaround the ZS-axis in the yaw direction QZfrom the reference orientation, the display controller111sets the camera coordinate system ΣCas a coordinate system having an orientation obtained by rotating the reference camera coordinate system ΣC0around the ZC-axis by the angle θW, to determine the orientation of the virtual camera CM. Furthermore, for example, when the HMD1rotates by the angle θWaround the YS-axis in the pitch direction QYfrom the reference orientation, the display controller111sets the camera coordinate system ΣCas a coordinate system having an orientation obtained by rotating the reference camera coordinate system ΣC0around the YC-axis by the angle θW, to determine the orientation of the virtual camera CM. Furthermore, for example, when the HMD1rotates by the angle θWaround the XS-axis in the roll direction QXfrom the reference orientation, the display controller111sets the camera coordinate system ΣCas a coordinate system having an orientation obtained by rotating the reference camera coordinate system ΣC0around the XC-axis by the angle θW, to determine the orientation of the virtual camera CM. Subsequently, the identifier113determines whether an optional object CB[k] that intersects with the virtual straight line LC is present from among the optional objects CB[1] to CB[K] (S110). Alternatively, at step S110, the identifier113may determine whether the following optional object CB[k] is present from among the optional objects CB[1] to CB[K] (another example of a "predetermined relationship"). The optional object CB[k] is one for which the direction in which the virtual straight line LC extends is included in the range of directions from the virtual camera CM toward the optional object CB[k]. At step S110, the identifier113may determine whether an optional object CB[k], the distance of which to the virtual straight line LC is less than or equal to the predetermined distance, is present from among the optional objects CB[1] to CB[K]. When the result of the determination at step S110is affirmative, the identifier113determines whether the optional object CB[k] that intersects with the virtual straight line LC is identified as the optional object subject (S112). When, in a period from a time before the current time to the current time, the virtual straight line LC continues to intersect with the optional object CB[k], the identifier113maintains, at step S112, the result of the determination made at the start time of the period (hereafter referred to as an "intersection period") during which the intersection is maintained. When the result of the determination at step S112is negative, the identifier113identifies, as the optional object subject, the optional object CB[k] that is determined to intersect with the virtual straight line LC at step S110(S114).
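By way of non-limiting illustration, the following listing sketches the substance of steps S108 and S110 described above: the reference straight line LC0 (taken along the +XC direction) is rotated by the axis-angle change (WS, θW), with θC equal to θW, and the rotated virtual straight line LC is tested for intersection against the optional objects. The spherical approximation of each optional object, Rodrigues' rotation formula, and all identifiers are assumptions made only for this sketch and are not taken from the embodiment.

# Illustrative sketch only (not the embodiment's implementation) of steps S108 and
# S110: rotate the reference straight line LC0 (taken along the +XC direction) by
# the axis-angle change (WS, theta_W), with theta_C equal to theta_W, and test which
# optional object the rotated virtual straight line LC intersects. The spherical
# proxies for the optional objects and all identifiers are assumptions.
import numpy as np

def rotate(v, axis, angle):
    """Rodrigues' rotation of vector v around the given axis by angle (radians)."""
    n = np.linalg.norm(axis)
    if n == 0.0:
        return v                                  # zero rotation
    axis = axis / n
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

def first_hit(camera_pos, axis_ws, theta_w, objects):
    """objects: list of (object_id, center, radius) spheres standing in for the
    optional objects CB. Returns the id of the nearest intersected object or None."""
    lc_dir = rotate(np.array([1.0, 0.0, 0.0]), axis_ws, theta_w)   # theta_C = theta_W
    best, best_t = None, np.inf
    for obj_id, center, radius in objects:
        oc = center - camera_pos
        t = np.dot(oc, lc_dir)                    # closest approach along the line
        if t < 0.0:
            continue                              # object lies behind the camera
        dist2 = np.dot(oc, oc) - t * t            # squared line-to-center distance
        if dist2 <= radius * radius and t < best_t:
            best, best_t = obj_id, t              # step S110 is affirmative
    return best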
Then the display controller111sets the color of the optional object CB[k] that has been identified as the optional object subject by the identifier113at step S114to a color representative of the optional object subject (S116). When the result of the determination at step S112is affirmative, the identifier113excludes the optional object CB[k] that is determined to intersect with the virtual straight line LC at step S110from the optional object subject (S118). In other words, when the virtual straight line LC starts to intersect again with the optional object CB[k] that has previously been identified as the optional object subject by the identifier113, the identifier113stops, at step S118, the identification of the optional object CB[k] as the optional object subject. Then the display controller111sets the color of the optional object CB[k] whose identification as the optional object subject has been stopped by the identifier113at step S118to a color not indicative of the optional object subject (S120). The display controller111generates the image information DS indicative of the result of an image of the virtual space SP-V captured by the virtual camera CM, and supplies the image information DS to the display12, to cause the display12to display the display image GH (S122). As shown inFIG.16, when the result of the determination at step S110is negative, the selector114determines whether the virtual straight line LC and the enter button Bt intersect each other (S130). When the result of the determination at step S130is negative, the selector114moves the processing to step S122. When the result of the determination at step S130is affirmative, the selector114determines whether the elapsed time since the virtual straight line LC started to intersect with the enter button Bt is the time length ΔT1or more (S132). When the result of the determination at step S132is affirmative, the selector114selects one or more optional objects CB identified as the optional object subject from among the optional objects CB[1] to CB[K] (S134). After that, the selector114moves the processing to step S122. When the result of the determination at step S132is negative, the display controller111arranges the gauge image GB in the virtual space SP-V (S136), and after that moves the processing to step S122. In the following description, a condition for selecting one or more optional objects CB identified as the optional object subjects is referred to as a selection condition (an example of a "predetermined condition"). As described above, in the embodiment, the selection condition is one in which the virtual straight line LC and the enter button Bt intersect each other for the time length ΔT1. It is of note that the virtual straight line LC extends in a direction that is determined by the change dB in orientation calculated based on the orientation information B. In other words, the direction in which the virtual straight line LC extends is defined by the orientation information B. For this reason, it can be acknowledged that the selection condition with respect to the virtual straight line LC is a condition with respect to the orientation information B. As shown inFIG.15, the display controller111determines whether the operator13has received an input of a predetermined end operation made by the user U, the predetermined end operation being an operation indicative of ending the display processing (S124).
When the result of the determination at step S124is negative, the display controller111moves the processing to step S104. When the result of the determination at step S124is affirmative, the display controller111ends the display processing. 4. Summary of Embodiment In the foregoing description, in the embodiment, the display controller111determines the direction in which a virtual straight line LC extends, based on the orientation information B indicative of a change in orientation of the HMD1. In the embodiment, the identifier113identifies an optional object CB as an optional object subject, based on the positional relationship between the virtual straight line LC and the optional object CB. In the embodiment, the selector114selects the optional object CB identified as the optional object subject, based on the positional relationship between the virtual straight line LC and the enter button Bt. For this reason, according to the embodiment, the user U wearing the HMD1changes the orientation of the HMD1, thereby enabling the following: changing the orientation of the virtual camera CM in the virtual space SP-V; identifying an optional object CB as an optional object subject; and selecting the optional object CB identified as the optional object subject. In other words, according to the embodiment, the user U wearing the HMD1is able to carry out inputs of various instructions by changing the orientation of the HMD1. In the embodiment, on the premise that an optional object CB is identified as an optional object subject, when a virtual straight line LC and the enter button Bt intersect each other, the selector114selects the optional object CB. For this reason, according to the embodiment, it is possible to reduce the probability of incorrect selection of the optional object CB by the user U wearing the HMD1against the intention of the user U, as compared to a case (Reference Example 1) in which the optional object CB is selected when the optional object CB and the virtual straight line LC intersect each other, regardless of whether the optional object CB is identified as the optional object subject. In the embodiment, the user U wearing the HMD1is able to identify an optional object CB by intersecting a virtual straight line LC with the optional object CB for a period that is shorter than the time length ΔT1. In the embodiment, the user U wearing the HMD1identifies one or more optional objects CB to be selected, and then intersects the virtual straight line LC with the enter button Bt for the time length ΔT1, thereby enabling selection of the one or more optional objects CB. For this reason, according to the embodiment, when the user U wearing the HMD1is about to select a plurality of optional objects CB, the user U is able to make a quick selection of the plurality of optional objects CB, as compared to a case (Reference Example 2) in which the plurality of optional objects CB are selected by making the intersection of the virtual straight line LC with each of the optional objects CB for the time length ΔT1, for example. As a result, according to the embodiment, even when the number of optional objects CB to be selected by the user is greater, an increase in the burden on the user U wearing the HMD1is suppressed, as compared to Reference Example 2. In the embodiment, an optional object CB that is identified as an optional object subject and an optional object CB that is not yet identified as an optional object subject are displayed in different display modes.
For this reason, according to the embodiment, the user U wearing the HMD1is able to acknowledge with ease whether each optional object CB is identified as the optional object subject. B. Modifications Each of the embodiments described above can be variously modified. Specific modification modes will be described below as examples. Two or more modes freely selected from the following examples can be appropriately combined, as long as they do not conflict with each other. In the modifications described below, elements with substantially the same operational actions or functions as those in the embodiments are denoted by the same reference signs as in the above description, and detailed description thereof will not be presented, as appropriate. Modification 1 In the foregoing embodiment, when a virtual straight line LC and an optional object CB intersect each other, the identifier113identifies the optional object CB regardless of the period of the intersection of both. However, the present invention is not limited to such an aspect. The identifier113may identify an optional object CB, when the virtual straight line LC continues to intersect with the optional object CB for a predetermined time length, for example. In the modification, the identifier113identifies an optional object CB when the virtual straight line LC continues to intersect with the optional object CB for a time length ΔT2(an example of a "reference period") that is shorter than the time length ΔT1. In the modification, as shown inFIG.17, the display controller111arranges a gauge image GC at a position that corresponds to an optional object CB for the intersection period from the start of intersection of the virtual straight line LC with the optional object CB until the time length ΔT2elapses. It is of note that in the modification, like the gauge image GB, the gauge image GC may be displayed such that the remaining time of the intersection period can be visually acknowledged. For example, the gauge image GC may have at least one of the following: an image GC1representative of the time length from the current time to the end time of the intersection period; and an image GC2representative of the time length from the start time of the intersection period to the current time. In the modification, a case is assumed in which the gauge image GC starts to be displayed in the visible image GS at the same time when the intersection period starts. However, the gauge image GC may be displayed after a certain time has elapsed from the start of the intersection period. In the foregoing, in the modification, when an optional object CB continues to intersect with a virtual straight line LC for the time length ΔT2, the optional object CB is identified. For this reason, in the modification, it is possible to reduce the probability of incorrect selection of the optional object CB, as compared to a case in which an optional object CB is identified at a time at which the optional object CB intersects with the virtual straight line LC. Modification 2 In the embodiment and the modification 1 described above, the display controller111displays an optional object CB[k] in the selection-decided period in the same display mode as the optional object CB[k] before the selection-decided period is started. However, the present invention is not limited to such an aspect.
The display controller111may display an optional object CB[k] in the selection-decided period in a display mode different from that of the optional object CB[k] before the selection-decided period is started. FIG.18shows an example of a visible image GS according to the modification. In the modification, the display controller111causes the display12to display a visible image GS, shown inFIGS.7to9, from time t1to time t4, and after that the display controller111causes the display12to display a visible image GS, shown inFIG.18, from time tb1to time tb5. As shown inFIG.18, the display controller111displays an optional object CB[k] in the selection-decided period from time tb1to time tb5in a display mode different from that of an optional object CB[k] in the period from time t1to time t4. Specifically, when the optional object CB[k] is identified as the optional object subject, the display controller111displays the optional object CB[k] in the selection-decided period in a mode in which the optional object CB[k] is made more visible than that before the selection-decided period is started. Here, the "displaying an optional object in a mode in which the optional object is made more visible" may be a concept including some or all of the following: making a color of the optional object CB[k] darker; increasing the size of the optional object CB[k]; lowering transparency of the optional object CB[k] (making it more opaque); and increasing brightness of the optional object CB[k] (displaying it in a brighter color). Conversely, when the optional object CB[k] is not identified as the optional object subject, the display controller111displays the optional object CB[k] in the selection-decided period in a mode in which the optional object CB[k] is made less visible than that before the selection-decided period is started. Here, the "displaying an optional object in a mode in which the optional object is made less visible" may be a concept including some or all of the following: making a color of the optional object CB[k] lighter; decreasing the size of the optional object CB[k]; increasing transparency of the optional object CB[k] (making it more transparent); and lowering brightness of the optional object CB[k] (displaying it in a darker color). In the modification, when the selection-decided period is started, the display controller111may change either of the following two display modes: the display mode of an optional object CB that is identified as an optional object subject; and the display mode of an optional object CB that is not yet identified as an optional object subject. In the foregoing, in the modification, when the selection-decided period is started, at least one of the following two display modes is changed: the display mode of an optional object CB that is identified as an optional object subject; and the display mode of an optional object CB that is not yet identified as the optional object subject. For this reason, according to the modification, the user U wearing the HMD1is able to acknowledge with ease the start of the selection-decided period as well as an optional object CB[k] that is about to be selected, as compared to a case in which the display mode of the optional object CB[k] is not changed when the selection-decided period is started. Modification 3 In the foregoing embodiment and modifications 1 and 2, the display controller111does not change the display mode of an optional object CB[k] that has not been identified as an optional object subject in the period before the selection-decided period is started. However, the present invention is not limited to such an aspect.
The display controller111may change the display mode of an optional object CB[k] that is not yet identified as an optional object subject in the period before the selection-decided period is started. For example, the display controller111may change the display mode of an optional object CB[k] that is not yet identified as an optional object subject at a timing at which the optional object subject is identified at first from among the optional objects CB[1] to CB[K]. In other words, the display controller111may differentiate between the display mode of an optional object CB that is not yet identified as an optional object subject in a pre-selection period (an example of a “second period”), and the display mode before the start of the pre-selection period. The pre-selection period is a period from a timing at which an optional object subject is identified at first from among the optional objects CB[1] to CB[K], to the end of the selection-decided period. FIG.19shows an example of a visible image GS according to the modification. In the modification, the display controller111causes the display12to display a visible image GS, shown inFIG.7at time t1. After that, the display controller111causes the display12to display a visible image GS, shown inFIG.19at time t2. As shown inFIG.19, at time t2at which an optional object CB[2] is identified as a first optional object subject from among the optional objects CB[1] to CB[6], the display controller111changes a display mode of an optional object CB[k], which is other than the optional object CB[2] that is not yet identified as an optional object subject, to a display mode in which the optional object CB[k] is made less visible than that at time t1. According to the modification, the user U wearing the HMD1is able to acknowledge with ease an optional object CB that has been identified as an optional object subject. Modification 4 In the foregoing embodiment and modifications 1 to 3, when a selection condition is satisfied under which a virtual straight line LC continues to intersect with the enter button Bt for the time length ΔT1, the selector114selects one or more optional objects CB that are identified as optional object subjects. However, the present invention is not limited to such an aspect. For example, the selector114may select one or more optional objects CB that are identified as optional object subjects, when the virtual straight line LC continues to intersect with the optional objects CB for the time length ΔT1. In other words, the selection condition may refer to a condition under which the virtual straight line LC continues to intersect with an optional object CB for the time length ΔT1. FIG.20shows an example of a visible image GS according to the modification. In the modification, the display controller111causes the display12to display visible images GS shown inFIGS.7to9at times t1to t4, and after that causes the display12to display a visible image GS shown inFIG.20at time tc after time t4. Furthermore, in the modification, when the virtual straight line LC continues to intersect with optional objects CB for the time length ΔT2or longer, as shown inFIG.20, the display controller111causes the display to display the gauge image GB at a position that corresponds to the optional objects CB. The time length ΔT2is shorter than the time length ΔT1. 
Then, in the modification, when the virtual straight line LC continues to intersect with optional objects CB for the time length ΔT1, the selector114selects one or more optional objects CB that are identified as optional object subjects. In summary, according to the modification, the user U wearing the HMD1causes a virtual straight line LC to intersect with an optional object CB, thereby enabling selection of one or more optional objects CB that are identified as optional object subjects. For this reason, according to the modification, when the user U wearing the HMD1is about to select optional objects CB, the user U is able to select the optional objects CB in a short time, as compared to, for example, Reference Example 2. Furthermore, according to the modification, the enter button Bt, which is used to select one or more optional objects CB that are identified as optional object subjects in the visible image GS, is not required to be displayed. For this reason, according to the modification, a display of the visible image GS can be simplified, as compared to a mode in which the enter button Bt is displayed in the visible image GS. In the modification, the selection condition is a condition under which the virtual straight line LC continues to intersect with any optional object CB[k] from among the optional objects CB[1] to CB[K] for the time length ΔT1. However, the present invention is not limited to such an aspect. For example, when the number M of optional objects CB to be selected from among the optional objects CB[1] to CB[K] is determined in advance, the selection condition may be a condition under which the virtual straight line LC continues to intersect, for the time length ΔT1, with the optional object CB that is identified as an Mth optional object subject (M represents a natural number satisfying 1≤M≤K). Modification 5 In the foregoing embodiment and modifications 1 to 4, the selector114selects one or more optional objects CB that are identified as optional object subjects, when a selection condition is satisfied under which a virtual straight line LC intersects with the enter button Bt or the optional objects CB. However, the present invention is not limited to such an aspect. For example, the selector114may select one or more optional objects CB that are identified as the optional object subjects, when a predetermined orientation condition (another example of the "predetermined condition") that relates to the orientation information B is satisfied. In the modification, the selector114selects one or more optional objects CB that are identified as the optional object subjects, when a roll rotation state continues for the time length ΔT1or more. This roll rotation state refers to a state in which the HMD1rotates by an angle θth or more from the reference orientation in the roll direction QX around the XS-axis. In other words, in the modification, the orientation condition refers to a condition under which the roll rotation state continues for the time length ΔT1or longer. That is, the roll rotation state refers to a state in which the angle of the rotation component, in the roll direction QX about the XS-axis, of the change dB in orientation of the HMD1from the reference orientation is equal to or greater than the angle θth. In the modification, the XS-axis is an example of a "predetermined reference axis", and the angle θth is an example of a "predetermined angle." FIG.21shows an example of a visible image GS according to the modification.
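By way of non-limiting illustration, the following listing sketches one way the orientation condition of this modification might be evaluated: the roll component of the change dB in orientation (the rotation about the XS-axis) is extracted, and the identified optional objects are selected once that component has remained at or above the angle θth for the time length ΔT1. The quaternion input, the Euler decomposition, the concrete threshold values, and all identifiers are assumptions made only for this sketch.

# Illustrative sketch only (not the modification's implementation): the orientation
# condition is treated as satisfied once the roll component of the change dB in
# orientation (rotation about the XS-axis) has stayed at or above THETA_TH for
# DELTA_T1. The quaternion input, the ZYX Euler decomposition, the concrete values,
# and the use of abs() to accept either roll direction are assumptions.
import math

THETA_TH = math.radians(30.0)   # assumed threshold angle θth
DELTA_T1 = 2.0                  # assumed required duration in seconds

def roll_angle(q):
    """Roll (rotation about the XS-axis) of a quaternion (w, x, y, z)."""
    w, x, y, z = q
    return math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))

class RollSelectCondition:
    def __init__(self):
        self.held = 0.0   # how long the roll rotation state has been maintained

    def update(self, q, dt):
        """Returns True when the identified optional objects should be selected."""
        if abs(roll_angle(q)) >= THETA_TH:
            self.held += dt
        else:
            self.held = 0.0          # leaving the roll rotation state resets the timer
        return self.held >= DELTA_T1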
In the modification, the display controller111causes the display12to display visible images GS shown inFIGS.7to9at times t1to t4, and after that causes the display12to display a visible image GS shown inFIG.21in the selection-decided period from time tb1to time tb5. In the modification, in the selection-decided period, when the HMD1rotates by the angle θth or more from the reference orientation in the roll direction QX, the camera coordinate system ΣCis also rotated by the angle θth or more from the reference camera coordinate system ΣC0in the roll direction QCX. For this reason, in the modification, in the selection-decided period, as shown inFIG.21, the display controller111displays the virtual space SP-V on the display12in a mode in which the virtual space SP-V is tilted around the XC-axis by the angle θth or more. In other words, in the modification, the user U wearing the HMD1maintains, for the time length ΔT1or more, the state in which the virtual space SP-V tilted by the angle θth or more around the XC-axis is displayed on the display12, thereby enabling selection of one or more optional objects CB that are identified as optional object subjects. In the modification, the enter button Bt may not be provided in the virtual space SP-V. Furthermore, in the modification, in the selection-decided period, the gauge image GB may be displayed in the virtual space SP-V. In summary, according to the modification, the user U wearing the HMD1is able to select one or more optional objects CB that are identified as optional object subjects by tilting the HMD1in the roll direction QX. For this reason, according to the modification, the user U wearing the HMD1is able to select one or more optional objects CB with an easy input operation, as compared to Reference Example 2, for example. Furthermore, according to the modification, the enter button Bt, which is used to select one or more optional objects CB that are identified as optional object subjects in the visible image GS, is not required to be displayed. For this reason, according to the modification, a display of the visible image GS can be simplified, as compared to a mode in which the enter button Bt is displayed in the visible image GS. Modification 6 In the foregoing embodiment and modifications 1 to 4, when a virtual straight line LC and an optional object CB intersect each other, the identifier113identifies the optional object CB as an optional object subject. However, the present invention is not limited to such an aspect. For example, even when the virtual straight line LC and the optional object CB intersect each other, the optional object CB is not required to be identified as the optional object subject if a predetermined specific-avoidable-condition regarding the orientation information B is satisfied. In the modification, even when the virtual straight line LC and the optional object CB intersect each other, the identifier113does not identify the optional object CB as the optional object subject if the HMD1is in the roll rotation state in which it rotates by the angle θth or more from the reference orientation in the roll direction QX around the XS-axis. In other words, in the modification, the specific-avoidable-condition refers to a condition under which the HMD1is in the roll rotation state. In the modification, the orientation of the HMD1in the roll rotation state is an example of a "predetermined orientation." FIGS.22and23each show an example of a visible image GS according to the modification.
In the modification, the display controller111causes the display12to display visible images GS shown inFIGS.7to8at times t1to t2, and after that causes the display12to display visible images GS shown inFIGS.22and23at times t3to t4. As shown inFIG.22, at time t3, the user U wearing the HMD1tilts the HMD1by the angle θth or more in the roll direction QX such that the HMD1is in the roll rotation state. For this reason, even though a virtual straight line LC[t3] and an optional object CB[3] intersect each other at time t3, the identifier113avoids identification of the optional object CB[3] as the optional object subject. After that, as shown inFIG.23, at time t4, the user U wearing the HMD1is in a state (hereafter referred to as a "non-roll rotation state") in which the HMD1is tilted by an angle that is less than the angle θth in the roll direction QX. For this reason, at time t4, the identifier113identifies an optional object CB[6] that intersects with a virtual straight line LC[t4] as an optional object subject. In the foregoing, in the modification, even when a virtual straight line LC and an optional object CB intersect each other, the user U wearing the HMD1can operate the orientation of the HMD1such that the HMD1is in the roll rotation state, thereby avoiding identification of the optional object CB as the optional object subject. For this reason, in the modification, for example, even when a plurality of optional objects CB[1] to CB[K] are closely arranged in the virtual space SP-V, it is possible to easily avoid incorrect identification of an optional object CB that is not intended to be selected by the user U wearing the HMD1, as the optional object subject. In the modification, when the HMD1is in the roll rotation state, the identifier113avoids identification of an optional object CB as an optional object subject. However, the present invention is not limited to such an aspect. For example, when the HMD1is in the non-roll rotation state, the identifier113may avoid identification of the optional object CB as the optional object subject. Conversely, when the HMD1is in the roll rotation state, the identifier113may identify the optional object CB as the optional object subject. Modification 7 In the foregoing embodiment and modifications 1 to 5, when a virtual straight line LC and an optional object CB intersect each other, the identifier113identifies the optional object CB. However, the present invention is not limited to such an aspect. The identifier113may identify the optional object CB as an optional object subject only when a trajectory PL of an intersection of the virtual straight line LC with the optional object CB satisfies a predetermined trajectory condition. Here, the trajectory condition may be freely selected as long as it is a geometric condition with respect to the trajectory PL. FIGS.24and25are each an explanatory diagram for a trajectory condition.FIGS.24and25each show an exemplary case in which an intersection of a virtual straight line LC with an optional object CB[k] starts to be generated at time tk1, and the intersection of the virtual straight line LC with the optional object CB[k] comes to an end at time tk2. InFIGS.24and25, a time between time tk1and time tk2is described as time tk.
Furthermore, inFIGS.24and25, a trajectory PL[tk1] denotes a part of the trajectory PL at a predetermined time included in time tk1, a trajectory PL[tk2] denotes a part of the trajectory PL at a predetermined time included in time tk2, a trajectory PL[tk1][tk] denotes a part of the trajectory PL from time tk1to time tk, and a trajectory PL[tk][tk2] denotes a part of the trajectory PL from time tk to time tk2. The trajectory condition will be described below with reference toFIGS.24and25. In the modification, as shown inFIG.24, the trajectory condition may refer to a condition under which an angle θ12is less than or equal to a predetermined reference angle (e.g., 90 degrees), for example. The angle θ12refers to an angle between a unit vector representative of a direction in which the trajectory PL[tk1] changes and a unit vector representative of a direction opposite to a direction in which the trajectory PL[tk2] changes. Alternatively, in the modification, as shown inFIG.24, the trajectory condition may refer to a condition under which a distance DC between the virtual straight line LC[tk1] at time tk1and the virtual straight line LC[tk2] at time tk2is less than or equal to a predetermined reference distance, for example. Alternatively, in the modification, as shown inFIG.24, the trajectory condition may refer to a condition under which the following (b-i) and (b-ii) are positioned on the same side, e.g., on the +Zv side, of sides constituting the optional object CB: (b-i) an intersection of the virtual straight line LC[tk1] with the optional object CB[k] at the time tk1; and (b-ii) an intersection of the virtual straight line LC[tk2] with the optional object CB[k] at the time tk2, for example. Alternatively, in the modification, as shown inFIG.25, the trajectory condition may refer to a condition under which the trajectory PL[tk1][tk] and the trajectory PL[tk][tk2] intersect each other. In summary, according to the modification, the optional object CB is identified as an optional object subject only when a trajectory PL of an intersection of a virtual straight line LC with the optional object CB satisfies the predetermined trajectory condition. For this reason, according to the modification, it is possible to reduce the probability of incorrect selection of the optional object CB, as compared to a case in which an optional object CB is identified at a time at which the optional object CB intersects with the virtual straight line LC, for example. Modification 8 In the foregoing embodiment and modifications 1 to 7, the orientation information B indicates a detection result of a change in orientation of the terminal apparatus10. However, the present invention is not limited to such an aspect. The orientation information B may indicate the orientation of the terminal apparatus10viewed by a coordinate system fixed on the ground, for example. In this case, the orientation information generator14may include either an acceleration sensor or a geomagnetic sensor, or may include both, for example. Alternatively, in this case, the orientation information B may refer to information on an image output from a camera that is provided outside of the HMD1and captures the HMD1, for example. Modification 9 In the foregoing embodiment and modifications 1 to 8, the information processing apparatus is provided in the HMD1. However, the information processing apparatus may be provided separately from the HMD1.
FIG.26is a block diagram for an example configuration of an information processing system SYS according to the modification. As shown inFIG.26, the information processing system SYS includes an information processing apparatus20and a Head Mounted Display1A that is communicable with the information processing apparatus20. Among these components, the information processing apparatus20may include, for example, the controller11, the operator13, and the storage15. The Head Mounted Display1A may include, in addition to the display12and the orientation information generator14, an operator31that receives an input operation carried out by the user U wearing the Head Mounted Display1A, and a storage32that stores therein various information. Modification 10 In the foregoing embodiment and modifications 1 to 9, the virtual straight line LC represents the optical axis of the virtual camera CM. However, the present invention is not limited to such an aspect. For example, when the HMD1has an eye tracking feature of measuring a direction of the line of sight of the user U wearing it, the line of sight measured by the eye tracking feature may be used as the virtual straight line LC. C. Appendixes From the above description, the present invention can be understood, for example, as follows. In order to clarify each aspect, reference numerals in the drawings are appended below in parentheses for convenience. However, the present invention is not limited to the drawings. Appendix 1 A recording medium according to an aspect of the present invention is a non-transitory computer readable recording medium (e.g., a memory1001) having recorded therein a program (e.g., a control program PRG), the program causing a processor (e.g., a processor1000) of an information processing apparatus (e.g., a terminal apparatus10) to function as: a display controller (e.g., a display controller111) configured to cause a display (e.g., a display12) provided on a Head Mounted Display (e.g., the HMD1) to display a stereoscopic image to which binocular parallax is applied, the stereoscopic image being an image of a virtual space (e.g., a virtual space SP-V) in which optional objects (e.g., optional objects CB) are disposed, and the virtual space being captured by a virtual camera (e.g., a virtual camera CM); an acquirer (e.g., an orientation information acquirer112) configured to acquire orientation information (e.g., orientation information B) on an orientation of the Head Mounted Display; an identifier (e.g., an identifier113) configured to, when a predetermined positional relationship (e.g., a relationship of an intersection) is established between a virtual line (e.g., a virtual straight line LC) and one optional object from among the optional objects, the virtual line having a direction according to the orientation information and intersecting with the virtual camera, identify the one optional object; and a selector (e.g., a selector114) configured to, when one or more optional objects are identified by the identifier and a predetermined condition (e.g., a selection condition) relating to the orientation information is satisfied, select the one or more optional objects. According to the aspect, when the predetermined positional relationship is established between an optional object and a virtual line having a direction according to the orientation information, the optional object is identified.
Then, according to the aspect, when one or more optional objects are identified, and the predetermined condition relating to the orientation information is satisfied, the identified one or more optional objects are selected. In other words, according to the aspect, identification of the optional objects and selection of the identified optional objects are carried out on the basis of the orientation information relating to the orientation of the Head Mounted Display. For this reason, according to the aspect, the user wearing the Head Mounted Display is able to select one or more optional objects, based on the orientation of the Head Mounted Display. For this reason, according to the aspect, on the basis of the orientation information, it is possible to carry out both of the following: an input operation to change the orientation of the virtual camera; and an input operation other than the input operation to change the orientation of the virtual camera. In other words, according to the aspect, it is possible to input various instructions by changing the orientation of the Head Mounted Display. Now, a case (hereafter referred to as "Reference Example 1") is assumed as follows: when the predetermined positional relationship is established between a virtual line and one optional object from among optional objects, the virtual line having a direction according to the orientation information, the one optional object is selected. However, in the Reference Example 1, when an appropriate input operation relating to the orientation of the Head Mounted Display is not carried out, incorrect selection of an optional object that differs from the desired optional object may be carried out. Conversely, according to the aspect, when the desired optional object is identified, the optional object is selected. For this reason, according to the aspect, the probability of incorrect selection of an optional object that differs from the desired optional object is reduced, as compared to the Reference Example 1. In order to avoid incorrect selection of an optional object that differs from the desired optional object, another case (hereafter referred to as "Reference Example 2") is assumed as follows: when the predetermined positional relationship is maintained between the virtual line and the one optional object for the predetermined time length, the one optional object is selected. However, in Reference Example 2, as the number of the optional objects to be selected increases, the time required for selection increases, whereby a burden on the user wearing the Head Mounted Display increases. Conversely, according to the aspect, even when two or more optional objects are selected from among the optional objects, the user wearing the Head Mounted Display changes the orientation of the Head Mounted Display such that the predetermined positional relationship is established between the virtual line and each of the two or more optional objects, and after that changes the orientation of the Head Mounted Display such that the predetermined condition is satisfied, thereby enabling selection of the two or more optional objects. For this reason, according to the aspect, prolongation of the time required for selection is reduced when the number of the optional objects to be selected is greater, as compared to Reference Example 2.
For this reason, according to the aspect, even when the number of the optional objects to be selected is greater, a burden on the user wearing the Head Mounted Display is reduced, as compared to Reference Example 2. In the above aspect, the “optional object” may be a virtual object that exists in the virtual space or may be a specific region that exists in the virtual space, for example. In the case in which the “optional object” may be a specific region that exists in the virtual space, the region may be a region that is separated from the surroundings thereof by color or pattern. Here, the “specific region that exists in the virtual space” may be one having one dimensional spread in the virtual space, such as a straight line, a curve and a line segment, may be one having two dimensional spread in the virtual space, such as a square, a triangle and a circle, or may be one having three dimensional spread in the virtual space, such as a solid and a curved-surface solid. The “optional objects” may be regions that are provided on a surface of a display object that is disposed in the virtual space. In the above aspect, the “virtual camera” may include a first virtual camera that captures an image of the virtual space and a second virtual camera that captures an image of the virtual space at a position that differs from that of the first virtual camera, for example. The “stereoscopic image” may include the following: an image for left eye, which is an image of the virtual space captured by the first virtual camera and is viewed by the user's left eye; and an image for right eye, which is an image of the virtual space captured by the second virtual camera and is viewed by the user's right eye, for example. In the above aspect, the “Head Mounted Display” may be a display apparatus that is wearable on the user's head, for example. Specifically, the “Head Mounted Display” may be a goggle-type or eyeglass-type display apparatus that is wearable on the user's head. The “Head Mounted Display” may include wearable equipment that is wearable on the user's head, and a portable display apparatus, such as a smartphone, that is mounted on the wearable equipment. In the above aspect, the “orientation of the Head Mounted Display” may be a direction of the Head Mounted Display, or may be an inclination of the Head Mounted Display, or may be a concept including both the orientation and the inclination of the Head Mounted Display, for example. Here, the “direction of the Head Mounted Display” may be a direction in which the Head Mounted Display orientates in the real space, or may be an angle between the reference direction of the Head Mounted Display and a direction of the magnetic north, for example. The “inclination of the Head Mounted Display” may be an angle between the reference direction of the Head Mounted Display and the vertical direction, for example. In the above aspect, the “orientation information” may indicate the orientation of the Head Mounted Display or may indicate a change in orientation of the Head Mounted Display, for example. In the above aspect, the “acquirer” may acquire the orientation information from the Head Mounted Display, or may acquire the orientation information from an imaging apparatus that captures an image of the Head Mounted Display. 
When the acquirer acquires the orientation information from the Head Mounted Display, the Head Mounted Display may include a sensor for detecting information indicative of a change in orientation of the Head Mounted Display, or may include a sensor for detecting information indicative of the orientation of the Head Mounted Display. Here, the “sensor for detecting information indicative of a change in orientation of the Head Mounted Display” may be an angular velocity sensor, for example. Alternatively, the “sensor for detecting information indicative of the orientation of the Head Mounted Display” may be one or both of a geomagnetic sensor and an angular velocity sensor. When the acquirer acquires the orientation information from an imaging apparatus that captures an image of the Head Mounted Display, the orientation information may be an image indicating a result of capturing an image of the Head Mounted Display by the imaging apparatus. In the above aspect, the “virtual line” may be a straight line that extends in the direction in which the virtual camera orientates in the virtual space, for example. Specifically, the “virtual line” may be the optical axis of the virtual camera, for example. Alternatively, the “virtual line” may be a straight line that extends in a sight direction of the user wearing the Head Mounted Display, for example. In this case, the Head Mounted Display may have an eye tracking feature that detects a sight direction of the user wearing it. In the above aspect, the “predetermined positional relationship is established between the virtual line and the optional object” may refer to a case in which the virtual line and the optional object intersect each other, for example. Alternatively, the “predetermined positional relationship is established between the virtual line and the optional object” may refer to a case in which a distance between the virtual line and the optional object is less than or equal to a predetermined distance, for example. In the above aspect, the “predetermined condition” may be a condition relating to a virtual line having a direction according to the orientation information, may be a condition relating to an orientation of the Head Mounted Display indicative of the orientation information, or may be a condition relating to a change in orientation of the Head Mounted Display indicative of the orientation information. Here, the “condition relating to a virtual line having a direction according to the orientation information” may be a condition under which a predetermined positional relationship is established between a virtual line having a direction according to the orientation information and a predetermined object that is disposed in the virtual space, the predetermined object being a virtual object, for example. Alternatively, the “condition relating to a virtual line having a direction according to the orientation information” may be a condition under which the predetermined positional relationship is maintained between a virtual line having a direction according to the orientation information and the predetermined object that is disposed in the virtual space, the predetermined object being a virtual object, in a first period having the predetermined time length, for example.
Alternatively, the “condition relating to a virtual line having a direction according to the orientation information” may be a condition under which the predetermined positional relationship is maintained between the virtual line having a direction according to the orientation information and one optional object that has been identified by the identifier, in a first period having the predetermined time length, for example. The “condition relating to an orientation of the Head Mounted Display indicative of the orientation information” may be a condition under which the orientation of the Head Mounted Display indicative of the orientation information is an orientation in which the Head Mounted Display rotates by the predetermined angle or more from a reference orientation around the predetermined reference axis, for example. Alternatively, the “condition relating to an orientation of the Head Mounted Display indicative of the orientation information” may be a condition under which the orientation of the Head Mounted Display indicative of the orientation information rotates by the predetermined angle or more around the predetermined reference axis, for example. Appendix 2 The recording medium according to another aspect of the present invention is a recording medium according to Appendix 1, when the predetermined positional relationship is established between the virtual line and a predetermined object (e.g., an enter button) that is disposed in the virtual space, the selector is configured to select the one or more optional objects. According to the aspect, in which the predetermined positional relationship is established between the virtual line and the predetermined object, the optional object identified by the identifier is selected. For this reason, according to the aspect, probability of incorrect selection of an optional object that differs from the optional object to be selected is reduced, as compared to the Reference Example 1. Furthermore, according to the aspect, even when two or more optional objects are selected from among the optional objects, the user wearing the Head Mounted Display changes the orientation of the Head Mounted Display such that the predetermined positional relationship is established between the virtual line and each of the two or more optional objects, and after that the user changes the orientation of the Head Mounted Display such that the predetermined positional relationship is established between the virtual line and the predetermined object, and thereby enabling selection of the two or more optional objects. For this reason, according to the aspect, prolongation of time required for selection is reduced when the number of the optional objects to be selected is greater, as compared to Reference Example 2. In the above aspect, the “predetermined object” may be a virtual object that exists in the virtual space or may be a specific region that exists in the virtual space, for example. Appendix 3 The recording medium according to another aspect of the present invention is a recording medium according to Appendix 1, when the orientation of the Head Mounted Display indicated by the orientation information is an orientation in which the Head Mounted Display rotates by a predetermined angle (e.g., an angle θth) or more from a reference orientation (e.g., an orientation of HMD1at the reference time t0) around a predetermined reference axis (e.g., the XS-axis), the selector is configured to select the one or more optional objects. 
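Orientation information of the kind described above may come from an angular velocity sensor, and the condition of Appendix 3 compares a rotation angle around a reference axis with a predetermined angle. The sketch below is a hedged illustration of both steps: integrating gyroscope readings into an orientation, deriving the direction of the virtual line, and computing the rotation angle about an assumed forward axis. The explicit-Euler integration, the convention that the forward axis is -Z, and all function names are illustrative assumptions rather than details given in the text.

```python
import numpy as np

def integrate_angular_velocity(q, omega, dt):
    """One explicit-Euler step turning angular-velocity readings (rad/s, body
    frame) into an updated orientation; q is a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    ox, oy, oz = omega
    dq = 0.5 * np.array([
        -x * ox - y * oy - z * oz,
         w * ox + y * oz - z * oy,
         w * oy - x * oz + z * ox,
         w * oz + x * oy - y * ox,
    ])
    q = np.asarray(q, dtype=float) + dq * dt
    return q / np.linalg.norm(q)           # renormalize to keep a unit quaternion

def virtual_line_direction(q):
    """Rotate the assumed forward axis (-Z) by q to obtain the direction of the
    virtual line in the virtual space."""
    w, x, y, z = q
    # negated third column of the rotation matrix corresponding to q
    return -np.array([2 * (x * z + w * y), 2 * (y * z - w * x), 1 - 2 * (x * x + y * y)])

def rotation_about_forward_axis(q):
    """Angle of rotation about the assumed forward (Z) axis; under this sketch's
    convention it plays the role of the roll angle compared against the
    predetermined angle in the selection condition."""
    w, x, y, z = q
    return np.arctan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
```

A caller would feed successive sensor samples into integrate_angular_velocity and then pass the resulting direction and angle to whatever identification and selection logic is in use.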
According to the aspect, when the orientation of the Head Mounted Display rotates by the predetermined angle or more around the predetermined reference axis, the optional object that has been identified by the identifier is selected. For this reason, according to the aspect, probability of incorrect selection of an optional object that differs from the optional object to be selected is reduced, as compared to the Reference Example 1. Furthermore, according to the aspect, even when two or more optional objects are selected from among the optional objects, the user wearing the Head Mounted Display changes the orientation of the Head Mounted Display such that the predetermined positional relationship is established between the virtual line and each of the two or more optional objects, and after that the user changes the orientation of the Head Mounted Display such that the Head Mounted Display rotates by the predetermined angle or more around the predetermined reference axis, and thereby enabling selection of the two or more optional objects. For this reason, according to the aspect, prolongation of time required for selection is reduced when the number of the optional objects to be selected is greater, as compared to Reference Example 2. In the above aspect, the “predetermined reference axis” may be a straight line that extends to the predetermined direction when viewed by the user wearing the Head Mounted Display, for example. Specifically, the “predetermined reference axis” may be, when viewed by the user wearing the Head Mounted Display, a straight line that extends a direction in front of the user, for example. In other words, in the above aspect, the “orientation in which the Head Mounted Display rotates by the predetermined angle or more from the predetermined reference orientation around the predetermined reference axis” may be an orientation in which the Head Mounted Display rotates by the predetermined angle or more in the roll direction, when viewed by the user wearing the Head Mounted Display, for example. Appendix 4 The recording medium according to another aspect of the present invention is a recording medium according to Appendix 1, when for a first period, the predetermined positional relationship is maintained between the virtual line and a predetermined object that is disposed in the virtual space, the selector is configured to select the one or more optional objects. According to the aspect, when for the first period, the predetermined positional relationship is maintained between the virtual line and the predetermined object, the optional object that has been identified by the identifier is selected. For this reason, according to the aspect, probability of incorrect selection of an optional object that differs from the optional object to be selected is reduced, as compared to the Reference Example 1. Furthermore, according to the aspect, even when two or more optional objects are selected from among the optional objects, the user wearing the Head Mounted Display changes the orientation of the Head Mounted Display such that the predetermined positional relationship is established between the virtual line and each of the two or more optional objects, and after that the user controls the orientation of the Head Mounted Display such that for the first period, the predetermined positional relationship is established between the virtual line and the predetermined object, thereby enabling selection of the two or more optional objects. 
For this reason, according to the aspect, prolongation of time required for selection is reduced when the number of the optional objects to be selected is greater, as compared to Reference Example 2. In the above aspect, the “first period” may have the predetermined time length, and may be started at time at which the predetermined positional relationship is established between the virtual line and the predetermined object, for example. Appendix 5 The recording medium according to another aspect of the present invention is a recording medium according to Appendix 1, when for a first period (e.g., a selection-decided period), the predetermined positional relationship is maintained between the virtual line and the one optional object, the selector is configured to select the one or more optional objects. According to the aspect, when for the first period, the predetermined positional relationship is maintained between the virtual line and one optional object, the optional object that has been identified by the identifier is selected. For this reason, according to the aspect, probability of incorrect selection of an optional object that differs from the optional object to be selected is reduced, as compared to the Reference Example 1. Furthermore, according to the aspect, even when two or more optional objects are selected from among the optional objects, the user wearing the Head Mounted Display changes the orientation of the Head Mounted Display such that the predetermined positional relationship is established between the virtual line and each of the two or more optional objects, and after that the user controls the orientation of the Head Mounted Display such that for the first period, the predetermined positional relationship is established between the virtual line and one optional object from among two or more optional objects, and thereby enabling selection of the two or more optional objects. For this reason, according to the aspect, prolongation of time required for selection is reduced when the number of the optional objects to be selected is greater, as compared to Reference Example 2. In the above aspect, the “first period” may have the predetermined time length, and may be started at time at which the predetermined positional relationship is established between the virtual line and the one optional object, for example. Appendix 6 The recording medium according to another aspect of the present invention is a recording medium according to Appendixes 1 or 5, upon a condition under which M (the M represents a natural number that is equal to or greater than one) optional objects are to be selected from among the optional objects, when for a first period, the predetermined positional relationship is maintained between the virtual line and an optional object that is identified by the identifier at the Mth time from among the optional objects, the selector is configured to select the M optional objects identified by the identifier. According to the aspect, when for the first period, the predetermined positional relationship is maintained between the virtual line and an optional object that is identified by the identifier at the Mth time, the optional object that has been identified by the identifier is selected. For this reason, according to the aspect, probability of incorrect selection of an optional object that differs from the optional object to be selected is reduced, as compared to the Reference Example 1. 
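Appendixes 4 to 6 above all trigger selection once the predetermined positional relationship has been maintained for a first period. A minimal dwell-timer sketch of that trigger follows; the timer logic, the one-second default period, and the name DwellSelector are assumptions introduced for illustration only, not details taken from the embodiment.

```python
import time

class DwellSelector:
    """Selects the already-identified optional objects once the virtual line
    has stayed on a given target (a predetermined object, the one optional
    object, or the Mth identified object, depending on the aspect) for a
    'first period'."""

    def __init__(self, first_period_s=1.0):
        self.first_period = first_period_s
        self._dwell_started = None          # time at which the relationship was established

    def update(self, relationship_holds, identified, now=None):
        now = time.monotonic() if now is None else now
        if not relationship_holds:
            self._dwell_started = None      # relationship broken; the period restarts later
            return []
        if self._dwell_started is None:
            self._dwell_started = now       # start of the first period
        if identified and now - self._dwell_started >= self.first_period:
            self._dwell_started = None
            return list(identified)         # all identified objects selected together
        return []
```

Called once per frame, update returns an empty list until the relationship has been held for the full first period, at which point it returns every object identified so far.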
Furthermore, according to the aspect, even when two or more optional objects are selected from among the optional objects, the user wearing the Head Mounted Display changes the orientation of the Head Mounted Display such that the predetermined positional relationship is established between the virtual line and each of the two or more optional objects, and after that the user controls the orientation of the Head Mounted Display such that for the first period, the predetermined positional relationship is established between the virtual line and the optional object that has been identified last (at the Mth time) from among the two or more optional objects, and thereby enabling selection of the two or more optional objects. For this reason, according to the aspect, prolongation of time required for selection is reduced when the number of the optional objects to be selected is greater, as compared to Reference Example 2. In the above aspect, the “first period” may have the predetermined time length, and may be started at a time at which the predetermined positional relationship is established between the virtual line and the optional object that is identified at the Mth time, for example. Appendix 7 The recording medium according to another aspect of the present invention is a recording medium according to any one of Appendixes 4 to 6, when the predetermined positional relationship is established between the virtual line and the one optional object, or when for a reference period (e.g., a time length ΔT2) having a time length that is shorter than that of the first period, the predetermined positional relationship is maintained between the virtual line and the one optional object, the identifier is configured to identify the one optional object. According to the aspect, in the reference period having the time length that is shorter than that of the first period, when the predetermined positional relationship is established between a virtual line and an optional object, the optional object is identified. For this reason, according to the aspect, time required to identify an optional object is made shorter, as compared to an aspect in which when the predetermined positional relationship is maintained between a virtual line and an optional object for the first period, the optional object is identified, for example. Appendix 8 The recording medium according to another aspect of the present invention is a recording medium according to any one of Appendixes 1 to 7, when the predetermined positional relationship is established between the virtual line and the one optional object, and the orientation of the Head Mounted Display indicated by the orientation information is a predetermined orientation (e.g., an orientation in the roll rotation state), the identifier is configured to identify the one optional object, and when the predetermined positional relationship is established between the virtual line and the one optional object, and the orientation of the Head Mounted Display indicated by the orientation information is not the predetermined orientation, the identifier is configured not to identify the one optional object. According to the aspect, only when the orientation of the Head Mounted Display is the predetermined orientation, the optional object is identified.
For this reason, according to the aspect, it is possible to reduce probability of incorrect selection of an optional object that differs from the desired optional object, as compared to an aspect in which an optional object is identified only on the basis of the positional relationship between the virtual line and the optional object with no consideration of the orientation of the Head Mounted Display, for example. In the above aspect, the “only when the orientation of the Head Mounted Display is the predetermined orientation” may refer to a case in which the Head Mounted Display is in an orientation in which it rotates by the predetermined angle or more from the reference orientation around the predetermined reference axis, for example. Conversely, the “only when the orientation of the Head Mounted Display is the predetermined orientation” may refer to a case in which the Head Mounted Display is in an orientation in which it rotates by less than the predetermined angle from the reference orientation around the predetermined reference axis, for example. Appendix 9 The recording medium according to another aspect of the present invention is a recording medium according to any one of Appendixes 1 to 8, the display controller is configured to differentiate between: a display mode, in the display, of another optional object that is not yet identified by the identifier from among the optional objects in a second period (e.g., a pre-selection period); and a display mode, in the display, of the another optional object before a start of the second period, the second period being a period after an optional object is identified for the first time by the identifier from among the optional objects until the one or more optional objects are selected by the selector. According to the aspect, a display mode of another optional object in the second period is differentiated from a display mode of the other optional object in the period before the start of the second period. For this reason, according to the aspect, the user wearing the Head Mounted Display is able to view with ease which optional object is being identified from among the optional objects, as compared to a case in which a display mode of another optional object in the second period is the same as that in the period before the start of the second period, for example. In the above aspect, the “display mode” refers to, for example, a mode that is distinguished from another mode with the sense of sight. Specifically, the “display mode” may be a concept including some or all of shape, pattern, color, size, brightness, and transparency, for example. Appendix 10 The recording medium according to another aspect of the present invention is a recording medium according to any one of Appendixes 1 to 9, the display controller is configured to differentiate between: a display mode, in the display, of the one optional object after the one optional object is identified by the identifier; and a display mode, in the display, of the one optional object that is not yet identified by the identifier. According to the aspect, a display mode of one optional object in a period after the one optional object has been identified is differentiated from a display mode of one optional object in a period before the one optional object is identified.
For this reason, according to the aspect, the user wearing the Head Mounted Display is able to view with ease which optional object is being identified from among the optional objects, as compared to a case in which a display mode of one optional object in a period after the one optional object has been identified is the same as that in a period before the one optional object is identified, for example. Appendix 11 The recording medium according to another aspect of the present invention is a recording medium according to any one of Appendixes 4 to 7, the display controller is configured to differentiate between: a display mode, in the display, of at least some optional objects from among the optional objects in a period before a start of the first period; and a display mode, in the display, of the at least some optional objects in a part or the entirety of the first period. According to the aspect, a display mode of some optional objects in the first period is differentiated from a display mode of the some optional objects in the period before the start of the first period. For this reason, according to the aspect, the user wearing the Head Mounted Display is able to recognize the start of the first period in which optional objects are selected, as compared to a case in which a display mode of some optional objects in the first period is the same as that in the period before the start of the first period, for example. In the above aspect, the “at least some optional objects” may be one optional object that is identified by the identifier from among optional objects, may be another optional object that is not yet identified by the identifier from among the optional objects, or may include both the one object and the other object, for example. Appendix 12 An information processing apparatus according to an aspect of the present invention includes: a display controller configured to cause a display provided on a Head Mounted Display to display a stereoscopic image to which binocular parallax is applied, the stereoscopic image being an image of a virtual space in which optional objects are disposed, and the virtual space being captured by a virtual camera; an acquirer configured to acquire orientation information on an orientation of the Head Mounted Display; an identifier configured to, when a predetermined positional relationship is established between a virtual line and one optional object from among the optional objects, the virtual line having a direction according to the orientation information and intersecting with the virtual camera, identify the one optional object; and a selector configured to, when one or more optional objects are identified by the identifier and a predetermined condition relating to the orientation information is satisfied, select the one or more optional objects. According to the aspect, identification of the optional objects and selection of the identified optional objects are carried out on the basis of the orientation information relating to the orientation of the Head Mounted Display. For this reason, according to the aspect, the user wearing the Head Mounted Display is able to carry out an input operation other than an input operation to change the orientation of the virtual camera on the basis of the orientation of the Head Mounted Display. In other words, according to the aspect, it is possible to input various instructions by changing the orientation of the Head Mounted Display.
Furthermore, according to the aspect, when the desired optional object is identified, the optional object is selected. For this reason, according to the aspect, probability of incorrect selection of an optional object that differs from the desired optional object is reduced, as compared to the Reference Example 1. Furthermore, according to the aspect, even when two or more optional objects are selected from among the optional objects, it is possible to select the two or more optional objects by changing the orientation of the Head Mounted Display. For this reason, according to the aspect, prolongation of time required for selection is reduced when the number of the optional objects to be selected is greater, as compared to Reference Example 2. Appendix 13 A Head Mounted Display according to an aspect of the present invention includes: a display; and an information processing apparatus, in which the information processing apparatus includes: a display controller configured to cause a display provided on a Head Mounted Display to display a stereoscopic image to which binocular parallax is applied, the stereoscopic image being an image of a virtual space in which optional objects are disposed, and the virtual space being captured by a virtual camera; an acquirer configured to acquire orientation information on an orientation of the Head Mounted Display; an identifier configured to, when a predetermined positional relationship is established between a virtual line and one optional object from among the optional objects, the virtual line having a direction according to the orientation information and intersecting with the virtual camera, identify the one optional object; and a selector configured to, when one or more optional objects are identified by the identifier and a predetermined condition relating to the orientation information is satisfied, select the one or more optional objects. According to the aspect, identification of the optional objects and selection of the identified optional objects are carried out on the basis of the orientation information relating to the orientation of the Head Mounted Display. For this reason, according to the aspect, the user wearing the Head Mounted Display is able to carry out an input operation other than an input operation to change the orientation of the virtual camera on the basis of the orientation of the Head Mounted Display. In other words, according to the aspect, it is possible to input various instructions by changing the orientation of the Head Mounted Display. Furthermore, according to the aspect, when the desired optional object is identified, the optional object is selected. For this reason, according to the aspect, a possibility of incorrect selection of an optional object that differs from the desired optional object is reduced, as compared to the Reference Example 1. Furthermore, according to the aspect, even when two or more optional objects are selected from among the optional objects, it is possible to select the two or more optional objects by changing the orientation of the Head Mounted Display. For this reason, according to the aspect, prolongation of time required for selection is reduced when the number of the optional objects to be selected is greater, as compared to Reference Example 2.
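Appendixes 9 to 11 above describe differentiating the display mode of optional objects before identification, after identification, and during the first and second periods. The sketch below illustrates one way such a per-object display-mode decision could look; the specific colors and brightness values are placeholders chosen for the example, not modes prescribed by the embodiments.

```python
def display_mode(obj_name, identified, second_period_active, first_period_active):
    """Chooses a display mode for one optional object. The concrete modes are
    placeholders; the point is only that identified objects, not-yet-identified
    objects during the second period, and objects during the first period are
    rendered differently from their appearance before identification starts."""
    if obj_name in identified:
        return {"color": "red", "brightness": 1.0}      # after identification (Appendix 10)
    if first_period_active:
        return {"color": "yellow", "brightness": 0.9}   # part or all of the first period (Appendix 11)
    if second_period_active:
        return {"color": "blue", "brightness": 0.8}     # second period, not yet identified (Appendix 9)
    return {"color": "gray", "brightness": 0.6}         # before the second period starts
```

A renderer would call this function for every optional object each frame and apply the returned mode, so the user can see at a glance which objects have been identified and whether a selection period is running.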
Appendix 14 An information processing system according to an aspect of the present invention includes: a Head Mounted Display including a display; and an information processing apparatus, in which the information processing apparatus includes: a display controller configured to cause a display provided on a Head Mounted Display to display a stereoscopic image to which binocular parallax is applied, the stereoscopic image being an image of a virtual space in which optional objects are disposed, and the virtual space being captured by a virtual camera; an acquirer configured to acquire orientation information on an orientation of the Head Mounted Display; an identifier configured to, when a predetermined positional relationship is established between a virtual line and one optional object from among the optional objects, the virtual line having a direction according to the orientation information and intersecting with the virtual camera, identify the one optional object; and a selector configured to, when one or more optional objects are identified by the identifier and a predetermined condition relating to the orientation information is satisfied, select the one or more optional objects. According to the aspect, identification of the optional objects and selection of the identified optional objects are carried out on the basis of the orientation information relating to the orientation of the Head Mounted Display. For this reason, according to the aspect, the user wearing the Head Mounted Display is able to carry out an input operation other than an input operation to change the orientation of the virtual camera on the basis of the orientation of the Head Mounted Display. In other words, according to the aspect, it is possible to input various instructions by changing the orientation of the Head Mounted Display. Furthermore, according to the aspect, when the desired optional object is identified, the optional object is selected. For this reason, according to the aspect, a possibility of incorrect selection of an optional object that differs from the desired optional object is reduced, as compared to the Reference Example 1. Furthermore, according to the aspect, even when two or more optional objects are selected from among the optional objects, it is possible to select the two or more optional objects by changing the orientation of the Head Mounted Display. For this reason, according to the aspect, prolongation of time required for selection is reduced when the number of the optional objects to be selected is greater, as compared to Reference Example 2.
DESCRIPTION OF REFERENCE SIGNS
1 . . . Head Mounted Display; 10 . . . terminal apparatus; 11 . . . controller; 12 . . . display; 13 . . . operator; 14 . . . orientation information generator; 15 . . . storage; 111 . . . display controller; 112 . . . orientation information acquirer; 113 . . . identifier; 114 . . . selector; 1000 . . . processor; 1002 . . . angular velocity sensor
DETAILED DESCRIPTION
The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar parts. While several illustrative embodiments are described herein, modifications, adaptations and other implementations are possible. For example, substitutions, additions, or modifications may be made to the components illustrated in the drawings, and the illustrative methods described herein may be modified by substituting, reordering, removing, or adding steps to the disclosed methods. Accordingly, the following detailed description is not limited to the specific embodiments and examples, but is inclusive of general principles described herein and illustrated in the figures in addition to the general principles encompassed by the appended claims. The present disclosure is directed to systems and methods for providing users an extended reality environment. The term “extended reality environment,” which may also be referred to as “extended reality,” “extended reality space,” or “extended environment,” refers to all types of real-and-virtual combined environments and human-machine interactions at least partially generated by computer technology. The extended reality environment may be a completely simulated virtual environment or a combined real-and-virtual environment that a user may perceive from different perspectives. In some examples, the user may interact with elements of the extended reality environment. One non-limiting example of an extended reality environment may be a virtual reality environment, also known as “virtual reality” or a “virtual environment.” An immersive virtual reality environment may be a simulated non-physical environment which provides to the user the perception of being present in the virtual environment. Another non-limiting example of an extended reality environment may be an augmented reality environment, also known as “augmented reality” or “augmented environment.” An augmented reality environment may involve live direct or indirect view of a physical real-world environment that is enhanced with virtual computer-generated perceptual information, such as virtual objects that the user may interact with. Another non-limiting example of an extended reality environment is a mixed reality environment, also known as “mixed reality” or a “mixed environment.” A mixed reality environment may be a hybrid of physical real-world and virtual environments, in which physical and virtual objects may coexist and interact in real time. In some examples, both augmented reality environments and mixed reality environments may include a combination of real and virtual worlds, real-time interactions, and accurate 3D registrations of virtual and real objects. In some examples, both the augmented reality environment and the mixed reality environment may include constructive overlaid sensory information that may be added to the physical environment. In other examples, both the augmented reality environment and the mixed reality environment may include destructive virtual content that may mask at least part of the physical environment. In some embodiments, the systems and methods may provide the extended reality environment using an extended reality appliance. The term extended reality appliance may include any type of device or system that enables a user to perceive and/or interact with an extended reality environment.
The extended reality appliance may enable the user to perceive and/or interact with an extended reality environment through one or more sensory modalities. Some non-limiting examples of such sensory modalities may include visual, auditory, haptic, somatosensory, and olfactory signals or feedback. One example of the extended reality appliance is a virtual reality appliance that enables the user to perceive and/or interact with a virtual reality environment. Another example of the extended reality appliance is an augmented reality appliance that enables the user to perceive and/or interact with an augmented reality environment. Yet another example of the extended reality appliance is a mixed reality appliance that enables the user to perceive and/or interact with a mixed reality environment. Consistent with one aspect of the disclosure, the extended reality appliance may be a wearable device, such as a head-mounted device, for example, smart glasses, smart contact lenses, headsets, or any other device worn by a human for purposes of presenting an extended reality to the human. Other extended reality appliances may include a holographic projector or any other device or system capable of providing an augmented reality (AR), virtual reality (VR), mixed reality (MR), or any immersive experience. Typical components of wearable extended reality appliances may include at least one of: a stereoscopic head-mounted display, a stereoscopic head-mounted sound system, head-motion tracking sensors (such as gyroscopes, accelerometers, magnetometers, image sensors, structured light sensors, etc.), head mounted projectors, eye-tracking sensors, and/or additional components described below. Consistent with another aspect of the disclosure, the extended reality appliance may be a non-wearable extended reality appliance. Specifically, the non-wearable extended reality appliance may include multi-projected environment appliances. In some embodiments, an extended reality appliance may be configured to change the viewing perspective of the extended reality environment in response to movements of the user and in response to head movements of the user in particular. In one example, a wearable extended reality appliance may change the field-of-view of the extended reality environment in response to detecting head movements and determining a change of the head pose of the user. The change in the field-of-view of the extended reality environment may be achieved by changing the spatial orientation without changing the spatial position of the user in the extended reality environment. In another example, a non-wearable extended reality appliance may change the spatial position of the user in the extended reality environment in response to a change in the position of the user in the real world, for example, by changing the spatial position of the user in the extended reality environment without changing the direction of the field-of-view with respect to the spatial position.
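The two behaviors just described (a wearable appliance changing only the viewing direction, a non-wearable appliance changing only the spatial position) can be illustrated with the following hedged sketch. The class name, the -Z forward convention, and the two update methods are assumptions introduced for clarity, not an interface defined by the disclosure.

```python
import numpy as np

class ViewpointController:
    """Keeps a user's viewpoint in the extended reality environment as a
    spatial position plus a viewing direction, updated independently."""

    def __init__(self):
        self.position = np.zeros(3)                   # spatial position in the environment
        self.forward = np.array([0.0, 0.0, -1.0])     # viewing direction (assumed -Z forward)

    def on_head_rotation(self, rotation_matrix):
        # Wearable case: change only the field-of-view direction,
        # leaving the spatial position untouched.
        self.forward = np.asarray(rotation_matrix, dtype=float) @ np.array([0.0, 0.0, -1.0])

    def on_user_translation(self, delta_position):
        # Non-wearable case: change only the spatial position,
        # leaving the viewing direction untouched.
        self.position = self.position + np.asarray(delta_position, dtype=float)
```

Splitting the two updates in this way mirrors the distinction drawn above between orientation-driven and position-driven changes of the viewing perspective.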
According to some embodiments, an extended reality appliance may include a digital communication device configured to at least one of: receive virtual content data configured to enable a presentation of the virtual content, transmit virtual content for sharing with at least one external device, receive contextual data from at least one external device, transmit contextual data to at least one external device, transmit usage data indicative of usage of the extended reality appliance, and transmit data based on information captured using at least one sensor included in the extended reality appliance. In additional embodiments, the extended reality appliance may include memory for storing at least one of virtual data configured to enable a presentation of virtual content, contextual data, usage data indicative of usage of the extended reality appliance, sensor data based on information captured using at least one sensor included in the extended reality appliance, software instructions configured to cause a processing device to present the virtual content, software instructions configured to cause a processing device to collect and analyze the contextual data, software instructions configured to cause a processing device to collect and analyze the usage data, and software instructions configured to cause a processing device to collect and analyze the sensor data. In additional embodiments, the extended reality appliance may include a processing device configured to perform at least one of rendering of virtual content, collecting and analyzing contextual data, collecting and analyzing usage data, and collecting and analyzing sensor data. In additional embodiments, the extended reality appliance may include one or more sensors. The one or more sensors may include one or more image sensors (e.g., configured to capture images and/or videos of a user of the appliance or of an environment of the user), one or more motion sensors (such as an accelerometer, a gyroscope, a magnetometer, etc.), one or more positioning sensors (such as GPS, outdoor positioning sensor, indoor positioning sensor, etc.), one or more temperature sensors (e.g., configured to measure the temperature of at least part of the appliance and/or of the environment), one or more contact sensors, one or more proximity sensors (e.g., configured to detect whether the appliance is currently worn), one or more electrical impedance sensors (e.g., configured to measure electrical impedance of the user), one or more eye tracking sensors, such as gaze detectors, optical trackers, electric potential trackers (e.g., electrooculogram (EOG) sensors), video-based eye-trackers, infra-red/near infra-red sensors, passive light sensors, or any other technology capable of determining where a human is looking or gazing. In some embodiments, the systems and methods may use an input device to interact with the extended reality appliance. The term input device may include any physical device configured to receive input from a user or an environment of the user, and to provide the data to a computational device. The data provided to the computational device may be in a digital format and/or in an analog format. In one embodiment, the input device may store the input received from the user in a memory device accessible by a processing device, and the processing device may access the stored data for analysis. 
In another embodiment, the input device may provide the data directly to a processing device, for example, over a bus or over another communication system configured to transfer data from the input device to the processing device. In some examples, the input received by the input device may include key presses, tactile input data, motion data, position data, gestures based input data, direction data, or any other data. Some examples of the input device may include a button, a key, a keyboard, a computer mouse, a touchpad, a touchscreen, a joystick, or another mechanism from which input may be received. Another example of an input device may include an integrated computational interface device that includes at least one physical component for receiving input from a user. The integrated computational interface device may include at least a memory, a processing device, and the at least one physical component for receiving input from a user. In one example, the integrated computational interface device may further include a digital network interface that enables digital communication with other computing devices. In one example, the integrated computational interface device may further include a physical component for outputting information to the user. In some examples, all components of the integrated computational interface device may be included in a single housing, while in other examples the components may be distributed among two or more housings. Some non-limiting examples of physical components for receiving input from users that may be included in the integrated computational interface device may include at least one of a button, a key, a keyboard, a touchpad, a touchscreen, a joystick, or any other mechanism or sensor from which computational information may be received. Some non-limiting examples of physical components for outputting information to users may include at least one of a light indicator (such as a LED indicator), a screen, a touchscreen, a beeper, an audio speaker, or any other audio, video, or haptic device that provides human-perceptible outputs. In some embodiments, image data may be captured using one or more image sensors. In some examples, the image sensors may be included in the extended reality appliance, in a wearable device, in the wearable extended reality device, in the input device, in an environment of a user, and so forth. In some examples, the image data may be read from memory, may be received from an external device, may be generated (for example, using a generative model), and so forth. Some non-limiting examples of image data may include images, grayscale images, color images, 2D images, 3D images, videos, 2D videos, 3D videos, frames, footages, data derived from other image data, and so forth. In some examples, the image data may be encoded in any analog or digital format. Some non-limiting examples of such formats may include raw formats, compressed formats, uncompressed formats, lossy formats, lossless formats, JPEG, GIF, PNG, TIFF, BMP, NTSC, PAL, SECAM, MPEG, MPEG-4 Part 14, MOV, WMV, FLV, AVI, AVCHD, WebM, MKV, and so forth. In some embodiments, the extended reality appliance may receive digital signals, for example, from the input device. The term digital signals refers to a series of digital values that are discrete in time. The digital signals may represent, for example, sensor data, textual data, voice data, video data, virtual data, or any other form of data that provides perceptible information. 
Consistent with the present disclosure, the digital signals may be configured to cause the extended reality appliance to present virtual content. In one embodiment, the virtual content may be presented in a selected orientation. In this embodiment, the digital signals may indicate a position and an angle of a viewpoint in an environment, such as an extended reality environment. Specifically, the digital signals may include an encoding of the position and angle in six degree-of-freedom coordinates (e.g., forward/back, up/down, left/right, yaw, pitch, and roll). In another embodiment, the digital signals may include an encoding of the position as three-dimensional coordinates (e.g., x, y, and z), and an encoding of the angle as a vector originating from the encoded position. Specifically, the digital signals may indicate the orientation and an angle of the presented virtual content in absolute coordinates of the environment, for example, by encoding yaw, pitch and roll of the virtual content with respect to a standard default angle. In another embodiment, the digital signals may indicate the orientation and the angle of the presented virtual content with respect to a viewpoint of another object (e.g., a virtual object, a physical object, etc.), for example, by encoding yaw, pitch, and roll of the virtual content with respect to a direction corresponding to the viewpoint or to a direction corresponding to the other object. In another embodiment, such digital signals may include one or more projections of the virtual content, for example, in a format ready for presentation (e.g., image, video, etc.). For example, each such projection may correspond to a particular orientation or a particular angle. In another embodiment, the digital signals may include a representation of virtual content, for example, by encoding objects in a three-dimensional array of voxels, in a polygon mesh, or in any other format in which virtual content may be presented. In some embodiments, the digital signals may be configured to cause the extended reality appliance to present virtual content. The term virtual content may include any type of data representation that may be displayed by the extended reality appliance to the user. The virtual content may include a virtual object, inanimate virtual content, animate virtual content configured to change over time or in response to triggers, virtual two-dimensional content, virtual three dimensional content, a virtual overlay over a portion of a physical environment or over a physical object, a virtual addition to a physical environment or to a physical object, a virtual promotion content, a virtual representation of a physical object, a virtual representation of a physical environment, a virtual document, a virtual character or persona, a virtual computer screen, a virtual widget, or any other format for displaying information virtually. Consistent with the present disclosure, the virtual content may include any visual presentation rendered by a computer or a processing device. In one embodiment, the virtual content may include a virtual object that is a visual presentation rendered by a computer in a confined region and configured to represent an object of a particular type (such as an inanimate virtual object, an animate virtual object, virtual furniture, a virtual decorative object, virtual widget, or other virtual representation). 
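As a hedged illustration of the six degree-of-freedom encoding described earlier in this passage (three position coordinates plus yaw, pitch, and roll relative to a default angle), the sketch below packs such a pose into a fixed-size binary payload suitable for a digital signal. The field names, the little-endian 32-bit float layout, and the dataclass itself are illustrative assumptions, not a format defined by the disclosure.

```python
from dataclasses import dataclass
import struct

@dataclass
class Pose6DoF:
    """One possible encoding of the position and angle carried by the digital
    signals: x, y, z plus yaw, pitch, and roll (radians)."""
    x: float
    y: float
    z: float
    yaw: float
    pitch: float
    roll: float

    def to_bytes(self) -> bytes:
        # Pack as six little-endian 32-bit floats for transmission.
        return struct.pack("<6f", self.x, self.y, self.z, self.yaw, self.pitch, self.roll)

    @classmethod
    def from_bytes(cls, payload: bytes) -> "Pose6DoF":
        # Inverse of to_bytes; expects exactly 24 bytes.
        return cls(*struct.unpack("<6f", payload))
```

An alternative encoding mentioned in the text, a three-dimensional position plus a direction vector, could be represented analogously with six floats interpreted differently.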
The rendered visual presentation may change to reflect changes to a status of an object or changes in the viewing angle of the object, for example, in a way that mimics changes in the appearance of physical objects. In another embodiment, the virtual content may include a virtual display (also referred to as a “virtual display screen” or a “virtual screen” herein), such as a virtual computer screen, a virtual tablet screen or a virtual smartphone screen, configured to display information generated by an operating system, in which the operating system may be configured to receive textual data from a physical keyboard and/or a virtual keyboard and to cause a display of the textual content in the virtual display screen. In one example, illustrated inFIG.1, the virtual content may include a virtual environment that includes a virtual computer screen and a plurality of virtual objects. In some examples, a virtual display may be a virtual object mimicking and/or extending the functionality of a physical display screen. For example, the virtual display may be presented in an extended reality environment (such as a mixed reality environment, an augmented reality environment, a virtual reality environment, etc.), using an extended reality appliance. In one example, a virtual display may present content produced by a regular operating system that may be equally presented on a physical display screen. In one example, a textual content entered using a keyboard (for example, using a physical keyboard, using a virtual keyboard, etc.) may be presented on a virtual display in real time as the textual content is typed. In one example, a virtual cursor may be presented on a virtual display, and the virtual cursor may be controlled by a pointing device (such as a physical pointing device, a virtual pointing device, a computer mouse, a joystick, a touchpad, a physical touch controller, and/or any other device for identifying a location on the display). In one example, one or more windows of a graphical user interface operating system may be presented on a virtual display. In another example, content presented on a virtual display may be interactive, that is, it may change in reaction to actions of users. In yet another example, a presentation of a virtual display may include a presentation of a screen frame, or may include no presentation of a screen frame. Some disclosed embodiments may include and/or access a data structure or a database. The terms data structure and a database, consistent with the present disclosure may include any collection of data values and relationships among them. The data may be stored linearly, horizontally, hierarchically, relationally, non-relationally, uni-dimensionally, multidimensionally, operationally, in an ordered manner, in an unordered manner, in an object-oriented manner, in a centralized manner, in a decentralized manner, in a distributed manner, in a custom manner, or in any manner enabling data access. By way of non-limiting examples, data structures may include an array, an associative array, a linked list, a binary tree, a balanced tree, a heap, a stack, a queue, a set, a hash table, a record, a tagged union, Entity-Relationship model, a graph, a hypergraph, a matrix, a tensor, and/or other ways of organizing data. 
For example, a data structure may include an XML database, an RDBMS database, an SQL database or NoSQL alternatives for data storage/search such as, for example, MongoDB, Redis, Couchbase, Datastax Enterprise Graph, Elastic Search, Splunk, Solr, Cassandra, Amazon DynamoDB, Scylla, HBase, and/or Neo4J. A data structure may be a component of the disclosed system or a remote computing component (e.g., a cloud-based data structure). Data in the data structure may be stored in contiguous or non-contiguous memory. Moreover, a data structure may not require information to be co-located. It may be distributed across multiple servers, for example, the multiple servers may be owned or operated by the same or different entities. Thus, the term data structure in the singular is inclusive of plural data structures. In some embodiments, the system may determine the confidence level in received input or in any determined value. The term confidence level refers to any indication, numeric or otherwise, of a level (e.g., within a predetermined range) indicative of an amount of confidence the system has in the determined data. For example, the confidence level may have a value between 1 and 10. Alternatively, the confidence level may be expressed as a percentage or any other numerical or non-numerical indication. In some cases, the system may compare the confidence level to a threshold. The term threshold may denote a reference value, a level, a point, or a range of values. In operation, when the confidence level of determined data exceeds the threshold (or is below it, depending on a particular use case), the system may follow a first course of action and, when the confidence level is below it (or above it, depending on a particular use case), the system may follow a second course of action. The value of the threshold may be predetermined for each type of examined object or may be dynamically selected based on different considerations. System Overview Reference is now made toFIG.1, which illustrates a user that uses an example extended reality system consistent with various embodiments of the present disclosure.FIG.1is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. As shown, a user100is sitting behind table102, supporting a keyboard104and mouse106. Keyboard104is connected by wire108to a wearable extended reality appliance110that displays virtual content to user100. Alternatively or additionally, keyboard104may connect to wearable extended reality appliance110wirelessly. For illustration purposes, the wearable extended reality appliance is depicted as a pair of smart glasses, but, as described above, wearable extended reality appliance110may be any type of head-mounted device used for presenting an extended reality to user100. The virtual content displayed by wearable extended reality appliance110includes a virtual screen112(also referred to as a “virtual display screen” or a “virtual display” herein) and a plurality of virtual widgets114. Virtual widgets114A-114D are displayed next to virtual screen112and virtual widget114E is displayed on table102. User100may input text to a document116displayed in virtual screen112using keyboard104, and may control virtual cursor118using mouse106. In one example, virtual cursor118may move anywhere within virtual screen112. 
In another example, virtual cursor118may move anywhere within virtual screen112and may also move to any one of virtual widgets114A-114D but not to virtual widget114E. In yet another example, virtual cursor118may move anywhere within virtual screen112and may also move to any one of virtual widgets114A-114E. In an additional example, virtual cursor118may move anywhere in the extended reality environment including virtual screen112and virtual widgets114A-114E. In yet another example, virtual cursor may move on all available surfaces (i.e., virtual surfaces or physical surfaces) or only on selected surfaces in the extended reality environment. Alternatively or additionally, user100may interact with any one of virtual widgets114A-114E, or with selected virtual widgets, using hand gestures recognized by wearable extended reality appliance110. For example, virtual widget114E may be an interactive widget (e.g., a virtual slider controller) that may be operated with hand gestures. FIG.2illustrates an example of a system200that provides extended reality (XR) experience to users, such as user100.FIG.2is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. System200may be computer-based and may include computer system components, wearable appliances, workstations, tablets, handheld computing devices, memory devices, and/or internal network(s) connecting the components. System200may include or be connected to various network computing resources (e.g., servers, routers, switches, network connections, storage devices) for supporting services provided by system200. Consistent with the present disclosure, system200may include an input unit202, an XR unit204, a mobile communications device206, and/or a remote processing unit208. Remote processing unit208may include a server210coupled to one or more physical or virtual storage devices, such as a data structure212. System200may also include or be connected to a communications network214that facilitates communications and data exchange between different system components and the different entities associated with system200. Consistent with the present disclosure, input unit202may include one or more devices that may receive input from user100. In one embodiment, input unit202may include a textual input device, such as keyboard104. The textual input device may include all possible types of devices and mechanisms for inputting textual information to system200. Examples of textual input devices may include mechanical keyboards, membrane keyboards, flexible keyboards, QWERTY keyboards, Dvorak keyboards, Colemak keyboards, chorded keyboards, wireless keyboards, keypads, key-based control panels, or other arrays of control keys, vision input devices, or any other mechanism for inputting text, whether the mechanism is provided in physical form or is presented virtually. In one embodiment, input unit202may also include a pointing input device, such as mouse106. The pointing input device may include all possible types of devices and mechanisms for inputting two-dimensional or three-dimensional information to system200. In one example, two-dimensional input from the pointing input device may be used for interacting with virtual content presented via the XR unit204. 
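By way of a further non-limiting illustration, the following Python sketch shows one possible way to convert two-dimensional input from a pointing input device into a virtual cursor position that stays within the virtual screen, and to report which virtual object, if any, lies under the cursor for interaction. The screen dimensions, object names, and function names are illustrative assumptions only.

    # Non-limiting sketch: mapping pointer deltas to a clamped cursor position
    # on the virtual screen and hit-testing against virtual objects.
    SCREEN_WIDTH, SCREEN_HEIGHT = 100, 60                   # arbitrary units
    VIRTUAL_OBJECTS = {"document_116": (10, 10, 70, 50)}    # (x0, y0, x1, y1)

    def apply_pointer_input(cursor, dx, dy):
        """Move the virtual cursor by (dx, dy), keeping it on the virtual screen."""
        x = min(max(cursor[0] + dx, 0), SCREEN_WIDTH)
        y = min(max(cursor[1] + dy, 0), SCREEN_HEIGHT)
        hit = next((name for name, (x0, y0, x1, y1) in VIRTUAL_OBJECTS.items()
                    if x0 <= x <= x1 and y0 <= y <= y1), None)
        return (x, y), hit

    cursor = (0, 0)
    cursor, target = apply_pointer_input(cursor, 30, 20)
    print(cursor, target)        # -> (30, 20) 'document_116'
    cursor, target = apply_pointer_input(cursor, 500, 500)
    print(cursor, target)        # -> (100, 60) None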
Examples of pointing input devices may include a computer mouse, trackball, touchpad, trackpad, touchscreen, joystick, pointing stick, stylus, light pen, or any other physical or virtual input mechanism. In one embodiment, input unit202may also include a graphical input device, such as a touchscreen configured to detect contact, movement, or break of movement. The graphical input device may use any of a plurality of touch sensitivity technologies, including, but not limited to, capacitive, resistive, infrared, and surface acoustic wave technologies as well as other proximity sensor arrays or other elements for determining one or more points of contact. In one embodiment, input unit202may also include one or more voice input devices, such as a microphone. The voice input device may include all possible types of devices and mechanisms for inputting voice data to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions. In one embodiment, input unit202may also include one or more image input devices, such as an image sensor, configured to capture image data. In one embodiment, input unit202may also include one or more haptic gloves configured to capture hands motion and pose data. In one embodiment, input unit202may also include one or more proximity sensors configured to detect presence and/or movement of objects in a selected region near the sensors. In accordance with some embodiments, the system may include at least one sensor configured to detect and/or measure a property associated with the user, the user's action, or user's environment. One example of the at least one sensor, is sensor216included in input unit202. Sensor216may be a motion sensor, a touch sensor, a light sensor, an infrared sensor, an audio sensor, an image sensor, a proximity sensor, a positioning sensor, a gyroscope, a temperature sensor, a biometric sensor, or any other sensing devices to facilitate related functionalities. Sensor216may be integrated with, or connected to, the input devices or it may be separated from the input devices. In one example, a thermometer may be included in mouse106to determine the body temperature of user100. In another example, a positioning sensor may be integrated with keyboard104to determine movement of user100relative to keyboard104. Such positioning sensor may be implemented using one of the following technologies: Global Positioning System (GPS), GLObal NAvigation Satellite System (GLONASS), Galileo global navigation system, BeiDou navigation system, other Global Navigation Satellite Systems (GNSS), Indian Regional Navigation Satellite System (IRNSS), Local Positioning Systems (LPS), Real-Time Location Systems (RTLS), Indoor Positioning System (IPS), Wi-Fi based positioning systems, cellular triangulation, image based positioning technology, indoor positioning technology, outdoor positioning technology, or any other positioning technology. In accordance with some embodiments, the system may include one or more sensors for identifying a position and/or a movement of a physical device (such as a physical input device, a physical computing device, keyboard104, mouse106, wearable extended reality appliance110, and so forth). The one or more sensors may be included in the physical device or may be external to the physical device. 
In some examples, an image sensor external to the physical device (for example, an image sensor included in another physical device) may be used to capture image data of the physical device, and the image data may be analyzed to identify the position and/or the movement of the physical device. For example, the image data may be analyzed using a visual object tracking algorithm to identify the movement of the physical device, may be analyzed using a visual object detection algorithm to identify the position of the physical device (for example, relative to the image sensor, in a global coordinates system, etc.), and so forth. In some examples, an image sensor included in the physical device may be used to capture image data, and the image data may be analyzed to identify the position and/or the movement of the physical device. For example, the image data may be analyzed using visual odometry algorithms to identify the position of the physical device, may be analyzed using an ego-motion algorithm to identify movement of the physical device, and so forth. In some examples, a positioning sensor, such as an indoor positioning sensor or an outdoor positioning sensor, may be included in the physical device and may be used to determine the position of the physical device. In some examples, a motion sensor, such as an accelerometer or a gyroscope, may be included in the physical device and may be used to determine the motion of the physical device. In some examples, a physical device, such as a keyboard or a mouse, may be configured to be positioned on a physical surface. Such physical device may include an optical mouse sensor (also known as non-mechanical tracking engine) aimed towards the physical surface, and the output of the optical mouse sensor may be analyzed to determine movement of the physical device with respect to the physical surface. Consistent with the present disclosure, XR unit204may include a wearable extended reality appliance configured to present virtual content to user100. One example of the wearable extended reality appliance is wearable extended reality appliance110. Additional examples of wearable extended reality appliance may include a Virtual Reality (VR) device, an Augmented Reality (AR) device, a Mixed Reality (MR) device, or any other device capable of generating extended reality content. Some non-limiting examples of such devices may include Nreal Light, Magic Leap One, Varjo, Quest 1/2, Vive, and others. In some embodiments, XR unit204may present virtual content to user100. Generally, an extended reality appliance may include all real-and-virtual combined environments and human-machine interactions generated by computer technology and wearables. As mentioned above, the term “extended reality” (XR) refers to a superset which includes the entire spectrum from “the complete real” to “the complete virtual.” It includes representative forms such as augmented reality (AR), mixed reality (MR), virtual reality (VR), and the areas interpolated among them. Accordingly, it is noted that the terms “XR appliance,” “AR appliance,” “VR appliance,” and “MR appliance” may be used interchangeably herein and may refer to any device of the variety of appliances listed above. Consistent with the present disclosure, the system may exchange data with a variety of communication devices associated with users, for example, mobile communications device206. 
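Referring back to the image-based movement identification described above, the following non-limiting Python sketch estimates the frame-to-frame translation of a physical device (for example, a keyboard observed by an external image sensor, or the view seen by an optical mouse sensor) using phase correlation. It relies only on NumPy; the synthetic frames, array sizes, and function names are assumptions made for illustration and are not the disclosed algorithms.

    # Non-limiting sketch: estimate the 2-D shift between two consecutive frames.
    import numpy as np

    def estimate_translation(frame_a, frame_b):
        """Return (dy, dx) such that frame_b is approximately frame_a shifted by it."""
        fa = np.fft.fft2(frame_a)
        fb = np.fft.fft2(frame_b)
        cross_power = np.conj(fa) * fb
        cross_power /= np.abs(cross_power) + 1e-12          # normalize magnitude
        correlation = np.fft.ifft2(cross_power).real
        dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
        # Wrap shifts larger than half the frame back to negative displacements.
        if dy > frame_a.shape[0] // 2:
            dy -= frame_a.shape[0]
        if dx > frame_a.shape[1] // 2:
            dx -= frame_a.shape[1]
        return int(dy), int(dx)

    # Synthetic example: frame_b is frame_a shifted by (3, -5) pixels.
    rng = np.random.default_rng(0)
    frame_a = rng.random((64, 64))
    frame_b = np.roll(frame_a, shift=(3, -5), axis=(0, 1))
    print(estimate_translation(frame_a, frame_b))            # -> (3, -5)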
The term “communication device” is intended to include all possible types of devices capable of exchanging data using digital communications network, analog communication network or any other communications network configured to convey data. In some examples, the communication device may include a smartphone, a tablet, a smartwatch, a personal digital assistant, a desktop computer, a laptop computer, an IoT device, a dedicated terminal, a wearable communication device, and any other device that enables data communications. In some cases, mobile communications device206may supplement or replace input unit202. Specifically, mobile communications device206may be associated with a physical touch controller that may function as a pointing input device. Moreover, mobile communications device206may also, for example, be used to implement a virtual keyboard and replace the textual input device. For example, when user100steps away from table102and walks to the break room with his smart glasses, he may receive an email that requires a quick answer. In this case, the user may select to use his or her own smartwatch as the input device and to type the answer to the email while it is virtually presented by the smart glasses. Consistent with the present disclosure, embodiments of the system may involve the usage of a cloud server. The term “cloud server” refers to a computer platform that provides services via a network, such as the Internet. In the example embodiment illustrated inFIG.2, server210may use virtual machines that may not correspond to individual hardware. For example, computational and/or storage capabilities may be implemented by allocating appropriate portions of desirable computation/storage power from a scalable repository, such as a data center or a distributed computing environment. Specifically, in one embodiment, remote processing unit208may be used together with XR unit204to provide the virtual content to user100. In one example configuration, server210may be a cloud server that functions as the operating system (OS) of the wearable extended reality appliance. In one example, server210may implement the methods described herein using customized hard-wired logic, one or more Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), firmware, and/or program logic which, in combination with the computer system, cause server210to be a special-purpose machine. In some embodiments, server210may access data structure212to determine, for example, virtual content to display to user100. Data structure212may utilize a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, other type of storage device or tangible or non-transitory computer-readable medium, or any medium or mechanism for storing information. Data structure212may be part of server210or separate from server210, as shown. When data structure212is not part of server210, server210may exchange data with data structure212via a communication link. Data structure212may include one or more memory devices that store data and instructions used to perform one or more features of the disclosed methods. In one embodiment, data structure212may include any of a plurality of suitable data structures, ranging from small data structures hosted on a workstation to large data structures distributed among data centers. Data structure212may also include any combination of one or more data structures controlled by memory controller devices (e.g., servers) or software. 
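Referring back to the example in which mobile communications device206may supplement or replace the textual input device, the following Python sketch illustrates one possible selection rule: the physical keyboard is used while it is connected and nearby, and a virtual keyboard on a paired mobile device is used otherwise. The device names, distance threshold, and selection logic are illustrative assumptions only.

    # Non-limiting sketch: choosing the active textual input source.
    def select_text_input_source(keyboard_connected, distance_to_keyboard_m,
                                 paired_mobile_device=None, near_threshold_m=1.5):
        if keyboard_connected and distance_to_keyboard_m <= near_threshold_m:
            return "physical_keyboard"
        if paired_mobile_device is not None:
            return paired_mobile_device          # e.g., a smartwatch virtual keyboard
        return "virtual_keyboard"                # rendered by the wearable appliance

    print(select_text_input_source(True, 0.4, paired_mobile_device="smartwatch"))
    print(select_text_input_source(True, 6.0, paired_mobile_device="smartwatch"))
    print(select_text_input_source(False, 6.0))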
Consistent with the present disclosure, a communications network may be any type of network (including infrastructure) that supports communications, exchanges information, and/or facilitates the exchange of information between the components of a system. For example, communications network214in system200may include, for example, a telephone network, an extranet, an intranet, the Internet, satellite communications, off-line communications, wireless communications, transponder communications, a Local Area Network (LAN), wireless network (e.g., a Wi-Fi/802.11 network), a Wide Area Network (WAN), a Virtual Private Network (VPN), digital communication network, analog communication network, or any other mechanism or combination of mechanisms that enables data transmission. The components and arrangements of system200shown inFIG.2are intended to be exemplary only and are not intended to limit any embodiment, as the system components used to implement the disclosed processes and features may vary. FIG.3is a block diagram of an exemplary configuration of input unit202.FIG.3is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. In the embodiment ofFIG.3, input unit202may directly or indirectly access a bus300(or other communication mechanism) that interconnects subsystems and components for transferring information within input unit202. For example, bus300may interconnect a memory interface310, a network interface320, an input interface330, a power source340, an output interface350, a processing device360, a sensors interface370, and a database380. Memory interface310, shown inFIG.3, may be used to access a software product and/or data stored on a non-transitory computer-readable medium. Generally, a non-transitory computer-readable storage medium refers to any type of physical memory on which information or data readable by at least one processor can be stored. Examples include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, any other optical data storage medium, any physical medium with patterns of holes, a PROM, an EPROM, a FLASH-EPROM or any other flash memory, NVRAM, a cache, a register, any other memory chip or cartridge, and networked versions of the same. The terms “memory” and “computer-readable storage medium” may refer to multiple structures, such as a plurality of memories or computer-readable storage mediums located within an input unit or at a remote location. Additionally, one or more computer-readable storage mediums can be utilized in implementing a computer-implemented method. Accordingly, the term computer-readable storage medium should be understood to include tangible items and exclude carrier waves and transient signals. In the specific embodiment illustrated inFIG.3, memory interface310may be used to access a software product and/or data stored on a memory device, such as memory device311. Memory device311may include high-speed random-access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR). Consistent with the present disclosure, the components of memory device311may be distributed in more than one unit of system200and/or in more than one memory device.
Memory device311, shown inFIG.3, may contain software modules to execute processes consistent with the present disclosure. In particular, memory device311may include an input determination module312, an output determination module313, a sensors communication module314, a virtual content determination module315, a virtual content communication module316, and a database access module317. Modules312-317may contain software instructions for execution by at least one processor (e.g., processing device360) associated with input unit202. Input determination module312, output determination module313, sensors communication module314, virtual content determination module315, virtual content communication module316, and database access module317may cooperate to perform various operations. For example, input determination module312may determine text using data received from, for example, keyboard104. Thereafter, output determination module313may cause presentation of the recent inputted text, for example on a dedicated display352physically or wirelessly coupled to keyboard104. This way, when user100types, the user can see a preview of the typed text without constantly moving his head up and down to look at virtual screen112. Sensors communication module314may receive data from different sensors to determine a status of user100. Thereafter, virtual content determination module315may determine the virtual content to display, based on received input and the determined status of user100. For example, the determined virtual content may be a virtual presentation of the recent inputted text on a virtual screen virtually located adjacent to keyboard104. Virtual content communication module316may obtain virtual content that is not determined by virtual content determination module315(e.g., an avatar of another user). The retrieval of the virtual content may be from database380, from remote processing unit208, or any other source. In some embodiments, input determination module312may regulate the operation of input interface330in order to receive pointer input331, textual input332, audio input333, and XR-related input334. Details on the pointer input, the textual input, and the audio input are described above. The term “XR-related input” may include any type of data that may cause a change in the virtual content displayed to user100. In one embodiment, XR-related input334may include image data of user100from the wearable extended reality appliance (e.g., detected hand gestures of user100). In another embodiment, XR-related input334may include wireless communication indicating a presence of another user in proximity to user100. Consistent with the present disclosure, input determination module312may concurrently receive different types of input data. Thereafter, input determination module312may further apply different rules based on the detected type of input. For example, a pointer input may have precedence over voice input. In some embodiments, output determination module313may regulate the operation of output interface350in order to generate output using light indicators351, display352, and/or speakers353. In general, the output generated by output determination module313does not include virtual content to be presented by a wearable extended reality appliance. Instead, the output generated by output determination module313includes various outputs that relates to the operation of input unit202and/or the operation of XR unit204. 
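Referring back to the rule by which a pointer input may have precedence over voice input, the following Python sketch shows one possible way for an input determination module to resolve concurrently received inputs. The precedence ordering and input-type names are assumptions made for illustration and do not limit the disclosure.

    # Non-limiting sketch: resolving concurrently received inputs by precedence,
    # where a pointer input outranks a voice (audio) input.
    INPUT_PRECEDENCE = ["pointer", "textual", "xr_related", "audio"]  # highest first

    def resolve_concurrent_inputs(inputs):
        """Pick the input to act on when several arrive in the same time slice.

        `inputs` maps an input type (e.g., 'pointer', 'audio') to its payload.
        """
        for input_type in INPUT_PRECEDENCE:
            if input_type in inputs:
                return input_type, inputs[input_type]
        return None, None

    # A pointer event and a voice command arrive together; the pointer wins.
    incoming = {"audio": "open calendar", "pointer": {"dx": 4, "dy": -2}}
    print(resolve_concurrent_inputs(incoming))   # -> ('pointer', {'dx': 4, 'dy': -2})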
In one embodiment, light indicators351may include a light indicator that shows the status of a wearable extended reality appliance. For example, the light indicator may display green light when wearable extended reality appliance110is connected to keyboard104, and blink when wearable extended reality appliance110has a low battery. In another embodiment, display352may be used to display operational information. For example, the display may present error messages when the wearable extended reality appliance is inoperable. In another embodiment, speakers353may be used to output audio, for example, when user100wishes to play some music for other users. In some embodiments, sensors communication module314may regulate the operation of sensors interface370in order to receive sensor data from one or more sensors, integrated with, or connected to, an input device. The one or more sensors may include: audio sensor371, image sensor372, motion sensor373, environmental sensor374(e.g., a temperature sensor, ambient light detectors, etc.), and other sensors375. In one embodiment, the data received from sensors communication module314may be used to determine the physical orientation of the input device. The physical orientation of the input device may be indicative of a state of the user and may be determined based on a combination of a tilt movement, a roll movement, and a lateral movement. Thereafter, the physical orientation of the input device may be used by virtual content determination module315to modify display parameters of the virtual content to match the state of the user (e.g., attention, sleepy, active, sitting, standing, leaning backwards, leaning forward, walking, moving, riding). In some embodiments, virtual content determination module315may determine the virtual content to be displayed by the wearable extended reality appliance. The virtual content may be determined based on data from input determination module312, sensors communication module314, and other sources (e.g., database380). In some embodiments, determining the virtual content may include determining the distance, the size, and the orientation of the virtual objects. The position of the virtual objects may be determined based on the type of the virtual objects. Specifically, with regards to the example illustrated inFIG.1, the virtual content determination module315may determine to place four virtual widgets114A-114D on the sides of virtual screen112and to place virtual widget114E on table102because virtual widget114E is a virtual controller (e.g., volume bar). The position of the virtual objects may further be determined based on the user's preferences. For example, for left-handed users, virtual content determination module315may determine to place a virtual volume bar left of keyboard104; and for right-handed users, virtual content determination module315may determine to place the virtual volume bar right of keyboard104.
For example, the virtual representation may enable user100to read messages and interact with applications installed on the mobile communications device206. Virtual content communication module316may also regulate the operation of network interface320in order to share virtual content with other users. In one example, virtual content communication module316may use data from input determination module to identify a trigger (e.g., the trigger may include a gesture of the user) and to transfer content from the virtual display to a physical display (e.g., TV) or to a virtual display of a different user. In some embodiments, database access module317may cooperate with database380to retrieve stored data. The retrieved data may include, for example, privacy levels associated with different virtual objects, the relationship between virtual objects and physical objects, the user's preferences, the user's past behavior, and more. As described above, virtual content determination module315may use the data stored in database380to determine the virtual content. Database380may include separate databases, including, for example, a vector database, raster database, tile database, viewport database, and/or a user input database. The data stored in database380may be received from modules314-317or other components of system200. Moreover, the data stored in database380may be provided as input using data entry, data transfer, or data uploading. Modules312-317may be implemented in software, hardware, firmware, a mix of any of those, or the like. In some embodiments, any one or more of modules312-317and data associated with database380may be stored in XR unit204, mobile communications device206, or remote processing unit208. Processing devices of system200may be configured to execute the instructions of modules312-317. In some embodiments, aspects of modules312-317may be implemented in hardware, in software (including in one or more signal processing and/or application specific integrated circuits), in firmware, or in any combination thereof, executable by one or more processors, alone, or in various combinations with each other. Specifically, modules312-317may be configured to interact with each other and/or other modules of system200to perform functions consistent with some disclosed embodiments. For example, input unit202may execute instructions that include an image processing algorithm on data from XR unit204to determine head movement of user100. Furthermore, each functionality described throughout the specification, with regards to input unit202or with regards to a component of input unit202, may correspond to a set of instructions for performing said functionality. These instructions need not be implemented as separate software programs, procedures, or modules. Memory device311may include additional modules and instructions or fewer modules and instructions. For example, memory device311may store an operating system, such as ANDROID, iOS, UNIX, OSX, WINDOWS, DARWIN, RTXC, LINUX or an embedded operating system such as VXWorkS. The operating system can include instructions for handling basic system services and for performing hardware-dependent tasks. Network interface320, shown inFIG.3, may provide two-way data communications to a network, such as communications network214. In one embodiment, network interface320may include an Integrated Services Digital Network (ISDN) card, cellular modem, satellite modem, or a modem to provide a data communication connection over the Internet. 
As another example, network interface320may include a Wireless Local Area Network (WLAN) card. In another embodiment, network interface320may include an Ethernet port connected to radio frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. The specific design and implementation of network interface320may depend on the communications network or networks over which input unit202is intended to operate. For example, in some embodiments, input unit202may include network interface320designed to operate over a GSM network, a GPRS network, an EDGE network, a Wi-Fi or WiMax network, and a Bluetooth network. In any such implementation, network interface320may be configured to send and receive electrical, electromagnetic, or optical signals that carry digital data streams or digital signals representing various types of information. Input interface330, shown inFIG.3, may receive input from a variety of input devices, for example, a keyboard, a mouse, a touch pad, a touch screen, one or more buttons, a joystick, a microphone, an image sensor, and any other device configured to detect physical or virtual input. The received input may be in the form of at least one of: text, sounds, speech, hand gestures, body gestures, tactile information, and any other type of physically or virtually input generated by the user. In the depicted embodiment, input interface330may receive pointer input331, textual input332, audio input333, and XR-related input334. In additional embodiments, input interface330may be an integrated circuit that may act as a bridge between processing device360and any of the input devices listed above. Power source340, shown inFIG.3, may provide electrical energy to power input unit202and optionally also power XR unit204. Generally, a power source included in the any device or system in the present disclosure may be any device that can repeatedly store, dispense, or convey electric power, including, but not limited to, one or more batteries (e.g., a lead-acid battery, a lithium-ion battery, a nickel-metal hydride battery, a nickel-cadmium battery), one or more capacitors, one or more connections to external power sources, one or more power convertors, or any combination of them. With reference to the example illustrated inFIG.3, the power source may be mobile, which means that input unit202can be easily carried by hand (e.g., the total weight of power source340may be less than a pound). The mobility of the power source enables user100to use input unit202in a variety of situations. In other embodiments, power source340may be associated with a connection to an external power source (such as an electrical power grid) that may be used to charge power source340. In addition, power source340may be configured to charge one or more batteries included in XR unit204; for example, a pair of extended reality glasses (e.g., wearable extended reality appliance110) may be charged (e.g., wirelessly or not wirelessly) when they are placed on or in proximity to the input unit202. Output interface350, shown inFIG.3, may cause output from a variety of output devices, for example, using light indicators351, display352, and/or speakers353. In one embodiment, output interface350may be an integrated circuit that may act as bridge between processing device360and at least one of the output devices listed above. Light indicators351may include one or more light sources, for example, a LED array associated with different colors. 
Display352may include a screen (e.g., LCD or dot-matrix screen) or a touch screen. Speakers353may include audio headphones, a hearing aid type device, a speaker, a bone conduction headphone, interfaces that provide tactile cues, and/or vibrotactile stimulators. Processing device360, shown inFIG.3, may include at least one processor configured to execute computer programs, applications, methods, processes, or other software to perform embodiments described in the present disclosure. Generally, a processing device included in any device or system in the present disclosure may include one or more integrated circuits, microchips, microcontrollers, microprocessors, all or part of a central processing unit (CPU), graphics processing unit (GPU), digital signal processor (DSP), field programmable gate array (FPGA), or other circuits suitable for executing instructions or performing logic operations. The processing device may include at least one processor configured to perform functions of the disclosed methods such as a microprocessor manufactured by Intel™. The processing device may include a single core or multiple core processors executing parallel processes simultaneously. In one example, the processing device may be a single core processor configured with virtual processing technologies. The processing device may implement virtual machine technologies or other technologies to provide the ability to execute, control, run, manipulate, store, etc., multiple software processes, applications, programs, etc. In another example, the processing device may include a multiple-core processor arrangement (e.g., dual, quad core, etc.) configured to provide parallel processing functionalities to allow a device associated with the processing device to execute multiple processes simultaneously. Other types of processor arrangements may be implemented to provide the capabilities disclosed herein. Sensors interface370, shown inFIG.3, may obtain sensor data from a variety of sensors, for example, audio sensor371, image sensor372, motion sensor373, environmental sensor374, and other sensors375. In one embodiment, sensors interface370may be an integrated circuit that may act as bridge between processing device360and at least one of the sensors listed above. Audio sensor371may include one or more audio sensors configured to capture audio by converting sounds to digital information. Some examples of audio sensors may include: microphones, unidirectional microphones, bidirectional microphones, cardioid microphones, omnidirectional microphones, onboard microphones, wired microphones, wireless microphones, or any combination of the above. Consistent with the present disclosure, processing device360may modify a presentation of virtual content based on data received from audio sensor371(e.g., voice commands). Image sensor372may include one or more image sensors configured to capture visual information by converting light to image data. Consistent with the present disclosure, an image sensor may be included in the any device or system in the present disclosure and may be any device capable of detecting and converting optical signals in the near-infrared, infrared, visible, and ultraviolet spectrums into electrical signals. Examples of image sensors may include digital cameras, phone cameras, semiconductor Charge-Coupled Devices (CCDs), active pixel sensors in Complementary Metal-Oxide-Semiconductor (CMOS), or N-type metal-oxide-semiconductor (NMOS, Live MOS). The electrical signals may be used to generate image data. 
Consistent with the present disclosure, the image data may include pixel data streams, digital images, digital video streams, data derived from captured images, and data that may be used to construct one or more 3D images, a sequence of 3D images, 3D videos, or a virtual 3D representation. The image data acquired by image sensor372may be transmitted by wired or wireless transmission to any processing device of system200. For example, the image data may be processed in order to: detect objects, detect events, detect actions, detect faces, detect people, recognize a known person, or determine any other information that may be used by system200. Consistent with the present disclosure, processing device360may modify a presentation of virtual content based on image data received from image sensor372. Motion sensor373may include one or more motion sensors configured to measure motion of input unit202or motion of objects in the environment of input unit202. Specifically, the motion sensors may perform at least one of the following: detect motion of objects in the environment of input unit202, measure the velocity of objects in the environment of input unit202, measure the acceleration of objects in the environment of input unit202, detect the motion of input unit202, measure the velocity of input unit202, and/or measure the acceleration of input unit202. In some embodiments, motion sensor373may include one or more accelerometers configured to detect changes in proper acceleration and/or to measure proper acceleration of input unit202. In other embodiments, motion sensor373may include one or more gyroscopes configured to detect changes in the orientation of input unit202and/or to measure information related to the orientation of input unit202. In other embodiments, motion sensor373may include one or more image sensors, LIDAR sensors, radar sensors, or proximity sensors. For example, by analyzing captured images, the processing device may determine the motion of input unit202, for example, using ego-motion algorithms. In addition, the processing device may determine the motion of objects in the environment of input unit202, for example, using object tracking algorithms. Consistent with the present disclosure, processing device360may modify a presentation of virtual content based on the determined motion of input unit202or the determined motion of objects in the environment of input unit202. For example, the processing device may cause a virtual display to follow the movement of input unit202. Environmental sensor374may include one or more sensors of different types configured to capture data reflective of the environment of input unit202. In some embodiments, environmental sensor374may include one or more chemical sensors configured to perform at least one of the following: measure chemical properties in the environment of input unit202, measure changes in the chemical properties in the environment of input unit202, detect the presence of chemicals in the environment of input unit202, and/or measure the concentration of chemicals in the environment of input unit202. Examples of such chemical properties may include: pH level, toxicity, and temperature. Examples of such chemicals may include: electrolytes, particular enzymes, particular hormones, particular proteins, smoke, carbon dioxide, carbon monoxide, oxygen, ozone, hydrogen, and hydrogen sulfide.
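Referring back to the example of causing a virtual display to follow the movement of input unit202, the following Python sketch applies a smoothed fraction of the displacement reported by a motion estimate (for example, from an ego-motion or object-tracking algorithm) to the anchor position of the virtual display. The two-dimensional pose, class name, and smoothing factor are illustrative assumptions only.

    # Non-limiting sketch: a virtual display anchor that trails the motion of
    # the input unit, using a simple smoothing factor.
    class VirtualDisplayAnchor:
        def __init__(self, x=0.0, y=0.0, smoothing=0.5):
            self.x, self.y = x, y
            self.smoothing = smoothing   # 0 = ignore motion, 1 = follow instantly

        def on_input_unit_motion(self, dx, dy):
            """Shift the virtual display by a smoothed fraction of the motion."""
            self.x += self.smoothing * dx
            self.y += self.smoothing * dy
            return self.x, self.y

    anchor = VirtualDisplayAnchor()
    print(anchor.on_input_unit_motion(0.10, 0.00))   # keyboard slid 10 cm to the right
    print(anchor.on_input_unit_motion(0.10, 0.00))   # the display keeps following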
In other embodiments, environmental sensor374may include one or more temperature sensors configured to detect changes in the temperature of the environment of input unit202and/or to measure the temperature of the environment of input unit202. In other embodiments, environmental sensor374may include one or more barometers configured to detect changes in the atmospheric pressure in the environment of input unit202and/or to measure the atmospheric pressure in the environment of input unit202. In other embodiments, environmental sensor374may include one or more light sensors configured to detect changes in the ambient light in the environment of input unit202. Consistent with the present disclosure, processing device360may modify a presentation of virtual content based on input from environmental sensor374. For example, automatically reducing the brightness of the virtual content when the environment of user100becomes darker. Other sensors375may include a weight sensor, a light sensor, a resistive sensor, an ultrasonic sensor, a proximity sensor, a biometric sensor, or other sensing devices to facilitate related functionalities. In some embodiments, other sensors375may include one or more positioning sensors configured to obtain positioning information of input unit202, to detect changes in the position of input unit202, and/or to measure the position of input unit202. Alternatively, GPS software may permit input unit202to access an external GPS receiver (e.g., connecting via a serial port or Bluetooth). Consistent with the present disclosure, processing device360may modify a presentation of virtual content based on input from other sensors375. For example, presenting private information only after identifying user100using data from a biometric sensor. The components and arrangements shown inFIG.3are not intended to limit any embodiment. As will be appreciated by a person skilled in the art having the benefit of this disclosure, numerous variations and/or modifications may be made to the depicted configuration of input unit202. For example, not all components may be essential for the operation of an input unit in all cases. Any component may be located in any appropriate part of an input unit, and the components may be rearranged into a variety of configurations while providing the functionality of various embodiments. For example, some input units may not include all of the elements as shown in input unit202. FIG.4is a block diagram of an exemplary configuration of XR unit204.FIG.4is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. In the embodiment ofFIG.4, XR unit204may directly or indirectly access a bus400(or other communication mechanism) that interconnects subsystems and components for transferring information within XR unit204. For example, bus400may interconnect a memory interface410, a network interface420, an input interface430, a power source440, an output interface450, a processing device460, a sensors interface470, and a database480. Memory interface410, shown inFIG.4, is assumed to have similar functionality as the functionality of memory interface310described above in detail. Memory interface410may be used to access a software product and/or data stored on a non-transitory computer-readable medium or on memory devices, such as memory device411. Memory device411may contain software modules to execute processes consistent with the present disclosure. 
In particular, memory device411may include an input determination module412, an output determination module413, a sensors communication module414, a virtual content determination module415, a virtual content communication module416, and a database access module417. Modules412-417may contain software instructions for execution by at least one processor (e.g., processing device460) associated with XR unit204. Input determination module412, output determination module413, sensors communication module414, virtual content determination module415, virtual content communication module416, and database access module417may cooperate to perform various operations. For example, input determination module412may determine User Interface (UI) input received from input unit202. At the same time, sensors communication module414may receive data from different sensors to determine a status of user100. Virtual content determination module415may determine the virtual content to display based on received input and the determined status of user100. Virtual content communication module416may retrieve virtual content not determined by virtual content determination module415. The retrieval of the virtual content may be from database380, database480, mobile communications device206, or from remote processing unit208. Based on the output of virtual content determination module415, output determination module413may cause a change in a virtual content displayed to user100by projector454. In some embodiments, input determination module412may regulate the operation of input interface430in order to receive gesture input431, virtual input432, audio input433, and UI input434. Consistent with the present disclosure, input determination module412may concurrently receive different types of input data. In one embodiment, input determination module412may apply different rules based on the detected type of input. For example, gesture input may have precedence over virtual input. In some embodiments, output determination module413may regulate the operation of output interface450in order to generate output using light indicators451, display452, speakers453, and projector454. In one embodiment, light indicators451may include a light indicator that shows the status of the wearable extended reality appliance. For example, the light indicator may display green light when the wearable extended reality appliance110are connected to input unit202, and blinks when wearable extended reality appliance110has low battery. In another embodiment, display452may be used to display operational information. In another embodiment, speakers453may include a bone conduction headphone used to output audio to user100. In another embodiment, projector454may present virtual content to user100. The operations of a sensors communication module, a virtual content determination module, a virtual content communication module, and a database access module are described above with reference toFIG.3, details of which are not repeated herein. Modules412-417may be implemented in software, hardware, firmware, a mix of any of those, or the like. Network interface420, shown inFIG.4, is assumed to have similar functionality as the functionality of network interface320, described above in detail. The specific design and implementation of network interface420may depend on the communications network(s) over which XR unit204is intended to operate. For example, in some embodiments. XR unit204is configured to be selectively connectable by wire to input unit202. 
When connected by wire, network interface420may enable communications with input unit202; and when not connected by wire, network interface420may enable communications with mobile communications device206. Input interface430, shown inFIG.4, is assumed to have similar functionality as the functionality of input interface330described above in detail. In this case, input interface430may communicate with an image sensor to obtain gesture input431(e.g., a finger of user100pointing to a virtual object), communicate with other XR units204to obtain virtual input432(e.g., a virtual object shared with XR unit204or a gesture of an avatar detected in the virtual environment), communicate with a microphone to obtain audio input433(e.g., voice commands), and communicate with input unit202to obtain UI input434(e.g., virtual content determined by virtual content determination module315). Power source440, shown inFIG.4, is assumed to have similar functionality as the functionality of power source340described above, except that it provides electrical energy to power XR unit204. In some embodiments, power source440may be charged by power source340. For example, power source440may be wirelessly charged when XR unit204is placed on or in proximity to input unit202. Output interface450, shown inFIG.4, is assumed to have similar functionality as the functionality of output interface350described above in detail. In this case, output interface450may cause output from light indicators451, display452, speakers453, and projector454. Projector454may be any device, apparatus, instrument, or the like capable of projecting (or directing) light in order to display virtual content onto a surface. The surface may be part of XR unit204, part of an eye of user100, or part of an object in proximity to user100. In one embodiment, projector454may include a lighting unit that concentrates light within a limited solid angle by means of one or more mirrors and lenses, and may provide a high value of luminous intensity in a defined direction. Processing device460, shown inFIG.4, is assumed to have similar functionality as the functionality of processing device360described above in detail. When XR unit204is connected to input unit202, processing device460may work together with processing device360. Specifically, processing device460may implement virtual machine technologies or other technologies to provide the ability to execute, control, run, manipulate, store, etc., multiple software processes, applications, programs, etc. It is appreciated that other types of processor arrangements could be implemented to provide the capabilities disclosed herein. Sensors interface470, shown inFIG.4, is assumed to have similar functionality as the functionality of sensors interface370described above in detail. Specifically, sensors interface470may communicate with audio sensor471, image sensor472, motion sensor473, environmental sensor474, and other sensors475. The operations of an audio sensor, an image sensor, a motion sensor, an environmental sensor, and other sensors are described above with reference toFIG.3, details of which are not repeated herein. It will be appreciated that other types and combinations of sensors may be used to provide the capabilities disclosed herein. The components and arrangements shown inFIG.4are not intended to limit any embodiment. As will be appreciated by a person skilled in the art having the benefit of this disclosure, numerous variations and/or modifications may be made to the depicted configuration of XR unit204.
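Referring back to the selective connectivity described above, the following Python sketch illustrates one possible routing rule for network interface420: traffic is directed to input unit202while a wired connection is present and to mobile communications device206otherwise. The function name, message format, and target identifiers are illustrative assumptions only.

    # Non-limiting sketch: route outgoing messages based on whether XR unit 204
    # currently has a wired connection to input unit 202.
    def route_message(message, wired_connection_present):
        """Return the (target, message) pair the network interface would transmit."""
        target = ("input_unit_202" if wired_connection_present
                  else "mobile_communications_device_206")
        return target, message

    print(route_message({"type": "ui_input"}, wired_connection_present=True))
    print(route_message({"type": "ui_input"}, wired_connection_present=False))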
For example, not all components may be essential for the operation of XR unit204in all cases. Any component may be located in any appropriate part of system200, and the components may be rearranged into a variety of configurations while providing the functionality of various embodiments. For example, some XR units may not include all of the elements in XR unit204(e.g., wearable extended reality appliance110may not have light indicators451). FIG.5is a block diagram of an exemplary configuration of remote processing unit208.FIG.5is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. In the embodiment ofFIG.5, remote processing unit208may include a server210that directly or indirectly accesses a bus500(or other communication mechanism) interconnecting subsystems and components for transferring information within server210. For example, bus500may interconnect a memory interface510, a network interface520, a power source540, a processing device560, and a database580. Remote processing unit208may also include one or more data structures, for example, data structures212A,212B, and212C. Memory interface510, shown inFIG.5, is assumed to have similar functionality as the functionality of memory interface310described above in detail. Memory interface510may be used to access a software product and/or data stored on a non-transitory computer-readable medium or on other memory devices, such as memory devices311,411,511, or data structures212A,212B, and212C. Memory device511may contain software modules to execute processes consistent with the present disclosure. In particular, memory device511may include a shared memory module512, a node registration module513, a load balancing module514, one or more computational nodes515, an internal communication module516, an external communication module517, and a database access module (not shown). Modules512-517may contain software instructions for execution by at least one processor (e.g., processing device560) associated with remote processing unit208. Shared memory module512, node registration module513, load balancing module514, computational nodes515, and external communication module517may cooperate to perform various operations. Shared memory module512may allow information sharing between remote processing unit208and other components of system200. In some embodiments, shared memory module512may be configured to enable processing device560(and other processing devices in system200) to access, retrieve, and store data. For example, using shared memory module512, processing device560may perform at least one of: executing software programs stored on memory device511, database580, or data structures212A-C; storing information in memory device511, database580, or data structures212A-C; or retrieving information from memory device511, database580, or data structures212A-C. Node registration module513may be configured to track the availability of one or more computational nodes515. In some examples, node registration module513may be implemented as: a software program, such as a software program executed by one or more computational nodes515, a hardware solution, or a combined software and hardware solution. In some implementations, node registration module513may communicate with one or more computational nodes515, for example, using internal communication module516.
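By way of a non-limiting illustration of the node registration and load balancing cooperation described here, the following Python sketch tracks which computational nodes have reported themselves as available and assigns each incoming task to the least-loaded node. The class names, status messages, and assignment policy are assumptions made for illustration and do not describe the disclosed modules.

    # Non-limiting sketch: a registry of available nodes plus a simple
    # least-loaded assignment policy.
    class NodeRegistry:
        def __init__(self):
            self.available = {}                      # node_id -> current load

        def notify(self, node_id, status):
            """Called by nodes at startup, at shutdown, or at intervals."""
            if status == "up":
                self.available.setdefault(node_id, 0)
            else:
                self.available.pop(node_id, None)

    class LoadBalancer:
        def __init__(self, registry):
            self.registry = registry

        def assign(self, task):
            """Send the task to the available node with the smallest load."""
            if not self.registry.available:
                raise RuntimeError("no computational nodes available")
            node_id = min(self.registry.available, key=self.registry.available.get)
            self.registry.available[node_id] += 1    # track outstanding work
            return node_id, task

    registry = NodeRegistry()
    registry.notify("node-1", "up")
    registry.notify("node-2", "up")
    balancer = LoadBalancer(registry)
    print(balancer.assign("render frame 1"))   # -> ('node-1', 'render frame 1')
    print(balancer.assign("render frame 2"))   # -> ('node-2', 'render frame 2')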
In some examples, one or more computational nodes515may notify node registration module513of their status, for example, by sending messages: at startup, at shutdown, at constant intervals, at selected times, in response to queries received from node registration module513, or at any other determined times. In some examples, node registration module513may query about the status of one or more computational nodes515, for example, by sending messages: at startup, at constant intervals, at selected times, or at any other determined times. Load balancing module514may be configured to divide the workload among one or more computational nodes515. In some examples, load balancing module514may be implemented as: a software program, such as a software program executed by one or more of the computational nodes515, a hardware solution, or a combined software and hardware solution. In some implementations, load balancing module514may interact with node registration module513in order to obtain information regarding the availability of one or more computational nodes515. In some implementations, load balancing module514may communicate with one or more computational nodes515, for example, using internal communication module516. In some examples, one or more computational nodes515may notify load balancing module514of their status, for example, by sending messages: at startup, at shutdown, at constant intervals, at selected times, in response to queries received from load balancing module514, or at any other determined times. In some examples, load balancing module514may query about the status of one or more computational nodes515, for example, by sending messages: at startup, at constant intervals, at pre-selected times, or at any other determined times. Internal communication module516may be configured to receive and/or to transmit information from one or more components of remote processing unit208. For example, control signals and/or synchronization signals may be sent and/or received through internal communication module516. In one embodiment, input information for computer programs, output information of computer programs, and/or intermediate information of computer programs may be sent and/or received through internal communication module516. In another embodiment, information received through internal communication module516may be stored in memory device511, in database580, in data structures212A-C, or in any other memory device in system200. For example, information retrieved from data structure212A may be transmitted using internal communication module516. In another example, input data may be received using internal communication module516and stored in data structure212B. External communication module517may be configured to receive and/or to transmit information from one or more components of system200. For example, control signals may be sent and/or received through external communication module517. In one embodiment, information received through external communication module517may be stored in memory device511, in database580, in data structures212A-C, and/or in any memory device in system200. In another embodiment, information retrieved from any of data structures212A-C may be transmitted using external communication module517to XR unit204. In another embodiment, input data may be transmitted and/or received using external communication module517.
Examples of such input data may include data received from input unit202, information captured from the environment of user100using one or more sensors (e.g., audio sensor471, image sensor472, motion sensor473, environmental sensor474, other sensors475), and more. In some embodiments, aspects of modules512-517may be implemented in hardware, in software (including in one or more signal processing and/or application specific integrated circuits), in firmware, or in any combination thereof, executable by one or more processors, alone, or in various combinations with each other. Specifically, modules512-517may be configured to interact with each other and/or other modules of system200to perform functions consistent with disclosed embodiments. Memory device511may include additional modules and instructions or fewer modules and instructions. Network interface520, power source540, processing device560, and database580, shown inFIG.5, are assumed to have similar functionality as the functionality of similar elements described above with reference toFIGS.3and4. The specific design and implementation of the above-mentioned components may vary based on the implementation of system200. In addition, remote processing unit208may include more or fewer components. For example, remote processing unit208may include an input interface configured to receive direct input from one or more input devices. Consistent with the present disclosure, a processing device of system200(e.g., a processor within mobile communications device206, a processor within a server210, a processor within a wearable extended reality appliance, such as wearable extended reality appliance110, and/or a processor within an input device associated with wearable extended reality appliance110, such as keyboard104) may use machine learning algorithms in order to implement any of the methods disclosed herein. In some embodiments, machine learning algorithms (also referred to as machine learning models in the present disclosure) may be trained using training examples, for example in the cases described below. Some non-limiting examples of such machine learning algorithms may include classification algorithms, data regression algorithms, image segmentation algorithms, visual detection algorithms (such as object detectors, face detectors, person detectors, motion detectors, edge detectors, etc.), visual recognition algorithms (such as face recognition, person recognition, object recognition, etc.), speech recognition algorithms, mathematical embedding algorithms, natural language processing algorithms, support vector machines, random forests, nearest neighbors algorithms, deep learning algorithms, artificial neural network algorithms, convolutional neural network algorithms, recurrent neural network algorithms, linear machine learning models, non-linear machine learning models, ensemble algorithms, and more. For example, a trained machine learning algorithm may comprise an inference model, such as a predictive model, a classification model, a data regression model, a clustering model, a segmentation model, an artificial neural network (such as a deep neural network, a convolutional neural network, a recurrent neural network, etc.), a random forest, a support vector machine, and so forth. In some examples, the training examples may include example inputs together with the desired outputs corresponding to the example inputs.
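By way of a non-limiting illustration, the following Python sketch trains a classification algorithm on example inputs paired with desired outputs and then estimates outputs for inputs not included in the training examples. The use of scikit-learn, the toy feature vectors, and the gesture labels are assumptions made for illustration and do not limit the disclosure.

    # Non-limiting sketch: training a classifier on example inputs with
    # corresponding desired outputs, then predicting labels for new inputs.
    from sklearn.ensemble import RandomForestClassifier

    # Toy training examples: each input is a 2-D feature vector, each desired
    # output is a class label (e.g., a recognized hand-gesture category).
    example_inputs  = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
    desired_outputs = ["pinch",    "pinch",    "swipe",    "swipe"]

    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(example_inputs, desired_outputs)          # train on the examples

    new_inputs = [[0.15, 0.15], [0.85, 0.85]]           # not in the training set
    print(model.predict(new_inputs))                    # expected: ['pinch' 'swipe']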
Further, in some examples, training machine learning algorithms using the training examples may generate a trained machine learning algorithm, and the trained machine learning algorithm may be used to estimate outputs for inputs not included in the training examples. In some examples, engineers, scientists, processes and machines that train machine learning algorithms may further use validation examples and/or test examples. For example, validation examples and/or test examples may include example inputs together with the desired outputs corresponding to the example inputs, a trained machine learning algorithm and/or an intermediately trained machine learning algorithm may be used to estimate outputs for the example inputs of the validation examples and/or test examples, the estimated outputs may be compared to the corresponding desired outputs, and the trained machine learning algorithm and/or the intermediately trained machine learning algorithm may be evaluated based on a result of the comparison. In some examples, a machine learning algorithm may have parameters and hyper parameters, where the hyper parameters may be set manually by a person or automatically by a process external to the machine learning algorithm (such as a hyper parameter search algorithm), and the parameters of the machine learning algorithm may be set by the machine learning algorithm based on the training examples. In some implementations, the hyper-parameters may be set based on the training examples and the validation examples, and the parameters may be set based on the training examples and the selected hyper-parameters. For example, given the hyper-parameters, the parameters may be conditionally independent of the validation examples. In some embodiments, trained machine learning algorithms (also referred to as machine learning models and trained machine learning models in the present disclosure) may be used to analyze inputs and generate outputs, for example in the cases described below. In some examples, a trained machine learning algorithm may be used as an inference model that when provided with an input generates an inferred output. For example, a trained machine learning algorithm may include a classification algorithm, the input may include a sample, and the inferred output may include a classification of the sample (such as an inferred label, an inferred tag, and so forth). In another example, a trained machine learning algorithm may include a regression model, the input may include a sample, and the inferred output may include an inferred value corresponding to the sample. In yet another example, a trained machine learning algorithm may include a clustering model, the input may include a sample, and the inferred output may include an assignment of the sample to at least one cluster. In an additional example, a trained machine learning algorithm may include a classification algorithm, the input may include an image, and the inferred output may include a classification of an item depicted in the image. In yet another example, a trained machine learning algorithm may include a regression model, the input may include an image, and the inferred output may include an inferred value corresponding to an item depicted in the image (such as an estimated property of the item, such as size, volume, age of a person depicted in the image, distance from an item depicted in the image, and so forth). 
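By way of a non-limiting illustration only, the following Python sketch shows one possible way to fit a classifier on training examples, select hyper-parameters using validation examples, and then estimate outputs for inputs not included in the training examples, consistent with the general description above. The scikit-learn estimator, the randomly generated features, and the split sizes are assumptions made solely for illustration and are not part of the disclosed embodiments.

# Minimal sketch (not the disclosed system): training with separate training
# and validation examples, with hyper-parameters chosen by a process external
# to the learning algorithm itself.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import numpy as np

# Hypothetical example inputs (features) and desired outputs (labels).
X = np.random.rand(1000, 8)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

# Split into training examples and validation examples.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

best_model, best_score = None, -1.0
# Hyper-parameter search external to the learning algorithm.
for n_estimators in (50, 100, 200):
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
    model.fit(X_train, y_train)                           # parameters set from training examples
    score = accuracy_score(y_val, model.predict(X_val))   # evaluated on validation examples
    if score > best_score:
        best_model, best_score = model, score

# The trained model may now estimate outputs for new inputs.
predictions = best_model.predict(np.random.rand(5, 8))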
In an additional example, a trained machine learning algorithm may include an image segmentation model, the input may include an image, and the inferred output may include a segmentation of the image. In yet another example, a trained machine learning algorithm may include an object detector, the input may include an image, and the inferred output may include one or more detected objects in the image and/or one or more locations of objects within the image. In some examples, the trained machine learning algorithm may include one or more formulas and/or one or more functions and/or one or more rules and/or one or more procedures, the input may be used as input to the formulas and/or functions and/or rules and/or procedures, and the inferred output may be based on the outputs of the formulas and/or functions and/or rules and/or procedures (for example, selecting one of the outputs of the formulas and/or functions and/or rules and/or procedures, using a statistical measure of the outputs of the formulas and/or functions and/or rules and/or procedures, and so forth). Consistent with the present disclosure, a processing device of system200may analyze image data captured by an image sensor (e.g., image sensor372, image sensor472, or any other image sensor) in order to implement any of the methods disclosed herein. In some embodiments, analyzing the image data may comprise analyzing the image data to obtain a preprocessed image data, and subsequently analyzing the image data and/or the preprocessed image data to obtain the desired outcome. One of ordinary skill in the art will recognize that the followings are examples, and that the image data may be preprocessed using other kinds of preprocessing methods. In some examples, the image data may be preprocessed by transforming the image data using a transformation function to obtain a transformed image data, and the preprocessed image data may comprise the transformed image data. For example, the transformed image data may comprise one or more convolutions of the image data. For example, the transformation function may comprise one or more image filters, such as low-pass filters, high-pass filters, band-pass filters, all-pass filters, and so forth. In some examples, the transformation function may comprise a nonlinear function. In some examples, the image data may be preprocessed by smoothing at least parts of the image data, for example using Gaussian convolution, using a median filter, and so forth. In some examples, the image data may be preprocessed to obtain a different representation of the image data. For example, the preprocessed image data may comprise: a representation of at least part of the image data in a frequency domain; a Discrete Fourier Transform of at least part of the image data; a Discrete Wavelet Transform of at least part of the image data; a time/frequency representation of at least part of the image data; a representation of at least part of the image data in a lower dimension; a lossy representation of at least part of the image data; a lossless representation of at least part of the image data; a time ordered series of any of the above; any combination of the above; and so forth. In some examples, the image data may be preprocessed to extract edges, and the preprocessed image data may comprise information based on and/or related to the extracted edges. In some examples, the image data may be preprocessed to extract image features from the image data. 
Some non-limiting examples of such image features may comprise information based on and/or related to: edges; corners; blobs; ridges; Scale Invariant Feature Transform (SIFT) features; temporal features; and so forth. In some examples, analyzing the image data may include calculating at least one convolution of at least a portion of the image data, and using the calculated at least one convolution to calculate at least one resulting value and/or to make determinations, identifications, recognitions, classifications, and so forth. Consistent with other aspects of the disclosure, a processing device of system200may analyze image data in order to implement any of the methods disclosed herein. In some embodiments, analyzing the image may comprise analyzing the image data and/or the preprocessed image data using one or more rules, functions, procedures, artificial neural networks, object detection algorithms, face detection algorithms, visual event detection algorithms, action detection algorithms, motion detection algorithms, background subtraction algorithms, inference models, and so forth. Some non-limiting examples of such inference models may include: an inference model preprogrammed manually; a classification model; a regression model; a result of training algorithms, such as machine learning algorithms and/or deep learning algorithms, on training examples, where the training examples may include examples of data instances, and in some cases, a data instance may be labeled with a corresponding desired label and/or result, and more. In some embodiments, analyzing image data (for example by the methods, steps and modules described herein) may comprise analyzing pixels, voxels, point cloud, range data, etc. included in the image data. A convolution may include a convolution of any dimension. A one-dimensional convolution is a function that transforms an original sequence of numbers to a transformed sequence of numbers. The one-dimensional convolution may be defined by a sequence of scalars. Each particular value in the transformed sequence of numbers may be determined by calculating a linear combination of values in a subsequence of the original sequence of numbers corresponding to the particular value. A result value of a calculated convolution may include any value in the transformed sequence of numbers. Likewise, an n-dimensional convolution is a function that transforms an original n-dimensional array to a transformed array. The n-dimensional convolution may be defined by an n-dimensional array of scalars (known as the kernel of the n-dimensional convolution). Each particular value in the transformed array may be determined by calculating a linear combination of values in an n-dimensional region of the original array corresponding to the particular value. A result value of a calculated convolution may include any value in the transformed array. In some examples, an image may comprise one or more components (such as color components, depth component, etc.), and each component may include a two dimensional array of pixel values. In one example, calculating a convolution of an image may include calculating a two dimensional convolution on one or more components of the image. In another example, calculating a convolution of an image may include stacking arrays from different components to create a three dimensional array, and calculating a three dimensional convolution on the resulting three dimensional array. 
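By way of a non-limiting illustration only, the following Python sketch computes a one-dimensional convolution of a sequence, a two-dimensional convolution of a single image component, and a three-dimensional convolution of stacked components, in the manner described above. The use of NumPy and SciPy, the kernel values, and the array shapes are assumptions for illustration only.

# Minimal sketch of the convolutions described above; kernels and shapes are
# illustrative placeholders.
import numpy as np
from scipy.ndimage import convolve

# One-dimensional convolution: each value of the transformed sequence is a
# linear combination of a subsequence of the original sequence.
original_sequence = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
kernel_1d = np.array([0.25, 0.5, 0.25])
transformed_sequence = np.convolve(original_sequence, kernel_1d, mode="same")

# Two-dimensional convolution on one image component (e.g., one color
# component), here with a smoothing kernel.
image_component = np.random.rand(64, 64)
kernel_2d = np.ones((3, 3)) / 9.0
smoothed_component = convolve(image_component, kernel_2d, mode="nearest")

# Three-dimensional convolution on components stacked into a 3-D array.
stacked = np.stack([np.random.rand(64, 64) for _ in range(3)], axis=0)
kernel_3d = np.ones((3, 3, 3)) / 27.0
result = convolve(stacked, kernel_3d, mode="nearest")

# A result value of a calculated convolution may be any value in the
# transformed array, e.g., the center value.
result_value = result[1, 32, 32]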
In some examples, a video may comprise one or more components (such as color components, depth component, etc.), and each component may include a three dimensional array of pixel values (with two spatial axes and one temporal axis). In one example, calculating a convolution of a video may include calculating a three dimensional convolution on one or more components of the video. In another example, calculating a convolution of a video may include stacking arrays from different components to create a four dimensional array, and calculating a four dimensional convolution on the resulting four dimensional array. Wearable extended reality appliances may include different display regions. The display regions may be permanently set, or may be dynamically configurable, for example by software or hardware components. Dynamically controlling the display luminance or intensity in the different display regions may be beneficial, for example to conserve resources and/or to accommodate user visibility needs. For example, the display luminance may be dimmed in a less relevant region of a display and intensified in a more relevant region to hold the focus of the user on the more relevant region, and/or to efficiently allocate resources (e.g., electrical energy). As another example, the display luminance may be dimmed or intensified to prevent eye strain or motion sickness, to accommodate ambient lighting conditions, and/or to meet energy consumption requirements. One technique for dynamically controlling the display luminance may be to dynamically control the duty cycle of the display signal in each region of a wearable extended reality appliance. In other examples, dynamically controlling the duty cycle of the display signal in each region of a wearable extended reality appliance may be beneficial regardless of the display luminance or intensity, for example to prevent eye strain or motion sickness, to accommodate ambient lighting conditions, and/or to meet energy consumption requirements. In some examples, dynamically controlling the duty cycle of the display signal in the entire display of a wearable extended reality appliance may be beneficial, for example to prevent eye strain or motion sickness, to accommodate ambient lighting conditions, and/or to meet energy consumption requirements. In some embodiments, duty cycle control operations may be performed for wearable extended reality appliances. Data representing virtual content in an extended reality environment and associated with a wearable extended reality appliance may be received. Two separate display regions (e.g., a first display region and a second display region) of the wearable extended reality appliance may be identified. A duty cycle configuration may be determined for each display region. Thus, a first duty cycle configuration may be determined for the first display region, and a second duty cycle configuration may be determined for the second display region, where the second duty cycle configuration differs from the first duty cycle configuration. The wearable extended reality appliance may be caused to display virtual content in the first display region according to the first determined duty cycle configuration and in the second display region according to the second determined duty cycle configuration. In this manner, virtual content may be displayed in each display region of the wearable extended reality appliance in accordance with a different duty cycle configuration. 
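By way of a non-limiting illustration only, the following Python sketch outlines the duty cycle control flow summarized above: receiving data representing virtual content, identifying two separate display regions, determining a different duty cycle configuration for each region, and causing the content to be displayed accordingly. All class, function, and parameter names (including the appliance object and its methods) are hypothetical placeholders and do not correspond to any actual interface of the disclosed system.

# High-level sketch of the duty cycle control flow; names and values are
# hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class DutyCycleConfiguration:
    duty_cycle: float       # fraction of each cycle in the "high" state, 0.0-1.0
    frequency_hz: float     # cycle frequency of the display signal

def perform_duty_cycle_control(appliance, virtual_content):
    # Receive data representing virtual content (here passed in directly).
    data = virtual_content

    # Identify two separate display regions (placeholder logic).
    first_region, second_region = appliance.identify_display_regions(data)

    # Determine a different duty cycle configuration for each region.
    first_config = DutyCycleConfiguration(duty_cycle=0.60, frequency_hz=120.0)
    second_config = DutyCycleConfiguration(duty_cycle=0.20, frequency_hz=120.0)

    # Cause the appliance to display the virtual content per region.
    appliance.display(data, region=first_region, config=first_config)
    appliance.display(data, region=second_region, config=second_config)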
In some instances, the description that follows may refer toFIGS.6-9which illustrate exemplary implementations for performing duty cycle control operations for representing virtual content in an extended reality environment associated with a wearable extended reality appliance, consistent with some disclosed embodiments.FIGS.6-9are intended merely to facilitate the conceptualizing of one exemplary implementation for performing duty cycle control operations to represent virtual content via a wearable extended reality appliance and do not limit the disclosure to any particular implementation. Additionally, while the description that follows generally relates to a first duty cycle configuration corresponding to a higher duty cycle and a second duty cycle configuration corresponding to a lower duty cycle, this is for illustrative purposes only and does not limit the invention. It thus may be noted that some implementations and/or applications may include the first duty cycle configuration corresponding to a lower duty cycle than the second duty cycle configuration. Additionally, the use of the descriptors “first’ and “second” is intended merely to distinguish between two different entities and does not necessarily assign a higher ordinality or importance to one entity versus the other. The description that follows includes references to smart glasses as an exemplary implementation of a wearable extended reality appliance. It is to be understood that these examples are merely intended to assist in gaining a conceptual understanding of disclosed embodiments, and do not limit the disclosure to any particular implementation for a wearable extended reality appliance. The disclosure is thus understood to relate to any implementation for a wearable extended reality appliance, including implementations different than smart glasses. Some embodiments involve a non-transitory computer readable medium containing instructions that when executed by at least one processor cause the at least one processor to perform duty cycle control operations for wearable extended reality appliances. The term “non-transitory computer-readable medium” may be understood as described earlier. The term “instructions” may refer to program code instructions that may be executed by a computer processor. The instructions may be written in any type of computer programming language, such as an interpretive language (e.g., scripting languages such as HTML and JavaScript), a procedural or functional language (e.g., C or Pascal that may be compiled for converting to executable code), object-oriented programming language (e.g., Java or Python), logical programming language (e.g., Prolog or Answer Set Programming), or any other programming language. In some embodiments, the instructions may implement methods associated with machine learning, deep learning, artificial intelligence, digital image processing, optimization algorithms, and any other computer processing technique. The term “processor” may refer to any physical device having an electric circuit that performs a logic operation. A processor may include one or more integrated circuits, microchips, microcontrollers, microprocessors, all or part of a central processing unit (CPU), graphics processing unit (GPU), digital signal processor (DSP), field programmable gate array (FPGA), or other circuits suitable for executing instructions or performing logic operations, as described earlier. 
The term “cycle” may refer to a portion of an oscillating signal that repeats periodically (e.g., regularly) over time. For each cycle of an oscillating signal, a fraction of the cycle may be associated with a “high” (e.g., on or active) state, and another fraction of the cycle may be associated with a “low” (e.g., off or inactive) state such that aggregating the cycles of the signal over time causes the oscillating signal to regularly alternate between the “high” and “low” (e.g., on/off, or active/inactive) states. The term “duty cycle” may relate to the fraction of a cycle of an oscillating signal associated with the “high” (e.g., on or active) state versus “low” (e.g., off or inactive) state. For example, a signal having a 50% duty cycle may be set to the “high” state for half of each cycle and substantially to the “low” state for the complementary half of each cycle, accounting for latency and response times to transition between the high and low states. Aggregating multiple cycles over a time duration may result in an oscillating signal having a substantially uniform 50/50 distribution between the high/low states for the time duration. As another example, a signal having a 75% duty cycle may be set to the “high” state for three quarters of each cycle and to the “low” state substantially for the complementary one quarter of the cycle, accounting for latency and response times, such that aggregating multiple cycles over a time duration results in an oscillating signal having a substantially uniform 75/25 distribution between the high/low states for the time duration. For a visual display application, the high state may be associated with a high level of light output (e.g., illumination set to on or active and relatively high-power consumption) and the low state may be associated with a low level of light output (e.g., illumination set to off or inactive and relatively low power consumption). Thus, controlling the duty cycle of a display signal may allow modifying the total light output of the display signal. In some examples, reducing the duty cycle may reduce the total light output, thereby reducing luminosity or intensity and power consumption, whereas increasing the duty cycle may increase the total light output, thereby increasing luminosity or intensity and power consumption. In some examples, reducing the duty cycle may reduce the opacity, whereas increasing the duty cycle may increase the opacity. In some examples, for example when the frequency of the cycles in the display signal is sufficiently high (e.g., above a fusion threshold) and other steps are taken to maintain luminosity or intensity (for example by changing the maximum voltage), transitioning between the high and low states of each cycle may not be perceivable by the human eye, allowing for a “smooth” visual user experience preventing eye strain. Thus, controlling the duty cycle for a visual display may allow smooth transitions between varying display configurations. The term “duty cycle control operations” may refer to one or more arithmetic and/or logical computations or procedures that may be performed by at least one processor for controlling the duty cycle of a display signal. 
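By way of a non-limiting illustration only, the following Python sketch generates oscillating waveforms with 50% and 75% duty cycles and shows that the time-averaged output approximates the duty cycle, which is why a higher duty cycle corresponds to a higher total light output. The sample rate, frequency, and duration are arbitrary illustrative values.

# Minimal sketch of how a duty cycle shapes an oscillating signal; values are
# illustrative and independent of any particular display hardware.
import numpy as np

def pwm_waveform(duty_cycle, frequency_hz, duration_s, sample_rate_hz=10000):
    # Return a 0/1 waveform set "high" for the given fraction of each cycle.
    t = np.arange(0.0, duration_s, 1.0 / sample_rate_hz)
    phase = (t * frequency_hz) % 1.0           # position within the current cycle
    return (phase < duty_cycle).astype(float)  # "high" for duty_cycle of each cycle

# A 50% duty cycle spends half of each cycle in the high state; a 75% duty
# cycle spends three quarters of each cycle in the high state.
signal_50 = pwm_waveform(duty_cycle=0.50, frequency_hz=120.0, duration_s=0.1)
signal_75 = pwm_waveform(duty_cycle=0.75, frequency_hz=120.0, duration_s=0.1)

# Aggregated over time, the mean of the waveform approximates the duty cycle.
print(signal_50.mean(), signal_75.mean())   # approximately 0.5 and 0.75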
For example, the duty cycle control operations may include instructions to implement pulse-width modulation (PWM), pulse-duration modulation (PDM), filters, signal compression or expansion, inversions, solutions to differential equations, statistical and/or polynomial signal processing, stochastic signal processing, estimation and detection techniques and any additional signal processing techniques affecting the duty cycle of a display signal. The term “wearable extended reality appliances” may refer to a head-mounted device, for example, smart glasses, smart contact lens, headsets or any other device worn by a human for purposes of presenting an extended reality to the human, as described earlier. Thus, the at least one processor may control the luminosity (e.g., brightness) and/or the energy consumption for displaying content via a wearable extended reality appliance by controlling the duty cycle of the display signal. For example, during a first time duration, the at least one processor may perform a first PWM procedure to increase the duty cycle from 50% to 75% to increase the brightness of content displayed via a wearable extended reality appliance. During a second time duration, the at least one processor may perform a second PWM procedure to decrease the duty cycle from 75% to 50% to dim the display of content via the wearable extended reality appliance. As another example, the at least one processor controlling the display of content via a wearable extended reality appliance may perform a PWM procedure to display incoming messages of a messaging application according to a 75% duty cycle, e.g., to draw the attention of the user, and display a weather application according to a 50% duty cycle, e.g., as a background application. By way of a non-limiting example, an exemplary implementation for performing duty cycle control operations for wearable extended reality appliances is shown inFIG.6. Similar toFIG.1,FIG.6illustrates a user100wearing wearable extended reality appliance110, with the noted difference of an extended reality environment620including a first display region602, a second display region604, a physical wall606, and a physical desktop608. First display region602may be associated with the display of virtual screen112according to one duty cycle configuration (e.g., duty cycle configuration610), and second display region604may be associated with the display of virtual widgets114C and114D according to a different duty cycle configuration (e.g., duty cycle configuration612). Processing device460(FIG.4) may perform a PWM procedure (e.g., a duty cycle control operation) to control the display of content via wearable extended reality appliance110such that content associated with virtual screen112is displayed according to duty cycle configuration610(e.g., 60%), for example to focus the attention of user100on virtual screen112, while virtual widget114C may be displayed according to duty cycle configuration612(e.g., 20%), for example, as a background application. Some embodiments involve receiving data representing virtual content in an extended reality environment associated with a wearable extended reality appliance. The term “receiving” may refer to accepting delivery of, acquiring, retrieving, generating, obtaining or otherwise gaining access to. For example, information or data may be received in a manner that is detectable by or understandable to a processor. 
The data may be received via a communications channel, such as a wired channel (e.g., cable, fiber) and/or wireless channel (e.g., radio, cellular, optical, IR). The data may be received as individual packets or as a continuous stream of data. The data may be received synchronously, e.g., by periodically polling a memory buffer, queue or stack, or asynchronously, e.g., via an interrupt event. For example, the data may be received from an input device or sensor configured with input unit202(FIG.1), from mobile communications device206, from remote processing unit208, or from any other local and/or remote source, and the data may be received by wearable extended reality appliance110, mobile communications device206, remote processing unit208, or any other local and/or remote computing device. In some examples, the data may be received from a memory unit, may be received from an external device, may be generated based on other information (for example, generated using a rendering algorithm based on at least one of geometrical information, texture information or textual information), and so forth. The term “content” may refer to data or media. Such data or media may be formatted according to a distinct specification for presenting information to a user via an interface of an electronic device. For example, content may include any combination of data formatted as text, image, audio, video, haptic, and any other data type for conveying information to a user. The term “virtual content” may refer to synthesized content that may exist wholly within the context of one or more processing devices, for example within an extended reality environment. Virtual content may thus be distinguished from physical or real-world content that may exist or be generated independent of a processing device. For example, voice data for a synthesized digital avatar may be virtual content, whereas a recorded voice message of a human user may be associated with physical, real-world (e.g., non-virtual) content. By way of another example, virtual content may be a synthesized image, in contrast to a real-world image. The term “data representing virtual content” may include signals carrying or encoding the virtual content. Such data (e.g., information encoded into binary bits or n-ary qubits) may be formatted according to one or more distinct specifications to allow rendering virtual content associated with the data via a user interface of an electronic device. The term “extended reality environment”, e.g., also referred to as “extended reality”, “extended reality space”, or “extended environment”, may refer to all types of real-and-virtual combined environments and human-machine interactions at least partially generated by computer technology, as described earlier. The extended reality environment may be implemented via at least one processor and at least one extended reality appliance (e.g., a wearable and/or non-wearable extended reality appliance). The term “associated with” may refer to the existence of a relationship, affiliation, correspondence, link or any other type of connection or correlation. The term “wearable extended reality appliance” may be understood as described earlier. The wearable extended reality appliance may produce or generate an extended reality environment including representations of physical (e.g., real) objects and virtual content for viewing by the wearer. For example, wearable extended reality appliance may be a pair of smart glasses. 
The extended reality environment associated with the pair of smart glasses may include the field-of-view of the wearer of the smart glasses, e.g., a portion of the physical environment surrounding the wearer, as well as any virtual content superimposed thereon. Encoded information (e.g., data) for rendering (e.g., representing) virtual content may be obtained (e.g., received), for example, by a processor associated with the pair of smart glasses. The encoded information may be processed for displaying the virtual content within the extended reality environment generated by (e.g., associated with) the pair of smart glasses (e.g., a wearable extended reality appliance). For example, the extended reality environment may include different display regions, where received video content may be displayed in a forward-center region of the field-of-view of the wearer, and text content may be displayed as a notification (e.g., in a bottom right corner of the field-of-view of the wearer). By way of a non-limiting example,FIG.6illustrates an extended reality environment620generated for user100by (e.g., at least) processing device460(FIG.4) and wearable extended reality appliance110. Processing device460may receive first data encoded as text content and second data encoded as image content for displaying to user100via (e.g., associated with) wearable extended reality appliance110. The text content may represent values for virtual axes of a virtual bar graph displayed on virtual screen112, and the image data may represent virtual widgets114C and114D. Some embodiments involve identifying in the extended reality environment a first display region and a second display region separated from the first display region. The term “identifying” may refer to recognizing, perceiving, or otherwise determining or establishing an association with a known entity, quantity, or value. The term “extended reality environment” may be understood as described earlier. The term “display region” may refer to a designated area or zone (e.g., physical and/or virtual) inside the field-of-view of a wearer of a wearable extended reality appliance. For example, the field of view of the wearer may be visible via an electronic display screen (e.g., semi-transparent screen) of the wearable extended reality appliance. For example, the electronic display may be an electroluminescent (EL), liquid crystal (LC), light emitting diode (LED) including OLED and AMOLED, plasma, quantum dot, or cathode ray tube display, or any other type of electronic display technology. The electronic display may include a region for presenting content (e.g., virtual content) together with (e.g., overlaid on, alongside, or otherwise co-presented with) the physical environment surrounding the wearer. For example, one part of the region may be non-transparent for presenting the virtual content, and another part of the region may be transparent for presenting the physical environment. The term “separate” may refer to detached, partitioned, or otherwise disassociated, e.g., disjointed. Thus, the extended reality environment may include multiple different areas or zones for presenting content (e.g., display regions) that are disassociated (e.g., separate) from each other. Each zone (e.g., display region) may include one or more transparent parts for viewing the physical environment, and one or more non-transparent parts for presenting virtual content overlaid on the presentation of the physical environment. 
For example, a processing device may be configured to recognize or establish (e.g., identify) different display regions according to one or more characteristics, such as relating to the wearer, the virtual content being displayed, the physical environment, software and/or hardware requirements of the wearable extended reality appliance, and/or any other characteristic relevant to the extended reality environment. For example, the different display regions may be recognized (e.g., identified) according to the viewing angle of the wearer (e.g., the front center may be the first display region, and the right side may be the second display region). As another example, the different display regions may be established (e.g., identified) according to different attributes of the content displayed therein, such as the context (e.g., high versus low priority, primary or peripheral content), type (e.g., text, image, video), resolution (e.g., high versus low), representation (e.g., 2D versus 3D, grey scale versus color), or temporal attributes (e.g., current versus historical). In some embodiments, different display regions may be identified according to hardware characteristics of one or more extended reality appliances used to implement the extended reality environment, such as the power consumption, resolution capability, channel capacity, memory requirements, or any other hardware characteristic affecting the capability to render content. In some embodiments, different display regions may be identified according to characteristics of the user consuming content via the extended reality environment, such as the type of user (e.g., adult or child, young or old, disabled or able bodied), the user application (e.g., professional or lay), the user activity (e.g., gaming, trading, viewing streamed content, editing a text document). In some embodiments, different display regions of the extended reality environment may be identified according to ambient conditions, such as lighting, temperature, the presence of physical objects and/or background noise. For example, a top-center area (e.g., a first display region) of the extended reality environment may be designated for rendering high priority content (e.g., warnings or alerts), and a bottom-side region of the extended reality environment may be designated for rendering lower priority content (e.g., weather updates). As another example, a left-oriented area may be designated for displaying 3D color images, and a right-oriented area may be designated for displaying white text against a dark background. By way of a non-limiting example, turning toFIG.6, processing device460(FIG.4) may identify in extended reality environment620(e.g., implemented via wearable extended reality appliance110), a first display region602for displaying a first category of content, e.g., relating to a work application for user100, such as a bar chart and accompanying text on virtual screen112, and a second display region604for displaying a second category of content, e.g., relating to personal applications for user100, such as virtual widget114C associated with the local weather forecast and virtual widget114D associated with personal emails. According to some embodiments, identifying the first display region and the second display region is based on an analysis of the received data. For example, the received data may represent the virtual content. The terms “identifying”, “display region”, and “received data” may be understood as described earlier. 
The term “based on” may refer to established or founded upon, or otherwise derived from. The term “analysis of the received data” may include examining or investigating the received data, such as by parsing one or more elements of the received data, and using the parsed elements to perform one or more computations, queries, comparisons, reasoning, deduction, extrapolation, interpolation, or any other logical or arithmetic operation, e.g., to determine a fact, conclusion, or consequence associated with the received data. The analysis may be based on the data type, size, format, a time when the data was received and/or sent, communication and/or processing latency, a communications channel and/or network used to receive the data, a source of the received data, the context under which the data was received, or any other criterion relevant to determining a fact or consequence associated with the received data. For example, an analysis of first received data may identify the first received data as video content for a live streaming application that may be displayed in a central region of the extended reality display environment, whereas analysis of second received data may identify the second received data as text content for an electronic mail application that may be displayed in a peripheral region of the extended reality display environment. In some examples, a machine learning model may be trained using training examples to identify display regions based on data representing virtual content. An example of such training example may include a sample data representing sample virtual content, together with a label indicating one or more desired display regions. The trained machine learning model may be used to analyze the received data representing the virtual content to identify the first display region and/or the second display region. By way of a non-limiting example, turning toFIG.6, processing device460(FIG.4)may analyze first received data to determine that the first received data is graphic content. Based on this analysis, processing device460may identify first display region602for rendering the first received data, e.g., on virtual screen112. In addition, processing device460may analyze second received data to determine a source of the second received data, e.g., a remote server providing weather updates. Based on this analysis, processing device460may identify second display region604for rendering the second received data, e.g., in association with virtual widget114C. According to some embodiments, identifying the first display region and the second display region is based on an area of focus of a wearer of the wearable extended reality appliance. The terms “identifying”, “display region”, “based on”, and “wearable extended reality appliance” may be understood as described earlier. The term “wearer of the wearable extended reality appliance” may include a user donning, carrying, or otherwise being communicatively connected to the wearable extended reality appliance, e.g., as clothing, an accessory (e.g., glasses, watch, hearing aid, ankle bracelet), a tattoo imprinted on the skin, as a sticker adhering to the surface of the skin, as an implant embedded beneath the skin (e.g., a monitor or regulator), or any other type of wearable extended reality appliance. 
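By way of a non-limiting illustration only, the following Python sketch shows one possible way to identify a display region for received data, first with simple rules and then with a classifier trained on labeled examples, in the spirit of the analysis and machine learning approach described above. The feature encoding, the labels, and the scikit-learn estimator are assumptions for illustration only and do not represent the disclosed embodiments.

# Minimal sketch of mapping attributes of received data to a display region.
from sklearn.tree import DecisionTreeClassifier

def identify_region_by_rules(content_type, priority):
    # Rule-based assignment, e.g., video or high-priority content to the
    # central (first) region, peripheral content to the second region.
    if content_type == "video" or priority == "high":
        return "first_display_region"
    return "second_display_region"

# Training examples: sample content attributes with labeled desired regions.
# Feature vector: [is_video, is_text, priority_level]
X_train = [[1, 0, 2], [0, 1, 0], [0, 1, 2], [1, 0, 1], [0, 0, 0]]
y_train = ["first_display_region", "second_display_region",
           "first_display_region", "first_display_region",
           "second_display_region"]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Analyze newly received data (encoded as the same feature vector) to
# identify a display region for it.
predicted_region = model.predict([[0, 1, 1]])[0]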
The term “area of focus” may include a region or zone surrounding a point associated with a line of sight (e.g., detected via an eye tracker), a head pose (e.g., angle, orientation, and/or inclination detected via an inertial measurement sensor), a gaze, or a region of an electronic display (e.g., a window, widget, document, image, application, or any other displayed element) selected for example via a keyboard, electronic pointing device controlling a cursor, voice command, tracked gesture (e.g., head, hand or any other type of gesture), eye tracking apparatus, or any other means for selecting a displayed element. The area of focus may include a shape (e.g., circle, square, ellipse, or any other geometric shape) surrounding a point corresponding to the line-of-sight of the wearer or a position of the cursor, or a document or application associated with such a point. Thus, the first and second display regions may be identified based on the behavior of the user, e.g., where the wearer of the extended reality appliance is looking, pointing to, or otherwise indicating. For example, a forward center region of the virtual reality environment may be identified as the first display region based on an eye tracker detecting the line-of-sight of the wearer of the extended reality appliance. As another example, a bottom left region of the virtual reality environment may be identified as a second display region based on a selection via an electronic pointing device. By way of a non-limiting example, display region602inFIG.6may be identified by processing device460(FIG.4) based on an eye tracker (not shown) configured with wearable extended reality appliance110detecting the line-of-sight of user100(e.g., a first area of focus of user100). Concurrently, display region604may be identified by processing device460based on a selection of virtual widget114C via electronic mouse106(e.g., a second area of focus of user100). According to some embodiments, identifying the first display region and the second display region is based on characteristics of the extended reality environment resulting from a physical environment of the wearable extended reality appliance. The terms “identifying”, “display region”, “based on”, “extended reality environment”, and “wearable extended reality appliance” may be understood as described earlier. The term “characteristics” may include attributes, properties, aspects, traits, or any other feature distinctly associated with the extended reality environment. The term “physical environment” may refer to the real-world surroundings of the wearable extended reality appliance, such as the presence of walls, surfaces (e.g., floor, table tops, ceiling), obstructing objects (house plants, people, furniture, walls, doors), windows, and any other physical object potentially affecting the display of content via the wearable extended reality appliance. The term “resulting from” may refer to following from, or consequent to. Thus, the extended reality environment may be affected by the physical environment surrounding the wearable extended reality appliance, by including one or more objects facilitating the display of virtual content (e.g., smooth, opaque, blank, white or pale colored, flat, and/or large surfaces) and/or one or more objects hampering the display of virtual content (e.g., obstructions, rough, small, dark, or transparent surfaces, bright lights or other distracting objects). 
Accordingly, the first and second display regions may be identified according to one or more characteristics of physical objects in the extended reality environment, such as the distance (e.g., far or close), color (e.g., dark, light, varied or textured), size, texture (e.g., roughness, smoothness), opacity, transparency, shape, position (e.g., relative to the head pose of the wearer), exposure to light or shadow, and any other physical characteristic affecting display capabilities. For example, a blank white wall facing the wearable extended reality appliance may be identified as a first display region (e.g., requiring an average duty cycle to display content), and a window positioned adjacent to the wall may be identified as a second display region (e.g., requiring a higher duty cycle to display content to overcome daylight). As another example, a desktop facing the wearable extended reality appliance may be identified as a first display region, and a ceiling may be identified as a second display region. By way of a non-limiting example, processing device460(FIG.4) may identify wall606as display region602(FIG.6) based on its position relative to user100(e.g., front-forward), its large size, white color, smooth texture, and the lack of a bright light source (e.g., window) hindering user100from seeing virtual content displayed thereon. Similarly, processing device460may identify desktop608of a desk as another display region based on its position (e.g., front-down), its flat, smooth surface, uniform color, and the lack of an obstructing object (e.g., a houseplant) hindering user100from seeing virtual content displayed thereon. Some embodiments may involve receiving image data captured from the physical environment of the wearable extended reality appliance using an image sensor included in the wearable extended reality appliance; and analyzing the received image data to identify the first display region and the second display region. The terms “receiving”, “physical environment”, “wearable extended reality appliance”, “display region”, and “identify” may be understood as described earlier. The term “image data” may refer to pixel data streams, digital images, digital video streams, data derived from captured images, and data that may be used to construct one or more 2D and/or 3D images, a sequence of 2D and/or 3D images, 2D and/or 3D videos, or a virtual 2D and/or 3D representation, as described earlier. The image data may include data configured to convey information associated with the visual characteristics of an image, for example as a graphic form or picture, as described earlier. For example, the image data may include at least one of pixels, voxels or meta-data. The term “captured” may refer to the detection, sensing or acquisition of information using an optical sensor, such as by sensing light waves reflecting off an object. The term “image sensor” may include one or more sensory components capable of detecting and converting optical signals in the near-infrared, infrared, visible, and ultraviolet spectrums into electrical signals, as described earlier. The electric signals may be stored in memory and subsequently used to activate the pixels of an electronic display to present the object visually. Examples of electronic image sensors may include digital cameras, phone cameras, semiconductor Charge-Coupled Devices (CCDs), active pixel sensors in Complementary Metal-Oxide-Semiconductor (CMOS), or N-type metal-oxide-semiconductor (NMOS, Live MOS). 
The term “included” may refer to integrated or configured with. For example, an image sensor may be mechanically, optically, and/or electrically coupled to (e.g., included with) the wearable extended reality appliance to sense physical objects, such as surfaces, light sources, obstacles, people, animals, and/or any other object present in or absent from the environment of the wearer of the wearable extended reality appliance. Thus, the wearable extended reality device may be provided with one or more image sensors for sensing the physical characteristics (e.g., objects, spaces, light sources, shadows, and any other physical attribute) of the physical environment surrounding the wearer of the wearable extended reality appliance. A processing device may convert the data sensed by the image sensor to an image representing the physical environment. The term “analyzing” may refer to investigating, scrutinizing and/or studying a data set, for example, to determine a correlation, association, pattern or lack thereof within the data set or with respect to a different data set. The image data received by the image sensor may be analyzed, for example using one or more image processing techniques such as convolutions, fast Fourier transforms, edge detection, pattern recognition, object detection algorithms, clustering, artificial intelligence, machine and/or deep learning, and any other image processing technique, to identify the first and second regions. In some examples, a machine learning model may be trained using training examples to identify display regions based on images and/or videos. An example of such training example may include a sample image and/or a sample video, together with a label indicating one or more desired display regions. The trained machine learning model may be used to analyze the received image data to identify the first display region and/or the second display region. In some examples, at least part of the image data may be analyzed to calculate a convolution of the at least part of the image data and thereby obtain a result value of the calculated convolution. Further, in response to the result value of the calculated convolution being a first value, one pair of regions may be identified as the first display region and the second display region, and in response to the result value of the calculated convolution being a second value, a different pair of regions may be identified as the first display region and the second display region. For example, image data sensed by the image sensor may be analyzed as described above. Based on the analysis of the image data, a vertical wall facing the user may be identified as the first display region, and a horizontal table surface supporting an input device may be identified as the second display region. By way of a non-limiting example with reference toFIG.6, wearable extended reality appliance110may be provided with a camera, such as image sensor472(FIG.4). Image sensor472may capture image data of wall606facing user and desktop608. Processing device460may receive the image data from image sensor472, e.g., via bus400and may analyze the image data to identify wall606as the first region and desktop608as the second region. Some embodiments involve determining a first duty cycle configuration for the first display region. The term “determining” may refer to establishing or arriving at a conclusive outcome as a result of a reasoned, learned, calculated or logical process. 
The term “configuration” may refer to a setup or an arrangement complying with one or more definitions or specifications. For example, a configuration may include one or more settings assigning one or more values to one or more parameters or variables to define a specific arrangement. The terms “duty cycle” and “display region” may be understood as described earlier. Thus, a “duty cycle configuration” may include one or more setups, specifications, or settings for one or more parameters affecting the duty cycle of a signal (e.g., a display signal), such as the ratio or percent for each cycle during which the signal is set to “active” versus “inactive”, the frequency, amplitude and/or phase of the signal, the response time between the “active” versus “inactive” states (e.g., gradient), and/or any other attribute that may affect the duty cycle. Accordingly, a specific set of specifications or settings for the duty cycle (e.g., duty cycle configuration) may be established for the first display region. For example, if the first display region is associated with high priority content, and/or exposed to a strong light source (e.g., a window exposing daylight), the duty cycle configuration may be determined to cause a more intense display (e.g., by setting a higher luminosity). Conversely, if the first display region is associated with low priority content or positioned in a relatively dark region of the extended reality environment, the duty cycle configuration may be determined to cause a dimmer display (e.g., by setting a lower luminosity). As another example, the first display region may be associated with a default duty cycle defined in advance, such as a primary display region automatically associated with a high (e.g., 60%) duty cycle. By way of a non-limiting example, turning toFIG.6, processing device460(FIG.4) may determine a duty cycle configuration610for display signals associated with display region602. Duty cycle configuration610may include one or more settings causing content displayed via projector454in display region602to correspond to a 60% duty cycle, such that slightly more than half (e.g., approximately 60%) of every cycle of the display signal is set to “high” or “active”, and slightly less than half (e.g., approximately 40%) of every cycle is set to “low” or “inactive”. Some embodiments involve determining a second duty cycle configuration for the second display region, wherein the second duty cycle configuration differs from the first duty cycle configuration. The terms “determining”, “duty cycle”, “configuration”, and “display region” may be understood as described earlier. The term “differs” may refer to being distinguished or distinct from, or otherwise being dissimilar. Thus, a second duty cycle configuration determined for the second display region may be dissimilar to (e.g., distinct from) the first duty cycle configuration determined for the first display region such that they are not the same. For example, the first duty cycle configuration may cause content to be displayed according to a 30% duty cycle, e.g., relatively dim and drawing little power, and the second duty cycle configuration may cause content to be displayed according to an 80% duty cycle, e.g., relatively bright and drawing considerably more power. By way of a non-limiting example, turning toFIG.6, processing device460(FIG.4) may determine a duty cycle configuration612for display signals associated with display region604that differs from duty cycle configuration610determined for display region602. 
Duty cycle configuration612may include one or more settings causing content displayed via projector454in display region604to correspond to a 20% duty cycle, such that only a small fraction (e.g., approximately 20%) of every cycle of the display signal is set to “high” or “active”, and a predominant portion of the signal (e.g., approximately 80%) of every cycle is set to “low” or “inactive”. Consequently, content may be displayed in display region602differently than content displayed in display region604. Some embodiments involve causing the wearable extended reality appliance to display the virtual content in accordance with the determined first duty cycle configuration for the first display region and the determined second duty cycle configuration for the second display region. The term “causing” may include triggering, inducing or taking an action to bring about a particular consequence or deterministic outcome. The term “display” may refer to presenting visually, for example by controlling the activation of one or more pixels of an electronic display to visually exhibit content. For example, some regions of an extended reality display may include pixels activated by circuitry for displaying virtual content overlaid on non-activated (e.g., transparent) regions of the display presenting the real world. The terms “extended reality appliance”, “virtual content”, “duty cycle configuration” (e.g., determined duty cycle configuration), and “display region” may be understood as described earlier. Thus, after determining the first and second duty cycle configurations, the wearable extended reality appliance may be caused to display content in each of the first and second display regions according to the first and second duty cycle configurations, respectively. For example, a processing device executing software instructions may control hardware circuitry (e.g., switches, diodes, transistors, controllers, filters, samplers, converters, compressors) and/or software equivalents to display content via the wearable extended reality appliance. The processing device may thus direct different display signals to different display regions of the extended reality environment to cause specific content to be displayed in certain regions. For example, display signals may be directed to display content in different regions according to one or more criteria as described earlier, such as the content type, context, priority, the physical environment, ambient conditions (e.g., lighting, noise, distractions), and/or any other criterion relevant to the display of content. In addition, the processing device may modify the display signals targeted for each display region using one or more signal processing techniques (e.g., analog and/or digital, linear and/or non-linear, discrete and/or continuous time) to affect the duty cycle of the display signals. Signal processing techniques affecting the duty cycle may include, for example, filters, transforms, modulations (e.g., PWM or PDM), inversions, differential equations, statistical and/or polynomial signal processing, stochastic signal processing, estimation and detection techniques, and any other signal processing technique affecting the duty cycle. 
The processing device may control aspects or parameters affecting the duty cycle such as the frequency and/or amplitude of the display signal, the percent during which each cycle of the display signal is set to “active” versus “inactive”, the latency and/or responsiveness of the switching between the “active” and “inactive” states within each cycle (e.g., expressed as a time delay within each cycle or gradient to transition between the active and inactive states), and/or any other factor affecting the duty cycle of the display signals. The first and second duty cycle configurations may be applied to the display signals based on time, space, context or association, frequency of use, head pose, background noise, physical and/or virtual distractions, and/or any other criterion. Applying the first/second duty cycles based on time may include, for example, applying the first duty cycle configuration during morning hours and the second duty cycle configuration in the evening. Applying the first/second duty cycles based on space may include, for example, applying the first duty cycle configuration for displaying content against a wall and the second duty cycle configuration for displaying on the surface of a desk. Applying the first/second duty cycles based on context or association may include, for example, applying the first duty cycle configuration for highly relevant or important content and the second duty cycle configuration for less relevant content. Applying the first/second duty cycles based on frequency of use may include, for example, applying the first duty cycle configuration for frequently used virtual widgets or accessories and the second duty cycle configuration for infrequently used widgets or accessories. Applying the first/second duty cycles based on head pose may include, for example, applying the first duty cycle configuration when the head of the user faces forward and the second duty cycle configuration when the user turns his head sideways. Applying the first/second duty cycles based on background noise may include, for example, increasing the duty cycle in response to detecting distracting sounds, and decreasing the duty cycle in the absence of background noise. Applying the first/second duty cycles based on physical and/or virtual distractions may include, for example, increasing the duty cycle in response to detecting a person, animal or virtual avatar entering the extended reality environment. Additionally, or alternatively, the first and second duty cycle configurations may be applied to the same object at different times, to different objects displayed simultaneously, to different regions of the extended reality environment, according to context or association and any other criterion differentiating displays of content via the wearable extended reality appliance. Applying the first/second duty cycles to the same object at different times may be based on, for example, the time of day and/or the frequency that the object is being used, changes to ambient illumination, or a changing head pose or posture of the user. Applying the first/second duty cycles to different objects displayed simultaneously may include, for example, displaying video content using a first duty cycle configuration simultaneously with displaying a virtual widget using a second duty cycle configuration. 
Applying the first/second duty cycles to different regions may include, for example, displaying objects docked to a desktop using a first duty cycle configuration and objects docked to a wall using a second duty cycle configuration. Applying the first/second duty cycles according to context or association may include, for example, displaying more important content according to a first duty cycle configuration, and less relevant content according to the second duty cycle configuration. By way of a non-limiting example, processing device460(FIG.4) may cause projector454to display content within display region602(FIG.6) according to duty cycle610(e.g., 60%) and content within display region604according to duty cycle612(e.g., 20%). For example, processing device460may determine that display region604is situated in a relatively dark region within the physical space surrounding user100, and that a duty cycle of 20% is therefore sufficient. Additionally, or alternatively, processing device460may determine that display region602corresponds to high priority content, and that a duty cycle of 60% may better draw the attention of user100. Some embodiments may involve determining a spatial distribution of the virtual content in the extended reality environment, and wherein at least one of the first duty cycle configuration and the second duty cycle configuration is determined based on the spatial distribution of the virtual content. The terms “determining”, “virtual content”, “extended reality environment”, “duty cycle configuration”, and “based on”, may be understood as described earlier. The term “spatial distribution” may refer to an arrangement, layout or allocation of the virtual content in the extended reality environment, such as where in the extended reality environment content is displayed at a given moment in time. For example, a spatial distribution may be determined based on a density threshold for displaying content (e.g., to avoid a cluttered display), the field of view of the user, ambient lighting conditions, the presence of physical and/or virtual objects, and any other criterion. Determining the first/second duty cycle configuration for a spatial distribution based on a density threshold may include, for example, reducing the duty cycle when the density of the displayed content exceeds the threshold, e.g., to prevent eye strain. Determining the first/second duty cycle configuration for a spatial distribution based on the field of view of the user may include, for example, displaying content in the periphery according to a lower duty cycle configuration, and content in the center according to a higher duty cycle configuration, e.g., to facilitate concentration. Determining the first/second duty cycle configuration for a spatial distribution based on ambient lighting conditions may include, for example, applying a higher duty cycle configuration for content displayed in brightly lit areas and a lower duty cycle configuration for content displayed in dimly lit, or shadowed areas. Determining the first/second duty cycle configuration for a spatial distribution based on the presence of physical and/or virtual objects may include, for example, increasing and/or lowering the duty cycle when content is displayed in proximity to certain objects (e.g., based on the object type, color, size, light reflectance, light absorbance, or any other visible criterion). 
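One possible way to translate a spatial distribution into duty cycle values is sketched below in Python. The angular periphery threshold, the density threshold, and the function name are assumptions introduced solely for illustration and are not drawn from the disclosure.

```python
def duty_cycle_for_placement(angle_from_center_deg: float,
                             content_density: float,
                             high: float = 0.60,
                             low: float = 0.20,
                             periphery_deg: float = 30.0,
                             density_threshold: float = 0.5) -> float:
    """Pick a duty cycle based on where content sits in the field of view.

    Content within `periphery_deg` of the line of sight gets the higher
    duty cycle; peripheral content gets the lower one.  If the overall
    display density exceeds the threshold, the result is capped at the
    lower value to avoid a cluttered, straining display.
    """
    duty = high if angle_from_center_deg <= periphery_deg else low
    if content_density > density_threshold:
        duty = min(duty, low)
    return duty

# Virtual screen in the line of sight, widgets in the periphery.
print(duty_cycle_for_placement(5.0, content_density=0.3))   # 0.6
print(duty_cycle_for_placement(55.0, content_density=0.3))  # 0.2
```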
As another example, a spatial distribution may cause a virtual keyboard to be fixed (e.g., docked) to a physical desktop (e.g., regardless of the head pose of the user) while causing a virtual screen to follow the user's gaze, e.g., anywhere within the extended reality environment. A higher duty cycle configuration may be applied to display the fixed virtual keyboard, and a lower duty cycle may be applied to display the virtual screen following the user's gaze (e.g., to prevent motion sickness). Another spatial distribution may redistribute, resize, or collapse a plurality of virtual widgets into a list when the number of virtual widgets exceeds a threshold. In such a case, the duty cycle for displaying the virtual widgets may be lowered. By way of a non-limiting example, processing device460(FIG.4) may determine to spatially distribute virtual screen112in display region602ofFIG.6, e.g., in the direct line of sight of user100when user100is facing wall606, and virtual widgets114C and114D in a peripheral region of the field of view of user100. Based on this spatial distribution, processing device460may determine to display content in virtual screen112according to duty cycle configuration610, and virtual widgets114C and114D according to duty cycle configuration612. Some embodiments may further involve detecting a head motion of a wearer of the wearable extended reality appliance, and wherein at least one of the first duty cycle configuration and the second duty cycle configuration is determined based on the detected head motion of the wearer. The term "detecting" may include discovering, noticing or ascertaining, for example in response to sensing or otherwise becoming aware of something. For example, a sensor (e.g., optical, electric and/or magnetic, acoustic, motion, vibration, heat, pressure, olfactory, gas, or any other type of sensor) may sense a signal that may be analyzed to discover or ascertain, and thereby detect, a physical phenomenon, such as a head motion. The term "head motion" may refer to any movement, for example enabled by the neck and/or shoulder muscles, which changes the position of the head (e.g., the part of the body from the neck upwards, including the ears, brain, forehead, cheeks, chin, eyes, nose, and mouth). Examples of head motion may include tilting (e.g., up and down motion), rotating (e.g., left or right motion), leaning (e.g., sideways motion), shifting (e.g., 360 degrees parallel to the floor plane, as a result of moving at least the upper body), and any combination thereof. The terms "wearer of the wearable extended reality appliance", "duty cycle configuration", "determined", and "based on" may be understood as described earlier. For example, the head motion of the wearer of the wearable extended reality appliance may be detected with respect to the body of the wearer, a stationary physical object in the vicinity of the wearer, a virtual object displayed via the extended reality appliance, and/or any combination thereof. At least one motion sensor may be provided to track any of the position, orientation, pose, and/or angle of the head of the wearer to detect a head motion. Examples of motion sensors that may be used include an IMU sensor (e.g., including one or more of a gyroscope, compass, and accelerometer), a camera (e.g., optic, IR), an acoustic sensor (e.g., sonar, ultrasound), an RFID sensor, and any other sensor configured to sense motion.
For example, a combination of an IMU sensor integrated with the wearable extended reality appliance and an optical detector (e.g., camera) positioned to detect the head of the wearer may operate together to track head motions of the wearer, such as up/down, left/right, tilt, rotation, translation (e.g., due to walking), and any other type of head motion. Data collected by the at least one motion sensor may be received and analyzed by a processor to track the head of the wearer over time. Thus, at least one of the duty cycle configurations may be determined based on the head motion of the wearer of the wearable extended reality appliance. For example, when the wearer is facing a virtual screen, a virtual widget may be displayed according to a first (e.g., high) duty cycle configuration, e.g., to facilitate the interfacing of the wearer with the virtual widget. However, when the wearer turns his head away from the virtual screen, for example to take a rest break, the virtual widget may be displayed according to a second (e.g., lower) duty cycle configuration, to prevent motion sickness or distractions during the rest break, while still allowing the wearer to interface with the virtual widget if necessary. As another example, two virtual screens may be displayed to the wearer simultaneously, where the display of each virtual screen may toggle between the first and second duty cycle configurations depending on the head orientation of the wearer of the extended reality appliance. For example, the configuration with the higher duty cycle may be used to display whichever of the two virtual screens is currently in a direct line of sight of the wearer, and the configuration with the lower duty cycle may be used for the other virtual screen (e.g., not in the direct line of sight). As the wearer moves his direct line of sight to the other of the two virtual screens, the duty cycle configuration may be switched. By way of a non-limiting example, turning toFIG.7, an exemplary implementation for basing the duty cycle on a determined head motion is shown.FIG.7is substantially similar toFIG.6with the notable difference of a display region702positioned against wall606at eye level with user100, and display region706positioned on desktop608. Virtual content on virtual screen112may be displayed in display region702, and a virtual widget710may be displayed in display region706. Two duty cycle configurations704(e.g., 60%) and708(e.g., 20%) may be provided to follow the gaze of user100. When the gaze of user100is directed towards display region702, virtual screen112may be displayed according to duty cycle704, e.g., to provide a more intense display, whereas virtual widget710may be displayed according to duty cycle708, e.g., to conserve energy. When the head of user100tilts downwards, away from display region702and towards display region706, processing device460(FIG.4) may detect the head motion (e.g., in conjunction with motion sensor473) and switch the duty cycle configurations. Thus, for example, when the head of user100tilts downwards, virtual screen112may be displayed according to duty cycle configuration708(e.g., to conserve energy) and virtual widget710may be displayed according to duty cycle configuration704(e.g., to provide a more intense display). When user100tilts head714upwards once more to face virtual screen112, processing device460may switch duty cycle configurations704and708again, accordingly.
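A minimal sketch of the head-tilt behavior described above is shown below, assuming an IMU that reports head pitch in degrees (negative values for a downward tilt); the threshold value, region names, and function name are hypothetical and stand in for whatever sensor interface a particular appliance provides.

```python
def assign_duty_cycles(pitch_deg: float,
                       high: float = 0.60,
                       low: float = 0.20,
                       downward_threshold_deg: float = -20.0):
    """Map an IMU pitch reading to per-region duty cycles.

    A roughly level head (pitch above the threshold) favors the
    wall-mounted screen region; a downward tilt beyond the threshold
    favors the desktop region, and the two configurations are swapped.
    """
    if pitch_deg > downward_threshold_deg:      # facing the wall/screen
        return {"screen_region": high, "desktop_region": low}
    return {"screen_region": low, "desktop_region": high}  # facing the desk

print(assign_duty_cycles(pitch_deg=0.0))    # screen 0.6, desktop 0.2
print(assign_duty_cycles(pitch_deg=-35.0))  # screen 0.2, desktop 0.6
```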
Some embodiments may provide a non-transitory computer readable medium containing instructions that when executed by at least one processor cause the at least one processor to perform duty cycle control operations for wearable extended reality appliances, the operations comprising: receiving data representing virtual content in an extended reality environment associated with a wearable extended reality appliance; detecting in the extended reality environment a first head motion and a second head motion of a wearer of the wearable extended reality appliance; determining a first duty cycle configuration based on the first head motion; determining a second duty cycle configuration based on the second head motion, wherein the second duty cycle configuration differs from the first duty cycle configuration; and causing the wearable extended reality appliance to display the virtual content in accordance with the first duty cycle configuration upon detecting the first head motion and in accordance with the second duty cycle configuration upon detecting the second head motion. The terms “non-transitory computer-readable medium”, “instructions”, “processor”, “duty cycle control operations”, “wearable extended reality appliances”, “receiving”, “data representing virtual content”, “extended reality environment”, “associated with”, “detecting”, “head motion”, “wearer of the wearable extended reality appliance”, “determining”, “duty cycle configuration”, “based on”, “differs”, “causing”, “display”, and “virtual content” may be understood as described earlier. Thus, according to some embodiments, instead of using different display regions, different head motions of the wearer of the wearable extended reality appliance may be used to determine different (e.g., first and second) duty cycle configurations and to cause virtual content to be displayed according to different duty cycle configurations. In some examples, data captured using inertial sensors, accelerometers, gyroscopes and/or magnetometers included in the wearable extended reality appliance may be analyzed to determine head motion (such as the first head motion and/or the second head motion), head position and/or head direction. In some examples, image data captured using image sensors included in the wearable extended reality appliance may be analyzed (for example, using egomotion algorithms, using ego-positioning algorithms, etc.) to determine head motion (such as the first head motion and/or the second head motion), head position and/or head direction. For example, while stationed at a physical work station (e.g., seated in a chair facing a physical wall) a wearer of a wearable extended reality appliance may turn his head level to face the wall. The motion (e.g., the first head motion) may be detected and used to determine a first duty cycle configuration for displaying virtual content (e.g., text on a virtual screen). For example, the first duty cycle configuration may be relatively high to allow the wearer to interface with the displayed virtual content. When the wearer turns his head away from the wall, for example, due to a distraction, the head turning motion may be detected as a second head motion. The second head motion may be used to determine a second (e.g., lower) duty cycle configuration for displaying the virtual content, previously displayed according to the first (e.g., higher) duty cycle configuration, for example to conserve energy since the focus of the user is no longer on the wall. 
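The recited operations may be sketched end to end as follows. The sensor reader, the renderer, and the motion labels are placeholder assumptions standing in for whatever motion classification and display pipeline a particular appliance provides; this is an illustrative sketch, not the disclosed implementation.

```python
from typing import Callable, Dict

def duty_cycle_control_loop(read_head_motion: Callable[[], str],
                            render: Callable[[str, float], None],
                            virtual_content: str) -> None:
    """Illustrative control step: detect a head motion, determine the
    matching duty cycle configuration, and display the content with it.

    `read_head_motion` is assumed to return a coarse label such as
    'facing_screen' or 'turned_away'; `render` is assumed to accept the
    content and a duty cycle between 0 and 1.
    """
    configurations: Dict[str, float] = {
        "facing_screen": 0.60,  # first head motion -> first configuration
        "turned_away": 0.20,    # second head motion -> second configuration
    }
    motion = read_head_motion()
    duty_cycle = configurations.get(motion, 0.20)
    render(virtual_content, duty_cycle)

# Minimal stand-ins for the sensor and renderer.
duty_cycle_control_loop(lambda: "facing_screen",
                        lambda content, dc: print(f"{content} @ {dc:.0%}"),
                        "virtual screen text")
```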
As another example, a first head motion (e.g., gesture) by the wearer of the wearable extended reality appliance may be associated with invoking a messaging widget. Upon detecting the first head motion, the messaging widget may be displayed according to a first (e.g., relatively high) duty cycle configuration. When the wearer turns his head down to focus on the desktop, the downward motion (e.g., the second head motion) may be detected and used to determine a second (e.g., lower) duty cycle configuration. The messaging widget may be continually displayed by the extended reality appliance to follow the gaze of the wearer. However, the messaging widget may now be displayed according to the second (e.g., lower) duty cycle configuration instead of the first (e.g., higher) duty cycle configuration, for example to prevent nausea. By way of a non-limiting example,FIG.8illustrates an exemplary implementation for using head motions to determine the duty cycle configurations, in place of display regions.FIG.8is substantially similar toFIG.7with the noted difference that the duty cycle configuration may be based on a head motion, independent of the display region. Thus, a detected head motion may be used to determine the duty cycle configuration to apply, instead of the display region. Accordingly, a first head motion leading to head position802may be associated with a duty cycle configuration804, and a second head motion leading to head pose806may be associated with a second duty cycle configuration808, e.g., in place of first and second display regions associated with first and second duty cycle configurations804and808, respectively. For example, the first head motion leading to head position802may correspond to user100invoking virtual screen112, and the second head motion leading to head pose806may correspond to user100turning the focus away from virtual screen112. Processing device460in conjunction with motion sensor473(FIG.4) may detect the first head motion (e.g., leading to first head position802) and determine an invocation of virtual screen112. In response, processing device460may apply duty cycle configuration804to display virtual screen112. For example, duty cycle configuration804may be relatively high (e.g., 60%) to allow user100to see virtual content on virtual screen112. Processing device460in conjunction with motion sensor473may detect the second head motion (e.g., positioning the head of user100into second head pose806) and determine that the focus has turned away from virtual screen112. In response, processing device460may apply duty cycle configuration808for displaying virtual screen112. For example, duty cycle configuration808may be relatively low (e.g., 20%) to conserve energy since user100is no longer focused on virtual screen112. According to some embodiments, the at least one of the first duty cycle configuration and the second duty cycle configuration is determined based on a speed associated with the detected head motion of the wearer. The term "speed" may refer to velocity, pace, or rate of an activity. Thus, the rate at which the wearer moves his head may be used to determine the first and/or second duty cycle configurations, and/or which duty cycle configuration to apply for displaying virtual content.
In some embodiments, the speed at which the wearer moves his head may be compared to a predefined threshold, such that if the speed is greater than the threshold, one of the duty cycle configurations may be applied, and if the speed is less than the threshold, the other duty cycle configuration may be applied. For example, the threshold may be associated with nausea or motion sickness. When the speed of the head motion is below the threshold, a first duty cycle configuration (e.g., the higher of the two duty cycle configurations) may be applied, e.g., to enhance the display of the virtual content. However, when the wearer moves his head at a speed exceeding the threshold, a second duty cycle configuration (e.g., the lower of the two duty cycle configurations) may be applied, e.g., to prevent nausea. For example, when the wearer is walking slowly, e.g., the head motion is below the threshold, the higher of the two duty cycle configurations may be applied, and when the wearer is walking quickly, e.g., the head motion exceeds the threshold, the lower of the two duty cycle configurations may be applied, e.g., to prevent motion sickness. As another example, a predefined head gesture, e.g., performed at a particular speed, may be associated with invoking a specific application. When the detected head motion corresponds to the predefined head gesture at the particular speed (e.g., the wearer deliberately moved his head to invoke the application), the first duty cycle configuration may be applied to display the invocation of the specific application. However, when the speed of the detected head motion does not correspond to the particular speed for the predefined head gesture (e.g., the wearer moved his head arbitrarily, with no intention of invoking the application), the second duty cycle configuration may be applied to display virtual content, e.g., to prevent motion sickness. By way of a non-limiting example, turning toFIG.8, a smooth and steady downwards motion of the head from head pose802to806may be associated with displaying a virtual widget810on desktop608. Upon detecting a smooth head motion by user100moving the head from head pose802to head pose806, processing device460in conjunction with motion sensor473(FIG.4) may determine that user100has deliberately moved his head to perform a predefined gesture associated with displaying virtual widget810on desktop608. In response, processing device460may display virtual widget810on desktop608according to duty cycle configuration804, e.g., a 60% duty cycle for an enhanced display. However, upon detecting an abrupt head motion by user100from head pose802to806(e.g., a head motion differing from the predefined gesture due to the speed), processing device460in conjunction with motion sensor473may determine that the detected head motion is associated with something other than displaying virtual widget810, e.g., typing on keyboard104. In response, processing device460may avoid displaying virtual widget810, and instead change the display of virtual content on virtual screen112from duty cycle configuration804(e.g., higher intensity) to duty cycle configuration808(e.g., lower intensity), e.g., to conserve energy. According to some embodiments, the at least one of the first duty cycle configuration and the second duty cycle configuration is determined based on a direction associated with the detected head motion of the wearer. The term "direction" may refer to an orientation, course, or path along which something moves.
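The speed-based selection described above may be sketched as a simple threshold comparison; the 90 degrees-per-second threshold and the function name below are illustrative assumptions only, not values taken from the disclosure.

```python
def select_duty_cycle_by_speed(head_speed_deg_per_s: float,
                               threshold_deg_per_s: float = 90.0,
                               high: float = 0.60,
                               low: float = 0.20) -> float:
    """Slow, deliberate head motion keeps the more intense display;
    fast motion drops to the lower duty cycle to reduce motion sickness."""
    return high if head_speed_deg_per_s < threshold_deg_per_s else low

print(select_duty_cycle_by_speed(30.0))   # 0.6 - slow walk / deliberate gesture
print(select_duty_cycle_by_speed(180.0))  # 0.2 - abrupt head motion
```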
Thus, the orientation of a user's head or a path along which the user moves his head may be used to determine the first and/or second duty cycle configurations and/or which duty cycle configuration to apply for displaying virtual content. For example, a gesture recognition application may associate a right turn of the head with invoking an application and a left turn of the head with pausing an application. In response to detecting a right turn of the head, the first duty cycle configuration may be applied to display virtual content for the invoked application. In response to detecting a left turn of the head, the second duty cycle configuration may be applied to display virtual content for the paused application. As another example, a head tilt downwards up to a predefined threshold may be associated with a distraction or fatigue, and thus the lower of the two duty cycle configurations may be applied, whereas a head tilt downwards beyond the predefined threshold may be associated with a deliberate gesture to invoke an application, such as to display a virtual widget on a desktop, and thus the higher of the two duty cycle configurations may be applied. By way of a non-limiting example, turning toFIG.8, head pose806(e.g., downwards) may be defined as a predefined threshold for invoking virtual widget810. During a first head motion by user100tilting the head downwards from head pose802, but stopping before reaching head pose806, processing device460in conjunction with motion sensor473(FIG.4) may detect the first head motion and determine that user100is fatigued. In response, processing device460may display virtual content via virtual screen112according to duty cycle configuration808(e.g., 20%). During a second head motion by user100tilting the head downwards to reach head pose806, processing device460in conjunction with motion sensor473(FIG.4) may detect the second head motion and determine that user100wishes to invoke virtual widget810. In response, processing device460may display virtual widget810on desktop608according to duty cycle configuration804(e.g., 60%). Some embodiments may further involve determining an area of focus of a wearer of the wearable extended reality appliance, and wherein at least one of the first duty cycle configuration and the second duty cycle configuration is determined based on the determined area of focus. The term "area of focus" may be understood as described earlier. Thus, the area of the extended reality environment that the wearer is currently looking at or otherwise focused on may be used to determine the first and/or second duty cycle configurations and/or which duty cycle configuration to apply for displaying virtual content. For example, if the area of focus is inside a predefined region of the extended reality environment, the first duty cycle configuration may be applied, and if the area of focus is outside the predefined region, the second duty cycle configuration may be applied. In some examples, the area of focus of the wearer of the wearable extended reality appliance may be determined based on a gaze direction of the wearer, and the gaze direction may be determined based on an analysis (for example, using a gaze detection algorithm) of one or more images of one or two of the wearer's eyes. For example, the one or more images may be captured using an image sensor included in the wearable extended reality appliance.
In some examples, the area of focus of the wearer of the wearable extended reality appliance may be determined based on a head direction of the wearer, and the head direction may be determined as described above. In some examples, the area of focus of the wearer of the wearable extended reality appliance may be determined based on an interaction of the wearer with an object (such as a virtual object or a physical object) in that area, for example through gestures, through a pointing device, through a keyboard, or any other user interfacing technique. By way of a non-limiting example, turning toFIG.7, when processing device460(FIG.4) detects the focus of user100on virtual screen112, content may be displayed on virtual screen112according to duty cycle704(e.g., 60%). When processing device460detects the focus of user100away from virtual screen112(e.g., towards virtual widget710), content may be displayed on virtual screen112according to duty cycle708(e.g., 20%). Some embodiments may further involve detecting a physical object located in proximity to the wearable extended reality appliance, and wherein at least one of the first duty cycle configuration and the second duty cycle configuration is determined based on the detected physical object. The terms "detecting", "wearable extended reality appliance", and "duty cycle configuration" may be understood as described earlier. Regarding detecting a physical object, for example, a sensor (e.g., electric and/or magnetic, optic, acoustic, vibration, olfactory, and any other type of physical sensor) may detect one or more physical characteristics of an object based on a signal emitted from, reflected off, or absorbed by the object. Examples of physical characteristics of a physical object that may be detected may include a distance and/or orientation relative to the wearable extended reality appliance, a color, texture, size (e.g., minimal size), and optical properties (e.g., glossiness, roughness, reflectance, fluorescence, refractive index, dispersion, absorption, scattering, turbidity, and any other optical property). For example, image data captured using an image sensor included in the wearable extended reality appliance may be analyzed using an object detection algorithm to detect the physical object. The term "physical object" may include a real or tangible item, such as may be governed by classical laws of physics. The term "located" may refer to a position, placement or station, e.g., in a physical environment. The term "proximity" may refer to being adjacent, or close to (e.g., within a predefined distance). Thus, characteristics of physical objects, such as size, optical properties, distance and/or orientation from the wearable extended reality appliance may be used to determine the first and/or second duty cycle configurations and/or which duty cycle configuration to apply for displaying virtual content. For example, virtual content displayed next to a brightly colored physical object positioned in proximity to the wearable extended reality appliance may be displayed using a relatively high duty cycle configuration, e.g., to allow distinguishing the virtual content next to the brightly colored physical object. Similarly, virtual content displayed next to a small and/or dull object may be displayed using a relatively low duty cycle configuration, e.g., to allow distinguishing the small, dull object next to the virtual content.
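The area-of-focus logic described above (the first configuration inside a predefined region, the second configuration outside it) may be sketched as follows, using normalized coordinates; the region bounds, coordinate convention, and function name are hypothetical assumptions for illustration.

```python
def select_duty_cycle_by_focus(focus_xy: tuple,
                               region_bounds: tuple = (0.0, 0.0, 1.0, 0.5),
                               inside: float = 0.60,
                               outside: float = 0.20) -> float:
    """Return the higher duty cycle when the estimated area of focus
    falls inside a predefined region of the extended reality environment
    (normalized coordinates), and the lower one otherwise."""
    x, y = focus_xy
    x0, y0, x1, y1 = region_bounds
    return inside if (x0 <= x <= x1 and y0 <= y <= y1) else outside

print(select_duty_cycle_by_focus((0.4, 0.3)))  # 0.6 - focus on the virtual screen
print(select_duty_cycle_by_focus((0.4, 0.8)))  # 0.2 - focus elsewhere
```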
In some examples, when the physical object is a person approaching the wearer of the wearable extended reality appliance, one value may be selected for the first duty cycle configuration, and when the physical object is a person not approaching the wearer of the wearable extended reality appliance, a different value may be selected for the first duty cycle configuration. For example, tracking algorithms may be used to analyze images of the person and determine a trajectory of the person, and the determined trajectory may be analyzed to determine if the person is approaching the wearer. In some examples, when the physical object is a person interacting with a wearer of the wearable extended reality appliance, one value may be selected for the first duty cycle configuration, and when the physical object is a person not interacting with a wearer of the wearable extended reality appliance, a different value may be selected for the first duty cycle configuration. For example, audio data captured using an audio sensor included in the wearable extended reality appliance may be analyzed (for example, using a speech recognition algorithm) to determine whether the person is verbally interacting with the wearer. In another example, image data captured using an image sensor included in the wearable extended reality appliance may be analyzed (for example, using a gesture recognition algorithm) to determine whether the person is interacting with the wearer through gestures. By way of a non-limiting example, processing device460(FIG.4) may apply duty cycle configuration610(e.g., 60%) (seeFIG.6) to display virtual screen112based on the relatively close proximity to smart glasses110. Similarly, processing device460may apply duty cycle configuration612(e.g., 20%) to display virtual content on the far edge of desktop608based on the relatively far distance from smart glasses110. Some embodiments may further involve detecting a virtual object in the extended reality environment, and wherein at least one of the first duty cycle configuration and the second duty cycle configuration is determined based on the detected virtual object. The term “virtual object” may refer to a visual presentation rendered by a computer in a confined region and configured to represent an object of a particular type (such as an inanimate virtual object, an animate virtual object, virtual furniture, a virtual decorative object, virtual widget, or other virtual representation) as described earlier. The terms “detecting”, “extended reality environment”, “duty cycle configuration”, “determined” may be understood as described above. For example, a virtual object may be detected by a processor controlling the display of content via the wearable extended reality appliance, such as by detecting an increase in memory and/or bandwidth consumption (e.g., indicating a video played in a picture-in-picture, the introduction of a virtual avatar), detecting the response of the wearer to the display of the virtual object (e.g., via an eye tracker, an event listener configured with an electronic pointing device, a voice recognition application configured with a microphone), and any other method for detecting virtual content. Thus, the existence and/or the display characteristics of a virtual object in the extended reality environment may be used to determine the first and/or second duty cycle configurations and/or which duty cycle configuration to apply. 
For example, the duty cycle configuration may be based on a distance and/or orientation of the virtual object relative to the wearable extended reality appliance, on the size of the virtual object, on an optical property of the virtual object (e.g., color, luminance, opacity, pixel intensity), and any other visual property of the virtual object. For example, a high duty cycle configuration may be applied to display content in proximity to a large or bright virtual object, and a lower duty cycle configuration may be applied to display content in proximity to a dim or translucent virtual object. By way of a non-limiting example with reference toFIG.6, upon detecting the start of a streamed video playing in a picture-in-picture622(e.g., based on an increase in memory usage), processing device460may determine to display virtual screen112according to duty cycle configuration612instead of duty cycle configuration610, e.g., to conserve resources. Some embodiments may further involve detecting a physical movement in proximity to the wearable extended reality appliance, and wherein at least one of the first duty cycle configuration and the second duty cycle configuration is determined based on the detected physical movement. The term "physical movement" may refer to an activity or motion in the real physical world, e.g., requiring an expenditure of energy. For example, an object falling, or a person, animal, or robot passing by, may be physical movements. The terms "detecting", "proximity", "wearable extended reality appliance", and "duty cycle configuration" may be understood as described above. Thus, physical movement may be detected based on speed, the region and/or proportion that the physical movement occupies in the extended reality environment, the type of movement (e.g., sudden versus slow), and the entity performing the movement (e.g., virtual or real, human or inanimate). The physical movement may be detected via one or more detectors configured in proximity to the wearable extended reality appliance, such as an optical, acoustic, radio or any other type of detector. For example, a motion detection, visual activity or event detection algorithm may be applied to a sequence of images (e.g., video) captured via a camera. Thus, the existence of physical movement in proximity to the wearable extended reality appliance may be used to determine the first and/or second duty cycle configurations and/or which duty cycle configuration to apply for displaying virtual content. For example, a child entering the extended reality environment or an object falling may cause virtual content to be displayed according to a lower duty cycle configuration, e.g., to draw the attention of the wearer away from the extended reality environment so that the wearer may be aware of the child or the falling object. By way of a non-limiting example with reference toFIG.6, a ball624may be tossed by a child in proximity to user100while user100is working at home via wearable extended reality appliance110. A camera626positioned on wall606in proximity to smart glasses110may capture a video of the motion of ball624and provide the video to processing device460(FIG.4). Processing device460may analyze the video and detect the physical movement of ball624. In response, processing device460may cause virtual content, previously displayed via wearable extended reality appliance110according to duty cycle610, to be displayed according to duty cycle configuration612, e.g., as a way of notifying user100of the presence of ball624.
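As a minimal sketch of the motion-detection approach mentioned above (a frame-differencing check over camera images), the following Python example lowers the duty cycle when movement is detected; the pixel and fraction thresholds and the synthetic frames are assumptions introduced solely for illustration.

```python
import numpy as np

def movement_detected(prev_frame: np.ndarray,
                      curr_frame: np.ndarray,
                      pixel_threshold: int = 25,
                      fraction_threshold: float = 0.02) -> bool:
    """Very simple frame-differencing motion check: report movement if
    more than a small fraction of pixels changed noticeably between
    two grayscale frames."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    changed = (diff > pixel_threshold).mean()
    return changed > fraction_threshold

rng = np.random.default_rng(0)
still = rng.integers(0, 255, (120, 160), dtype=np.uint8)
moving = still.copy()
moving[40:80, 60:100] = 255          # a bright object entering the scene
duty_cycle = 0.20 if movement_detected(still, moving) else 0.60
print(duty_cycle)                     # 0.2 - lower duty cycle to cue the wearer
```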
Some embodiments may involve identifying a type of virtual content included in the first display region, and wherein at least one of the first duty cycle configuration and the second duty cycle configuration is determined based on the type of virtual content included in the first display region. The term "type of virtual content" may refer to a context, classification, genre, or any other category of virtual content. The terms "virtual content", "display region", and "duty cycle configuration" may be understood as described earlier. Thus, a processing device may identify the type of virtual content, for example based on the format of the virtual content, based on metadata associated with the virtual content, on resources required to process and/or render the virtual content (e.g., memory, CPU, and communications bandwidth), on latency experienced when rendering the virtual content, on timing restrictions regarding the display of the virtual content, and any other identifiable characteristic of the virtual content. The type of virtual content (e.g., category, context, format, priority level) being displayed in a given display region may be used to determine the first and/or second duty cycle configurations and/or which duty cycle configuration to apply for displaying virtual content. For example, a virtual text document may be a different type of virtual content than a virtual image or video content. As another example, virtual content associated with an email application receiving text notifications in real-time may be a different type (e.g., urgent text) than virtual content associated with a dormant graphic editing application displaying graphics (e.g., non-urgent graphics). Thus, the urgent text may be displayed using a higher duty cycle configuration than the non-urgent graphics. As yet another example, virtual content consumed during work hours (e.g., associated with a work context) may be a different type than virtual content consumed after working hours (e.g., associated with a personal context). Thus, content associated with work may be displayed using a higher duty cycle configuration than content associated with personal matters. By way of a non-limiting example, turning toFIG.6, display region604may include virtual widget114C providing daily weather updates as graphic content, and virtual widget114D providing minute-by-minute text notifications. Processing device460(FIG.4) may identify the different types of content displayed by virtual widgets114C and114D (e.g., graphic once per day versus text minute-by-minute) and may determine to display virtual widget114D according to duty cycle configuration610(e.g., 60%), and virtual widget114C according to duty cycle configuration612(e.g., 20%). Some embodiments may further involve determining ambient illumination conditions, and wherein at least one of the first duty cycle configuration and the second duty cycle configuration is determined based on the determined ambient illumination conditions. The term "ambient illumination conditions" may refer to the light that is available or present in an environment. An ambient illumination condition may involve one or more of the direction, intensity, color, quality, and/or the contrast-producing effect of light.
For example, a source of light such as a window opening to daylight, a lamp, an electronic display, or any other lighting appliance (e.g., turned on), as well as a physical object casting a shadow, the color of the walls, ceiling and floor, the presence of a mirror, and any other physical object affecting the available light may contribute to the ambient illumination conditions. The terms "determining" and "duty cycle configuration" may be understood as defined earlier. The ambient illumination conditions may be determined, for example, by analyzing one or more images captured by a camera (e.g., by a processor), by a light meter, an ambient light sensor (e.g., including one or more phototransistors, photodiodes, and photonic integrated circuits), or a lux meter (e.g., configured with a mobile phone), or any other type of ambient light detector positioned in proximity to the extended reality environment. According to some embodiments, determining ambient illumination conditions may include determining the source of light (e.g., a window versus a LED lamp or screen), for example based on the luminance or the spectrum (e.g., detectable by a spectrophotometer). According to some embodiments, determining the ambient illumination conditions may include determining properties of a light source, such as the size, the direction, the presence of objects reflecting, absorbing, dispersing, and/or blocking the light source, and any other factor affecting the light source. Thus, the ambient illumination conditions in the extended reality environment (e.g., and the different display regions included therein) may be used to determine the first and/or second duty cycle configurations and/or which duty cycle configuration to apply for displaying virtual content. For example, a lower duty cycle configuration may be used to display virtual content in a shadowed region of a room (e.g., because less contrast may be needed to discern the virtual content), and a higher duty cycle configuration may be used to display virtual content in a brightly lit region of the room (e.g., because greater contrast may be needed to discern the virtual content). As another example, a higher duty cycle configuration may be applied to display virtual content during the day when the ambient illumination is primarily due to sunlight, and a lower duty cycle configuration may be applied to display virtual content at night when the ambient illumination is primarily due to artificial lighting. As yet another example, while a curtain is drawn (e.g., open) allowing daylight to penetrate the physical space of the extended reality environment, a higher duty cycle configuration may be used to display virtual content (e.g., to provide greater contrast to discern the virtual content displayed in a well-lit area), and when the curtain is closed, a lower duty cycle configuration may be used to display virtual content (e.g., because less contrast may be needed to discern the virtual content displayed in a darkened area). By way of a non-limiting example, turning toFIG.6, camera626positioned in the extended reality environment may detect that display region602is situated in a well-lit area (e.g., exposed to daylight). In response, processing device460(FIG.4) may determine to use duty cycle configuration610(e.g., 60%) to display virtual content in display region602, e.g., to provide greater contrast for user100to discern the virtual content.
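The ambient-illumination-based selection described above may be sketched as a lux-threshold comparison; the 500 lux threshold and the function name are illustrative assumptions and are not values taken from the disclosure.

```python
def duty_cycle_for_ambient_light(lux: float,
                                 bright_threshold_lux: float = 500.0,
                                 high: float = 0.60,
                                 low: float = 0.20) -> float:
    """Brightly lit areas get the higher duty cycle (more contrast is
    needed to discern virtual content); dim or shadowed areas get the
    lower one."""
    return high if lux >= bright_threshold_lux else low

print(duty_cycle_for_ambient_light(800.0))  # 0.6 - daylight through a window
print(duty_cycle_for_ambient_light(80.0))   # 0.2 - shadowed corner of the room
```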
Conversely, camera626may detect that display region604is situated in a darkened area (e.g., due to a shadow cast by a physical object). In response, processing device460may determine to use duty cycle configuration612(e.g., 20%) to display virtual content in display region604, e.g., because less contrast may be needed. Some embodiments may involve estimating a physical condition of a wearer of the wearable extended reality appliance, and wherein at least one of the first duty cycle configuration and the second duty cycle configuration is determined based on the estimated physical condition of a wearer. The term "estimating" may include an approximation or assessment, e.g., based on analysis, calculations and/or inference of measured data. The estimating may be facilitated by artificial intelligence, inference, statistical analysis, machine and/or deep learning, extrapolation, clustering, and any other technique for performing estimations. The term "physical condition" may refer to the physiological state of the body or bodily functions of a user, for example fatigue, nausea, eye strain, head, back and/or neck pain, posture, nervousness, agitation, illness, or any other physiological condition affecting the physical condition of the user. The physical condition of the wearer may be estimated, for example, by a processor receiving data from one or more sensors provided in the extended reality environment and configured to detect one or more biomarkers (e.g., heart or breathing rate, yawning, blinking frequency or eye open and close ratio (EOCR), the percentage of eyelid closure over the pupil over time (PERCLOS), blood pressure, oxygen level in exhaled air, or any other biological indication of a physiological state). For example, a smart watch worn by the user may detect heart and/or breathing rate. A camera may capture images of the user yawning, head nodding, or eye closing or rubbing and may provide the images to a processing device for image analysis. An IMU configured with a pair of smart glasses may detect a nodding motion of the wearer. The terms "wearer of the wearable extended reality appliance", "determining", and "duty cycle configuration" may be understood as defined earlier. Thus, the physical or physiological state of the wearer of the wearable extended reality appliance may be used to determine the first and/or second duty cycle configurations and/or which duty cycle configuration to apply for displaying virtual content. For example, the duty cycle configuration may be lowered if the wearer is determined to be agitated (e.g., based on detecting distracted or jerky motions), fatigued, or suffering from neck or back strain (e.g., by a camera capturing the wearer yawning, rubbing his eyes, or slouching). As another example, the duty cycle configuration may be increased if the wearer is determined to be alert and focused, e.g., based on an upright posture and a low PERCLOS level. For example, during a learning period, a machine learning algorithm may detect a pattern of behavior for a wearer and may receive feedback from the wearer allowing the machine learning algorithm to learn a schedule of the wearer. The machine learning algorithm may use the schedule and feedback to identify signs indicating the physical condition of the wearer, such as fatigue, stress, anxiety, nausea, a migraine, and any other physical condition that may be alleviated or facilitated by adjusting the duty cycle.
A processing device may use the identified signs to modify the duty cycle configuration to accommodate the physiological needs of the wearer. For example, if the wearer is determined to be suffering from fatigue (e.g., based on the detected breathing rate and PERCLOS level), the duty cycle may be reduced; similarly, if the wearer is determined to be alert and energetic (e.g., based on reaction time to displayed content), the duty cycle may be increased. By way of a non-limiting example with reference toFIG.7, camera626may capture images of head nodding and eye closing by user100, concurrently with motion sensor473sensing a nodding motion of the head of user100. Processing device460(FIG.4) may receive image data from camera626and the sensed motion data from sensor473and analyze the image and motion data to determine that user100is experiencing drowsiness. For example, processing device460may enlist a machine learning engine to associate the nodding head motion and the closing of the eyes with sleepiness. In response, processing device460may modify the duty cycle configuration used to display virtual content on virtual screen112from duty cycle configuration704(e.g., 60%) to duty cycle configuration708(e.g., 20%). Some embodiments may involve receiving an indication of a hardware condition of the wearable extended reality appliance, and wherein at least one of the first duty cycle configuration and the second duty cycle configuration is determined based on the hardware condition of the wearable extended reality appliance. The terms "receiving", "wearable extended reality appliance", and "duty cycle configuration" may be understood as described earlier. The term "indication" may include a signal, sign, marker, measurement, or any other type of evidence conveying a situation, state, or condition. The term "hardware condition" may include a state of a hardware component in the wearable extended reality appliance, such as the amount of available power in a battery, the processing load allocated to a processor, the available memory or communications bandwidth, the temperature of an electronic component, and any other measure of one or more hardware components of the wearable extended reality appliance. For example, a processing device may monitor available memory (e.g., stack, buffers, queues, RAM), communications bandwidth (e.g., for internal buses and external communications channels), communication and processing latencies, temperature of electronic components, and any other hardware indication. The processing device may receive one or more indications of the hardware condition by polling various electronic components and/or receiving one or more interrupt notifications, such as a buffer or stack overflow notification, a NACK notification (e.g., a timeout after exceeding a latency limit), an overheating warning from a thermometer monitoring the temperature of one or more electronic components, and/or by detecting a processing latency. The warnings may be issued based on predefined thresholds for a given hardware configuration, e.g., based on recommended specifications. Thus, the state of one or more hardware components included in the wearable extended reality appliance may be used to determine the first and/or second duty cycle configurations and/or which duty cycle configuration to apply for displaying virtual content.
For example, upon detecting a low battery level or a high processing load allocation, the duty cycle configuration may be reduced, whereas upon detecting connection to a wall outlet and/or a low processing load, the duty cycle configuration may be increased. By way of a non-limiting example, a temperature sensor (e.g., other sensor475ofFIG.4) provided with wearable extended reality appliance110may detect a temperature of processing device460and provide the temperature reading to processing device460. Processing device460may compare the temperature reading to a predefined recommended temperature limit and may determine that processing device460is overheated, e.g., due to a high processing load for displaying graphical virtual content in virtual screen112. In response, processing device460may reduce the duty cycle for displaying the virtual content to duty cycle configuration612(e.g., 20%) from duty cycle configuration610(e.g., 60%) to allow processing device460to cool to the recommended temperature limit. Some embodiments may further involve identifying a virtual event, and wherein at least one of the first duty cycle configuration and the second duty cycle configuration is determined based on the identified virtual event. The terms "identifying" and "duty cycle configuration" may be understood as described earlier. The term "virtual event" may include an occurrence of an action, activity, or any other change of state that is implemented via a computer-generated medium and may not exist outside the computer-generated medium (e.g., in the real, physical world detached from a computer). A virtual event may be identified, for example, based on the processing load of a processing unit (e.g., a GPU), the status of memory resources (e.g., buffers, queues, and stacks, RAM), retrieval of data from a particular location in memory, receiving of data from a specific external source, as a notification from an external device or an event listener (e.g., configured with an operating system of the wearable extended reality appliance), as latency experienced in processing threads other than the virtual event, a response of the wearer of the extended reality appliance, and any other indication of a virtual event. A processing device may identify the virtual event, for example by polling one or more memory resources, monitoring the status of internal buses and/or external communications channels (e.g., by checking latency and time-outs), receiving an interrupt event from an event listener, and any other method for identifying the virtual event. Additionally, or alternatively, a virtual event may be identified based on feedback from a user of an extended reality appliance, such as a head motion, voice command and/or action by an electronic pointing device in response to or related to the virtual event. Thus, one or more synthesized (e.g., virtual) events in the extended reality environment may be used to determine the first and/or second duty cycle configurations and/or which duty cycle configuration to apply for displaying virtual content. For example, a virtual avatar entering the room may cause virtual content other than the avatar to be displayed according to a lower duty cycle configuration and the avatar to be displayed according to a higher duty cycle configuration.
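A minimal sketch of the hardware-condition-based adjustment described above is shown below, assuming hypothetical battery-level and temperature readings; the thresholds, field names, and function name are illustrative only and do not reflect an actual appliance interface.

```python
from dataclasses import dataclass

@dataclass
class HardwareCondition:
    battery_fraction: float   # 0.0 - 1.0, remaining battery charge
    temperature_c: float      # processor temperature in degrees Celsius

def adjust_duty_cycle(current: float, condition: HardwareCondition,
                      low_battery: float = 0.15,
                      max_temperature_c: float = 70.0,
                      reduced: float = 0.20) -> float:
    """Reduce the duty cycle when the battery is low or the processor
    runs hot; otherwise keep the current configuration."""
    if (condition.battery_fraction < low_battery
            or condition.temperature_c > max_temperature_c):
        return min(current, reduced)
    return current

print(adjust_duty_cycle(0.60, HardwareCondition(0.80, 45.0)))  # 0.6 - healthy hardware
print(adjust_duty_cycle(0.60, HardwareCondition(0.10, 45.0)))  # 0.2 - low battery
print(adjust_duty_cycle(0.60, HardwareCondition(0.80, 85.0)))  # 0.2 - overheating
```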
As another example, the sharing (e.g., sending or receiving) of content may cause memory buffer overflow and/or an overload on a bus system and may trigger a change in the duty cycle configuration, e.g., to a lower duty cycle to alleviate processing load. By way of a non-limiting example, turning toFIG.6, user100may receive an electronic notification (e.g., the occurrence of a virtual event) associated with virtual widget114D. In response, processing device460(FIG.4) may increase the duty cycle for displaying widget114D from duty cycle configuration612(e.g., 20%) to duty cycle configuration610(e.g., 60%). According to some embodiments, the first duty cycle configuration for the first display region and the second duty cycle configuration for the second display region are determined for a first time period, and the operations further include determining at least one updated duty cycle configuration for the first display region and the second display region for a second time period following the first time period. The terms “duty cycle configuration”, “display region”, and “determining” may be understood as described earlier. The term “time period” may refer to a duration, length of time for an activity, condition or state (e.g., measured in seconds, minutes, hours, and/or days), a particular time of day (e.g., morning, afternoon, evening or night), a particular day or days of the week (e.g., weekdays versus weekends or holidays), or any other measure of time. The term “updated” may refer to amended, renewed or revised. Thus, a time-based criterion may be used to determine and/or update the first and/or second duty cycle configurations and/or which duty cycle configuration to apply for displaying virtual content. For example, virtual content may be displayed according to a lower duty cycle configuration during morning hours when the wearer of the wearable extended reality appliance is alert. In the afternoon (e.g., following the morning) when the user is fatigued, the duty cycle may be updated to a higher duty cycle configuration, e.g., to draw the wearer's focus. In the evening (e.g., following the afternoon), the duty cycle may be updated yet again to a lower duty cycle configuration, e.g., to allow the wearer to relax during a shutdown ritual. By way of a non-limiting example, turning toFIG.6, during daylight hours when the ambient lighting is due to natural sunlight, processing device460(FIG.4) may display virtual content on virtual screen112according to duty cycle configuration610(e.g., 60%), for example to provide a more intense display to overcome intense daylight illumination. During the evening hours, e.g., when the ambient lighting is based on an artificial light source, such as a light bulb that is dimmer than sunlight, processing device460may display virtual content on virtual screen112according to duty cycle configuration612(e.g., 20%), for example to conserve energy because a less intense display may be sufficient. According to some embodiments, the operations further include determining when to end the first time period based on detection of an event. The term “determining” and “time period” may be understood as described above. The term “when to end the first time period” may be understood as a boundary for the time period, for example a point in time when the first time period terminates, and a new time period commences. 
For example, determining when to end the first time period may include calculating, identifying, specifying, setting, or assigning a boundary for the first time period. For example, a processing device (e.g., configured with the wearable extended reality appliance) may be configured to calculate, specify, set, or assign a point in time for the termination of the first time period based on detection of an event. The term "event" may refer to an occurrence of an action, activity, change of state, or any other type of development or stimulus, for example detectable by a processing device. The source of the event may be internal or external to the wearable extended reality appliance. For example, an internal event may include a signal relating to a state of a component of the wearable extended reality appliance (e.g., temperature, available communication and/or processing bandwidth, available battery power or memory, or any other criteria relating to the operation of the wearable extended reality appliance). Internal events that may trigger a processing device to terminate the first time period may include, for example, the internal temperature of the processing device exceeding a predefined limit, the power remaining in a battery for the processing device falling below a predefined threshold, or a memory buffer overflowing. Examples of an external event may include an alert, trigger or signal received from an external computing device or peripheral device (e.g., configured with a sensor such as an optical, IR, acoustic, vibration, temperature, heat, humidity, electric and/or magnetic, or any other type of sensor), a user (e.g., as a user input) or any other type of external stimulus. The user input may include input via an input device (e.g., keyboard, electronic pointing device, touch-based device), a voice command, a gesture (e.g., eye via an eye tracker, head, hand, body), or any other type of user input. For example, external events that may trigger a processing device to terminate the first time period may include receiving a notification of a scheduled calendar event, receiving a timeout notification (e.g., NACK) from an external device, completing the receiving of data from an external source, or receiving an external warning to update system software or install protective measures against malware. Thus, the termination of the first time period may be based on detecting an internal and/or external event. For example, a timer issuing an alert at a predefined hour, or a microphone sensing a child returning home from school may be used to determine the termination of the first time period. As another example, an application invoked by the wearer, or the receiving of a notification from another computing device (e.g., an email or electronic message) detected by an event listener may be used to determine the termination of the first time period. By way of a non-limiting example, turning toFIG.6, processing device460(FIG.4) may apply duty cycle configuration610to display virtual content on virtual screen112during a first time period. Upon user100receiving a notification (e.g., associated with virtual widget114D) relating to an urgently scheduled meeting, processing device460(FIG.4) may analyze the notification and determine to terminate the first time period and initiate the second time period.
Processing device460may switch the duty cycle for displaying virtual content on virtual screen112to correspond to duty cycle configuration612, e.g., for the second time period, for example to allow user100to prepare for the meeting. According to some embodiments, the at least one updated duty cycle configuration includes a single duty cycle configuration for both the first display region and the second display region. The term “updated”, “duty cycle configuration”, and “display region” may be understood as described earlier. The term “single” may refer to sole or only. Thus, after the first time period terminates, only one (e.g., single) duty cycle configuration may be used to display content in the first and second display regions, e.g., during the second time period following the first time period. For example, after a predetermine hour (e.g., midnight) any content displayed via the wearable extended reality appliance (e.g., in the first and second display regions) may be displayed according to the same duty cycle configuration, such as a lower duty cycle configuration to conserve energy. By way of a non-limiting example, turning toFIG.6, after a time period (e.g., a predetermined time period) of operation, processing device460(FIG.4) may determine that power source440providing power to operate wearable extended reality appliance110is low on power, for example power source440may be a battery and wearable extended reality appliance110may be a wireless appliance. In response, processing device460may update the duty cycle configuration to a lower duty cycle (e.g., 20%) for virtual content displayed in any of display regions602and604, e.g., to conserve power. According to some embodiments, the at least one updated duty cycle configuration includes a first updated duty cycle configuration for the first display region and a second updated duty cycle configuration for the second display region. The terms “updated duty cycle configuration”, and “display region” may be understood as described earlier. Thus, after the first time period terminates, each display region may be associated with a different updated duty cycle configuration. The duty cycle configurations may be increased or decreased by the same or different amounts. For example, both duty cycle configurations may be increased (e.g., both increased by 10%), one duty cycle configuration may be increased (e.g., by 5%) and the other duty cycle configuration may be decreased (e.g., by 20%), or both duty cycle configurations may be decreased (e.g., one by 5% and the other by 15%). For example, during the first time period, content may be displayed in the first display region according to an 80% duty cycle configuration, and content may be displayed in the second display region according to a 60% duty cycle configuration. When the first time period terminates, the duty cycle configuration may be updated for both the first and second display regions, e.g., by reducing the duty cycle configuration for the first display region by 10% and by increasing the duty cycle configuration for the second display region by 20%. Thus, during the second time period (e.g., following the first time period), content may be displayed in the first display region according to a 70% duty cycle configuration (e.g., the first updated duty cycle configuration), and content may be displayed in the second display region according to an 80% duty cycle configuration (e.g., the second updated duty cycle configuration). 
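By way of a further non-limiting illustration, the following Python sketch makes the arithmetic of the preceding example concrete. It is only an illustrative sketch: the function and variable names (e.g., update_duty_cycles, first_region) are hypothetical and do not correspond to any element of the figures, and the clamping bounds are assumptions.

    # Illustrative sketch only: computing first and second updated duty cycle
    # configurations for two display regions when a first time period ends.
    # Values are percentages of each display cycle during which the display
    # signal is active.

    def clamp_duty_cycle(value, minimum=1.0, maximum=100.0):
        # Keep the duty cycle non-zero and no greater than a full cycle.
        return max(minimum, min(maximum, value))

    def update_duty_cycles(current, adjustments):
        # current: mapping of region name -> duty cycle (%) during the first time period.
        # adjustments: mapping of region name -> signed change (%) to apply when the
        # second time period begins; regions may be adjusted by different amounts.
        return {
            region: clamp_duty_cycle(duty + adjustments.get(region, 0.0))
            for region, duty in current.items()
        }

    # First time period: 80% for the first region, 60% for the second region.
    first_period = {"first_region": 80.0, "second_region": 60.0}

    # Second time period: reduce the first region by 10% and increase the second by 20%.
    second_period = update_duty_cycles(
        first_period, {"first_region": -10.0, "second_region": +20.0}
    )

    print(second_period)  # {'first_region': 70.0, 'second_region': 80.0}

As in the example above, the first updated duty cycle configuration becomes 70% and the second becomes 80%, with each region adjusted independently.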
By way of a non-limiting example, turning toFIG.6, during the first time period, processing device460(FIG.4) may display content in display region602according to duty cycle configuration610(e.g., 60%) and content in display region604according to duty cycle configuration612(e.g., 20%). After the lapse of a predetermined time period (e.g., the first time period), processing device460may determine that power source440(e.g., a battery) is running low, and may update the duty cycle configurations for displaying content in each of display regions602and604, e.g., by reducing the duty cycle for each duty cycle configuration by half. Thus, during the second time period (e.g., following the first time period), processing device460may display content in display region602according to a duty cycle configuration of 30% (e.g., the first updated duty cycle configuration) and content in display region604according to a duty cycle configuration of 10% (e.g., the second updated duty cycle configuration). According to some embodiments, the at least one updated duty cycle configuration includes a first updated duty cycle configuration for the first display region and a first portion of the second display region and a second updated duty cycle configuration for a second portion of the second display region, wherein the first portion of the second display region differs from the second portion of the second display region. The terms “updated duty cycle configuration”, “display region”, and “differs” may be understood as described earlier. Thus, the extended reality environment may be divided into different display regions for the first and second time periods such that the first and second display regions include different sections of the extended reality environment during the first and second time periods. In other words, in addition to updating the duty cycle configurations, the regions of the extended reality environment included in each of the first and second display regions may be updated. For example, during the first time period, the first display region may be limited to a virtual screen directly facing the user, and the second display region may include a section of the extended reality environment adjacent to the virtual screen as well as a desktop, e.g., supporting a keyboard. During the first time period, content may be displayed in the first display region (including just the virtual screen) according to the first duty cycle configuration, and in the second display region (including the area adjacent to the virtual screen and the desktop) according to the second duty cycle configuration. During the second time period, the display regions may be divided up differently. For example, the first updated display region may now include the virtual screen and additionally the desktop, and the second updated display region may now include only the section adjacent to the virtual screen (e.g., without the desktop). Content in the first updated display region (including the virtual screen and desktop) may be displayed according to the first updated duty cycle configuration, and content in the second updated display region (including only the section adjacent to the virtual screen) may be displayed according to the second updated duty cycle configuration. 
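By way of a further non-limiting illustration, the following Python sketch shows one possible way to represent the repartitioning described above, in which the sections of the extended reality environment assigned to each display region change between time periods together with the duty cycle configurations. The section names, region names, and percentages are hypothetical and used for illustration only.

    # Illustrative sketch only: the first and second display regions may include
    # different sections of the extended reality environment in different time
    # periods, in addition to having updated duty cycle configurations.

    # First time period: the first region holds only the virtual screen; the second
    # region holds the area adjacent to the screen and the desktop.
    regions_period_1 = {
        "first_region": {"virtual_screen"},
        "second_region": {"adjacent_area", "desktop"},
    }
    duty_cycles_period_1 = {"first_region": 80.0, "second_region": 60.0}

    # Second time period: the desktop moves into the first region, and each region
    # receives an updated duty cycle configuration.
    regions_period_2 = {
        "first_region": {"virtual_screen", "desktop"},
        "second_region": {"adjacent_area"},
    }
    duty_cycles_period_2 = {"first_region": 70.0, "second_region": 80.0}

    def duty_cycle_for(section, regions, duty_cycles):
        # Return the duty cycle (%) used to display a given section of the
        # extended reality environment under a given partitioning.
        for region, sections in regions.items():
            if section in sections:
                return duty_cycles[region]
        raise KeyError(section)

    print(duty_cycle_for("desktop", regions_period_1, duty_cycles_period_1))  # 60.0
    print(duty_cycle_for("desktop", regions_period_2, duty_cycles_period_2))  # 70.0

In this sketch the same section (the desktop) is displayed according to the second duty cycle configuration during the first time period and according to the first updated duty cycle configuration during the second time period, mirroring the repartitioning described above.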
By way of a non-limiting example, turning toFIG.6, during the first time period, processing device460(FIG.4) may display content in display region602(e.g., virtual screen112) according to duty cycle configuration610(e.g., 60%) and content in display region604(e.g., virtual widgets114C and114D) according to duty cycle configuration612(e.g., 20%). When the first time period lapses, (e.g., in response to a notification associated with virtual widget114D), processing device460may update the first and second duty cycle configurations, for example by lowering the duty cycle for each by 10%. Thus, the first updated duty cycle configuration may now be 50% and the second updated duty cycle configuration may now be 10%. However, processing device460may determine that virtual widget114D should be displayed according to the higher of the two duty cycle configurations (e.g., 50%), e.g., to draw the attention of user100to incoming notifications. Thus, processing device460may use the 50% duty cycle configuration (e.g., first updated duty cycle configuration) to display virtual widget114D (e.g., a first portion of display region604) and virtual screen112and may use the 10% duty cycle configuration (e.g., second updated duty cycle configuration) to display virtual widget114C (e.g., second portion of display region604). Some embodiments may provide a non-transitory computer readable medium containing instructions that when executed by at least one processor cause the at least one processor to perform duty cycle control operations for wearable extended reality appliances, the operations may comprise: receiving data representing virtual content in an extended reality environment associated with a wearable extended reality appliance; causing the wearable extended reality appliance to display the virtual content in accordance with a first duty cycle configuration; after causing the wearable extended reality appliance to display the virtual content in accordance with the first duty cycle configuration, determining a second duty cycle configuration; and causing the wearable extended reality appliance to display the virtual content in accordance with the second duty cycle configuration. The percent during which each cycle of the display signal is set to “active” may be non-zero in both the first duty cycle configuration and the second duty cycle configuration. The second duty cycle configuration may differ from the first duty cycle configuration. According to some embodiments, the operations may further include determining when to switch from the display in accordance with the first duty cycle configuration to the display in accordance with the second duty cycle configuration based on detection of an event, for example as described above. According to some embodiments, the first duty cycle configuration for the first display region includes a selection of different duty cycles for a display device associated with a left eye of a wearer of the wearable extended reality appliance and for a display device associated with a right eye of the wearer of the wearable extended reality appliance. The terms “duty cycle configuration”, “display region”, and “different” may be understood as described earlier. The term “selection” may refer to election or choosing an option from several options. 
The term “display device associated with a left eye” (e.g., or right eye) may refer to a device configured to accommodate vision corrective requirements (e.g., to correct for one or more of emmetropia, myopia, hyperopia, and astigmatism) for the left or right eye, respectively. For example, the display device (e.g., for the left and/or right eye) may include one or more optical lenses to adjust a view seen through the wearable extended reality appliance, for example to adjust the focus of light onto the retina of the left and/or right eye and/or magnify an image. As another example, the display device (e.g., for the left and/or right eye) may include a coating or filter, such as an anti-reflective or polarized coating to reduce glare. As another example, the display device (e.g., for the left and/or right eye) may include one or more photosensitive materials (e.g., photochromic dyes) to block incoming ultraviolet light. For example, the wearer of the wearable extended reality appliance may have different vision corrective requirements for each eye (e.g., the left eye may require correction for astigmatism and high myopia, and the right eye may require correction only for low myopia). Additionally, or alternatively, the wearer may wish to use the left eye to view content up close and the right eye to see content from a distance. As another example, the wearer of the wearable extended reality appliance may have undergone cataract surgery in one eye. The wearable extended reality appliance may thus include a different display device for each eye, each display device accommodating the vision requirements of each eye, e.g., to adjust the focus of light onto the retina of each eye, reduce glare, and/or filter certain wavelengths (e.g., ultraviolet light). When determining the duty cycle configuration, a different duty cycle configuration may be selected for each of the display devices, e.g., to accommodate the vision requirements of each eye of the user. For example, a higher duty cycle configuration may be selected for the display device associated with the higher myopia eye than for the lower myopia eye. By way of a non-limiting example with reference toFIG.6, the left eye of user100may have undergone surgery to remove a cataract and correct for vision impairment, whereas the right eye may have a myopia of −4 diopters. Wearable extended reality appliance110may be a pair of smart glasses allowing user100to view the physical environment simultaneously with virtual content. The smart glasses may include a smart left lens (e.g., display device associated with the left eye) that is clear (e.g., no vision correction) with an anti-UV coating, and a smart right lens (e.g., display device associated with the right eye) correcting for the myopia but without any anti-UV coating. When determining the duty cycle configuration, processing device460(FIG.4) may select a different duty cycle configuration for the smart left lens and the smart right lens to accommodate the different corrective needs for each eye. For example, a lower duty cycle configuration may be used for the left eye to ease eye strain following cataract surgery, and a higher duty cycle configuration for the right eye to provide a bright display. 
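By way of a further non-limiting illustration, the following Python sketch shows how different duty cycles might be selected for the left-eye and right-eye display devices based on a simple per-eye profile. The profile fields, thresholds, and percentages are assumptions made for illustration and are not part of the disclosed embodiments.

    # Illustrative sketch only: selecting a different duty cycle for the display
    # device associated with each eye. The profile structure and the selection
    # heuristic are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class EyeProfile:
        myopia_diopters: float      # 0.0 means no myopia; more negative means stronger myopia
        recent_eye_surgery: bool    # e.g., recovering from cataract surgery

    def select_duty_cycle(profile: EyeProfile) -> float:
        # Ease eye strain with a lower duty cycle after recent surgery; otherwise
        # use a brighter (higher duty cycle) display for a more myopic eye.
        if profile.recent_eye_surgery:
            return 30.0
        return 60.0 if profile.myopia_diopters <= -3.0 else 45.0

    left_eye = EyeProfile(myopia_diopters=0.0, recent_eye_surgery=True)
    right_eye = EyeProfile(myopia_diopters=-4.0, recent_eye_surgery=False)

    duty_cycle_config = {
        "left_eye_display": select_duty_cycle(left_eye),    # 30.0 (lower, to ease strain)
        "right_eye_display": select_duty_cycle(right_eye),  # 60.0 (higher, brighter display)
    }
    print(duty_cycle_config)
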
Some embodiments may provide a system for duty cycle control for wearable extended reality appliances, the system including at least one processor programmed to: receive data representing virtual content in an extended reality environment associated with a wearable extended reality appliance; identify in the extended reality environment a first display region and a second display region separated from the first display region; determine a first duty cycle configuration for the first display region; determine a second duty cycle configuration for the second display region, wherein the second duty cycle configuration differs from the first duty cycle configuration; and cause the wearable extended reality appliance to display the virtual content in accordance with the determined first duty cycle configuration for the first display region and the determined second duty cycle configuration for the second display region. For example, turning toFIG.6in conjunction withFIG.4, system600may include processing device460, which may be programmed to receive data representing virtual content in extended reality environment620associated with wearable extended reality appliance110. Processing device460may identify in extended reality environment620a first display region602and a second display region604, separated from first display region602. Processing device460may determine duty cycle configuration610(e.g., 60%) for display region602and duty cycle configuration612for display region604, where duty cycle configuration612differs from duty cycle configuration610. Processing device460may cause wearable extended reality appliance110to display the virtual content in accordance with duty cycle configuration610for display region602and duty cycle configuration612for display region604. FIG.9illustrates a block diagram of an example process900for controlling a duty cycle for wearable extended reality appliances consistent with embodiments of the present disclosure. In some embodiments, process900may be performed by at least one processor (e.g., processing device460of extended reality unit204, shown inFIG.4) to perform operations or functions described herein. In some embodiments, some aspects of process900may be implemented as software (e.g., program codes or instructions) that are stored in a memory (e.g., memory device411of extended reality unit204, shown inFIG.4) or a non-transitory computer readable medium. In some embodiments, some aspects of process900may be implemented as hardware (e.g., a specific-purpose circuit). In some embodiments, process900may be implemented as a combination of software and hardware. Referring toFIG.9, process900may include a step902of receiving data representing virtual content in an extended reality environment associated with a wearable extended reality appliance. As described earlier, data formatted for displaying virtual content in an extended reality environment via a wearable extended reality appliance may be received. For example, the data may be generated by a processor configured with the wearable extended reality appliance or may be received from an external computing device via a transceiver. By way of a non-limiting example with reference toFIG.6, processing device460(FIG.4) of wearable extended reality appliance110may receive data representing virtual content (e.g., virtual widgets114C and114D, and virtual screen112) in extended reality environment620associated with wearable extended reality appliance110. 
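By way of a further non-limiting illustration, the following Python sketch outlines the overall control flow just described (receive data, identify two display regions, determine a differing duty cycle configuration for each, and cause the content to be displayed accordingly). All names are hypothetical, the region-identification rule is an assumption for illustration, and no particular rendering hardware or API is implied.

    # Illustrative sketch only: overall duty cycle control flow for two display
    # regions of an extended reality environment.

    def identify_regions(virtual_content):
        # Split the received virtual objects into a first (primary) region and a
        # second region, here using a hypothetical "primary" flag on each object.
        first = [obj for obj in virtual_content if obj.get("primary")]
        second = [obj for obj in virtual_content if not obj.get("primary")]
        return first, second

    def determine_duty_cycles(first_region, second_region):
        # Use a higher duty cycle for the region that is the current area of focus;
        # the two configurations must differ and both must be non-zero.
        return 60.0, 20.0

    def display(region, duty_cycle):
        # Stand-in for driving the display signal: the duty cycle is the percentage
        # of each display cycle during which the signal is active.
        for obj in region:
            print(f"render {obj['name']} at {duty_cycle}% duty cycle")

    virtual_content = [
        {"name": "virtual_screen", "primary": True},
        {"name": "weather_widget", "primary": False},
        {"name": "notification_widget", "primary": False},
    ]

    first_region, second_region = identify_regions(virtual_content)
    first_duty, second_duty = determine_duty_cycles(first_region, second_region)
    display(first_region, first_duty)
    display(second_region, second_duty)
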
Process900may include a step904of identifying in the extended reality environment a first display region and a second display region separated from the first display region. As described earlier, the extended reality environment may include a virtual display generated, for example, by a wearable extended reality appliance. A processing device may be configured to identify one or more regions in the virtual display. For example, a processing device may identify a first region associated with work-related content, and a second region associated with personal content. By way of a non-limiting example, processing device460(FIG.4) of wearable extended reality appliance110may identify in extended reality environment620(FIG.6) display region602and display region604separated from first display region602. Display region602may be associated with displaying work-related documents, such as charts and text documents for editing. Display region604may be associated with displaying virtual accessories to assist user100, such as virtual widgets114C and114D providing weather updates and notifications, respectively. Process900may include a step906of determining a first duty cycle configuration for the first display region. As described earlier, a processing device may be configured to determine a duty cycle configuration for the first display region, for example to adjust the intensity of the display, and/or manage power consumption. For example, if the first display region is for work-related content, a relatively high duty cycle configuration (e.g., 80%) may be determined, e.g., to draw the attention of the user. By way of a non-limiting example, processing device460(FIG.4) of wearable extended reality appliance110may determine duty cycle configuration610for display region602inFIG.6. For example, processing device460may determine that display region602is currently the primary area of focus for user100and may apply a duty cycle configuration of 60%. Process900may include a step908of determining a second duty cycle configuration for the second display region, where the second duty cycle configuration differs from the first duty cycle configuration. As described earlier, the virtual display generated, for example, by a wearable extended reality appliance may include first and second display regions. A processing device may be configured to determine a different duty cycle configuration for each display region, for example to separately adjust the intensity of the display and/or power consumption for each display region. For example, the second display region may be designated for personal content and the first display region may be designated for work-related content. While the wearer of the wearable extended reality appliance is engaged in work, the processing device may determine a lower duty cycle configuration (e.g., 40%) for the second display region, e.g., to facilitate the wearer in maintaining focus on the work-related content. By way of a non-limiting example, processing device460(FIG.4) of wearable extended reality appliance110may determine duty cycle configuration612(e.g., 20%) for display region604, which differs from duty cycle configuration610(e.g., 60%) determined for display region602ofFIG.6. The different duty cycle configurations610and612applied to each display region602and604, respectively, may facilitate user100in concentrating on display region602, and avoiding distraction by updates from virtual widgets114C and114D in display region604. 
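By way of a further non-limiting illustration of steps 906 and 908, the following Python sketch shows one possible heuristic for choosing a higher duty cycle for a region holding work-related content while the wearer is working, and a lower one for a region holding personal content. The role names and percentages are assumptions matching the 80%/40% example above; this is an illustrative heuristic, not the claimed method.

    # Illustrative sketch only: choosing differing duty cycle configurations for
    # the first (work-related) and second (personal) display regions.

    def duty_cycle_for_region(region_role, wearer_is_working):
        if region_role == "work":
            return 80.0 if wearer_is_working else 50.0
        if region_role == "personal":
            return 40.0 if wearer_is_working else 70.0
        return 50.0  # default for any other region role

    first_duty_cycle = duty_cycle_for_region("work", wearer_is_working=True)       # 80.0
    second_duty_cycle = duty_cycle_for_region("personal", wearer_is_working=True)  # 40.0
    assert first_duty_cycle != second_duty_cycle  # the two configurations differ
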
Process900may include a step910of causing the wearable extended reality appliance to display the virtual content in accordance with the determined first duty cycle configuration for the first display region and the determined second duty cycle configuration for the second display region. As described earlier, a processing device may be configured to control the display of virtual content in different display regions of a virtual display generated by a wearable extended reality appliance. For example, the processing device may control the display of the virtual content by determining the duty cycle configuration to apply to each display region. Additionally, the processing device may cause the wearable extended reality appliance to display virtual content in each display region according to each determined duty cycle configuration. For example, the processing device may control signals (e.g., by controlling the level, intensity, frequency, timing, power level, phase, and any other signal attribute affecting the duty cycle) carried to each display region via one or more data and/or power lines coupling the processing device to each display region of the display of the wearable extended reality appliance. Consequently, content in the first display region may be displayed according to the first duty cycle configuration, and content in the second display region may be displayed according to the second duty cycle configuration. Returning to the example above, work-related content may be displayed in the first display region according to an 80% duty cycle and personal content may be displayed in the second display region according to a 40% duty cycle. By way of a non-limiting example, processing device460(FIG.4) of wearable extended reality appliance110may cause wearable extended reality appliance110to display virtual screen112in display region602ofFIG.6according to duty cycle configuration610and virtual widgets114C and114D in display region604according to duty cycle configuration612. Extended reality environments may include virtual and physical display areas, such as virtual displays (e.g., bounded regions defining virtual screens), and physical objects such as walls and surfaces. An extended reality appliance may present virtual objects anywhere in the extended reality environment, at differing distances from a user. Users may wish to organize virtual objects, such as to unclutter a virtual display area or to change a presentation mode for content. For example, content extracted from a virtual display may be modified (e.g., magnified) when presented outside the virtual display. The description that follows includes references to smart glasses as an exemplary implementation of a wearable extended reality appliance. It is to be understood that these examples are merely intended to assist in gaining a conceptual understanding of disclosed embodiments, and do not limit the disclosure to any particular implementation for a wearable extended reality appliance. The disclosure is thus understood to relate to any implementation for a wearable extended reality appliance, including implementations different than smart glasses. Some embodiments involve a non-transitory computer readable medium containing instructions that when executed by at least one processor cause the at least one processor to perform operations for extracting content from a virtual display. The term “non-transitory computer-readable medium” may be understood as described earlier. 
The term “instructions” may refer to program code instructions that may be executed by a computer processor. The instructions may be written in any type of computer programming language, such as an interpretive language (e.g., scripting languages such as HTML and JavaScript), a procedural or functional language (e.g., C or Pascal that may be compiled for converting to executable code), object-oriented programming language (e.g., Java or Python), logical programming language (e.g., Prolog or Answer Set Programming), or any other programming language. In some embodiments, the instructions may implement methods associated with machine learning, deep learning, artificial intelligence, digital image processing, and any other computer processing technique. The term “processor” may be understood as described earlier. For example, the at least one processor may be one or more of server210ofFIG.2, mobile communications device206, processing device360ofFIG.3, processing device460ofFIG.4, processing device560ofFIG.5), and the instructions may be stored at any of memory devices212,311,411, or511, or a memory of mobile device206. The term “content” may refer to data or media formatted for presenting information to a user via, for example, an interface of an electronic device. Content may include, for example, any combination of data formatted as alphanumerical text, image data, audio data, video data, and any other data type for conveying information to a user. The term “extracting content” may refer to content that is separated or pulled out, e.g., from other content. The term “virtual display” may refer to a virtual object mimicking and/or extending the functionality of a physical display screen, as described earlier. A virtual display may function as a container (e.g., 2D frame or 3D box) for multiple other virtual objects. For example, at least one processor may display virtual content, including multiple virtual objects, via a wearable extended reality appliance. One of the virtual objects may be a virtual display containing one more of the other virtual objects, e.g., as a frame or box encasing a group of the other virtual objects. The at least one processor may execute instructions to separate or pull out (e.g., extract) one or more virtual objects (e.g., content) of the group of objects contained in the virtual display. As an example, a virtual display may contain a group of virtual objects, including a virtual document and several virtual widgets and the at least one processor may remove one of the widgets (e.g., extract content) from the virtual display. By way of a non-limiting example,FIG.10illustrates an exemplary environment depicting a user1016of a wearable extended reality appliance (e.g., a pair of smart glasses1006) moving content between a virtual display1002and an extended reality environment1004. Extended reality environment1004may be generated via a system (e.g., system200ofFIG.2). Virtual display1002may serve as a frame containing a group1050of multiple virtual objects, such as a virtual document1008, virtual widgets1010inside a virtual menu bar1024, a virtual workspace1012, and a virtual house plant1014. A user1016donning smart glasses1006may interface with content displayed by smart glasses1006, e.g., via gestures, voice commands, keystrokes on a keyboard1018, a pointing device such as an electronic mouse1022, or any other user interfacing means. Processing device460(FIG.4) may extract content from virtual display1002, (e.g., in response to input from user1016). 
As an example, processing device460may extract virtual house plant1014from virtual display1002. Some embodiments involve generating a virtual display via a wearable extended reality appliance, wherein the virtual display presents a group of virtual objects and is located at a first virtual distance from the wearable extended reality appliance. The terms “virtual display” and “wearable extended reality appliance” may be understood as described earlier. The term “generating” may refer to producing, synthesizing, constructing, or creating. The term “virtual object” may refer to a visual rendition of an item by a computer. Such an object may have any form, such as an inanimate virtual object (e.g., icon, widget, document; representation of furniture, a vehicle, real property, or personal property); an animate virtual object (e.g., human, animal, robot); or any other computer-generated or computer-supplied representation, as described earlier. For example, an extended reality appliance may produce (e.g., generate) a virtual object by activating selected pixels to render the virtual object overlaid against the physical environment surrounding the user and viewable through transparent portions of the viewer. The term “presents” may refer to displaying, demonstrating, or communicating, e.g., to convey information encoded as text, image data, audio data, video data, haptic data, or any other communications medium. For example, an electronic display may present information visually, and a speaker may present information audibly. The term “group of virtual objects” may refer to a collection or cluster of one or more virtual objects. As an example, one or more virtual objects may be grouped inside a virtual display of the wearable extended reality appliance. The term “located” may refer to a station, placement, or position of an object. The term “virtual distance” may refer to a spatial separation or gap between a wearable extended reality appliance and one or more virtual objects or between the one or more virtual objects, as perceived by a user wearing the wearable extended reality appliance. The distance may be along a two-dimensional plane (e.g., the floor), or through a three-dimensional volume (e.g., accounting for the height of the surrounding physical environment in addition to floor distance). The distance may be absolute (e.g., relative to the Earth, or based on GPS coordinates), or relative (e.g., with respect to an object in the extended reality environment). The distance may be relative to a physical object, a virtual object, the wearable extended reality appliance, and/or the user (e.g., the distance may be relative to more than one reference). As an example, the wearable extended reality appliance may present multiple virtual objects inside a virtual display appearing as though located at particular spatial separations from the user (e.g., at arm's length, or on a wall opposite the user). By way of a non-limiting example, inFIG.10, smart glasses1006may produce (e.g., generate) virtual display1002as a frame containing group1050of multiple virtual objects, such as virtual document1008, virtual widgets1010inside virtual menu bar1024, virtual workspace1012, and virtual house plant1014. Smart glasses1006may display virtual display1002to appear at a virtual distance D1 from user1016. For example, D1 may be measured relative to a 3D coordinate system1028as the distance from smart glasses1006to the bottom left corner of virtual display1002. 
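By way of a further non-limiting illustration, the following Python sketch shows one possible representation of a virtual display as a container for a group of virtual objects presented at a virtual distance, and of extracting one object from that group so a version of it can be presented elsewhere in the extended reality environment. The class layout, object names, and distances are assumptions for illustration only.

    # Illustrative sketch only: a virtual display as a container of a group of
    # virtual objects, each perceived at a virtual distance from the appliance.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class VirtualObject:
        name: str
        virtual_distance_m: float  # perceived distance from the wearable appliance

    @dataclass
    class VirtualDisplay:
        virtual_distance_m: float
        objects: List[VirtualObject] = field(default_factory=list)

        def extract(self, name: str) -> VirtualObject:
            # Separate (pull out) the named object from the group contained in the
            # virtual display, so a version of it can be presented at a different
            # virtual distance in the extended reality environment.
            for index, obj in enumerate(self.objects):
                if obj.name == name:
                    return self.objects.pop(index)
            raise KeyError(name)

    display = VirtualDisplay(
        virtual_distance_m=1.5,  # first virtual distance (e.g., arm's length)
        objects=[VirtualObject("document", 1.5), VirtualObject("house_plant", 1.5)],
    )

    extracted = display.extract("house_plant")
    extracted.virtual_distance_m = 0.8  # present the version at a different virtual distance
    print([obj.name for obj in display.objects], extracted)
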
Some embodiments involve generating an extended reality environment via the wearable extended reality appliance, wherein the extended reality environment includes at least one additional virtual object presented at a second virtual distance from the wearable extended reality appliance. The term “extended reality environment,” e.g., also referred to as “extended reality,” “extended reality space,” or “extended environment,” may refer to all types of real-and-virtual combined environments and human-machine interactions at least partially generated by computer technology, as described earlier. For example, an extended reality environment may encompass the field-of-view of a user donning a wearable extended reality appliance and may include the physical environment surrounding the user as well as virtual content superimposed thereon. A processing device of the wearable extended reality appliance may produce or generate an extended reality environment by selectively activating certain pixels of a viewer of the wearable extended reality appliance to render virtual content overlaid on the physical environment viewable via transparent portions of the viewer. For example, the extended reality environment created by the wearable extended reality appliance may contain multiple virtual objects. Some virtual objects may be grouped inside a virtual display positioned at a first perceived (e.g., virtual) distance from the user. The wearable may display at least one additional virtual object at a second virtual distance from the user. The first and second virtual distances may be measured across a 2D plane (e.g., the floor), or through a 3D space of the extended reality environment (e.g., to account for a height of displayed content). The first and second virtual distances may be the same or different (e.g., larger, or smaller). As an example, the first and second virtual distances may be substantially the same as measured across the floor (e.g., in 2D) but may differ along the height dimension. As another example, the first and second virtual distances may differ in one or more directions as measured across the floor (e.g., in 2D) and also along the height dimension. As an example, the at least one processor may determine the first and/or second virtual distances based on a 3D spatial map of the physical environment surrounding the wearable extended reality appliance (e.g., as a mesh of triangles or a fused point cloud). The first and/or second virtual distances may be determined based on one or more physical objects in the extended reality environment, data stored in memory (e.g., for the location of stationary objects), predicted behavior and/or preferences of the wearer of the wearable extended reality appliance, ambient conditions (e.g., light, sound, dust), and any other criterion for determining a distance for presenting virtual objects. For example, a physical object may be detected via sensors interface472ofFIG.4). By way of a non-limiting example, inFIG.10, smart glasses1006may generate extended reality environment1004to include virtual content, such as virtual objects grouped inside virtual display1002and virtual mobile phone1026. Processing device460(FIG.4) may display the virtual content overlaid on the surrounding physical environment seen by user1016through smart glasses1006. Processing device460may display virtual display1002to appear at a distance D1 from user1016and may display virtual mobile phone1026to appear at a distance D2 from user1016. 
Some embodiments involve receiving input for causing a specific virtual object from the group of virtual objects to move from the virtual display to the extended reality environment. The term “receiving” may refer to accepting delivery of, acquiring, retrieving, obtaining, or otherwise gaining access. The term “input” may include information, such as a stimulus, response, command, or instruction, e.g., targeted to a processing device. For example, an input provided by a user may be received by the at least one processor via an input interface (e.g., input interface430ofFIG.4and/or input interface330ofFIG.3), by a sensor associated with the wearable extended reality appliance (e.g., sensor interface470or370), by a different computing device communicatively coupled to the wearable extended reality appliance (e.g., mobile device206and/or remote processing unit208ofFIG.2), or any other source of input. A user input may be provided via a keyboard, a touch sensitive screen, an electronic pointing device, a microphone (e.g., as audio input or voice commands), a camera (e.g., as gesture input), or any other user interfacing means. An environmental input (e.g., relating to ambient noise, light, dust, physical objects, or persons in the extended reality environment) may be provided via one or more sensors (e.g., sensor interface470or370). A device input (e.g., relating to processing, memory, and/or communications bandwidth) may be received by a processing device (e.g., any of server210ofFIG.2, mobile communications device206, processing device360, processing device460, or processing device560ofFIG.5). The term “causing” may refer to invoking or triggering an action or effect. For example, a user input to open an application may lead (e.g., cause) at least one processor to open the application. As another example, an ambient light level may be received (e.g., as an environmental input) leading to (e.g., causing) at least one processor to adjust the brightness of displayed content. The term “specific virtual object from the group of virtual objects” may refer to a distinct, or particular virtual object out of a collection of multiple virtual objects. The term “move” may refer to relocating or changing a position. For example, a specific widget (e.g., a specific virtual object) included in a group of virtual objects displayed inside a virtual display may be relocated (e.g., moved) to a different location in the extended reality environment, external to the virtual display, e.g., in response to an input. The input may be a user input, e.g., requesting to move the specific widget, an environmental input, e.g., an ambient light setting affecting the visibility of displayed objects, a device input, e.g., relating to the operation of the wearable extended reality appliance, or any other type of input. As an example, a wearable extended reality appliance may present a virtual display presenting a group of multiple virtual objects. An input may be received to cause a particular one of the multiple virtual objects to be relocated to a different display location, outside the virtual display. The input may be received from a user who may wish to view the specific virtual object from a distance nearer than the virtual display. As another example, the input may be received from a sensor detecting an obstruction (e.g., bright light, or obstructing object) blocking the specific virtual object. 
As yet another example, the input may be received from a software application monitoring the density of content displayed inside the virtual display. Some embodiments include receiving the input from an image sensor indicative of a gesture initiated by a user of the wearable extended reality appliance. The term “image sensor” may include a detector (e.g., a camera) configured to capture visual information by converting light to image data, as described earlier. The term “gesture” may refer to a movement or sequence of movements of part of the body, such as a hand, arm, head, foot, or leg to express an idea or meaning. A gesture may be a form of non-verbal or non-vocal communication in which visible bodily actions or movements communicate particular messages. A gesture may be used to communicate in place of, or in conjunction with vocal communication. For example, raising a hand with the palm forward may be a hand gesture indicating to stop or halt an activity, and raising a thumb with the fist closed may indicate approval. A gesture may be detected as an input using an image sensor (e.g., image sensor472ofFIG.4) and/or a motion detector (e.g., motion sensor473) associated with the wearable extended reality appliance. In some examples, the input from the image sensor (such as images and/or videos captured using the image sensor) may be analyzed (for example, using a gesture recognition algorithm) to identify the gesture initiated by the user. In one example, the gesture may be indicative of a desire of the user to cause the specific virtual object to move from the virtual display to the extended reality environment. In one example, the gesture may be indicative of the specific virtual object and/or of a desired position in the extended reality environment for the specific virtual object. By way of a non-limiting example, reference is made toFIG.11, which is substantially similar toFIG.10with a noted difference inFIG.10, virtual house plant1014is presented inside virtual display1002, and inFIG.11, version1014A of virtual house plant1014is displayed external to virtual display1002, as though resting on a desk top1020. InFIG.10, image sensor472of smart glasses1006may capture an image of a pointing gesture performed by user1016. Processing device460(FIG.4) may analyze the image using a gesture recognition algorithm and identify the pointing gesture as a user input requesting to relocate (e.g., move) a specific virtual object (e.g., a particular virtual object such as virtual house plant1014) external to virtual display1002. InFIG.11, processing device460may respond to the user input by causing virtual house plant1014of group1050of virtual objects to be displayed at a new location in extended reality environment1004, external to virtual display1002, e.g., as though resting on desk top1020. In some implementations, processing device460may present virtual house plant1014inside virtual display1002concurrently with displaying version1014A of virtual house plant1014external to virtual display1002. In some embodiments the input includes at least one signal reflecting keystrokes on a keyboard. The term “signal” may refer to a function that can vary over space and time to convey information observed about a phenomenon via a physical medium. 
For example, a signal may be implemented in any range of the electromagnetic spectrum (e.g., radio, IR, optic), as an acoustic signal (e.g., audio, sonar, ultrasound), a mechanical signal (e.g., pressure or vibration), as an electric or magnetic signal, or any other type of signal. The phenomenon communicated by the signal may relate to a state, the presence or absence of an object, an occurrence or development of an event or action, or lack thereof. The term “reflecting” may refer to expressing, telling, or revealing a causality or consequence due to an action or state (e.g., temporary, or steady state). The term “keystroke” may refer to an action associated with selecting or operating a key of a physical or virtual keyboard. The term “keyboard” may refer to an input device including multiple keys, each representing an alphanumeric character (letters and numbers), and optionally including a numeric keypad, special function keys, mouse cursor moving keys, and status lights, as described earlier. A keystroke may be implemented by pressing a key of a mechanical keyboard, by touching or swiping a key of a keyboard displayed on a touch-sensitive screen, by performing a typing gesture on a virtual or projected keyboard, or by any other technique for selecting a key. The at least one processor may receive a signal associated with the input indicating (e.g., reflecting) one or more keystrokes performed by the user on a keyboard. In one example, the keystrokes may be indicative of a desire of the user to cause the specific virtual object to move from the virtual display to the extended reality environment. In one example, the keystrokes may be indicative of the specific virtual object and/or of a desired position in the extended reality environment for the specific virtual object. By way of a non-limiting example, inFIG.11, user1016may enter a request to remove virtual house plant1014from virtual display1002by performing one or more keystrokes on keyboard1018resting on desk top1020. Processing device460(FIG.4) of smart glasses1006may receive one or more signals associated with the keystrokes (e.g., via network interfaces320ofFIG.3and420ofFIG.4) as a user input and respond to the input accordingly. In some embodiments the input includes at least one signal reflecting a movement of a pointer. The term “movement” may refer to a motion dynamically changing a location, position and/or orientation, e.g., of a virtual or physical object. The term “pointer” may include technology enabling the selection of content by targeting the selected content in a focused manner. A pointer may be an electronic pointing device or may be implemented as a bodily gesture e.g., by the eye, head, finger, hand, foot, arm, leg, or any other moveable part of the body. Examples of electronic pointing devices may include an electronic mouse, stylus, pointing stick, or any other electronic pointing device. For example, a processing device may detect an IR signal of an electronic pointer maneuvered by a user. Alternatively, an image and/or motion sensor (e.g., image sensor472and/or motion sensor373ofFIG.4) may detect a pointing gesture by a user. In one example, the movement of the pointer may be indicative of a desire of the user to cause the specific virtual object to move from the virtual display to the extended reality environment. In one example, the movement of the pointer may be indicative of the specific virtual object and/or of a desired position in the extended reality environment for the specific virtual object. 
As an example, a user may extend the index finger to target (e.g., point to) a specific virtual object. Light reflecting off the index finger may be captured (e.g., via image sensor472ofFIG.4) and stored as one or more images. The at least one processor may analyze the one or more images to detect a pointing gesture by the index finger (e.g., movement of a pointer) in the direction of the virtual object. As another example, a user may manipulate an IR pointer in the direction of a virtual display. An IR sensor (e.g., sensor472) may detect IR light (e.g., a signal) emitted by the IR pointer and send a corresponding signal to the at least one processor. The at least one processor may analyze the signal to determine the IR pointer targeting the specific virtual object. By way of a non-limiting example, inFIG.11, user1016may remove virtual house plant1014from virtual display1002by selecting and dragging virtual house plant1014to desk top1020, external to virtual display1002. Processing device460(FIG.4) of smart glasses1006may detect the selecting and dragging operations of electronic mouse1022(e.g., via network interfaces320ofFIG.3and420ofFIG.4) and display version1014A of virtual house plant1014at the location indicated by electronic mouse1022. Some embodiments further include analyzing the input from a pointing device to determine a cursor drag-and-drop movement of the specific virtual object to a location outside the virtual display. The term “analyzing” may refer to investigating, scrutinizing and/or studying a data set, e.g., to determine a correlation, association, pattern, or lack thereof within the data set or with respect to a different data set. The term “pointing device” may refer to a pointer implemented as an electronic pointing device, as described earlier. The term “determine” may refer to performing a computation, or calculation to arrive at a conclusive or decisive outcome. The term “cursor drag-and-drop movement” may refer to interfacing with displayed content using a pointing device to control a cursor. A cursor may be a movable graphic indicator on a display identifying the point or object affected by user input. For example, a user may select a virtual object by controlling the pointing device to position the cursor on the virtual object and pushing a button of the pointing device. The user may move (e.g., drag) the selected object to a desired location by moving the pointing device while the object is selected (e.g., while pressing the button). The user may position the selected object at the desired location by releasing (e.g., dropping) the selection via the pointing device (e.g., by releasing the button). The combination of these actions may be interpreted by the at least one processor as a cursor drag-and-drop movement. The pointing device may include one or more sensors to detect a push and/or release of a button of the pointing device, and one or more motion sensors to detect dragging a selected object to a desired location. An input received from a pointing device may include electronic signals (e.g., caused by pressing or releasing a button, or a motion of a roller ball), IR, ultrasound, radio (e.g., Bluetooth, Wi-Fi), IMU, and any other type of signal indicating selection, dragging, and dropping by a pointing device. The one or more sensors may convert the input to an electronic signal and provide the electronic signal to the at least one processor, as described earlier. The term “location” may refer to a position or region, e.g., inside a larger area. 
A location may be relative to a physical and/or virtual object. For example, the location outside the virtual display may be relative to the wearable extended reality appliance, to an absolute coordinate system (e.g., GPS), to a physical object (e.g., a desk or wall), to virtual content such as the virtual display, or to any other reference. In one example, the cursor drag-and-drop movement may be indicative of a desire of the user to cause the specific virtual object to move from the virtual display to the extended reality environment. In one example, the cursor drag-and-drop movement may be indicative of the specific virtual object and/or of the location outside the virtual display. As an example, a user may maneuver a pointing device to move a specific virtual object to a different location external to the virtual display. The user may control the pointing device to position the cursor on a virtual object, press a button of the pointing device to select the virtual object, move the pointing device while pressing the button to reposition the selected object to a new location external to the virtual display, and release the pressed button to drop the virtual object at the new location. The pointing device may provide inputs to the at least one processor indicative of the cursor position, button press, dragging motion, and button release. The at least one processor may analyze the inputs to determine a cursor drag-and-drop movement by the pointing device relocating the virtual object to the new location, e.g., outside the virtual display. In response, the processing device may display the virtual object at the new location. By way of a non-limiting example, inFIGS.10and11, user1016may maneuver electronic mouse1022to move virtual house plant1014to a location external to virtual display1002. For example, user1016may use electronic mouse1022to maneuver a cursor over virtual house plant1014and push a button of electronic mouse1022to turn the focus thereon. While the focus is on virtual house plant1014, user1016may move (e.g., drag) electronic mouse1022to cause a corresponding movement by virtual house plant1014. When virtual house plant1014is positioned on desk top1020, e.g., external to virtual display1002, user1016may release the button to position (e.g., drop) virtual house plant1014on desk top1020. Throughout the maneuvering by user1016, electronic mouse1022may provide signals indicating any movements, button presses and releases as inputs to processing device460(FIG.4), e.g., as pointer input331via input interface330. Processing device460may analyze the inputs and determine a cursor drag-and-drop movement of electronic mouse1022corresponding to a repositioning virtual house plant1014from inside virtual display1002(e.g., as shown inFIG.10) to desk top1020, external to virtual display1002(e.g., as shown inFIG.11), and may update the position of virtual house plant1014, accordingly. Some embodiments further include analyzing movement of the pointer to determine a selection of an option in a menu bar associated with the specific virtual object. The term “selection” may refer to picking or choosing an object, e.g., from one or more objects. Selecting an object via a pointing device may turn the focus on the selected object such that subsequent input affects the selected object. The term “menu bar” may refer to a graphical control element including one or more selectable items, values, or other graphical widgets (e.g., buttons, checkboxes, list boxes, drop down lists, and pull-down lists). 
For example, one menu may provide access to functions for interfacing with a computing device and another menu may be used to control the display of content. A menu bar may include multiple drop-down menus that normally hide the list of items contained in the menu. Selecting a menu (e.g., using the pointer) may display the list of items. The term “option in a menu bar” may refer to a specific menu item displayed in the menu bar. The term “associated with” may refer to linked or affiliated with or tied or related to. In one example, the option in the menu bar may be indicative of a desire of the user to cause the specific virtual object to move from the virtual display to the extended reality environment. In one example, the option in the menu bar may be indicative of a desired position in the extended reality environment for the specific virtual object. As an example, a user may use a pointer to select an option on a menu bar associated with a specific virtual object, (e.g., as pointer input331via input interface330ofFIG.3). The option may allow altering the display of the virtual object (e.g., to enlarge, shrink, move, hide, or otherwise change the display of the virtual object). The at least one processor may analyze the movements of the pointer to detect the selection of the option and execute a corresponding action. Some embodiments involve in response to receiving the input, generating a presentation of a version of the specific virtual object in the extended reality environment at a third virtual distance from the wearable extended reality appliance, wherein the third virtual distance differs from the first virtual distance and the second virtual distance. The term “presentation of a version of the specific virtual object” may refer to another rendition or depiction of the specific virtual object. The version of the specific virtual object may be presented alongside or to replace the specific virtual object. The term “differs” may refer to being distinguished or distinct from, or otherwise dissimilar. The term “third virtual distance” may be interpreted in a manner similar to the interpretation of first distance and second distance describe earlier. For example, the version of the specific virtual object may be displayed (e.g., presented) to appear identical to, similar to, or different from the specific virtual object, e.g., the version of the specific virtual object may be a smaller or larger replica of the specific virtual object. As another example, the version of the specific virtual object may appear identical or similar to the specific virtual object but may be displayed in a different location, e.g., the specific virtual object may be displayed inside a virtual display, whereas the version of the specific object may appear identical but may be displayed external to the virtual display. As another example, the orientations or angular distances of the specific virtual object and the version of the specific virtual object relative to the user and/or the wearable extended reality appliance may be the same or different. As another example, the version of the virtual object may be presented inside the field-of-view of the wearable extended reality appliance, outside the field-of-view, or partially inside and partially outside the field-of-view. 
As a further example, the version of the specific virtual object may be rendered differently, e.g., using different colors, resolution, or a different coordinate system, e.g., the specific virtual object may be displayed as a two-dimensional (e.g., simplified) object, and the version of the specific virtual object may be presented as a three-dimensional life-like object. Thus, upon receiving an input to move a widget (e.g., specific virtual object) from the virtual display), the wearable extended reality appliance may generate a version of the widget and display the version at a virtual distance different than the virtual distance to the virtual display and to the additional virtual object. By way of a non-limiting example, InFIG.10, smart glasses1006may visually present virtual display1002and virtual mobile phone1026at virtual distances D1 and D2, respectively, from user1016(e.g., measured with respect to 3D coordinate system1028). User1016may perform a gesture corresponding to a request to move virtual house plant1014to a location in extended reality environment1004, external to virtual display1002. The gesture may be detected via image sensor474(FIG.4). Processing device460may analyze image data acquired by image sensor474to identify the gesture as a user input. InFIG.11, in response to the user input, processing device460may obtain a version1014A of virtual house plant1014(e.g., by retrieving version1014A from memory device411). Processing device460may display version1014A to appear as though resting on desk top1020at a distance D3 from user1016, where D3 differs from D1 and D2 (e.g., with respect to 3D coordinate system1028). While the example ofFIG.11shows version1014A replacing virtual house plant1014, in some implementations, version1014A may be displayed alongside (e.g., concurrently with) virtual house plant1014. In some implementations, version1014A may be displayed differently (e.g., larger/smaller, higher/lower resolution, modified color scheme) than virtual house plant1014, e.g., virtual house plant1014may be rendered as a 2D graphic image, and version1014A may be rendered as a 3D graphic image. In some embodiments, when the specific virtual object is a window including a group of control buttons in a particular area of the window, the group of control buttons include at least a control button for minimizing the window, a control button for maximizing the window, a control button for closing the window, and a control button for moving the window outside the virtual display; and wherein the input includes an activation of the control button for moving the window outside the virtual display. The term “window” may refer to a graphic control element (e.g., 2D or 3D) providing a separate viewing area on a display screen. A window may provide a single viewing area, and multiple windows may each provide a different viewing area. A window may be part of a graphical user interface (GUI) allowing users to input and view output and may include control elements, such as a menu bar along the top border. A window may be associated with a specific application (e.g., text editor, spread sheet, image editor) and may overlap or be displayed alongside other windows associated with the same, or different applications. A window may be resized (e.g., widened, narrowed, lengthened, or shortened), opened (e.g., by double clicking on an icon or menu item associated with the window), or closed (e.g., by selecting an “X” control element displayed at a corner of the window. 
The term “control button” may refer to a graphic element that invokes an action upon selection (e.g., via a pointing device, keystroke, or gesture). For example, an operating system may receive a notification when the user selects a control button and may schedule a processing device to execute a corresponding action. The term “group of control buttons” may refer to a collection of one or more control buttons. The term “particular area of the window” may refer to a specific region within a window graphic control element, e.g., the group of control buttons may be located in a specific region of a window, such as across the top, or along a side as a menu bar. The term “minimizing the window” may refer to collapsing the window so as to hide the window from view while allowing an application associated with the window to continue running. A minimized window may appear at the bottom of a display as an icon inside a task bar. The term “maximizing the window” may refer to expanding the window to occupy some or all of the display screen. The term “closing the window” may refer to removing the window from a display screen and halting the execution of the associated application. The term “moving the window” may refer to changing the position of a window in a display screen. As an example, a window may be dragged up/down, right/left or diagonally across a two-dimensional display. In a 3D-environment, such as an extended reality environment generated by a wearable extended reality appliance, a window may be additionally or alternatively dragged inwards/outwards. The term “activation of the control button for moving the window outside the virtual display” may be understood as selecting the control button to invoke an action that relocates the window external to the virtual display. As an example, a virtual display may include a window containing a virtual document. The window may be sized to fit inside the virtual display alongside other virtual objects and may include a menu bar with control buttons to minimize, maximize, close, and move the window. A user wishing to read and edit the virtual document may select the control button to move the window out of the virtual display, to display the window closer and larger (e.g., using a larger font size). By way of a non-limiting example, reference is made toFIG.12, which illustrates the exemplary environment ofFIGS.10and11(e.g., generated by system200) where the content includes a window having a control button for moving content between a virtual display and an extended reality environment, consistent with some embodiments of the present disclosure.FIG.12is substantially similar toFIGS.10and11with the noted difference that virtual display1002may present a window1200associated with a text editing application. Window1200may include a group1202of control buttons at the top region of window1200. From right to left, group1202of control buttons may include buttons for closing, maximizing, and minimizing window1200, and additionally a control button1204for moving window1200outside of virtual display1002. User1016may activate control button1204by performing a pointing gesture (e.g., captured as image data via image sensor472ofFIG.4), using a mouse cursor, using a keyboard, and so forth. In response, processing device460may display version1200A of window1200external to virtual display1002. Some embodiments involve causing, in response to receiving the input, a presentation of the specific virtual object to be removed from the virtual display.
The term “removed” may refer to eliminated or erased. Thus, in response to the input, the wearable extended reality appliance may display a version (e.g., copy) of the specific object in a region external to the virtual display and may remove the presentation of the virtual object from inside the virtual display. By way of a non-limiting example, inFIG.11, in response to receiving an input from user1016, processing device460(FIG.4) may display version1014A of virtual house plant1014(FIG.10) to appear as though resting on desk top1020(e.g., external to virtual display1002), and remove virtual house plant1014from being displayed inside virtual display1002. Some embodiments involve causing, in response to receiving the input, simultaneous presentations of the specific virtual object on the virtual display and the version of the specific virtual object at another location in the extended reality environment. The term “simultaneous” may refer to concurrent, or at substantially the same time. The term “another location” may refer to a separate location, e.g., different from an original location. Thus, in response to the input, the wearable extended reality appliance may display a version (e.g., copy) of the specific object in a region external to the virtual display concurrently with displaying the virtual object inside the virtual display. By way of a non-limiting example, reference is made toFIG.13, which illustrates the exemplary environment ofFIGS.10and11where a specific virtual object is displayed inside a virtual display concurrently with a version of the specific object displayed external to the virtual display, consistent with some embodiments of the present disclosure. In response to receiving a gesture input from user1016to display virtual house plant1014external to virtual display1002, processing device460(FIG.4) may display version1014A to appear as though resting on desk top1020at a distance D3 from smart glasses1006, concurrently with displaying virtual house plant1014inside virtual display1002at a distance D1 from smart glasses1006. As an example, virtual house plant1014may be a two-dimensional icon, and version1014A may be a realistic three-dimensional rendition of a house plant. Some embodiments involve determining the third virtual distance for presenting the version of the specific virtual object. The at least one processor may determine the third virtual distance based on, for example, criteria relating to the extended reality environment (e.g., virtual and/or physical considerations), criteria relating to the wearable extended reality appliance (e.g., device considerations), criteria relating to the communications network (e.g., bandwidth considerations), or any other criteria. For example, ambient light and the presence of obstructing objects may be relevant for determining the third virtual distance. As another example, the type of virtual object may be used to determine the third virtual distance, e.g., a text document may be displayed closer to allow editing, and a decorative virtual object may be displayed further. By way of a non-limiting example, inFIG.11, processing device460(FIG.4) may determine virtual distance D3 for presenting version1014A of virtual house plant1014. The virtual distance may be based on the distance between user1016and desk top1020. In some embodiments, the determination of the third virtual distance is based on at least one of the first virtual distance or the second virtual distance.
The term “based on” may refer to established or founded upon, or otherwise derived from. For example, the at least one processor may determine the third virtual distance (e.g., the distance for displaying the version of the virtual object extracted from the virtual display) based on one or more of the other distances to the virtual display (e.g., the first virtual distance) and the additional virtual object (e.g., the second virtual distance). The third virtual distance may be determined to avoid obstruction by the virtual display and/or the additional virtual object. The determined third virtual distance may be greater or smaller than one or both of the first and second virtual distances, or a combination (e.g., Euclidian distance) of the first and second virtual distances. As an example, the version presented at the third virtual distance may have the same height along the vertical plane but may differ along the horizontal plane. In some examples, the third virtual distance may be a mathematical function of the first virtual distance and/or the second virtual distance. For example, the mathematical function may be a linear function, may be a non-linear function, may be a polynomial function, may be an exponential function, may be a multivariate function, and so forth. By way of a non-limiting example, inFIG.11, processing device460(FIG.4) of smart glasses1006may determine virtual distance D3 for displaying version1014A of virtual house plant1014based on virtual distance D1 between virtual display1002and user1016, and/or based on virtual distance D2 between virtual mobile phone1026and user1016. Processing device460may determine virtual distance D3 so that version1014A of virtual house plant1014is not obstructed by virtual mobile phone1026and/or virtual display1002. In some embodiments, the determination of the third virtual distance is based on a type of the specific virtual object. The term “type of the specific virtual object” may refer to a category or classification of the specific virtual object. For example, a virtual object may be classified according to data type (e.g., text, image, video), data size (e.g., related to communications bandwidth, processing, and/or memory requirements), spatial size, whether the specific virtual object is 2D or 3D, whether the specific virtual object is interactive, transparency (e.g., displayed as semi-transparent or opaque), use (e.g., read-only, or editable), priority (e.g., urgent messages or work-related documents versus low priority ornamental objects), security (e.g., proprietary or privileged access), or any other criterion for determining a type for a virtual object. By way of a non-limiting example, inFIG.11, processing device460(FIG.4) may determine virtual distance D3 to version1014A of virtual house plant1014based on virtual house plant1014being a decorative virtual object. Thus, D3 may be determined to be further from user1016than distance D1 to virtual display1002. In some embodiments, the determination of the third virtual distance is based on a physical object in the extended reality environment. The term “physical object in the extended reality environment” may refer to a real (e.g., tangible) article or item. For example, a physical object may be a tangible (e.g., real) bookcase, wall, or floor, a light source (e.g., a window, or light fixture), a person, or an animal. The physical object may be stationary or in motion, fully or partially opaque, or transparent. The physical object may be detected by analyzing data acquired via a sensor (e.g., via sensors interface470ofFIG.4).
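By way of a non-limiting illustration only, the following Python sketch shows one possible mathematical function of the first and second virtual distances that yields a third virtual distance differing from both; the function name and the 0.3-unit minimum separation are assumptions chosen for illustration, not a disclosed implementation.

```python
def choose_third_virtual_distance(d1, d2, min_separation=0.3):
    """Pick a third virtual distance that differs from the distance to the
    virtual display (d1) and the distance to the additional virtual object (d2).

    A simple heuristic: start from a combination of d1 and d2 and nudge the
    candidate outward until it is at least `min_separation` away from both,
    so the version is not presented at the same depth as existing content.
    """
    candidate = (d1 + d2) / 2.0          # one possible function of d1 and d2
    while abs(candidate - d1) < min_separation or abs(candidate - d2) < min_separation:
        candidate += min_separation      # move outward until it clears both distances
    return candidate

# Example: virtual display at 1.2 units, additional virtual object at 1.5 units.
d3 = choose_third_virtual_distance(1.2, 1.5)
assert d3 != 1.2 and d3 != 1.5
```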
For example, image or IR data may be acquired via an image sensor (e.g., image sensor472and/or372ofFIG.3), motion data may be acquired via a motion sensor (e.g., motion sensor473and/or373), ultrasound and/or other data may be acquired via other sensors (e.g., other sensors475and/or375). For example, image data captured using at least one image sensor may be analyzed using an object detection algorithm to detect the physical object. Thus, the wearable extended reality appliance may determine the distance for displaying the specific virtual object based on one or more physical items present in the extended reality environment, e.g., to prevent obstruction. As another example, the physical object may be used to scale the virtual object (e.g., to appear closer to or further from the wearable extended reality appliance). In some examples, the determination of the third virtual distance may be based on at least one of a distance to the physical object, a position of the physical object, a size of the physical object, a color of the physical object, or a type of the physical object. In one example, the physical object may include a surface (such as a table including a table top surface), and the third virtual distance may be selected to position the version of the specific virtual object on a central portion of the surface. In one example, the third virtual distance may be selected to be shorter than a distance to the physical object, for example to make the specific virtual object hide at least part of the physical object. In one example, the third virtual distance may be selected to be longer than a distance to the physical object, for example to make at least part of the specific virtual object hidden by the physical object. In one example, the third virtual distance may be selected to be similar to a distance to the physical object, for example to make the specific virtual object appear side by side with the physical object. By way of a non-limiting example, inFIG.11, processing device460(FIG.4) of smart glasses1006may detect physical desk top1020inside extended reality environment1004. Upon receiving an input to extract virtual house plant1014from virtual display1002, processing device460may determine to display version1014A of virtual house plant1014to appear as though resting on desk top1020. Processing device460may determine virtual distance D3 separating version1014A from smart glasses1006based on the location of desk top1020. Some embodiments involve determining a position for presenting the version of the specific virtual object in the extended reality environment. The term “position” (e.g., for an object in the extended reality environment) may refer to a distance (e.g., relative to a physical and/or virtual object with respect to a 2D or 3D coordinate system) and/or an orientation, bearing, or pose of the object. For example, the position may determine where in the 3D space to display an object, as well as an angular orientation for the object (e.g., turned backwards, upside-down, rotated by an angle). Thus, in response to receiving an input to remove a virtual object from the virtual display, the at least one processor may determine the location and/or orientation, pose, or bearing for the version of the virtual object outside the virtual display.
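By way of a non-limiting illustration only, the following Python sketch derives a third virtual distance and anchor point from a detected physical object, covering the on-surface, in-front, behind, and side-by-side relations described above; the function name, scaling factors, and offsets are hypothetical assumptions for illustration.

```python
def place_relative_to_physical_object(object_distance, object_center, relation="on_surface"):
    """Choose a third virtual distance and anchor point from a detected physical object.

    `object_distance` is the measured distance from the appliance to the object,
    `object_center` is its central point in the appliance's coordinate system.
    The relation controls whether the version appears on, in front of, behind,
    or beside the physical object.
    """
    if relation == "on_surface":
        return object_distance, object_center            # rest the version on the surface center
    if relation == "in_front":
        return object_distance * 0.8, object_center      # version hides part of the object
    if relation == "behind":
        return object_distance * 1.2, object_center      # object hides part of the version
    if relation == "beside":
        x, y, z = object_center
        return object_distance, (x + 0.5, y, z)          # side by side, similar distance
    raise ValueError(f"unknown relation: {relation}")

# Example: a desk top detected 1.0 unit away, version placed on its central portion.
print(place_relative_to_physical_object(1.0, (0.0, -0.4, 1.0), relation="on_surface"))
```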
The position may be determined based on other objects (virtual and/or real) in the extended reality environment, on environmental conditions (e.g., ambient light, noise, or wind), on the size and/or shape of the extended reality environment, and any other criterion for determining a position for presenting virtual content. For example, the processor may determine a position of the virtual object in front of, behind, or to one side of another virtual or real object in the extended reality environment. By way of a non-limiting example, inFIG.11, processing device460(FIG.4) may determine a position in extended reality environment1004for presenting version1014A of virtual house plant1014. Processing device460may determine the distance D3 between version1014A of virtual house plant1014and smart glasses1006as a three-dimensional diagonal with respect to 3D coordinate system1028. Processing device460may additionally determine an orientation for version1014A of virtual house plant1014, e.g., with respect to one or more axes of 3D coordinate system1028. Processing device460may determine to orient version1014A of virtual house plant1014such that the pot portion is below the leaves portion (e.g., rotation about the x-axis and z-axis), and to present the largest leaves in the direction of user1016(e.g., rotation about the y-axis). In some embodiments, the determination of the position is based on a physical object in the extended reality environment. For example, the wearable extended reality appliance may determine the position for the version of the specific virtual object based on one or more physical items, e.g., to avoid obstruction by the physical items and/or to integrate the specific virtual object with the physical environment surrounding the user. As another example, the wearable extended reality appliance may determine an angular position or orientation for presenting the version of the specific virtual object in the extended reality environment, e.g., relative to the wearable extended reality appliance and/or the user. In one example, the position for presenting the version of the specific virtual object may be determined based on a third virtual distance, and the third virtual distance may be determined as described earlier. In one example, image data captured using at least one image sensor and/or 3D data captured using Lidar may be analyzed using an object localization algorithm to detect a position of the physical object. Further, the position for presenting the version of the specific virtual object may be a mathematical function of the position of the physical object. In one example, image data captured using at least one image sensor may be analyzed using an object recognition algorithm to identify a type of the physical object. Further, the position for presenting the version of the specific virtual object may be determined based on the type of the physical object. By way of a non-limiting example, inFIG.11, processing device460(FIG.4) may detect the presence of physical desk top1020in extended reality environment1004(e.g., via image sensor472). Processing device460may determine that desk top1020may provide a suitable surface for displaying version1014A of virtual house plant1014in a manner that integrates virtual content with the real (e.g., physical) environment surrounding user1016. 
Processing device460may determine the position for displaying version1014A of virtual house plant1014to appear as though resting on desk top1020without colliding with keyboard1018and electronic mouse1022(e.g., physical objects). In some embodiments, generating the presentation of the version of the specific virtual object in the extended reality environment includes implementing a modification to the specific virtual object. The term “implementing a modification” may refer to adjusting or changing one or more attributes of the virtual object, e.g., for presenting the specific virtual object. As an example, the modification may include adding or removing an audio presentation (e.g., accompanying sound) for a displayed object, or replacing a displayed object with an audible presentation. As another example, the modification may include adding or removing a haptic response associated with interfacing with the virtual object. As another example, the adjusted attributes may affect the display or appearance of the version of the specific virtual object. For example, such adjusted attributes may include one or more of size, magnification, texture, color, contrast, and other attributes associated with an appearance of the virtual object in the extended reality environment. In yet another example, the modification may include modifying a 2D specific virtual object into an associated 3D virtual object. In an additional example, the modification may include modifying at least one of a color scheme, a size, or an opacity associated with the specific virtual object. In another example, the modification may include expanding unexpanded elements of the specific virtual object. For example, the specific virtual object may include a plurality of menus or sections. While in the virtual display, expanding the menus or sections may create clutter due to the limited size within the virtual display. When the specific virtual object is displayed outside the virtual display, the specific virtual object may spread over a larger area, and thus the menus or sections may be expanded without creating clutter. By way of a non-limiting example, inFIG.10, virtual house plant1014(e.g., the specific virtual object) is displayed inside virtual display1002as a simplified two-dimensional drawing or icon. Upon receiving an input to extract virtual house plant1014from virtual display1002, processing device460(FIG.4) may modify the appearance of virtual house plant1014, e.g., by retrieving from a memory (e.g., data structure212ofFIG.2) version1014A of virtual house plant1014, which may be a 3D rendition of a physical house plant generated, for example, by combining multiple high-resolution 2D images of the physical house plant. In some embodiments, the modification to the specific virtual object is based on a type of the specific virtual object. For example, the wearable extended reality appliance may modify how the specific virtual object is presented outside the virtual display based on the object type. Examples of object types include, for example, documents, icons, charts, graphical representations, images, videos, animations, chat bots, or any other category or sub-category of visual display. The object type may indicate how the user wishes to consume or use the content, e.g., a document intended for editing may be enlarged when extracted from the virtual display, and an object displayed inside the virtual display in 2D may be converted to a 3D rendition outside the virtual display.
Other examples may include displaying a messaging app as an icon inside the virtual display and adding multiple control elements outside the virtual display, displaying a graphic image using low saturation inside the virtual display and with a higher saturation outside the virtual display, presenting a music widget visually inside the virtual display and audibly outside the virtual display, and presenting a video widget as a stationary image inside the virtual display and as an animated video outside the virtual display. By way of a non-limiting example, inFIG.11, processing device460(FIG.4) may determine, based on virtual house plant1014ofFIG.10being an ornamental object, to modify the appearance when displayed outside of virtual display1002to a 3D version1014A of a house plant. As another example, inFIG.12, upon detecting an input to move window1200from virtual display1002, processing device460may determine that window1200includes an editable document and may increase the size of version1200A of window1200. In some embodiments, the modification to the specific virtual object includes changing a visual appearance of the specific virtual object. The term “changing a visual appearance of the specific virtual object” may refer to adjusting a display characteristic of the virtual object. For example, the modification may change a color, color scheme (e.g., black/white, grey scale, or full color gamut), resolution, size, scaling, transparency, opacity, saturation, intensity, or any other display characteristic of the virtual object. As another example, the modification may convert a 2D image to a 3D rendition, or vice-versa. As another example, the specific virtual object (e.g., displayed inside the virtual display) may be a simplified representation of the virtual object (e.g., such as an icon or 2D drawing) and the version of the specific virtual object displayed external to the virtual display may include additional details and resolution (e.g., based on one or more high resolution images acquired of a physical object representative of the icon or 2D drawing). By way of a non-limiting example, inFIG.11, to generate version1014A of virtual house plant1014(e.g., shown as a 2D drawing inside virtual display1002), processing device460(FIG.4) may convert the 2D drawing to 3D version1014A. In some embodiments, the modification to the specific virtual object includes changing a behavior of the specific virtual object. The term “changing a behavior of the specific virtual object” may refer to modifying or adjusting interactive aspects of the specific virtual object, e.g., with respect to a user, a device, a software application, or any other entity interfacing with the virtual object. For example, a graphic image may be displayed as a static image inside the virtual display, and as an animated image (e.g., GIF) outside the virtual display. As another example, a text document may be read-only inside the virtual display and may be editable outside the virtual display. As another example, control elements such as buttons and text boxes may be inactive inside the virtual display and active when displayed external to the virtual display.
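By way of a non-limiting illustration only, the following Python sketch maps an object type to the kind of modification described above (enlarging documents for editing, converting decorative 2D icons to 3D renditions, expanding widget menus); the dictionary keys and type labels are hypothetical assumptions, not a disclosed implementation.

```python
def modify_for_extraction(virtual_object):
    """Return a modified version of a virtual object when it leaves the virtual display.

    The modification depends on the object's type: documents grow for editing,
    decorative 2D icons become 3D renditions, and widgets expand their menus.
    """
    obj_type = virtual_object.get("type")
    version = dict(virtual_object)                    # start from a copy of the object
    if obj_type == "document":
        version["scale"] = virtual_object.get("scale", 1.0) * 1.5   # larger font for editing
        version["editable"] = True
    elif obj_type == "decorative":
        version["render_mode"] = "3d"                 # replace the 2D icon with a 3D rendition
        version["resolution"] = "high"
    elif obj_type == "widget":
        version["menus_expanded"] = True              # more room outside the virtual display
        version["audio_enabled"] = True
    return version

icon = {"type": "decorative", "render_mode": "2d"}
print(modify_for_extraction(icon)["render_mode"])     # -> "3d"
```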
As another example, when the specific virtual object is presented in the virtual display, menus and/or sections of the specific virtual object may be automatically minimized when not in use (for example to minimize clutter, as described earlier), and when the specific virtual object is presented outside the virtual display, the menus and/or sections may be displayed in an expanded form even when not in use. By way of a non-limiting example, inFIG.12, window1200may be read-only while displayed inside virtual display1002. Upon receiving an input from user1016to move window1200out from virtual display1002(e.g., by selecting control button1204), processing device460may display version1200A external to virtual display1002. Version1200A of window1200may be editable by user1016, e.g., using a pointing device or hand gestures. Some embodiments involve receiving an additional input for causing another virtual object from the group of virtual objects to move from the virtual display to the extended reality environment; and in response to receiving the additional input, generating a presentation of a version of the another virtual object in the extended reality environment at a fourth virtual distance from the wearable extended reality appliance, wherein the fourth virtual distance differs from the first virtual distance, the second virtual distance, and the third virtual distance. The term “additional input” may refer to a separate input, e.g., different than the input associated with the specific virtual object (e.g., the first input). The inputs may be via the same or different medium. For example, a gesture input may be used to move a first virtual object (e.g., the specific virtual object) from the virtual display and an electronic pointer or voice command may be used to move a second virtual object. The additional input may be received at the same time as or a different time than (e.g., after) the first input, and/or may be associated with a different virtual object or group of objects. The term “in response to receiving the additional input” may refer to in reaction to, or consequent to receiving the additional input. The term “fourth distance” may be interpreted in a manner similar to the interpretation of first distance, second distance, and third distance described earlier. Thus, the at least one processor may be responsive to multiple different inputs for manipulating and/or modifying the presentation of the same or different virtual objects in the extended reality environment and may respond to the different inputs accordingly. As an example, the at least one processor may respond to the additional input by moving a virtual object targeted by the additional input from the virtual display to a different location in the extended reality environment, e.g., that does not collide or overlap with another virtual object. By way of a non-limiting example, reference is now made toFIG.14, which illustrates the exemplary system ofFIGS.10and11where an additional virtual object included inside the virtual display is moved external to the virtual display, consistent with some embodiments of the present disclosure.FIG.14is substantially similar toFIG.11with the noted difference that, in addition to version1014A of virtual house plant1014located external to virtual display1002, virtual workspace1012is moved outside of virtual display1002(e.g., indicated as version1012A of virtual workspace1012).
After performing a pointing gesture to move virtual house plant1014out from virtual display1002, user1016may use a voice command to move virtual workspace1012out of virtual display1002. The voice command may be detected by a microphone (e.g., audio sensor472ofFIG.4) configured with smart glasses1006. In response, processing device460may determine a distance D4 (e.g., a fourth virtual distance), smaller than distances D1, D2, and D3, for displaying version1012A of virtual workspace1012, e.g., to facilitate reading. Some embodiments involve identifying a trigger for halting the presentation of the version of the specific virtual object in the extended reality environment and for presenting the specific virtual object on the virtual display. The term “identifying” may refer to recognizing, perceiving, or otherwise determining or establishing an association. A processing device may identify a type of input by parsing the input and performing one or more comparisons, queries, or inference operations. The term “trigger” may refer to an event, occurrence, condition, or rule outcome that provokes, causes, or prompts something, in this instance the halting of the presentation. For example, a warning signal that a device is overheating may trigger the device to shut down. The term “halting” may refer to pausing, delaying, ceasing, or terminating, e.g., an execution of an application. Thus, the at least one processor may identify a received input as a prompt (e.g., trigger) to cease presenting the version of the specific virtual object external to the virtual display and revert to presenting the specific virtual object inside the virtual display. The input may be provided as a gesture, via a pointing device or keyboard, as a voice command, or any other user interfacing medium. As an example, the user may drag the version of the specific virtual object back into the virtual display, delete or close the version of the specific virtual object, or perform any other operation to cease presenting the version of the specific virtual object outside the virtual display. In some examples, the trigger may be or include a person approaching the user. For example, a person approaching the user may be identified as described earlier. In some examples, the trigger may be or may include an interaction between the user and another person (e.g., a conversation). For example, audio data captured using an audio sensor included in the wearable extended reality appliance may be analyzed using speech recognition algorithms to identify a conversation of the user with another person. By way of a non-limiting example, reference is made toFIG.15, which illustrates the exemplary system ofFIGS.10and11where a trigger is identified for halting a presentation of content external to the virtual display, consistent with some embodiments of the present disclosure.FIG.15is substantially similar toFIG.11with the noted difference that user1016is pointing to version1014A of virtual house plant1014. Processing device460(FIG.4) may identify the pointing gesture as a trigger to halt presenting version1014A of virtual house plant1014on desk top1020(e.g., in extended reality environment1004), and to revert to presenting virtual house plant1014inside virtual display1002, e.g., as illustrated inFIG.10. In some embodiments, the trigger includes at least one of: an additional input, a change in operational status of the wearable extended reality appliance, or an event associated with predefined rules.
For example, the additional input may originate from the user, from the wearable extended reality appliance, or from another computing device (e.g., a peripheral device, server, mobile phone). The term “operational status of the wearable extended reality appliance” may refer to a functioning or working state of the wearable extended reality appliance. Examples of operational status of the wearable extended reality appliance may include a battery level, an amount of available memory, computation or communications capacity, latency (e.g., for communications, processing, reading/writing), a temperature (e.g., indicating overheating of an electronic component), a setting of a switch (e.g., hardware and/or software switch) relating to the operation of the wearable extended reality appliance, and any other parameter affecting the operation of the wearable extended reality appliance. The term “change in operational status” may refer to a development that alters the functioning of the wearable extended reality appliance, e.g., a battery level may become low, electronic components may overheat, memory buffers and communications channels may overflow. The term “predefined rules” may refer to a set of guidelines, regulations, or directives specified in advance, e.g., to govern the operation of the wearable extended reality appliance. Predefined rules may include general rules, and/or rules defined for a specific device, user, system, time, and/or context. The predefined rules may be stored in a database on a memory device (e.g., local and/or remote) and accessed via query. The memory device may be accessible only for privileged users (e.g., based on a device and/or user ID) or generally accessible. For example, one rule may increase the display intensity when ambient light exceeds a threshold, another rule may reorganize the display of content when the number of displayed virtual objects exceeds a threshold, and a third rule may invoke a second application in reaction to invoking a first application. The term “event associated with predefined rules” may refer to data or signal that is generated or received indicating a circumstance affiliated with one or more predefined rules. Examples of events may include an ambient light warning, a clutter event indicating the number of displayed virtual objects exceeds a threshold, a user input to move an object, or any other occurrence triggering a corresponding action. The at least one processor may handle each event in compliance with an associated rule. As an example, an ambient light warning may cause content to be displayed using an increased intensity, a clutter event may cause some content to be minimized, and a user input may cause the object to be displayed in a different location. Thus, the trigger received by the at least one processor (e.g., to halt the presentation of the version of the specific virtual object) may include one or more of an input, a notification indicating a change in the operating state of the wearable extended reality appliance, or an event affiliated with one or more rules. For example, the trigger may be a user pointing to close a virtual object, a voice command to move a virtual object back to the virtual display (e.g., additional inputs), a timeout event relating to the execution of a procedure (e.g., a change in operational status), or a clutter warning that the displayed virtual content exceeds a threshold. 
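By way of a non-limiting illustration only, the following Python sketch evaluates the three kinds of triggers described above (an additional input, a change in operational status, and an event associated with predefined rules); the field names and threshold values are hypothetical assumptions, not a disclosed implementation.

```python
def is_halt_trigger(event, appliance_status, rules):
    """Decide whether an event should halt the external presentation and
    return the specific virtual object to the virtual display.

    `event` describes an input or notification, `appliance_status` holds
    operational parameters, and `rules` maps event kinds to thresholds.
    """
    # 1. An explicit additional input from the user (gesture, voice, pointer).
    if event.get("kind") == "user_input" and event.get("intent") == "return_to_display":
        return True
    # 2. A change in operational status of the wearable extended reality appliance.
    if appliance_status.get("battery_level", 1.0) < rules.get("min_battery", 0.05):
        return True
    if appliance_status.get("temperature_c", 0) > rules.get("max_temperature_c", 85):
        return True
    # 3. An event associated with predefined rules (e.g., a clutter threshold).
    if event.get("kind") == "clutter" and event.get("object_count", 0) > rules.get("max_objects", 10):
        return True
    return False

rules = {"min_battery": 0.05, "max_temperature_c": 85, "max_objects": 10}
print(is_halt_trigger({"kind": "clutter", "object_count": 12}, {"battery_level": 0.8}, rules))  # -> True
```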
By way of a non-limiting example, inFIG.15, processing device460may receive a pointing gesture (e.g., additional input) from user1016, and may identify the pointing gesture as a trigger to halt the presentation of version1014A of virtual house plant1014on desk top1020. As another example, processing device460may receive a latency warning from network interface420as a trigger to halt the presentation of version1014A. As another example, processing device460may receive an ambient light warning from image sensor472as a trigger to halt the presentation of version1014A. Some embodiments involve, after generating the presentation of the version of the specific virtual object in the extended reality environment: while a focus of an operating system controlling the group of virtual objects is a particular virtual object presented in the virtual display, receiving a first input for task switching from a keyboard; and in response to receiving the first input for task switching, causing the focus of the operating system to switch from the particular virtual object presented in the virtual display to the version of the specific virtual object presented in the extended reality environment; while the focus of the operating system is the version of the specific virtual object presented in the extended reality environment, receiving a second input for task switching from a keyboard; and in response to receiving the second input for task switching, causing the focus of the operating system to switch from the version of the specific virtual object presented in the extended reality environment to another virtual object presented in the virtual display. The term “operating system” may refer to system software governing hardware and/or software resources of a computing device and providing services, such as resource allocation, task scheduling, memory storage and retrieval, and other administrative services. For example, a wearable extended reality appliance may be configured with an operating system to manage system resources needed to generate the extended reality environment. Referring toFIG.4, the operating system may allocate space in memory device411, schedule processing time for processing device460, schedule the sending and receiving of data via network interface420, manage event listeners for receiving notifications via sensors interface470, output interface450, and input interface430, and perform additional tasks needed by the wearable extended reality appliance to generate the extended reality environment. The term “operating system controlling the group of virtual objects” may refer to the operating system administering computing resources of a computing device, such as a wearable extended reality appliance, for presenting virtual objects. The term “focus of an operating system” may refer to an element in a graphical user interface that is currently designated by the operating system as active. The operating system may allocate resources (e.g., stack, queue, and buffer memory) and schedule processing time such that user inputs received while a specific graphical element is in focus affect the specific graphical element. For example, if a user inputs a move, maximize, or minimize instruction while a specific graphical element is in focus, the move, maximize or minimize instruction may be implemented with respect to that specific graphical element. In another example, if a user enters text (for example, through voice, through a virtual keyboard, through a physical keyboard, etc.)
while a specific application is in focus, the text may be directed to the specific application. During an extended reality session, the focus of the operating system may switch between different virtual objects, some presented inside the virtual display, and some presented external to the virtual display. While the focus of the operating system is on a particular virtual object inside the virtual display, the operating system may allocate system resources for the particular virtual object such that received inputs are implemented with respect to that particular virtual object. The term “task switching” may refer to swapping a currently executed process with a different process. An operating system may suspend a currently executed process (e.g., task) by removing (e.g., popping) the process from a call stack and storing state data for the suspended process. The state data may allow the execution of the suspended process to be subsequently restored from the point of suspension. The operating system may initiate the execution of the different process by retrieving state data for the different process, and adding (e.g., pushing) the different process onto the call stack. Task switching may be invoked automatically (e.g., determined internally by the operating system), based on a user input, e.g., via voice command, pointing device, gesture, keyboard, or any other input means, based on an external input (e.g., from a peripheral device), or by any other technique for invoking task switching. Thus, for example, “receiving a first input for task switching from a keyboard” may refer to receiving one or more keystroke inputs (e.g., “Alt+Tab”) via a keyboard requesting to switch to a different task. The operating system may identify the input as a request for task switching and may schedule the new task accordingly. The term “causing the focus of the operating system to switch from the particular virtual object presented in the virtual display to the version of the specific virtual object presented in the extended reality environment” may be understood as implementing a task switching that transfers the focus of the operating system from the particular virtual object inside the virtual display onto the version of the specific virtual object external to the virtual display. Subsequent inputs may be implemented with respect to the version of the specific virtual object external to the virtual display. Thus, while the version of the specific object external to the virtual display is in focus, the operating system may allocate system resources such that received inputs are implemented with respect to the version of the specific object. The term “causing the focus of the operating system to switch from the version of the specific virtual object presented in the extended reality environment to another virtual object presented in the virtual display” may be understood as implementing a task switching that transfers the focus of the operating system from the version of the specific virtual object external to the virtual display onto another virtual object inside the virtual display. Subsequent inputs may be implemented with respect to that other virtual object inside the virtual display. Thus, a user may use the keyboard to switch the focus between different virtual objects, inside and external to the virtual display, allowing the user to manipulate and control virtual objects anywhere in the extended reality environment.
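By way of a non-limiting illustration only, the following Python sketch models a single focus ring spanning objects inside and outside the virtual display, so that a keyboard task-switch input cycles the operating-system focus across all of them and routes subsequent inputs to the focused object; the FocusManager class and its methods are hypothetical assumptions, not a disclosed API.

```python
class FocusManager:
    """Track which virtual object currently has the operating-system focus.

    Objects presented inside the virtual display and versions presented in the
    wider extended reality environment share one focus ring, so a task-switch
    input (e.g., Alt+Tab) cycles across all of them.
    """
    def __init__(self, objects):
        self.objects = list(objects)      # ordering defines the task-switch cycle
        self.index = 0

    @property
    def focused(self):
        return self.objects[self.index]

    def task_switch(self):
        # Suspend input routing to the current object and resume it for the next one.
        self.index = (self.index + 1) % len(self.objects)
        return self.focused

    def route_input(self, user_input):
        # Inputs received while an object is in focus are applied to that object.
        self.focused.setdefault("inputs", []).append(user_input)

fm = FocusManager([{"name": "workspace", "location": "virtual_display"},
                   {"name": "plant_version", "location": "environment"}])
fm.task_switch()                           # focus moves to the version outside the display
fm.route_input("shrink")                   # applied to the focused external version
```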
For example, a wearable extended reality appliance may present a virtual display including an editable document and a messaging widget. The wearable extended reality appliance may additionally present a larger, editable version of the messaging widget external to the virtual display. The user may use a keyboard to toggle the focus between the editable document inside the virtual display and the version of the messaging widget external to the virtual display, allowing the user to switch between editing the editable document and editing a message via the version of the messaging widget. By way of a non-limiting example, reference is now made toFIG.16, which illustrates the exemplary environment ofFIGS.10and11where a keyboard is provided for controlling the presentation of content in extended reality environment1004, consistent with some embodiments of the present disclosure.FIG.16is substantially similar toFIG.11with the noted difference that user1016is seated at desk top1020in position to type via keyboard1018. Virtual display1002may include group1050of multiple virtual objects, such as virtual workspace1012and virtual house plant1014. User1016may provide an input to generate version1014B of virtual house plant1014resting on desk top1020, e.g., inside extended reality environment1004and external to virtual display1002. While the focus of an operating system configured with smart glasses1006is on virtual workspace1012, user1016may type “Alt+Tab” on keyboard1018to switch the focus from virtual workspace1012(e.g., inside virtual display1002) to version1014B of virtual house plant1014on desk top1020(e.g., inside extended reality environment1004and external to virtual display1002). Switching the focus to version1014B of virtual house plant1014may allow an input entered by user1016to be applied to decrease the size of version1014B. Since version1014B of virtual house plant1014is currently in focus, processing device460(FIG.4) may decrease the size of version1014B of virtual house plant1014based on the input. While the focus of the operating system is on version1014B of virtual house plant1014, user1016may type “Alt+Tab” on keyboard1018to switch the focus from version1014B of virtual house plant1014to virtual workspace1012inside virtual display1002. Switching the focus to virtual workspace1012may allow edits entered by user1016via keyboard1018to be applied to virtual workspace1012. Since virtual workspace1012is now in focus, processing device460(FIG.4) may edit virtual workspace1012based on inputs entered by user1016. Some embodiments involve, after generating the presentation of the version of the specific virtual object in the extended reality environment, receiving an input from a keyboard for a thumbnail view presentation of virtual objects associated with the virtual display, and including a thumbnail version of the specific virtual object presented in the extended reality environment in the thumbnail view presentation of the virtual objects associated with the virtual display. The term “thumbnail view presentation” may refer to a miniature, symbol, icon or simplified depiction of a larger and/or more detailed object. A “thumbnail view presentation of virtual objects associated with the virtual display” may include one or more miniature, symbol, icon or simplified depictions (e.g., thumbnails) of objects affiliated with the virtual display. A “thumbnail version of the specific virtual object” may include a miniature, symbol, icon or simplified depiction of the specific virtual object.
A group of image thumbnails may serve as an index for organizing multiple objects, such as images, videos, and applications. Objects affiliated (e.g., associated) with the virtual display may include objects displayed inside the virtual display or derived therefrom. For example, a version of a virtual object displayed inside a virtual display may be associated with the virtual display, even when the version is presented external to the virtual display. Thus, after the at least one processor presents the version of the specific virtual object external to the virtual display, the user may request a thumbnail view of any objects associated with the virtual display. The at least one processor may display a thumbnail view of objects presented inside the virtual display and any virtual objects associated therewith. As an example, a virtual display may present a read-only virtual document. An editable version of the virtual document may be displayed external to the virtual display. A thumbnail view presentation of objects associated with the virtual display may include the read-only virtual document as well as the editable version of the virtual document. By way of a non-limiting example, reference is now made toFIG.17, which illustrates the exemplary system ofFIGS.10and11where the content in the virtual display is presented as a thumbnail view, consistent with some embodiments of the present disclosure.FIG.17is substantially similar toFIG.16with the noted difference of a thumbnail view1700of virtual objects associated with virtual display1002. User1016may request to view thumbnail view1700via keyboard1018, or via other input devices (e.g., input interface330of input unit202ofFIG.3). Upon receiving the request, processing device460(FIG.4) may add thumbnail view1700to, or use thumbnail view1700to replace, the presentation of virtual display1002(FIG.10), virtual objects included therein, and virtual objects derived therefrom, such as version1014B (FIG.16) of virtual house plant1014(FIG.11). Thumbnail view1700may include a thumbnail representation of each virtual object included in virtual display1002and, additionally, any virtual object associated therewith, such as a thumbnail representation1702of version1014B of virtual house plant1014. In some implementations, processing device460may replace virtual display1002and objects derived therefrom with thumbnail view1700, e.g., as illustrated inFIG.17. In some implementations, processing device460may display virtual display1002and objects derived therefrom alongside thumbnail view1700. Thumbnail representation1702may correspond to version1014B of virtual house plant1014and thumbnail representation1704may correspond to virtual house plant1014.
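By way of a non-limiting illustration only, the following Python sketch builds a thumbnail view that includes both the objects presented inside the virtual display and the versions derived from them that are presented externally; the function name and dictionary fields are hypothetical assumptions, not a disclosed implementation.

```python
def build_thumbnail_view(virtual_display_objects, derived_versions, size=(96, 96)):
    """Collect thumbnails for every object associated with the virtual display.

    Objects shown inside the virtual display are included, and so are versions
    derived from them that are currently presented outside the display.
    """
    thumbnails = []
    for obj in list(virtual_display_objects) + list(derived_versions):
        thumbnails.append({
            "source": obj["name"],
            "size": size,                           # miniature, simplified depiction
            "external": obj.get("external", False)  # True for versions outside the display
        })
    return thumbnails

inside = [{"name": "workspace"}, {"name": "house_plant"}]
outside = [{"name": "house_plant_version", "external": True}]
view = build_thumbnail_view(inside, outside)
print(len(view))   # -> 3: both display objects plus the derived external version
```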
Some embodiments involve a system for extracting content from a virtual display, the system including at least one processor programmed to: generate a virtual display via a wearable extended reality appliance, wherein the virtual display presents a group of virtual objects and is located at a first virtual distance from the wearable extended reality appliance; generate an extended reality environment via the wearable extended reality appliance, wherein the extended reality environment includes at least one additional virtual object presented at a second virtual distance from the wearable extended reality appliance; receive input for causing a specific virtual object from the group of virtual objects to move from the virtual display to the extended reality environment; and in response to receiving the input, generate a presentation of a version of the specific virtual object in the extended reality environment at a third virtual distance from the wearable extended reality appliance, wherein the third virtual distance differs from the first virtual distance and the second virtual distance. By way of a non-limiting example,FIG.10shows extracting content from virtual display1002. The content may be extracted using a system that may include at least one processor (e.g., one or more of server210ofFIG.2, mobile communications device206, processing device360ofFIG.3, processing device460ofFIG.4, processing device560ofFIG.5). The at least one processor may be programmed to generate virtual display1002via smart glasses1006(e.g., a wearable extended reality appliance). Virtual display1002may present a group of virtual objects (e.g., virtual document100, virtual widgets1010inside a virtual menu bar1024, a virtual workspace1012, and a virtual house plant1014). Virtual display1002may be located at a first virtual distance D1 from smart glasses1006. In the implementation shown, virtual display1002may be a flat (e.g., two-dimensional) display, and D1 may be the distance from smart glasses1006to the bottom left corner of virtual display1002. The at least one processor may generate extended reality environment1004via smart glasses1006. Extended reality environment1004may include at least one additional virtual object, such as virtual mobile phone1026, presented at a second virtual distance D2 from smart glasses1006(e.g., measured from smart glasses1006to the bottom left corner of virtual mobile phone1026). The at least one processor may receive input, such as a pointing gesture by user1016, for causing a specific virtual object, such as virtual house plant1014, from group1050of virtual objects to move from virtual display1002to the extended reality environment1004(e.g., external to virtual display1002). With reference toFIG.11, in response to receiving the input, the at least one processor may generate a presentation of a version1014A of virtual house plant1014in extended reality environment1004at a third virtual distance D3 from smart glasses1006, where the third virtual distance D3 differs from the first virtual distance D1 and the second virtual distance D2. FIG.18illustrates a block diagram of an example process1800for moving content between a virtual display and an extended reality environment, consistent with embodiments of the present disclosure.
In some embodiments, process1800may be performed by at least one processor (e.g., one or more of server210ofFIG.2, mobile communications device206, processing device360ofFIG.3, processing device460ofFIG.4, processing device560ofFIG.5) to perform operations or functions described herein. In some embodiments, some aspects of process1800may be implemented as software (e.g., program codes or instructions) that are stored in a memory (e.g., any of memory devices212,311,411, or511, or a memory of mobile device206) or a non-transitory computer readable medium. In some embodiments, some aspects of process1800may be implemented as hardware (e.g., a specific-purpose circuit). In some embodiments, process1800may be implemented as a combination of software and hardware. Referring toFIG.18, process1800may include a step1802of generating a virtual display via a wearable extended reality appliance, wherein the virtual display presents a group of virtual objects and is located at a first virtual distance from the wearable extended reality appliance. As described earlier, a wearable extended reality appliance may present multiple virtual objects grouped inside a virtual display rendered to appear as though located at a particular distance from the wearer. Process1800may include a step1804of generating an extended reality environment via the wearable extended reality appliance, wherein the extended reality environment includes at least one additional virtual object presented at a second virtual distance from the wearable extended reality appliance. As described earlier, the wearable extended reality appliance may present one or more virtual objects appearing as though located at a distance from the user different from the particular distance to the virtual display. Process1800may include a step1806of receiving input for causing a specific virtual object from the group of virtual objects to move from the virtual display to the extended reality environment. As described earlier, the wearable extended reality appliance may receive an input (e.g., from the user) to relocate one of the virtual objects grouped inside the virtual display, external to the virtual display. Process1800may include a step1808of, in response to receiving the input, generating a presentation of a version of the specific virtual object in the extended reality environment at a third virtual distance from the wearable extended reality appliance, wherein the third virtual distance differs from the first virtual distance and the second virtual distance. As described earlier, the wearable extended reality appliance may respond to the input by presenting another rendition of the specific virtual object external to the virtual display appearing as though located at a distance from the user (e.g., third virtual distance) different from the distance to the virtual display (e.g., the first virtual distance) and from the distance to the additional virtual object (e.g., the second virtual distance). The rendition of the specific virtual object may be presented in place of, or concurrently with, the specific virtual object inside the virtual display. The relative orientation of a wearable extended reality appliance to an associated physical input device may correspond to an operational mode for the wearable extended reality appliance.
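By way of a non-limiting illustration only, the following Python sketch walks through steps 1802 to 1808 of process1800using plain dictionaries in place of real renderer state; the function and field names, and the simple rule for choosing the third virtual distance, are hypothetical assumptions made for illustration.

```python
def process_1800(group_of_objects, additional_object, d1, d2):
    """Sketch of process1800operating on placeholder state."""
    # Step 1802: generate the virtual display presenting the group of virtual
    # objects at the first virtual distance d1.
    virtual_display = {"objects": list(group_of_objects), "distance": d1}
    # Step 1804: generate the extended reality environment including at least
    # one additional virtual object at the second virtual distance d2.
    environment = {"objects": [dict(additional_object, distance=d2)]}
    # Step 1806: receive input identifying the specific virtual object to move
    # (the first object in the group stands in for a detected user gesture).
    specific = virtual_display["objects"][0]
    # Step 1808: present a version of that object at a third virtual distance
    # chosen to differ from both d1 and d2.
    d3 = max(d1, d2) + 0.5   # one simple way to guarantee d3 differs from d1 and d2
    environment["objects"].append({"version_of": specific["name"], "distance": d3})
    return virtual_display, environment

display, env = process_1800([{"name": "house plant"}], {"name": "mobile phone"}, 1.2, 1.5)
```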
For example, when the wearable extended reality appliance is in a first orientation relative to the input device (e.g., the wearer is close to and facing the input device), a first operational mode may be applied to interface with the wearer. Conversely, when the wearable extended reality appliance is in a second orientation relative to the input device (e.g., the wearer is remote from, or facing away from the input device), a second (e.g., different) operational mode may be applied to interface with the wearer. In some embodiments, operations may be performed for selectively operating a wearable extended reality appliance. A link between a wearable extended reality appliance and a keyboard device may be established, e.g., communicatively coupling the wearable extended reality appliance to the keyboard device. Sensor data from at least one sensor associated with the wearable extended reality appliance may be received. The sensor data may be reflective of a relative orientation of the wearable extended reality appliance with respect to the keyboard device. Based on the relative orientation, a specific operation mode for the wearable extended reality appliance may be selected from a plurality of operation modes. For example, one specific operation mode may be associated with receiving input via the physical input device. A user command based on at least one signal detected by the wearable extended reality appliance may be identified. An action responding to the identified user command in a manner consistent with the selected operation mode may be executed. In some instances, the description that follows may refer toFIGS.19to24, which taken together, illustrate exemplary implementations for performing operations for selectively operating a wearable extended reality appliance, consistent with some disclosed embodiments.FIGS.19to24are intended merely to facilitate the conceptualizing of one exemplary implementation for performing operations for selectively operating a wearable extended reality appliance and do not limit the disclosure to any particular implementation. The description that follows includes references to smart glasses as an exemplary implementation of a wearable extended reality appliance. It is to be understood that these examples are merely intended to assist in gaining a conceptual understanding of disclosed embodiments, and do not limit the disclosure to any particular implementation for a wearable extended reality appliance. The disclosure is thus understood to relate to any implementation for a wearable extended reality appliance, including implementations different than smart glasses. Some embodiments provide a non-transitory computer readable medium containing instructions for performing operations for selectively operating a wearable extended reality appliance. The term “non-transitory computer-readable medium” may be understood as described earlier. The term “containing instructions” may refer to including program code instructions stored thereon, for example to be executed by a computer processor. The instructions may be written in any type of computer programming language, such as an interpretive language (e.g., scripting languages such as HTML and JavaScript), a procedural or functional language (e.g., C or Pascal that may be compiled for converting to executable code), object-oriented programming language (e.g., Java or Python), logical programming language (e.g., Prolog or Answer Set Programming), or any other programming language. 
In some embodiments, the instructions may implement methods associated with machine learning, deep learning, artificial intelligence, digital image processing, optimization algorithms, and any other computer processing technique. The term “performing operations” may involve calculating, executing, or otherwise implementing one or more arithmetic, mathematic, logic, reasoning, or inference steps, for example by a computing processor. The term “wearable extended reality appliances” may refer to head-mounted devices, for example, smart glasses, smart contact lenses, headsets, or any other device worn by a human for purposes of presenting an extended reality to the human, as described earlier. The term “selectively operating a wearable extended reality appliance” may include choosing how the wearable extended reality appliance functions or operates, for example based on one or more criteria or conditions. Thus, program code instructions (e.g., a computer program) may be provided (e.g., stored in a memory device of a computing device, such as any of memory devices311ofFIG.3,411ofFIG.4, or511ofFIG.5). The program code instructions may be executable by a processing device (e.g., any of processing devices360,460, or560, or a processing device of mobile device206ofFIG.2). Executing the program code instructions may cause the processing device to choose or elect how a wearable extended reality appliance functions (e.g., operates). For example, the wearable extended reality appliance may function in a different manner depending on one or more criteria, and the processing device may elect a specific manner of functioning based on a determined criterion. For example, the wearable extended reality appliance may be configured to display content according to a first display configuration when the wearer is seated at a work station and display the content according to a second display configuration when the wearer is away from the work station. As another example, the first/second display configurations may define a specific region of the field of view of the wearer of the extended reality appliance, a specific size, intensity, transparency, opacity, color, format, resolution, level of detail, or any other display characteristic. By way of example, when the wearer is seated at the work station, an electronic mail application may be displayed larger and brighter than when the wearer is away from the work station. As another example, when the wearer is seated at the desk, the wearable extended reality appliance may be configured to present content visually (e.g., on a virtual screen), whereas when the wearer is walking outdoors, the wearable extended reality appliance may be configured to present the content audibly, e.g., via a speaker. 
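By way of a non-limiting illustration only, first and second display configurations of the kind described above may be represented as simple parameter sets, as in the following Python sketch; the field names and values are illustrative assumptions rather than a disclosed data format.

    from dataclasses import dataclass

    @dataclass
    class DisplayConfiguration:
        size_deg: float     # angular size of the content within the field of view
        brightness: float   # 0.0 (dim) to 1.0 (bright)
        opacity: float      # 0.0 (transparent) to 1.0 (opaque)
        modality: str       # "visual" or "audible"

    # Illustrative configurations for a wearer seated at a work station, away from it, or walking outdoors.
    SEATED_AT_WORKSTATION = DisplayConfiguration(size_deg=30.0, brightness=0.9, opacity=1.0, modality="visual")
    AWAY_FROM_WORKSTATION = DisplayConfiguration(size_deg=12.0, brightness=0.5, opacity=0.7, modality="visual")
    WALKING_OUTDOORS = DisplayConfiguration(size_deg=0.0, brightness=0.0, opacity=0.0, modality="audible")

    def configuration_for(context: str) -> DisplayConfiguration:
        return {"seated": SEATED_AT_WORKSTATION,
                "away": AWAY_FROM_WORKSTATION,
                "walking_outdoors": WALKING_OUTDOORS}[context]

    print(configuration_for("seated"))
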
Reference is now made toFIGS.19and20which, together, are a conceptual illustration of an environment for selectively operating a wearable extended reality appliance, consistent with some disclosed embodiments.FIGS.19and20include a wearer1900donning a wearable extended reality appliance (e.g., a pair of smart glasses1902). Smart glasses1902may be associated with a keyboard1904resting on a table surface1912included with a work station1906. Smart glasses1902may be configured to display content, such as a forecast weather app1908. Processing device460(FIG.4) may be configured to control the operation of smart glasses1902based on one or more criteria. For example, turning toFIG.19, when wearer1900is sitting at work station1906in proximity to keyboard1904, processing device460may cause smart glasses1902to display forecast weather app1908inside a virtual screen1910at a fixed distance from work station1906and/or from keyboard1904(e.g., tethered to work station1906). Turning toFIG.20, when wearer1900is away from work station1906, processing device460may cause smart glasses1902to display forecast weather app1908at a fixed distance from smart glasses1902and follow the gaze of wearer1900(e.g., tethered to smart glasses1902). Some embodiments include establishing a link between a wearable extended reality appliance and a keyboard device. The term “establishing” may refer to setting up, creating, implementing, participating in, or constructing. The term “link” may refer to a connection that joins or couples two separate entities, such as a wearable extended reality appliance and a keyboard device. For example, a communications link may couple two disparate entities to create a channel for exchanging information as signals. The signals may be analog (e.g., continuous) or digital signals (e.g., discrete) and the communications link may be synchronous, asynchronous, or isochronous. In some embodiments, the link between the wearable extended reality appliance and the keyboard is wireless. A wireless communications link may be established between the wearable extended reality appliance and the keyboard device via scanning (e.g., actively and/or passively) and detecting nearby devices according to a predetermined wireless communications protocol. Wireless links may be established by recognizing authorized devices, sharing of recognized credentials, or through any form of pairing. Examples of wireless communications technology may include transceivers for sending and receiving information via radio waves (e.g., Wi-Fi, Bluetooth, Zigbee, RFID, GPS, broadband, long, short or medium wave radio), microwave, mobile (e.g., telephony) communications, infrared signals, and ultrasound signals. In some embodiments, a wireless infrared communications link may be established by optically coupling an IR emitter with an IR detector configured with one or both of the wearable extended reality appliance and a keyboard device. As another example, a wireless ultrasound communications link may be established by coupling an ultrasound emitter (e.g., speaker) with an ultrasound receiver (e.g., microphone) configured with one or both of the wearable extended reality appliance and a keyboard device. In some embodiments, the communications link between the wearable extended reality appliance and the keyboard is a wired connection. For example, a wired communications link may be created by physically coupling the wearable extended reality appliance and/or the keyboard device using a wire, cable, or fiber, for example, in compliance with a wired communications standard (e.g., USB, USB-C, micro-USB, mini-B, coaxial cable, twisted cable, Ethernet cable). Other examples of wired communications technology may include serial wires, cables (e.g., multiple serial wires for carrying multiple signals in parallel such as Ethernet cables), fiber optic cables, waveguides, and any other form of wired communication technology. In some embodiments, the communications link may include some combination of the wired and wireless technologies described earlier. The term “wearable extended reality appliance” may be understood as described earlier. 
The term “keyboard device” may refer to an input device including multiple keys representing alphanumeric characters (letters and numbers), and optionally, a numeric keypad, special function keys, mouse cursor moving keys, and status lights. In some embodiments, the keyboard device is selected from a group consisting of: a laptop computer, a standalone network connectable keyboard, and a wireless communication device having a display configured to display a keyboard. For example, the keyboard device may be a mechanical keyboard, an optical keyboard, a laser projected keyboard, a hologram keyboard, a touch sensitive keyboard (e.g., displayed on a touch-sensitive electronic display), a membrane keyboard, a flexible keyboard, a QWERTY keyboard, a Dvorak keyboard, a Colemak keyboard, a chorded keyboard, a wireless keyboard, a keypad, a key-based control panel, a virtual keyboard (e.g., synthesized by a processor and displayed in an extended reality environment), or any other array of control keys. Selecting a key of a keyboard (e.g., by pressing a mechanical key, touching a touch sensitive key, selecting a key of a projected or hologram keyboard) may cause a character corresponding to the key to be stored in an input memory buffer of a computing device. Thus, a communications channel (e.g., link) may be created (e.g., established) between a wearable extended reality appliance and a keyboard device, allowing the exchange of data there between. For example, a user donning a wearable extended reality appliance may enter a workspace including a keyboard device. The wearable extended reality appliance and the keyboard device may each include transceivers for transmitting and receiving radio signals, for example according to a Bluetooth protocol, allowing the wearable extended reality appliance and the keyboard device to detect each other (e.g., pair) and communicate along a Bluetooth channel. By way of a non-limiting example, turning toFIG.20, network interface320(FIG.3) of keyboard device1904may emit a Bluetooth radio signal configured to be detected by a Bluetooth receiver. Network interface420(FIG.4) of smart glasses1902may scan for a Bluetooth radio signal and detect the Bluetooth radio signal emitted via network interface320of keyboard device1904. Keyboard device1904and smart glasses1902may exchange data via network interface320and network interface420, respectively, complying with a Bluetooth protocol to pair smart glasses1902with keyboard device1904. 
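By way of a non-limiting illustration only, the following Python sketch simulates the discovery-and-pairing exchange described above. The Advertiser and Scanner classes are hypothetical stand-ins for a Bluetooth stack and do not call any real Bluetooth application programming interface.

    from typing import Optional

    class Advertiser:
        # Simulates a keyboard device advertising its presence over a wireless protocol.
        def __init__(self, device_id: str):
            self.device_id = device_id

        def advertisement(self) -> dict:
            return {"device_id": self.device_id, "protocol": "bluetooth"}

    class Scanner:
        # Simulates a wearable extended reality appliance scanning for authorized devices.
        def __init__(self, device_id: str, trusted_ids: set):
            self.device_id = device_id
            self.trusted_ids = trusted_ids

        def scan_and_pair(self, advertisements: list) -> Optional[dict]:
            for adv in advertisements:
                if adv["device_id"] in self.trusted_ids:   # recognize an authorized device
                    return {"local": self.device_id,
                            "remote": adv["device_id"],
                            "channel": adv["protocol"]}    # established communications link
            return None

    keyboard = Advertiser("keyboard-1904")
    glasses = Scanner("smart-glasses-1902", trusted_ids={"keyboard-1904"})
    print(glasses.scan_and_pair([keyboard.advertisement()]))
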
Some embodiments include receiving sensor data from at least one sensor associated with the wearable extended reality appliance, the sensor data being reflective of a relative orientation of the wearable extended reality appliance with respect to the keyboard device. The term “receiving” may refer to accepting delivery of, acquiring, retrieving, obtaining or otherwise gaining access to. For example, information or data may be received in a manner that is detectable by or understandable to a processor. The processor may be local (e.g., integrated with the wearable extended reality appliance, or in the vicinity thereof, such as a local server or mobile phone) or remote (e.g., as a cloud or edge server). The data may be received via a communications channel, such as a wired channel (e.g., cable, fiber) and/or wireless channel (e.g., radio, cellular, optical, IR) and subsequently stored in a memory device, such as a temporary buffer or longer-term storage. The data may be received as individual packets or as a continuous stream of data. The data may be received synchronously, e.g., by periodically polling a memory buffer, queue or stack, or asynchronously, e.g., via an interrupt event. For example, the data may be received by any of processors360ofFIG.3,460ofFIG.4,560ofFIG.5, and/or a processor associated with mobile device206ofFIG.2, and stored in any of memory devices311,411, or511, or a memory of mobile device206. The term “sensor” may include one or more components configured to detect a signal (e.g., visible and/or IR light, radio, electric and/or magnetic, acoustic such as sound, sonar or ultrasound, mechanical, vibration, heat, humidity, pressure, motion, gas, olfactory, or any other type of physical signal) emitted, reflected off, and/or generated by an object. The term “sensor data” may refer to information produced by the sensor based on a signal detected by the sensor. For example, the sensor may include a converter that converts a sensed signal to a format configured for communicating to a processing device, such as an electronic signal (e.g., for communicating via a wired communications link), a radio signal (e.g., for communication via a radio communications link), an IR signal (e.g., for communication via an infrared communications link), or an acoustic signal (e.g., for communicating via an ultrasound communications link). The sensor data may be transmitted in a binary format (e.g., as discrete bits) or an analog format (e.g., as continuous time-variant waves). The term “associated” may refer to the existence of an affiliation, relationship, correspondence, link or any other type of connection or correlation. Thus, the wearable extended reality appliance may be affiliated (e.g., associated) with a sensor. For example, the sensor may be mechanically coupled (e.g., physically attached) to the wearable extended reality appliance. Additionally, or alternatively, the sensor may be non-mechanically coupled (e.g., physically detached but communicatively coupled) to the wearable extended reality appliance, e.g., via optic, infrared, radio, ultrasound or any other type of non-wired communications means. In one example, the sensor may be mechanically coupled (e.g., physically attached) to the keyboard. In one example, the sensor may be physically separated and/or remote from both the keyboard and the wearable extended reality appliance. In one example, the sensor may be associated with the wearable extended reality appliance via a data structure stored in a memory device (e.g., the memory device may be included in the wearable extended reality appliance, may be included in the keyboard, or may be external to both the wearable extended reality appliance and the keyboard). In one example, the sensor may capture data associated with the wearable extended reality appliance and the association between the wearable extended reality appliance and the sensor may be through the captured data. For example, the sensor may be an image sensor, and the captured data may include an image of the wearable extended reality appliance. The sensor may detect signals associated with the wearable extended reality appliance. For example, the detected signals may relate to the state of the wearable extended reality appliance (e.g., position, orientation, velocity, acceleration, alignment, angle) relative to the physical environment of the wearable extended reality appliance. 
For example, the physical environment may include physical objects, such as a floor surface, ceiling, walls, table surface, obstructing objects (e.g., book case, house plant, person) and the detected signal may relate to the state of the wearable extended reality appliance relative to one or more of the physical objects. The sensor may convert the detected signals to sensor data and transmit the sensor data to a processing device via a communications link as described earlier. For example, the data may be received via an input device or sensor configured with an input device (e.g., input unit202ofFIG.2), from a mobile communications device (e.g., device206), from a remote processing unit (e.g., processing unit208), by a processing device configured with smart glasses1902(processing device460ofFIG.4), or from any other local and/or remote source. For example, the wearable extended reality appliance may include a GPS sensor (e.g., motion sensor473ofFIG.4) to detect a location of the wearable extended reality appliance. The GPS sensor may convert the detected location to an electric signal that may be transmitted via a wired communications link to a processing device configured inside the wearable extended reality appliance (e.g., processing device460). As another example, the wearable extended reality appliance may include a motion sensor (e.g., motion sensor473ofFIG.4), such as an inertial measurement unit (IMU) to detect the motion (e.g., velocity and acceleration) and orientation of the wearable extended reality appliance. The IMU may convert the detected motion and orientation to a radio signal (e.g., sensor data) for transmission (e.g., via network interface420) to a remote processing device (e.g., remote processing unit208ofFIG.2). As another example, a camera (e.g., image sensor) may be configured with a work station in the vicinity of the wearable extended reality appliance. The camera may be configured to capture one or more images of the wearable extended reality appliance as the wearer approaches the work station. The camera may convert the captured image pixels to radio signals (e.g., sensor data), for transmitting to a mobile device (e.g., mobile device206). The term “being reflective of” may refer to indicating, expressing, revealing, or in any other way suggesting an associated state or condition (e.g., temporary, or steady state). For example, sensor data transmitted by an IMU integrated with a wearable extended reality appliance may reveal (e.g., be reflective of) a motion and/or current position and orientation of the wearer of the wearable extended reality appliance. As another example, image data (e.g., sensor data) captured by a camera of a wearable extended reality appliance adjacent to an object may indicate (e.g., be reflective of) the position of the wearable extended reality appliance relative to the object. The term “orientation” may refer to the direction (e.g., alignment and/or position in 2D and/or 3D) in which an object (e.g., a person, animal, or thing) is pointing. The term “relative orientation” may refer to an orientation (e.g., direction, alignment, distance and/or position) of an object with respect to a coordinate system or with respect to another object. The coordinate system may be fixed (e.g., relative to the Earth) or non-fixed (e.g., relative to a different object whose position and/or orientation may change with time and/or context). 
Thus, for example, the relative orientation between the wearable extended reality appliance and the keyboard may be determined directly, or by calculating the orientation of each of the wearable extended reality appliance and the keyboard relative to a fixed object (e.g., relative to the floor, ceiling), for example based on a 3D spatial map of the physical environment including the wearable extended reality appliance and the keyboard device (e.g., as a mesh of triangles or a fused point cloud). The relative orientation may be determined based on any combination of image data acquired by a camera, position, location, and orientation data acquired by an IMU and/or GPS unit configured with the wearable extended reality appliance, based on ultrasound, radio, and/or IR signals emitted and reflected off objects in the environment including the wearable extended reality appliance and the keyboard device, data stored in memory (e.g., for a stationary keyboard device), predicted behavior of the wearer of the wearable extended reality appliance, and/or any other means for tracking the relative orientation between the wearable extended reality appliance and the keyboard device. Thus, a sensor may detect a relative alignment, position, distance or orientation of the wearable extended reality appliance as compared to (e.g., with respect to) the keyboard device. For example, a camera within imaging range of the wearable extended reality appliance and the keyboard device may acquire an image of both the wearable extended reality appliance and the keyboard device. The image data may be converted to an electronic or radio signal (e.g., sensor data) and transmitted to a processing device via a network interface. The processing device may analyze the information encoded in the electronic or radio signal to determine the relative orientation of the wearable extended reality appliance and the keyboard device. The image data may thus be reflective of the orientation of the wearable extended reality appliance relative to the keyboard device. As another example, the wearable extended reality appliance may include an IR emitter, and the position of the keyboard device may be known in advance (e.g., physically tethered to a surface of a work station). A processing device (e.g., associated with the work station) may receive an IR signal (e.g., sensor data) from the IR emitter of the wearable extended reality appliance and analyze the received signal to determine the position and orientation of the wearable extended reality appliance relative to the surface of the work station. The IR signal may thus be reflective of the relative orientation of the wearable extended reality appliance to the surface, and thus the keyboard device tethered thereto. As another example, a gyroscope of an IMU configured with the wearable extended reality appliance may detect an orientation of the wearable extended reality appliance. The IMU may convert the orientation to an electronic signal (e.g., sensor data) and transmit the electronic signal to a processing device via a radio communications channel (e.g., Wi-Fi, Bluetooth). The processing device may analyze the received signal to determine the orientation of the wearable extended reality appliance relative to the keyboard device (e.g., having a known, fixed position). 
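By way of a non-limiting illustration only, the following Python sketch derives a relative orientation (a planar distance and a facing indication) from two poses expressed in the same fixed room coordinate system; the poses and the 60-degree facing tolerance are illustrative assumptions.

    import math

    def relative_orientation(appliance_pos, appliance_yaw_deg, keyboard_pos, facing_tolerance_deg=60.0):
        # Planar displacement from the appliance to the keyboard in the fixed coordinate system.
        dx = keyboard_pos[0] - appliance_pos[0]
        dy = keyboard_pos[1] - appliance_pos[1]
        distance_m = math.hypot(dx, dy)
        # Angular offset between the appliance's facing direction and the bearing to the keyboard.
        bearing_deg = math.degrees(math.atan2(dy, dx))
        offset_deg = abs((bearing_deg - appliance_yaw_deg + 180.0) % 360.0 - 180.0)
        return {"distance_m": distance_m, "facing_keyboard": offset_deg <= facing_tolerance_deg}

    # Appliance at the origin facing 10 degrees; keyboard about 0.7 m away, roughly in front of the wearer.
    print(relative_orientation((0.0, 0.0), 10.0, (0.69, 0.12)))
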
By way of a non-limiting example, turning toFIG.20, wearer1900donning smart glasses1902may approach work station1906. A camera (e.g., image sensor472ofFIG.4) configured with (e.g., associated with) smart glasses1902may capture an image of keyboard device1904resting on table surface1912and provide the image to processing device460. In addition, an IMU configured with motion sensor473of smart glasses1902may sense the orientation of smart glasses1902relative to a floor parallel to surface1912and transmit the sensed orientation to processing device460. The image data and/or orientation data may be analyzed by processing device460to determine the relative orientation of smart glasses1902with respect to keyboard device1904, and thereby may be reflective of the relative orientation there between. In some embodiments, the relative orientation includes a distance between the wearable extended reality appliance and the keyboard device, wherein the operations further include analyzing the sensor data to determine an indicator of the distance. The term “distance” may refer to a spatial separation or gap between the wearable extended reality appliance and the keyboard device. For example, the distance may be measured in absolute terms, such as a Euclidean distance measured in centimeters, meters, feet, yards, or any other distance unit (e.g., floor tiles separating two objects accounting for only a horizontal planar distance). For example, the distance may be measured in relative terms, such as by measuring a distance relative to other objects (e.g., fixed and/or mobile objects) in the environment, such as a bookcase, window, or beacon (e.g., emitting a signal). Techniques for measuring distance between two objects may include capturing and analyzing images of the objects, receiving GPS signals from a GPS satellite, detecting sonar, Lidar, or radar signals emitted, reflected, and/or absorbed from the objects, applying interferometry (e.g., by detecting a Doppler shift) on signals emitted and/or reflected from the objects, and any other technique for measuring distance. The term “analyzing” may refer to investigating, scrutinizing and/or studying a data set, for example, to determine a correlation, association, pattern, or lack thereof within the data set or with respect to a different data set. The term “indicator of a distance” may refer to information allowing determination of the distance between two objects, for example by expressing or revealing an effect of the distance, e.g., on a signal. For example, distance may cause attenuation, a phase or time shift on a signal, such as a light, infrared, radio, or acoustic wave. In some embodiments, image data received by an image sensor may be analyzed, for example, using one or more image processing techniques such as convolutions, fast Fourier transforms, edge detection, pattern recognition, and clustering to identify objects from the image pixels. Mathematical functions, such as geometric, algebraic, and/or scaling functions may be applied to the identified objects to determine the distance there between. For example, a beacon may emit signals (e.g., IR, radio, ultrasound) that may be reflected off the wearable extended reality appliance and the keyboard device. The reflected signals may be investigated (e.g., analyzed), for example based on timing, angle, phase, attenuation, Doppler shift, as indicators of the distance between the wearable extended reality appliance and the keyboard device. 
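By way of a non-limiting illustration only, one image-based indicator of distance uses the apparent size, in pixels, of an object whose physical width is known in advance, as in the following Python sketch; the focal length and widths are illustrative values.

    def estimate_distance_m(focal_length_px: float, known_width_m: float, observed_width_px: float) -> float:
        # Pinhole-camera approximation: distance = focal length x real width / apparent width.
        return focal_length_px * known_width_m / observed_width_px

    # A keyboard of known 0.45 m width appearing 300 px wide through a camera with a 1400 px focal length.
    print(round(estimate_distance_m(1400.0, 0.45, 300.0), 2))  # approximately 2.1 m
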
Additionally, or alternatively, the wearable extended reality appliance may include a camera and/or a beacon emitting signals, and the distance to the keyboard device may be determined based on images acquired by the wearable camera, and/or signals emitted by the wearable beacon reflected off an object (e.g., the keyboard device). By way of a non-limiting example, turning toFIG.19, a camera (e.g., image sensor472ofFIG.4) configured with smart glasses1902may acquire an image of keyboard device1904while wearer1900is seated at work station1906. Processing device460may analyze the image to identify the perspective angle and scale of keyboard device1904, for example, relative to other identified objects captured in the image, and/or based on absolute dimensions of keyboard device1904known in advance. The image analysis may indicate the distance between smart glasses1902and keyboard device1904. In some embodiments, the relative orientation includes a facing direction of the wearable extended reality appliance with respect to the keyboard device, wherein the sensor data reflective of the relative orientation includes image data, and wherein the operations further include analyzing the image data to determine an indicator of the facing direction. The term “facing direction” may refer to an angle or orientation corresponding to a line-of-sight of the wearer of the wearable extended reality appliance or corresponding to at least a portion of the field of view of the wearer, e.g., at a particular point in time. The term “image data” may refer to information acquired by a camera, e.g., as image pixels. The term “an indicator of the facing direction” may refer to information allowing determination of the orientation of the gaze of the wearer of the wearable extended reality appliance, by expressing or revealing an effect of the gaze. For example, a camera configured with the wearable may be aligned with a frontal head pose of the wearer such that the camera acquires images substantially corresponding to the field-of-view, or directional gaze of the wearer, e.g., as the wearer turns his head, the camera acquires images corresponding to what the wearer sees. Image data (e.g., sensor data) acquired by the camera may be provided to a processing device for analysis. The processing device may apply image processing techniques (such as egomotion or ego-positioning algorithms) to the image data to detect the presence of the keyboard device. Based on the analysis, the processing device may determine the facing direction of the wearable extended reality appliance with respect to the keyboard. For example, if the keyboard device is positioned substantially centered in the image data, the processing device may determine that the facing direction of the wearable extended reality appliance is aligned with the keyboard. Conversely, if no keyboard is detected in the image data or the keyboard is detected in a peripheral region of the image, the processing device may determine that the facing direction of the wearable is not aligned with the keyboard device, e.g., aligned with an object other than the keyboard device. As another example, a camera tethered to the keyboard device may capture an image of the wearer of the wearable extended reality appliance and the image data (e.g., sensor data) may be provided to a processing device for analysis. The processing device may apply image processing techniques to the image data to detect the directional gaze (e.g., head pose) of the wearer. 
Based on the analysis, the processing device may determine the facing direction of the wearable extended reality appliance with respect to the keyboard. In some examples, a machine learning model may be trained using training examples to determine facing directions from images and/or videos. An exemplary training example may include a sample image and/or a sample video and an associated label indicating a facing direction corresponding to the sample image and/or the sample video. The trained machine learning model may be used to analyze the image data and determine the indicator of the facing direction. In some examples, at least part of the image data may be analyzed to calculate a convolution of the at least part of the image data and thereby obtain a result value of the calculated convolution. Further, in response to the result value of the calculated convolution being a first value, one indicator of the facing direction may be determined, and in response to the result value of the calculated convolution being a second value, a different indicator of the facing direction may be determined. By way of a non-limiting example, reference is now made toFIG.21which is substantially similar toFIGS.19and20with the notable difference that wearer1900is facing away from keyboard device1904. Image sensor472(FIG.4) configured with smart glasses1902may acquire an image of the field of view of wearer1900. Processing device460may analyze the image data. When keyboard device1904is absent from or in a peripheral region of the image data, processing device460may determine that the facing direction of smart glasses1902is away from keyboard device1904. In some embodiments, the sensor data reflective of the relative orientation is received from the keyboard device while the keyboard device is located on a surface. The term “sensor data reflective of the relative orientation is received from the keyboard device” may be understood as the keyboard device reflecting and/or emitting a signal from which the relative orientation may be determined. For example, light reflecting off the keyboard device may be captured by a camera and converted to image pixels (e.g., sensor data). The term “located” may refer to a station, placement, or position of an object. The term “surface” may include an upper layer of an object, such as a flat or top planar layer of a supporting plank, or board; or a contoured outer surface. For example, a surface may be a topmost layer of a table, and may be made of a hard, smooth material (e.g., wood, plastic, stone, metal, ceramic) capable of supporting other objects in a steady-state (e.g., stable) manner. For example, the keyboard device may be stationed or placed (e.g., located) on a plank or board forming an upper layer (e.g., surface) of a desk. A camera (e.g., configured with the wearable extended reality appliance and/or a work station including the surface and keyboard device) may sense light waves (e.g., ambient light) reflecting off the keyboard device during the time period (e.g., while) the keyboard device is positioned on the desk. The camera may convert the sensed light waves to pixels or image data (e.g., sensor data). A processing device may analyze the image data and detect the keyboard device and the alignment and size of the keyboard device relative to other objects detected in the image data. Based on the alignment and size, the processing device may determine the relative orientation of the keyboard device and the wearable extended reality appliance. 
By way of a non-limiting example, turning toFIG.19, image sensor472(FIG.4) configured with smart glasses1902may acquire an image of keyboard device1904resting on table surface1912. Processing device460may analyze the image to determine the distance between keyboard device1904and smart glasses1902(e.g., based on known absolute size and dimensions of keyboard device1904), for example using triangulation. Some embodiments include, based on the relative orientation, selecting from a plurality of operation modes a specific operation mode for the wearable extended reality appliance. The term “based on” may refer to being established by or founded upon, or otherwise derived from. The term “selecting” may refer to choosing, electing, or discriminately picking, for example one from multiple possible choices. For example, the selection may be performed by a processing device integrated with, local to, and/or remote from the wearable extended reality appliance or any combination thereof (such as one or more of processing devices360ofFIG.3,460ofFIG.4,560ofFIG.5). For example, a selection may be implemented by querying a database storing multiple possible choices using one or more criteria as filters, rules, and/or conditions for the search query. The database may be local or remote (e.g., with respect to a processing device implementing the search). Additionally, or alternatively, the selection may include performing one or more of logical, inference, interpolation, extrapolation, correlation, clustering, convolution, and machine learning operations, e.g., based on one or more criteria. The criteria may be, for example, user defined, hardware defined, software defined, or any combination thereof. The criteria may relate, for example, to distance, orientation, alignment, communication and/or processor latency and/or bandwidth (e.g., for either one or both of the wearable extended reality appliance and keyboard device), use context, the type of application for which the operation mode is being applied (e.g., work or personal use, high or low priority), the location of the wearer (e.g., private or public location, work or home, indoor or outdoor), the type of keyboard device (e.g., virtual, physical, or projected), user defined preferences, system defined preferences, or any other criterion relating to the operation of the wearable extended reality appliance and/or the keyboard device. The term “operation modes” may refer to configurations or arrangements (e.g., including one or more parameter settings, default or custom settings, preferences) for performing or implementing one or more actions, functions, or procedures. For example, the operation modes may be based on one or more default settings (e.g., hardware and/or software), and/or user-defined settings and preferences. In some embodiments, the operation modes may be based on use context, use type, preferences, or user needs. Examples of user needs may include visibility and/or attention needs (e.g., based on user feedback and/or machine learning of the behavior and/or preferences of the wearer), the presence of noise and/or objects in the vicinity of the wearer, and any other criterion affecting the user experience of the wearer. In some embodiments, the operation modes may be based on an efficiency goal, a power consumption goal, an emissions goal, or environmental conditions (e.g., ambient light, dust level, wind, temperature, pressure, humidity). 
In some embodiments, the operation modes may correspond to device requirements of the wearable extended reality appliance and/or keyboard device, such as processing, memory, and internal communication (e.g., bus) capacity, availability and/or limitations. In some embodiments, the operation modes may be based on communications requirements of the communications network linking the wearable extended reality appliance with the keyboard device (e.g., communications bandwidth capacity, availability, or latency). For example, different operation modes may be defined for indoor versus outdoor use of a wearable extended reality appliance. As another example, different operation modes may be defined based on the time of day or day of week (e.g., holiday or work day). As another example, different operation modes may be defined for relatively mobile uses (e.g., regularly moving away from a work station) versus relatively stationary uses of a wearable extended reality appliance (e.g., rarely moving away from a work station). As another example, different operation modes may be defined for when the wearer is in proximity to a work station or remote from the work station (e.g., affecting the ability to communicate with another device tethered to the work station). The term “specific operation mode” may refer to a distinct, special, or precise configuration or arrangement for performing one or more action, functions, or procedures. For example, from multiple different operation modes defined for the wearable extended reality appliance, a single (e.g., specific) operation mode may be chosen (e.g., selected) based on one or more criteria, such as any one or more of the criteria described earlier. The operation modes may be stored in memory, such as a memory device integrated with the wearable extended reality appliance or otherwise accessible by the wearable extended reality appliance. For example, the operation modes may be stored in a memory device, such as one or more of a memory device configured with the wearable extended reality appliance (e.g., memory device411ofFIG.4), in a memory device of a remote processing unit (e.g., memory device511ofFIG.5), in a memory device of a mobile device (e.g., mobile device206ofFIG.2) or any other memory device. A processing device (e.g., one or more of processing device460, processing device560, a processing device configured with mobile device206ofFIG.2, or any other processing device) may access one or more of the operation modes by querying the memory device based on one or more rules. For example, one rule for querying the memory device for an operation mode may be related to the relative orientation between the wearable extended reality appliance and the keyboard device. Thus, the relative orientation of the wearable extended reality appliance to the keyboard device may be used to choose a specific operation mode from multiple candidate operation modes for wearable extended reality appliance. For example, one operation mode may be suitable for Bluetooth or Wi-Fi communication. Thus, when the wearable extended reality appliance is sufficiently close to the keyboard device to establish a Bluetooth or Wi-Fi communications channel, the Bluetooth or Wi-Fi operation mode may be selected, respectively. 
As another example, an operation mode that tethers the display to the keyboard device may be suitable for when the wearable extended reality appliance is facing towards the keyboard device and a different operation mode that tethers the display to the directional gaze of the wearer may be suitable for when the wearer is facing away from the keyboard device. As another example, an operation mode presenting content audibly may be suitable for when the wearable extended reality appliance is moving quickly relative to the keyboard device (e.g., in the context of an exercise application) and a different operation mode presenting content visually may be suitable for when the wearer is relatively stationary relative to the keyboard device (e.g., in the context of editing a document or viewing content). By way of a non-limiting example, turning toFIG.19, image sensor472(FIG.4) configured with smart glasses1902may capture an image of keyboard device1904and send the image pixels to processing device460. Concurrently, motion sensor473configured with smart glasses1902may detect that smart glasses1902are relatively stationary (e.g., over a time period, such as 5 seconds) and send the motion sensor data to processing device460. Processing device460may analyze the image pixels and the motion data received from image sensor472and motion sensor473, respectively, and determine the relatively close and stable (e.g., steady state) position and orientation of smart glasses1902with respect to keyboard device1904. Processing device460may determine that wearer1900is in a sitting position at work station1906and facing keyboard device1904. In response, processing device460may query memory device411for an operation mode suited to sitting at work station1906from multiple operation modes for smart glasses1902stored in memory device411, thereby selecting a specific mode from the plurality of available operation modes for smart glasses1902. The selected operation mode may cause a forecast weather app1908to be displayed inside virtual screen1910tethered to work station1906. The forecast may include a high level of detail, such as the forecast for the next twelve hours. Turning toFIG.20, based on data received from image sensor472and motion sensor473, processing device460may determine that wearer1900is relatively distant from keyboard device1904and the position and orientation of smart glasses1902with respect to keyboard device1904is unstable (e.g., not in steady state). For example, a smart watch1914tracking steps walked by wearer1900may notify processing device460that wearer1900is in motion (e.g., walking). In response, processing device460may query memory device411for an operation mode suited to walking while away from work station1906from the multiple operation modes stored in memory device411, thereby selecting a specific mode from the plurality of available operation modes. The selected operation mode may cause forecast weather app1908to be displayed at a predefined distance from smart glasses1902(e.g., tethered to wearer1900while moving). The forecast may include a lower level of detail, such as the forecast only for the next two hours. 
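By way of a non-limiting illustration only, the following Python sketch combines mode selection with mode-consistent presentation in the manner of the FIG.19 and FIG.20 example above; the thresholds, mode names, and detail levels are illustrative assumptions.

    def select_mode(distance_m: float, facing_keyboard: bool, stationary: bool, near_m: float = 1.5) -> str:
        # A close, facing, stationary wearer maps to the close-upright mode; otherwise the remote mode.
        if distance_m <= near_m and facing_keyboard and stationary:
            return "close_upright_mode"
        return "remote_mode"

    def present_forecast(mode: str) -> dict:
        if mode == "close_upright_mode":
            return {"tethered_to": "virtual screen at the work station", "hours_of_forecast": 12}
        return {"tethered_to": "wearer line of sight", "hours_of_forecast": 2}

    # Seated, facing, and stationary (as in FIG.19) versus walking away from the work station (as in FIG.20).
    print(present_forecast(select_mode(0.6, True, True)))
    print(present_forecast(select_mode(6.0, False, False)))
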
Some embodiments include identifying a user command based on at least one signal detected by the wearable extended reality appliance. The term “identifying” may refer to recognizing, perceiving, or otherwise determining or establishing an association with a known entity. For example, identifying may be a result of performing one or more logical and/or arithmetic operations associated with a comparison (e.g., via query), inference, interpolation, extrapolation, correlation, convolution, machine learning function, and any other operation facilitating identification. The term “user command” may refer to an order, direction or instruction issued by an individual interfacing with a computing device. Examples of user commands may include vocalized instructions (e.g., detected by a microphone), head, hand, foot, and/or leg motions or gestures (e.g., detected by a camera and/or an IMU or GPS device), eye motions (e.g., detected via an eye tracker), data entered, or selections made via a manual input device (e.g., buttons, switches, keyboard, electronic pointing device, touch-sensitive screen), data entered or selections made via a foot-operated device (e.g., pedal, footswitch), and/or any other technique for interfacing between a user and a computing device. The term “identifying a user command” may involve recognizing the user command, e.g., based on a comparison, correlation, clustering algorithm, or any other identification technique. For example, a vocalized user command may be identified by invoking voice recognition software. As another example, a head, eye, hand, foot, and/or leg gesture or motion command may be identified via gesture and/or motion recognition software. As another example, a command entered as data via an input interface of a computing device may be identified by an event listener configured with an operating system running on the computing device. Examples of user commands may include a request to invoke, close, or change the execution of an application on a device, turn on/off a device or device setting and/or change the operation of a device, send and/or receive a notification or document, retrieve, upload, store, or delete a notification or document, or perform any other user-invoked activity. The term “signal” may refer to an information transmission. A signal, for example, may involve a function that can vary over space and time to convey information observed about a phenomenon via a physical medium. For example, a signal may be implemented in any range of the electromagnetic spectrum (e.g., radio, IR, optic), as an acoustic signal (e.g., audio, sonar, ultrasound), a mechanical signal (e.g., motion or pressure on a button or keyboard), as an electric or magnetic signal, or via any other physical medium. For example, the phenomenon may relate to a state, presence or absence of an object, an occurrence or development of an event or action, or lack thereof. For example, light waves (e.g., signals) reflecting off a body in motion and/or performing a gesture may be detected by a camera and stored as image data. As another example, motion and/or a gesture may be detected by a motion detector (e.g., IMU, GPS signals). The term “detected” may refer to sensing (e.g., discovering or discerning) information embedded or encoded in a signal via a sensor (e.g., detector) corresponding to the signal type. 
Examples of detectors corresponding to signal types may include an antenna detecting electro-magnetic signals, a camera detecting optical and/or infrared signals, a microphone detecting acoustic signals, electrical and/or magnetic sensors detecting electric and/or magnetic fields (e.g., in analog electronic circuitry), semiconductor diodes or switches detecting an electric current or voltage (e.g., consequent to the performing of one or more logical operations, such as based on a user input), and any other type of detector capable of sensing a signal. For example, the wearable extended reality appliance may include a detector integrated thereon. For example, the detector may be an audio sensor (e.g., audio sensor471ofFIG.4), an image sensor (e.g., image sensor472), a motion sensor (e.g., motion sensor473), an environmental sensor (e.g., environmental sensor474), and additional sensors (e.g., sensors472). The detector may sense an incoming analog signal (e.g., sound, light, motion, temperature) and convert the analog signal to an analog electronic signal via a transducer. The analog electronic signal may be processed (e.g., using a filter, transform, convolution, compression) and converted to a digital format (e.g., encoded as bits) via an analog-to-digital converter. The digitized signal may be stored in memory (e.g., memory device411). A processing device (e.g., processing device460) may retrieve the digitized signal and apply one or more digital signal processing techniques, such as a digital filter, a smoothing algorithm, a transformation, a convolution, a correlation, a clustering algorithm, or any other digital signal processing technique. The processing device may compare the processed signal to a database of user commands (e.g., stored in memory device411). Based on the comparison, the processing device may identify the user command, for example if the digitized signal matches a predefined user command within a threshold, such as a cluster associated with the user command. Optionally, the processing device may apply a machine learning algorithm to identify the digitized signal as a user command. In some embodiments, the user command includes a voice command and the at least one signal is received from a microphone included in the wearable extended reality appliance. The term “voice command” may refer to a user command implemented by speaking words or uttering predefined sounds associated with a user command. The term “microphone” may refer to an audio sensor or voice input device as described earlier. The microphone may generate an audio signal that may be digitized and stored in a memory device. A processing device may apply a voice recognition algorithm to the digitized audio signal to identify the user command. By way of a non-limiting example, turning toFIG.19, audio sensor471(FIG.4) configured with smart glasses1902may detect a sound produced by words uttered by wearer1900. Audio sensor471may generate an audio signal corresponding to the detected sound. The audio signal may be sampled (e.g., digitized) and stored in memory device411(e.g., as bits). Processing device460may apply a speech recognition algorithm to the digitized sound to identify a sequence of words associated with a user command. 
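By way of a non-limiting illustration only, the following Python sketch identifies a user command by comparing a digitized signal against stored command templates and accepting the closest match within a threshold; the templates, feature values, and Euclidean distance metric are simplified assumptions.

    import math
    from typing import Optional, Sequence

    COMMAND_TEMPLATES = {
        "open_weather_forecast": [0.9, 0.1, 0.8, 0.2],
        "close_application": [0.1, 0.9, 0.2, 0.8],
    }

    def identify_command(digitized: Sequence[float], threshold: float = 0.5) -> Optional[str]:
        best_name, best_dist = None, float("inf")
        for name, template in COMMAND_TEMPLATES.items():
            dist = math.dist(digitized, template)   # distance between the signal and the stored template
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name if best_dist <= threshold else None   # None: no user command identified

    print(identify_command([0.85, 0.15, 0.75, 0.25]))  # matches "open_weather_forecast"
    print(identify_command([0.5, 0.5, 0.5, 0.5]))      # outside the threshold, no command identified
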
In some embodiments, the user command includes a gesture and the at least one signal is received from an image sensor included in the wearable extended reality appliance. The term “gesture” may refer to a movement or sequence of movements of part of the body, such as a hand, arm, head, foot, or leg to express an idea or meaning. A gesture may be a form of non-verbal or non-vocal communication in which visible bodily actions or movements communicate particular messages. A gesture may be used to communicate in place of, or in conjunction with, vocal communication. For example, raising a hand with the palm forward may be a hand gesture indicating to stop or halt an activity, and raising a thumb with the fist closed may be a hand gesture indicating approval. A camera (e.g., image sensor) associated with the wearable extended reality appliance may capture one or more images of a gesture performed by the wearer (e.g., using the hand, arm, head, foot, and/or leg). The camera may store the image pixels in a memory device. A processing device may analyze the image pixels using a gesture recognition algorithm to identify the gesture as the user command. By way of a non-limiting example, turning toFIG.22, image sensor472(FIG.4) configured with smart glasses1902may detect light reflected off a hand of wearer1900and convert the reflected light to image pixels. Processing device460may apply a gesture recognition algorithm to the image pixels to identify a hand gesture (e.g., pointing of the index finger) associated with a user command. For example, the user command may be associated with invoking forecast weather app1908. As another example, a motion detector (e.g., IMU) configured with the wearable extended reality appliance may detect a head gesture performed by the wearer and convert the head gesture to an electronic signal via a transducer. The electronic motion signal may be digitized (e.g., sampled) and stored in a memory device. A processing device may apply a head gesture recognition algorithm to the digitized motion signal to identify the user command. In some embodiments, identifying the user command may account for context and/or circumstances associated and/or unassociated with a user issuing the user command. For example, identifying the user command may take into account the time of day, and/or physical environment, location (e.g., public or private), a history of the user issuing the user command (e.g., habits and behavior based on machine learning), a context of the user command (e.g., based on actions performed immediately prior to the user command), and any other criterion relevant to identifying a user command. Additionally, or alternatively, a user command may be identified based on a signal detected by the wearable extended reality appliance by receiving the signal from an additional device (e.g., smart watch, mobile phone) in communication (e.g., wireless communication) with the wearable extended reality appliance. For example, the wearer of the wearable extended reality appliance may enter text into an application of a mobile phone and the mobile phone may send a notification to a processing device configured with the wearable extended reality appliance (e.g., processing device460ofFIG.4). As another example, the wearer of the wearable extended reality appliance may push a button on a smart watch and the smart watch may send a notification to a processing device configured with the wearable extended reality appliance (e.g., processing device460ofFIG.4). By way of a non-limiting example, turning toFIG.20, a microphone (e.g., audio sensor471ofFIG.4) configured with smart glasses1902may sense a sound emitted by wearer1900. 
The microphone may convert the sound to an electronic signal, which may be digitized (e.g., via sampling) and stored in memory device411. Processing device460may retrieve the digitized sound from memory device411and perform a voice recognition algorithm to identify the words “Open Weather Forecast”, corresponding to a user command for invoking forecast weather app1908, thereby detecting the user command. As another example, wearer1900may don a smart watch1914communicatively coupled to smart glasses1902. Wearer1900may press a button of smart watch1914associated with invoking forecast weather app1908. Smart watch1914may transmit a notification to smart glasses1902indicating the button press (e.g., via a Bluetooth link). Processing device460may receive the notification (e.g., detect the signal) and determine an association with a user command to invoke forecast weather app1908, thereby detecting the user command. Some embodiments include executing an action responding to the identified user command in a manner consistent with the selected operation mode. The term “executing” may refer to carrying out or implementing one or more operative steps. For example, a processing device may execute program code instructions to achieve a targeted (e.g., deterministic) outcome or goal, e.g., in response to receiving one or more inputs. The term “action” may refer to the performance of an activity or task. For example, performing an action may include executing at least one program code instruction (e.g., as described earlier) to implement a function or procedure. The action may be user-defined, device or system-defined (e.g., software and/or hardware), or any combination thereof. The action may correspond to a user experience (e.g., preferences, such as based on context, location, environmental conditions, use type, user type), user requirements (attention or visibility limitations, urgency or priority of the purpose behind the action), device requirements (e.g., computation capacity/limitations/latency, resolution capacity/limitations, display size capacity/limitations, memory and/or bus capacity/limitations), communication network requirements (e.g., bandwidth, latency), and any other criterion for determining the execution of an action, e.g., by a processing device. The action may be executed by a processing device configured with the wearable extended reality appliance, a different local processing device (e.g., configured with a device in proximity to the wearable extended reality appliance), and/or by a remote processing device (e.g., configured with a cloud server), or any combination thereof. Thus, “executing an action responding to the identified command” may include performing or implementing one or more operations in reaction to an identified user command, e.g., to address the user command or in association with or correspondence to the user command. For example, upon receiving a request from a user for data, a computing device may query a database stored on a memory device to locate and retrieve the requested data, thereby executing an action responding to the identified user command. 
As another example, upon receiving a voice command from a user to send a message to a second user, a computing device may apply a voice recognition algorithm to identify the user command, query a table storing a device ID associated with the second user, establish a communications link with the computing device of the second user (e.g., based on the device ID), and transmit the message over the communications link, thereby executing an action responding to the identified user command. The term “in a manner consistent with” may refer to complying with one or more predefined rules or conditions, meeting one or more requirements, or keeping within defined limitations, settings, or parameters, e.g., defined for the selected operation mode. For example, compliance with a selected operation mode may relate to specifying display parameters, such as the resolution, color, opacity, size and amount of rendered content. As another example, compliance with a selected operation mode may relate to available memory, processor and/or communications bandwidth, latency requirements (e.g., to display less or lower resolution content under lower bandwidth capacity and more or higher resolution content under higher bandwidth capacity). As another example, compliance with a selected operation mode may relate to user defined preferences (e.g., to display less detail while the user is walking and more detail when the user is stationary). By way of another example, compliance with a selected operation mode may relate to environmental conditions (e.g., to replace a visual display with an audible representation under bright sunlight or very windy conditions, or conversely to replace an audible rendition with a visual display under noisy conditions). As another example, compliance with a selected operation mode may relate to a location of the wearer of the wearable extended reality appliance (e.g., content may be rendered differently at work versus at home). As another example, compliance with a selected operation mode may relate to the current activity (e.g., sitting, walking, driving, lying down) of the wearer, e.g., the display of content may be limited while the wearer is driving. Thus, an action performed in response to a user command may be performed in compliance with the operation mode of the wearable extended reality appliance (e.g., based on the relative orientation to the keyboard device). For example, a sensor may detect a wearable extended reality appliance while in motion and located beyond a threshold distance from a keyboard device. Thus, the orientation between the wearable extended reality appliance and the keyboard device may change dynamically due to the wearer walking. This may correspond to a walking mode for the extended reality appliance when the wearer is away from the keyboard device. The walking mode may include settings to present content in a manner to avoid distracting the wearer while walking (e.g., to avoid having to read text). While walking, the wearer of the wearable extended reality appliance may vocalize a command to receive a message. In response, the message may be presented audibly via a speaker using a speech synthesizer, e.g., consistent with the walking mode. 
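By way of a non-limiting illustration only, the following Python sketch executes one action (delivering a requested message) in a manner consistent with the walking, resting, and work modes described in this section; the mode names and rendering targets are illustrative placeholders.

    def respond_to_command(command: str, message: str, mode: str) -> str:
        if command != "read_message":
            raise ValueError(f"unsupported command: {command}")
        if mode == "walking_mode":
            return f"speaker: synthesized speech of '{message}'"           # audible, non-distracting
        if mode == "resting_mode":
            return f"display: '{message}' locked to the wearer's line of sight"
        if mode == "work_mode":
            return f"display: '{message}' on the virtual screen above the keyboard"
        raise ValueError(f"unknown operation mode: {mode}")

    for mode in ("walking_mode", "resting_mode", "work_mode"):
        print(respond_to_command("read_message", "Meeting moved to 3 PM", mode))
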
The resting mode may include settings to present content visually via the wearable tethered to the line-of-sight of the wearer, e.g., to allow the wearer to view content while facing away from the keyboard device. In response to the user command, the message may be presented visually locked to the line-of-sight of the wearer, e.g., consistent with the resting mode allowing the wearer to read the message while facing away from the keyboard device. As another example, a sensor may detect a wearable extended reality appliance in a third orientation (e.g., sitting upright in a chair adjacent to the keyboard device). The third orientation may correspond to a work mode for the wearable extended reality appliance. The work mode may include settings to present content visually via the wearable tethered to the work station, e.g., the keyboard device, allowing the wearer to view content displayed in a virtual screen above the keyboard device. In response to the user command, the message may be presented visually in the virtual screen, tethered to the work station and keyboard device. By way of a non-limiting example, turning toFIG.19, a camera (e.g., image sensor472ofFIG.4) may capture an image of keyboard device1904in proximity to wearer1900and at an angle indicating that wearer1900is seated upright facing keyboard device1904. Processing device460may analyze the image and determine a close-upright orientation between smart glasses1902and keyboard device1904. Based on the close-upright orientation, a close-upright mode may be selected for smart glasses1902. For example, the close-upright mode may be associated with displaying more content than when positioned far from keyboard device1904and displaying content inside virtual screen1910locked (e.g., tethered) to work station1906. Wearer1900may vocalize the user command “Open weather forecast”, corresponding to invoking a weather application and displaying the weather forecast. Processing device460may display forecast weather app1908to include the forecast for the next twelve hours. Forecast weather app1908may be displayed inside virtual screen1910, locked to work station1906and keyboard device1904, e.g., in a manner consistent with the close-upright mode. By way of another non-limiting example, turning toFIG.20, a camera (e.g., image sensor472ofFIG.4) may capture an image of keyboard device1904beyond a predefined threshold from wearer1900. Processing device460may analyze the image and determine a remote orientation between smart glasses1902and keyboard device1904. Based on the remote orientation, a remote operation mode may be selected for smart glasses1902. For example, the remote operation mode may be associated with displaying less content than when positioned in proximity to keyboard device1904, and displaying content locked (e.g., tethered) to the line-of-sight of wearer1900. Wearer1900may vocalize the words “Open weather forecast”, corresponding to a user command to invoke a weather application and display the weather forecast. Processing device460may display the weather forecast for the next two hours and locked to the line-of-sight of wearer1900, e.g., in a manner consistent with the remote operation mode. 
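By way of a non-limiting illustration, the sketch below shows one possible way to select an operation mode from the relative orientation described above; the dataclass fields, the proximity threshold, and the mode names are assumptions introduced for the example.

```python
from dataclasses import dataclass

@dataclass
class RelativeOrientation:
    distance_m: float        # distance between the appliance and the keyboard device
    facing_keyboard: bool    # True if the appliance faces the keyboard device
    moving: bool             # True if the appliance is in motion

PROXIMITY_THRESHOLD_M = 1.0  # assumed threshold for "in proximity"

def select_operation_mode(orientation: RelativeOrientation) -> str:
    if orientation.moving and orientation.distance_m > PROXIMITY_THRESHOLD_M:
        return "walking_mode"        # e.g., favor audible output, less content
    if orientation.distance_m <= PROXIMITY_THRESHOLD_M and orientation.facing_keyboard:
        return "close_upright_mode"  # e.g., rich content in a tethered virtual screen
    if orientation.distance_m <= PROXIMITY_THRESHOLD_M:
        return "resting_mode"        # e.g., content locked to the wearer's line of sight
    return "remote_mode"             # e.g., reduced content tethered to the appliance

# Example: a seated wearer facing the keyboard selects the close-upright mode.
print(select_operation_mode(RelativeOrientation(0.5, True, False)))
```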
Some embodiments include accessing a group of rules associating actions responding to user commands with relative orientations between the keyboard device and the wearable extended reality appliance, determining that the relative orientation corresponds to a specific rule of the group of rules, and implementing the specific rule to execute an associated action responding to the identified user command. The term “accessing” may refer to obtaining. e.g., at least for the purpose of reading, or acquiring relevant information. For example, data may be accessed by a processing device querying a data store, such as a database. The term “group of rules” may refer to a set of guidelines, regulations, or directives. A group of rules may include general rules, or may include rules defined for a specific device, user, system, time, and/or context. Thus, a group of rules may be stored in a database on a memory device (e.g., local and/or remote) and accessed via query. The memory device may be accessible only for privileged users (e.g., based on a device and/or user ID) or generally accessible. The term “associating” may refer to linking, tying, relating, or affiliating. Thus, the group of rules may link or relate one or more actions to one or more relative orientations between the keyboard device and the wearable extended reality appliance. The actions (e.g., linked to relative orientations via the rules) may be performed in response to user commands. For example, an action to invoke an application may be performed in response to a vocalized user command “invoke app”. Moreover, one or more rules may define how to perform the action, in other words, how to invoke the application. In particular, a rule may define how to invoke the application based on the relative orientation between the wearable extended reality appliance and the keyboard. For example, when the wearable extended reality appliance is within Bluetooth communication range of the keyboard (e.g., a first orientation), a rule may cause a first version of the application to be invoked. However, when the wearable is outside Bluetooth communication range from the keyboard (e.g., a second orientation), a rule (e.g., the same or different rule) may cause a second version of the application to be invoked. For example, the first version of the application may include more content and/or may be displayed in a larger format than the second version. As another example, an action to display content may be performed in response to a hand gesture (e.g., user command) performed by a wearer of a wearable extended reality appliance. However, a rule may define how to display the content, depending on the orientation between the wearable extended reality appliance and the keyboard device. For example, when the wearable extended reality appliance is facing towards the keyboard (e.g., a third orientation), a rule may cause content to be displayed according to a first color scheme, and when the wearable extended reality appliance is facing away from the keyboard (e.g., a fourth orientation), the same or different rule may cause content to be displayed according to a second color scheme. For example, the first color scheme may be suitable for displaying content against a blank wall positioned behind the keyboard device, and the second color scheme may be suitable for displaying content suspended (e.g., floating) in a room together with other distracting objects. 
The term “determining” may refer to performing a computation, or calculation to arrive at a conclusive or decisive outcome. The term “specific rule of the group of rules” may refer to a distinct, special, or precise rule from multiple different rules. Thus, a particular (e.g., specific) rule may be selected from a group of rules based on a computation resulting in a decisive conclusion. For example, when the relative orientation changes dynamically the processing device may calculate an associated velocity and query for a specific rule corresponding to the velocity. As another example, when the relative orientation indicates a stationary position facing a blank wall, the processing device may query for a specific rule corresponding to viewing content on a blank wall while in a stationary position. The term “corresponds” may refer to correlated with, or in conformance with. Thus, a relative orientation (e.g., between a user and a device, or between two devices) may be used to decide (e.g., determine) which rule to apply when performing an action. For example, a first rule causing content to be presented using a small format may correspond to a first relative orientation (e.g., when the relative distance between a wearable extended reality appliance and a keyboard device is large, e.g., above a threshold). A second rule causing content to be presented using a large format may correspond to a second relative orientation (e.g., when the relative distance between the wearable appliance and the keyboard device is small, e.g., below a threshold). Thus, in response to a user command to display content, if the wearable extended reality apparatus is beyond the threshold from the keyboard device (e.g., the first relative orientation), the first rule may be applied to display content (e.g., using the small format). When the wearable extended reality apparatus is within the threshold of the keyboard (e.g., the second relative orientation), the second rule may be applied to display content (e.g., using the larger format). The term “implementing” may refer to materializing, fulfilling, carrying out, or applying. For example, implementing a rule may cause the rule to be applied when performing an action. The term “associated action responding to a user command” may refer to an action (e.g., performed in response to a user command, as defined earlier) that is linked, or corresponds to a relative orientation between the wearable extended reality appliance and the keyboard device by the rule. Thus, the rule may create the association (e.g., link) between the action performed in response to a user command and the relative orientation between the wearable extended reality appliance and the keyboard device. For example, changing the relative orientation of the wearable extended reality appliance may affect the ambient lighting conditions, and/or introduce virtual and/or real obstructions. The changed orientation may thus affect how displayed content appears to the wearer of the wearable extended reality appliance (e.g., the result of executing an action in response to a user command). The specific rule may address the effect of the changed orientation by modifying one or more display settings (e.g., to accommodate the different ambient lighting, or obstruction). For example, the specific rule may rearrange content and adjust the brightness to produce a more satisfactory user experience. 
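By way of a non-limiting illustration, the following sketch shows one way a group of rules might associate relative orientations with display settings, and how a specific rule could be implemented when executing an action responding to a user command; the rule keys, display settings, and the default fall-back are assumptions.

```python
from typing import Dict, Tuple

# (within_threshold, facing_keyboard, moving) -> display settings for the action
RULES: Dict[Tuple[bool, bool, bool], Dict[str, str]] = {
    (True, True, False): {"format": "large", "detail": "rich", "anchor": "work_station"},
    (False, True, True): {"format": "small", "detail": "sparse", "anchor": "appliance"},
    (True, False, False): {"format": "medium", "detail": "rich", "anchor": "line_of_sight"},
}

def determine_specific_rule(within_threshold: bool,
                            facing_keyboard: bool,
                            moving: bool) -> Dict[str, str]:
    key = (within_threshold, facing_keyboard, moving)
    # Fall back to a conservative default if no rule matches the orientation.
    return RULES.get(key, {"format": "small", "detail": "sparse", "anchor": "appliance"})

def execute_action(command: str, rule: Dict[str, str]) -> None:
    # Implement the specific rule when executing the action for the command.
    print(f"{command}: render with {rule['format']} format, "
          f"{rule['detail']} detail, anchored to {rule['anchor']}")

execute_action("open weather forecast", determine_specific_rule(True, True, False))
```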
For example, a user command “display message” may cause a wearable extended reality appliance to present a message including an image (e.g., perform an action responding to an identified user command). However, the relative orientation of the wearable to the keyboard device may determine how the message and image will be presented, based on one or more predefined rules. For example, when the wearable extended reality appliance is in motion and moving away from the keyboard device (e.g., the wearer is walking away from the keyboard), a first rule may be implemented to present the message and accompanying image audibly, e.g., to avoid distracting the wearer with visual content while walking and to conserve communications bandwidth as the wearer exits Bluetooth range of the work station. When the wearable extended reality appliance is in motion and moving towards the keyboard device (e.g., the wearer is walking towards the keyboard), a second rule may be implemented to present the message audibly but present the accompanying image visually, e.g., to avoid distracting the wearer with text while walking, but allow the wearer to see the image since there is sufficient bandwidth. When the wearable extended reality appliance is stationary and facing the keyboard device (e.g., the wearer is sitting at the work station facing the keyboard), a third rule may be implemented to present the message and accompanying image visually, e.g., to provide a full visual experience to the wearer and avoid distractions while working. By way of a non-limiting example, turning toFIGS.19and20, in response to the voice command “Open weather forecast” (e.g., a user command), smart glasses1902may be configured to display the forecast (e.g., perform an action responding to a user command). However, based on the relative orientation between smart glasses1902and keyboard device1904, one or more rules may be defined governing how the action is to be performed. Referring toFIG.19, wearer1900is shown sitting at work station1906. Processing device460(FIG.4) may determine the relative orientation between smart glasses1902and keyboard device1904(e.g., in proximity, facing, and stationary). Processing device460may query a rules database for a rule to apply when smart glasses1902are in proximity to, facing, and stationary with respect to keyboard device1904(e.g., determine that the relative orientation corresponds to a specific rule). The selected rule may define parameters for rendering content suitable to situations where a wearer of a pair of smart glasses is sitting at a work station, facing a keyboard. For example, the rule may cause content to be displayed in a virtual screen tethered to the work station, using a large format and rich with details. Accordingly, processing device460may implement the rule to present forecast weather app1908, using a large display format and including the weather for the next twelve hours, inside virtual screen1910(e.g., implement the specific rule to execute an associated action responding to the identified user command). Referring toFIG.20, wearer1900is shown walking towards work station1906. Processing device460may determine the relative orientation between smart glasses1902and keyboard device1904(e.g., distant, facing, and moving towards). Processing device460may query the rules database for a rule to apply when smart glasses1902are distant from, facing, and moving towards keyboard device1904. 
The rule may cause content to be displayed in a virtual screen tethered to smart glasses1902, using a small format with few details. Accordingly, processing device460may implement the rule to present forecast weather app1908, using a small display format and including the weather for only the next two hours, locked to the directional gaze of smart glasses1902. In some embodiments, the relative orientation includes both distance information and facing information, and wherein the operations further include, for distances within a threshold: when the wearable extended reality appliance is facing the keyboard device, selecting a first operation mode, and when the wearable extended reality appliance is facing away from the keyboard device, selecting a second operation mode. The term “distance information” may include one or more measurements, assessments, or evaluations indicating an amount of space (e.g., distance as described earlier) separating two objects, such as between a wearable extended reality appliance and a keyboard device. The term “facing information” may include one or more measurements, assessments, or evaluations indicating a facing direction as described earlier. For example, the facing information may include a relative angle there between, a vertical (e.g., height) disparity, a planar (e.g., horizontal) disparity, an orientation relative to another object in the vicinity (e.g., a wall, floor, bookcase), and/or with respect to the Earth (e.g., based on a compass). For example, a processing device may apply a facial recognition algorithm to an image acquired from the perspective of the keyboard device to determine if the wearer is facing the keyboard device. Additionally, or alternatively, a processing device may apply an object recognition algorithm to an image acquired from the perspective of the wearable extended reality appliance to detect the presence of the keyboard device in the field of view of the wearer, to determine the facing information. The term “distances within a threshold” may refer to a position inside or within a zone, for example demarcated by a boundary, limit, or border (e.g., threshold) marking the zone. For example, a threshold may be defined relative to another object (e.g., fixed or mobile), based on the ability of a device to interface with another device, based on a sensory capability inside the zone (e.g., within the threshold), and/or lack of sensory capability outside the zone (e.g., beyond the threshold). For example, when the distance between a wearer of a wearable appliance and a virtual screen is within reading range, the distance may be within the threshold. As another example, when a device is within Bluetooth range of another device, the distance between the two devices may be within the threshold. As another example, a threshold may demarcate a zone based on a focus or attention capability of a user (e.g., based on environmental conditions such as ambient light, ambient noise, or the presence of distracting objects, or people). For example, the distance between a wearer of a wearable appliance and another device may be determined to be within the threshold based on one or more environmental conditions. As another example, a threshold may demarcate a zone based on user preferences or user behavior, e.g., determined based on user input and/or machine learning. For example, when the distance between a wearer of a wearable appliance and a keyboard device is within a manual typing range, the distance may be within the threshold. 
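By way of a non-limiting illustration, and anticipating the facing-direction details discussed below, the following sketch shows the selection just described: for distances within a threshold, a first operation mode is selected when the appliance faces the keyboard device and a second operation mode when it faces away; the threshold value and mode names are assumptions.

```python
THRESHOLD_M = 1.5  # assumed manual-typing / Bluetooth-range style threshold

def select_mode(distance_m: float, facing_keyboard: bool) -> str:
    if distance_m <= THRESHOLD_M:
        return "first_operation_mode" if facing_keyboard else "second_operation_mode"
    return "remote_operation_mode"  # beyond the threshold; handled elsewhere

assert select_mode(0.5, True) == "first_operation_mode"
assert select_mode(0.5, False) == "second_operation_mode"
```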
A processing device (e.g., processing device460ofFIG.4) may determine when a wearable extended reality appliance is within a threshold of a keyboard device based on the distance information received with the relative orientation. The term “facing the keyboard device” may refer to a pose of the wearable extended reality appliance substantially aligned with or pointing in a direction toward the keyboard device, such that a wearer of the wearable extended reality appliance sees the keyboard device in a substantially centered region of his field of view. The term “facing away from the keyboard device” may refer to a pose of the wearable extended reality appliance substantially unaligned with or pointing in a direction away from the keyboard device, such that a wearer of the wearable extended reality appliance does not see the keyboard device or sees the keyboard device in a peripheral region of his field of view. Thus, the relative orientation may include data relating to the distance between the wearable and the keyboard (e.g., distance information), and additionally, data relating to the directional gaze of the wearer of the wearable with respect to the keyboard device (e.g., facing information). Either one or both of the distance information and the facing information may be used to determine the operation mode for the wearable extended reality appliance. For example, when the wearable extended reality appliance is sufficiently close to the keyboard device (e.g., to establish a Bluetooth connection), a Bluetooth operation mode may be applied to the wearable extended reality appliance. For example, the Bluetooth channel may allow displaying content using a high resolution. However, the directional gaze of the wearer (e.g., the facing information) may be used to determine additional settings for the wearable extended reality appliance. For example, when the wearable is facing the keyboard device, an operation mode corresponding to a forward-facing pose may be applied to display content on a virtual screen tethered to the keyboard device, and when the wearable extended reality appliance is facing to the side (e.g., away from the keyboard device), an operation mode corresponding to the side-facing pose may be applied to display content on a virtual screen tethered to the wearable extended reality appliance. By way of a non-limiting example, turning toFIGS.19and21, processing device460(FIG.4) may detect smart glasses1902positioned within 50 cm of keyboard device1904(e.g., distance information). For example, processing device may receive image data from image sensor472and analyze the image data to determine the distance information. Based on the distance information, processing device460may determine that the distance between smart glasses1902and keyboard device falls within a threshold for presenting content visually with a high level of detail. For example, a weather forecast may be displayed visually and include the forecast for the next twelve hours. Additionally, processing device460may determine where to display the weather forecast based on the directional gaze of wearer1900. For example, processing device460may receive facing information from an IMU (e.g., motion sensor473) configured with smart glasses1902to determine the directional gaze. Turning toFIG.19, based on the facing information, processing device460may detect a forward-facing pose for smart glasses1902and select a forward-facing operation mode (e.g., from multiple operation modes stored in memory device411). 
For example, the forward-facing operation mode may cause twelve-hour forecast1908to be displayed on virtual screen1910tethered (e.g., docked) to work station1906. Turning toFIG.21, based on the facing information, processing device460may detect a side-facing pose for smart glasses1902and select a side-facing operation mode for smart glasses1902. For example, the side-facing operation mode may cause twelve-hour forecast1908to be displayed tethered to smart glasses1902. Some embodiments include determining that the user command is a query, and wherein in the first operation mode, the action responding to the query involves providing a first response that includes displaying information on a virtual display screen associated with the keyboard device, and in the second operation mode, the action responding to the query involves providing a second response that excludes from displaying information on the virtual display screen. The term “query” may refer to a search request, question, or inquiry. A query may include conditions that must be fulfilled, such as filters or rules that narrow down the search results. For example, a query may be formulated using a query language (e.g., SQL, OQL, SPARQL, OntoQL, and any other query language), and a processing device may determine the user command is a query by identifying query language terms in the user command. As another example, a query may target a database or knowledge base, and the processing device may determine the user command is a query based on the target. A processing device may receive a user command and determine that the user command is a query. For example, if the user command is text (e.g., entered via the keyboard device), a processing device may parse and tokenize the text to identify the query. As another example, if the user command is a voice command, a processing device may apply a speech recognition package to the voice command to tokenize and identify spoken words as the query. By way of another example, if the user command is a gesture, a processing device may apply a gesture recognition package to identify the query. The term “action responding to the query” may be understood as an action reacting or replying to a user command, as described earlier, where the user command is a query. The term “response” may include information received in reply or reaction to submitting a query. For example, a response to a query for an address for a contact name may be a single address (for a single contact with the name), several addresses (for the contact), or several contacts with the same name (all having one address, or each with one or more addresses). The term “displaying” may refer to selectively activating pixels of an electronic display device or otherwise visually presenting information (e.g., via a viewer of a wearable extended reality appliance). The term “displaying information” may refer to selectively activating pixels or otherwise presenting information to convey facts and/or knowledge. For example, a processing device of the wearable extended reality appliance may display content on a virtual screen by selectively activating certain pixels of the viewer to render virtual content overlaid on the physical environment viewable via transparent portions of the viewer. The term “excludes” may refer to omitting or precluding. 
Thus, if the wearable extended reality appliance is facing the keyboard device (e.g., the first operation mode is selected), a response to the query may display information on a virtual display associated with the keyboard device. If the wearable extended reality appliance is facing away from the keyboard device (e.g., the second operation mode is selected), a response to the query may omit (e.g., exclude) information from being displayed on the virtual display associated with the keyboard device. For example, the information may be presented audibly, in a tactile manner, or on a virtual screen associated with the wearable extended reality appliance. As another example, the query may be a request for directions to a target location. When the wearer is facing the keyboard device, the directions may be displayed on a virtual screen above the keyboard device (e.g., in the line-of-sight of the wearer). When the wearer is facing away from the keyboard device, however, the directions may be recited audibly via a speaker, and/or displayed in a virtual screen tethered to the wearable extended reality display. By way of a non-limiting example, turning toFIGS.19and21, wearer1900submits a query by vocalizing the words “open weather forecast”. Processing device460(FIG.4) may submit a query to a remote weather server (not shown) via network214(FIG.2), for example using a weather API, and retrieve the current weather forecast. Turning toFIG.19, based on the front-facing orientation of smart glasses1902with respect to keyboard device1904, processing device460may apply the first operation mode and display forecast weather app1908in virtual screen1910, tethered to work station1906and keyboard device1904. Turning toFIG.21, based on the side-facing orientation of smart glasses1902, e.g., away from keyboard device1904, processing device460may apply the second operation mode and exclude forecast weather app1908from being displayed in virtual screen1910. Instead, processing device460may display forecast weather app1908tethered to smart glasses1902. In some embodiments, the first response includes visually presenting a most likely answer to the query and at least one alternative answer to the query; and the second response includes audibly presenting the most likely answer to the query without presenting the at least one alternative answer to the query. The term “visually presenting” may include displaying information on a virtual and/or physical display. The term “audibly presenting” may include conveying information by playing an audio signal via a speaker and voice synthesizer. The term “a most likely answer” may refer to a response having a high degree of certainty associated with the response. Thus, for example, when there are multiple possible responses to a query, the responses may be assigned a probability. In one example, the response may be based on an output of a machine learning model trained using training examples to generate answers to queries. An example of such a training example may include a sample query, together with a desired answer to the sample query. In one example, the machine learning model may be a generative model, or more specifically a text generation model. In one example, the machine learning model may provide two or more possible answers, and may provide a confidence level associated with each possible answer. 
Each possible response may be based on a possible answer, and the probability assigned to the possible response may be a mathematical function (such as a linear function, a nonlinear function, a polynomial function, etc.) of the confidence level. A response having a maximum probability may be selected from among the possible responses as the “most likely answer.” The term “alternative answer” may refer to a response to a query different than a probable or expected response to a query, e.g., having a relatively lower probability. Thus, when the wearer is facing the keyboard device (e.g., the first operation mode is applied to display the first response on the virtual screen associated with the keyboard device), the most probable answer to the query may be displayed together with another, e.g., alternative answer. For example, the query may request a driving route to a destination, and in response, the shortest driving route may be visually presented (e.g., on a map) together with an additional (e.g., longer) route. When the wearer is facing away from the keyboard device (e.g., the second response is provided that avoids displaying information on the virtual screen associated with the keyboard device), the most probable response may be vocalized via a speaker, without providing the alternative response. For example, in response to the query requesting the driving route, the shortest driving route may be described audibly via a speaker, and an audible description of the additional route may be omitted. By way of a non-limiting example, turning toFIGS.19and23, wearer1900submits a query to receive a weather forecast by issuing a voice command “Open weather forecast”.FIG.23is substantially similar toFIG.19with the notable difference that wearer1900is facing away from keyboard device1904and receives content audibly via a speaker configured with smart glasses1902(e.g., speaker453ofFIG.4). Referring toFIG.19, processing device460(FIG.4) may determine that smart glasses1902are within the threshold distance and facing keyboard device1904and may present the first response to the query using the first operation mode for smart glasses1902. Accordingly, processing device460(FIG.4) may visually present the weather forecast on virtual display1910, where the forecast may include the most probable forecast (e.g., topmost forecast), in addition to two other alternative forecasts (e.g., bottom two forecasts). Referring toFIG.23, processing device460may determine that smart glasses1902are within the threshold distance but face away from keyboard device1904. Processing device460may present the second response to the query using the second operation mode for smart glasses1902. Accordingly, processing device460may audibly present the most probable forecast (“clouds and rain”) via a speaker configured with smart glasses1902(e.g., speaker453) and may omit presenting the alternative forecasts. In some embodiments, the first response includes visually presenting an answer to the query and additional relevant information, and the second response includes audibly presenting the answer to the query without presenting the additional relevant information. The term “answer” may refer to a response or information provided as feedback to a query. The term “additional relevant information” may refer to supplementary facts, advice, or data pertaining to the query that may not be included in a direct or narrow response to a query. 
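By way of a non-limiting illustration, the sketch below shows one way a most likely answer might be chosen from model confidence levels and then presented per the operation mode, visually together with alternatives in the first mode and audibly without alternatives in the second; the ranking function and the presentation steps are assumptions.

```python
from typing import List, Tuple

def rank_answers(answers_with_confidence: List[Tuple[str, float]]) -> List[str]:
    # Probability is taken here as a simple monotonic function of confidence.
    return [answer for answer, _ in sorted(answers_with_confidence, key=lambda x: -x[1])]

def respond(query_answers: List[Tuple[str, float]], mode: str) -> None:
    ranked = rank_answers(query_answers)
    most_likely, alternatives = ranked[0], ranked[1:]
    if mode == "first_operation_mode":
        # Visually present the most likely answer together with alternatives.
        print("Display on virtual screen:", most_likely, "| alternatives:", alternatives)
    else:
        # Audibly present only the most likely answer; alternatives are omitted.
        print("Speak via synthesizer:", most_likely)

respond([("clouds and rain", 0.8), ("clear skies", 0.15)], "second_operation_mode")
```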
The term “without presenting the additional relevant information” may refer to omitting or withholding presenting the additional information. For example, in response to a query for directions, when the user is facing the keyboard device, a driving route may be displayed together with indications for service stations along the route, and opening hours for the service stations. However, when the user is facing away from the keyboard device, the driving route may be described audibly, and the information relating to the service stations may be omitted. By way of a non-limiting example, turning toFIGS.19and23, wearer1900submits a query to receive a weather forecast by issuing a voice command “Open weather forecast”. Referring toFIG.19, smart glasses1902are within the threshold distance and facing keyboard device1904. Thus, processing device (FIG.4) may apply the first operation mode and display forecast weather app1908predicting rain. In addition, processing device may present wind speeds (e.g., additional relevant information) as text accompanying the rain forecast. Referring toFIG.23, smart glasses1902are within the threshold distance but facing away from keyboard device1904. Thus, processing device may apply the second operation mode and audibly present the weather forecast (e.g., clouds and rain) via speakers configured with smart glasses1902(e.g., speakers453), and may withhold presenting the wind speeds. Some embodiments include determining that the user command is an instruction to present a new virtual object, wherein in the first operation mode, the action responding to the instruction includes presenting the new virtual object in a first location associated with a location of the keyboard device, and in the second operation mode, the action responding to the instruction includes presenting the new virtual object in a second location associated with a location of the wearable extended reality appliance. The term “instruction” may refer to a directive or order, e.g., to perform an action. The term “new virtual object” may refer to a virtual (e.g., computer synthesized) item not presented previously. Examples of a virtual object may include a virtual screen, virtual widget, virtual icon or image, virtual application, or any other virtual item or entity. The term “location” may refer to a position, e.g., relative to the keyboard device, work station, wearable extended reality appliance, or any other object of reference. For example, a user command may be a directive to open a new window or widget for an application. When the user is facing the keyboard device, the new virtual object may be displayed docked, or relative to the keyboard device (e.g., above or to the side of the keyboard device). However, when the user is facing away from the keyboard device, the new virtual object may be displayed docked, or relative to the wearable extendable reality appliance. By way of a non-limiting example, turning toFIGS.19and21, processing device460(FIG.4) may determine that the voice command “Open weather forecast” is an instruction to invoke a weather application, requiring to display a new virtual window to display the weather forecast. Referring toFIG.19, smart glasses1902are within the threshold distance and facing keyboard device1904. Thus, processing device may apply the first operation mode. Accordingly, processing device460may present forecast weather app1908as a new virtual object inside a location of virtual screen1910associated with work station1906and keyboard device1904. 
Referring toFIG.23, smart glasses1902are within the threshold distance but facing away from keyboard device1904. Thus, processing device may apply the second operation mode. Accordingly, processing device460may present forecast weather app1908as a new virtual object in a location associated with (e.g., relative to) smart glasses1902. In some embodiments, the first location is docked relative to the location of the keyboard device, and the second location changes with head movements of a user of the wearable extended reality appliance. The term “docked” may refer to locked, anchored or tethered. For example, a virtual widget docked to a work station may be anchored to the work station such that when a user leaves the work station, the virtual widget is no longer visible. As another example, a virtual widget docked to a wearable extended reality appliance may follow the gaze of the wearer. The term “relative to the location” may refer to with respect to, or as compared to the location. e.g., using the location as a reference. The term “changes with head movements of a user of the wearable extended reality appliance” may refer to following or tracking head motions (e.g., up/down, left/right, sideways) performed by the wearer of the wearable extended reality appliance. For example, a virtual widget displayed in a location that changes with head movements of a user of the wearable extended reality appliance may be anchored to the wearable extended reality appliance such that when the user turns his head, the virtual widget remains within the field of view of the user. Thus, when the wearer is facing the keyboard device, the virtual object may be displayed in a manner that is anchored or fixed (e.g., docked) relative to the keyboard device. When the wearer is facing away from the keyboard device, the virtual object may be displayed in a manner that is anchored to the wearable extended reality appliance and follows the gaze of the user as the user turns his head. By way of a non-limiting example, referring toFIG.19, smart glasses1902are within the threshold distance and facing keyboard device1904. In response to the voice command “Open weather forecast”, processing device may apply the first operation mode and present forecast weather app1908as a new virtual object inside a location of virtual screen1910, that is tethered (e.g., fixed relative to) work station1906and keyboard device1904. Referring toFIG.23, while wearer1900is within the threshold distance of keyboard device1904, wearer1900moves the head to the left until smart glasses1902are facing away from keyboard device1904. In response to the voice command “Open weather forecast”, processing device460may apply the second operation mode and present forecast weather app1908as a new virtual object tethered to smart glasses1902, e.g., moving leftwards following the directional gaze of wearer1900as wearer1900moves the head leftwards. In a similar manner, when wearer1900moves the head rightwards, smart glasses1902may present forecast weather app1908in a manner that tracks the head motion of wearer1900, e.g., moving rightwards. In the second operation mode, some embodiments further include: receiving image data captured using an image sensor included in the wearable extended reality appliance; analyzing the image data to detect a person approaching the user; and causing a modification to the second location based on the detection of the person approaching the user. 
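By way of a non-limiting illustration, the following sketch shows one way the second location might be modified when a person is detected approaching the user; the detection fields, the distance cutoff, and the shift amounts are assumptions and are elaborated by the detection examples that follow.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ApproachDetection:
    detected: bool
    direction: str      # "left" or "right", relative to the wearer (assumed)
    distance_m: float

def adjust_location(location_xy: Tuple[float, float],
                    detection: ApproachDetection,
                    min_distance_m: float = 3.0) -> Tuple[float, float]:
    x, y = location_xy
    if not detection.detected or detection.distance_m > min_distance_m:
        return location_xy  # far-away persons do not trigger a modification
    # Shift content away from the approaching person so neither is obstructed.
    shift = 0.3 if detection.direction == "left" else -0.3
    return (x + shift, y + 0.2)

print(adjust_location((0.0, 0.0), ApproachDetection(True, "left", 1.5)))
```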
In some examples, the image data may be analyzed using an object detection algorithm to detect the person. Further, the image data may be analyzed using a motion tracking algorithm to determine that the detected person is approaching the user. In some examples, the image data may be analyzed using a visual classification algorithm to determine that a person is approaching the user. The term “image data captured using an image sensor” may refer to the detection of light signals by an image sensor and conversion of the light signal to image pixels. The term “person approaching the user” may refer to another individual (e.g., other than the user) moving towards the user who may be wearing the wearable extended reality appliance. The term “causing a modification to the second location based on the detection of the person approaching the user” may include changing or adjusting the position where content is presented by accounting for the person approaching the user, e.g., so that the content does not obstruct the person, or vice-versa. For example, the modification may be based on a direction from which the person is approaching the user (for example, moving the second location to another direction). In one example, the modification may be based on a distance of the approaching person (for example, avoiding the modification for far away persons). In one example, the modification may be based on a direction of movement of the person (for example, estimating whether the person is indeed approaching the user based on the direction of motion), e.g., when the person approaches from the left, the location for presenting content may be shifted rightwards and the reverse. In one example, the modification may be based on a speed of the person (for example, applying the modification when the person stops moving or moves very slowly). In one example, the modification may be based on a gesture or a facial expression of the person (for example, when the person is gesturing to the user, the modification may be applied), e.g., when the person is seeking the attention of the wearer, the location for presenting content may be shifted to the side. By way of a non-limiting example, reference is now made toFIG.24which is substantially similar toFIG.22with the notable difference of a person2400approaching wearer1900. A camera configured with smart glasses1902(e.g., image sensor472ofFIG.4) may capture an image of person2400approaching wearer1900. Processing device460may analyze the image to detect person2400approaching the wearer1900from the left. For example, the position of person2400may overlap with and thus obstruct the display of forecast weather app1908. Processing device460may cause the location of forecast weather app1908to move (e.g., shift) upwards and rightwards, based on the detected position of person2400, e.g., to prevent forecast weather app1908from obstructing person2400. In the first operation mode, some embodiments further include: receiving image data captured using an image sensor included in the wearable extended reality appliance; analyzing the image data to detect a surface that the keyboard device is placed on; and selecting the first location based on the detected surface. A surface may include the top layer of a desk, table, stool, a sliding tray (e.g., for a keyboard), or any other flat, level area positioned as the top layer of an object. A surface may support another object resting on the surface such that the object is immobile. 
For example, when the wearer is facing the keyboard device, an optical sensor configured with the wearable extended reality appliance may capture an image of the work station including a table supporting the keyboard device. The image may be analyzed (e.g., by one or more of processing devices360ofFIG.3,460ofFIG.4,560ofFIG.5) to detect the keyboard device resting on the surface of a desk. The new virtual object instructed by the user command may be displayed based on the surface of the desk. For example, the new virtual object may be displayed as a virtual widget resting on the desk, or inside a virtual screen docked to the desk surface. In one example, the first location may be selected based on a position of an edge of the surface that the keyboard device is placed on. By way of a non-limiting example, turning toFIG.19, wearer1900is seated at work station1906adjacent to table surface1912. A camera configured with smart glasses1902(e.g., image sensor472ofFIG.4) may capture an image of work station1906with keyboard device1904resting thereon. Processing device460may analyze the image and detect keyboard device1904resting on table surface1912. Processing device460may determine to apply the first operation mode for smart glasses1902based on the distance and orientation of smart glasses1902to keyboard device1904. Processing device460may select the location to display forecast weather app1908based on table surface1912, e.g., by displaying forecast1908inside virtual screen1910located just above table surface1912. As another example, processing device460may select a location on table surface1912to display a weather widget1916. Some embodiments further include determining the action for responding to the identified user command based on the relative orientation of the wearable extended reality appliance with respect to the keyboard device and a posture associated with a user of the wearable extended reality appliance. The term “posture” may refer to a position for holding the body. In some embodiments, the posture is selected from a group including: lying, sitting, standing, and walking. The term “lying” may refer to reclining, e.g., horizontally. The term “sitting” may refer to a semi-upright position, resting the weight of the upper body on a horizontal surface. The term “standing” may refer to a stationary upright pose whereby the weight of the body is supported by the legs. The term “walking” may refer to an upright pose whereby the weight of the body is supported by the legs while in motion. For example, if the wearer is in an upright seated position such that the wearable extended reality appliance is substantially above the keyboard device, content may be displayed in a virtual screen located above the keyboard device using a high resolution. However, if the wearer is reclining while sitting next to the keyboard device (e.g., the wearer is facing the ceiling), content may be displayed in a virtual screen against the ceiling using a lower resolution. By way of a non-limiting example, turning toFIG.19, based on the sitting posture and front-facing orientation of wearer1900, processing device460(FIG.4) may display forecast weather app1908in virtual screen1910above keyboard device1904and directly in front of wearer1900. Turning toFIG.20, based on the walking posture of wearer1900and relatively unstable (e.g., dynamic) orientation of smart glasses1902, processing device460may display forecast weather app1908tethered to smart glasses1902in a manner that tracks the directional gaze of wearer1900. 
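By way of a non-limiting illustration, the sketch below combines the relative orientation with the wearer's posture (lying, sitting, standing, walking) when determining how to respond to a command; the branch conditions and display parameters are assumptions rather than the disclosed behavior.

```python
def determine_action(facing_keyboard: bool, posture: str) -> dict:
    # Choose display parameters from orientation plus posture (illustrative values).
    if posture == "sitting" and facing_keyboard:
        return {"surface": "virtual_screen_above_keyboard", "resolution": "high"}
    if posture == "lying":
        return {"surface": "ceiling_virtual_screen", "resolution": "low"}
    if posture == "walking":
        return {"surface": "tethered_to_appliance", "resolution": "low"}
    return {"surface": "tethered_to_appliance", "resolution": "medium"}

print(determine_action(True, "sitting"))
```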
Some embodiments further include determining the action for responding to the identified user command based on the relative orientation of the wearable extended reality appliance with respect to the keyboard device and types of virtual objects displayed by the wearable extended reality appliance. The term “types of virtual objects” may refer to a category or characterization of a virtual item. For example, a virtual object may be categorized according to context, time of presentation, duration of presentation, size, color, resolution, content type (e.g., text, image, video, or combinations thereof), resource demands (e.g., processing, memory, and/or communications bandwidth), and any other characterization of virtual objects. For example, the response to a user command may depend on what additional virtual objects are currently displayed by the wearable extended reality appliance. When the wearer is seated at a desk adjacent to the keyboard device and is facing forwards, and the wearable extended reality appliance displays a calendar widget resting on the desk top, in response to a request to view a meeting schedule, an upcoming meeting may be displayed via the calendar widget (e.g., by one or more of processing devices360ofFIG.3,460ofFIG.4,560ofFIG.5). When the wearer is seated at the desk but facing away from the keyboard device, in response to a request to view the meeting schedule, the upcoming meeting may be displayed in a virtual window tethered to the directional gaze of the wearer. By way of a non-limiting example, turning toFIG.19, wearer1900is adjacent to and facing keyboard device1904while viewing content1924via smart glasses1902. In response to a request by wearer1900to open a weather forecast, processing device460(FIG.4) may display forecast weather app1908to the side of content1924, so as not to obstruct content1924. Some embodiments provide a system for selectively operating a wearable extended reality appliance, the system including at least one processor programmed to: establish a link between a wearable extended reality appliance and a keyboard device; receive sensor data from at least one sensor associated with the wearable extended reality appliance, the sensor data being reflective of a relative orientation of the wearable extended reality appliance with respect to the keyboard device; based on the relative orientation, select from a plurality of operation modes a specific operation mode for the wearable extended reality appliance; identify a user command based on at least one signal detected by the wearable extended reality appliance; and execute an action responding to the identified user command in a manner consistent with the selected operation mode. For example, turning toFIG.19, processing device460(FIG.4) may be programmed to establish a link between smart glasses1902and keyboard device1904. Processing device460may receive position data (e.g., sensor data) from at least one sensor (e.g., GPS of motion sensor473) associated with smart glasses1902. The sensor data may be reflective of a relative orientation of smart glasses1902with respect to keyboard device1904. Based on the relative orientation, processing device460may select from a plurality of operation modes (e.g., stored in memory device411) a specific operation mode for smart glasses1902(e.g., to display content in virtual screen1910). Processing device460may identify a user command based on at least one signal (e.g., an audio signal) detected by audio sensor471of smart glasses1902. 
Processing device460may execute an action responding to the identified user command in a manner consistent with the selected operation mode (e.g., by displaying forecast weather app1908in virtual screen1910). FIG.25illustrates a block diagram of example process2500for interpreting commands in extended reality environments based on distances from physical input devices, consistent with embodiments of the present disclosure. In some embodiments, process2500may be performed by at least one processor (e.g., processing device460of extended reality unit204, shown inFIG.4) to perform operations or functions described herein. In some embodiments, some aspects of process2500may be implemented as software (e.g., program codes or instructions) that are stored in a memory (e.g., memory device411of extended reality unit204, shown inFIG.4) or a non-transitory computer readable medium. In some embodiments, some aspects of process2500may be implemented as hardware (e.g., a specific-purpose circuit). In some embodiments, process2500may be implemented as a combination of software and hardware. Referring toFIG.25, process2500may include a step2502of establishing a link between a wearable extended reality appliance and a keyboard device. As described earlier, a communications channel may be created between a wearable extended reality appliance and a keyboard device. For example, a Bluetooth channel may communicatively couple the wearable extended reality appliance with the keyboard device. Process2500may include a step2504of receiving sensor data from at least one sensor associated with the wearable extended reality appliance, the sensor data being reflective of a relative orientation of the wearable extended reality appliance with respect to the keyboard device. As described earlier, a sensor, such as an image sensor, a motion sensor, an IR sensor, and/or a radio sensor configured with the wearable extended reality appliance may sense data expressing the relative orientation of the wearable extended reality appliance and the keyboard device. Process2500may include a step2506of, based on the relative orientation, selecting from a plurality of operation modes a specific operation mode for the wearable extended reality appliance. As described earlier, the relative orientation of the wearable extended reality appliance to the keyboard device may be used to choose a specific operation mode from multiple candidate operation modes for wearable extended reality appliance. Process2500may include a step2508of identifying a user command based on at least one signal detected by the wearable extended reality appliance. As described earlier, the wearable extended reality appliance may include a detector, such as a microphone (e.g., audio sensor472ofFIG.4), an image sensor (e.g., image sensor472), a motion sensor (e.g., motion sensor472), an environmental sensor (e.g., environmental sensor474), and additional sensors (e.g., sensors472). A processing device (e.g., one or more of processing devices360ofFIG.3,460ofFIG.4,560ofFIG.5) may analyze the detected signal to identify a user command. Process2500may include a step2510of executing an action responding to the identified user command in a manner consistent with the selected operation mode. As described earlier, the action performed in response to a user command may be performed in compliance with the operation mode of the wearable extended reality appliance (e.g., based on the relative orientation to the keyboard device). 
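By way of a non-limiting illustration, the following sketch reduces the flow of process2500(steps2502through2510) to placeholder functions; the placeholder bodies are assumptions standing in for the sensing, recognition, and display logic described above.

```python
def establish_link() -> bool:                      # step 2502
    return True                                    # e.g., pair over Bluetooth

def receive_sensor_data() -> dict:                 # step 2504
    return {"distance_m": 0.5, "facing_keyboard": True}

def select_operation_mode(sensor_data: dict) -> str:   # step 2506
    near = sensor_data["distance_m"] <= 1.0
    return "close_upright_mode" if near and sensor_data["facing_keyboard"] else "remote_mode"

def identify_user_command() -> str:                # step 2508
    return "open weather forecast"                 # e.g., recognized from a voice signal

def execute_action(command: str, mode: str) -> None:   # step 2510
    print(f"Executing '{command}' in a manner consistent with {mode}")

if establish_link():
    mode = select_operation_mode(receive_sensor_data())
    execute_action(identify_user_command(), mode)
```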
Videos of users wearing an extended reality appliance interacting with virtual objects tend to depict the interaction from the perspective of the user. For example, a video may be from the perspective of the user wearing the extended reality appliance, such as a virtual depiction of the user's hands interacting with a virtual object. An outside observer would only see the user's hands moving in the physical environment while the user interacts with the virtual object (e.g., the observer would not be able to see the virtual object as the user interacts with it). There is a desire to be able to generate a video of the user interacting with the virtual object from the perspective of the outside observer, such that the outside observer may see the virtual object as the user interacts with the virtual object. Disclosed embodiments may include methods, systems, and non-transitory computer readable media for facilitating generating videos of individuals interacting with virtual objects. It is to be understood that this disclosure is intended to cover methods, systems, and non-transitory computer readable media, and any detail described, even if described in connection with only one of them, is intended as a disclosure of the methods, systems, and non-transitory computer readable media. Some disclosed embodiments may be implemented via a non-transitory computer readable medium containing instructions for performing the operations of a method. In some embodiments, the method may be implemented on a system that includes at least one processor configured to perform the operations of the method. In some embodiments, the method may be implemented by one or more processors associated with the wearable extended reality appliance. For example, a first processor may be located in the wearable extended reality appliance and may perform one or more operations of the method. As another example, a second processor may be located in a computing device (e.g., an integrated computational interface device) selectively connected to the wearable extended reality appliance, and the second processor may perform one or more operations of the method. As another example, the first processor and the second processor may cooperate to perform one or more operations of the method. The cooperation between the first processor and the second processor may include load balancing, work sharing, or other known mechanisms for dividing a workload between multiple processors. Some embodiments include a non-transitory computer readable medium containing instructions for causing at least one processor to perform operations for generating videos of individuals interacting with virtual objects. The terms “non-transitory computer readable medium,” “processor.” “instructions,” and “virtual objects” may be understood as described elsewhere in this disclosure. As described below, one or more processors may execute the one or more instructions for generating one or more videos. As used herein, the term “video” may include a single still image, a series of one or more still images (e.g., a time lapsed sequence), or a continuous series of images (e.g., a video). The one or more videos may illustrate one or more interactions of an individual with one or more virtual objects. An individual may interact with a virtual object in the extended reality environment in a similar manner as the individual may interact with an object in the physical environment. 
In some embodiments, any interaction that an individual could have with an object in the physical environment may be replicated in the extended reality environment. For example, the individual may interact with the virtual object by holding it in one or both of their hands and rotating the virtual object, by squeezing the virtual object, by manipulating the virtual object, by looking at the virtual object, by bringing the virtual object closer to themselves, or by moving the virtual object farther away from themselves. In some embodiments, the individual may select one of several virtual objects by performing a predefined hand gesture, for example “picking up” the virtual object in the extended reality environment. In some embodiments, the individual may release a virtual object they are holding by performing a predefined hand gesture, for example “dropping” or “tossing” the virtual object in the extended reality environment. In some embodiments, the individual may interact with the virtual object through voice commands or gesture commands. In one example, a gesture command for interacting with the virtual object may include virtually touching the virtual object. In another example, a gesture command for interacting with the virtual object may be remote from the virtual object and include no virtual touch. Some embodiments include causing a wearable extended reality appliance to generate a presentation of an extended reality environment including at least one virtual object. The terms “wearable extended reality appliance,” “extended reality environment,” and “virtual object” may be understood as described elsewhere in this disclosure. The presentation of the extended reality environment may be what the user sees while wearing the extended reality appliance and may permit the user to perceive and/or interact with the extended reality environment. The extended reality environment may be presented to the user of the wearable extended reality appliance by any of the mechanisms described earlier. Causing the wearable extended reality appliance to generate the presentation of the extended reality environment may be performed by a processor associated with the wearable extended reality appliance. In some embodiments, the processor may be a part of the wearable extended reality appliance, such as the processing device460shown inFIG.4. In some embodiments, the wearable extended reality appliance may receive data for display from a processor remote from and in communication with the wearable extended reality appliance, such as from a computing device associated with the wearable extended reality appliance. For example, the remote processor may include processing device360in input unit202as shown inFIG.3. As another example, the remote processor may include processing device560in remote processing unit208as shown inFIG.5. The at least one virtual object may be, for example, at least one virtual two-dimensional (2D) object (such as a virtual display screen, a virtual glass or other transparent surface, a virtual 2D graph, a virtual 2D presentation/slides, a 2D virtual user interface, etc.). In another example, the at least one virtual object may be at least one virtual three-dimensional (3D) object (i.e., have volume), such as a puzzle cube, a ball, etc. 
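By way of a non-limiting illustration, the following sketch shows one possible, simplified representation of virtual objects spanning both 2D and 3D cases; the dataclass fields and the rotate interaction are assumptions and not the disclosed design.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class VirtualObject:
    name: str
    dimensions: int                      # 2 for planar objects, 3 for volumetric ones
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    held_by_user: bool = False           # set when the user "picks up" the object

    def rotate(self, degrees: float) -> None:
        # Placeholder for an interaction such as rotating a held 3D object.
        print(f"Rotating {self.name} by {degrees} degrees")

environment = [VirtualObject("virtual screen", 2), VirtualObject("puzzle cube", 3)]
environment[1].held_by_user = True
environment[1].rotate(90)
```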
Some embodiments include receiving first image data from at least a first image sensor, the first image data reflecting a first perspective of an individual wearing the wearable extended reality appliance. It is noted that the phrase “user of the wearable extended reality appliance” and “individual wearing the wearable extended reality appliance” may be interchangeable and for purposes of description herein may have a similar meaning. Similarly, the short forms of these terms (e.g., “user” and “individual”) may also be interchangeable and have a similar meaning. Some embodiments include receiving image data (such as the first image data) from the wearable extended reality appliance. An “image sensor” as used herein may include a CCD sensor, a CMOS sensor, or any other detector capable of detecting images. Image data includes the output of the image sensor, or data derived or developed from the output of the image sensor. In some embodiments, the image data may be received from an image sensor (e.g., the at least a first image sensor), such as a CCD sensor or a CMOS sensor located on or otherwise associated with the wearable extended reality appliance. Image data may be received via either a wired or wireless transmission, which transmission may be in the form of digital signals. For example, an image sensor472as shown inFIG.4may be employed in the wearable extended reality appliance. The image data received from or captured by the image sensor (such as the first image data) may be associated with the physical environment of the user and may include one or more still images, a series of still images, or video. In some embodiments, the first image data reflecting the first perspective of the individual wearing the wearable extended reality appliance may represent what the individual would see (i.e., the “first perspective”) when the extended reality environment is not being displayed. For example, the first image data may capture an image of the physical environment where the user is located while wearing the extended reality appliance, such as a room. In some embodiments, the first perspective may include items within the field of view of the user when the extended reality environment is not being displayed. In some embodiments, the first image data may be stored in the wearable extended reality appliance. In some embodiments, the first image data may be stored in a device separate from the wearable extended reality appliance and in wired or wireless communication with the first image sensor such that the first data may be transmitted to the device, either upon initiation by the wearable extended reality appliance or by the device. In some embodiments, the first image sensor is part of the wearable extended reality appliance. In some embodiments, the first image sensor may be considered to be “part” of the wearable extended reality appliance when it is physically attached to, physically embedded in, or otherwise associated with the wearable extended reality appliance. For example, the first image sensor may be physically attached to the wearable extended reality appliance via a physical connection mechanism such as a clip, a bracket, or a snap-fit arrangement. As another example, the first image sensor may be physically attached to the wearable extended reality appliance via an adhesive. In some embodiments, the first image sensor may be located elsewhere on the user and may be in communication with the wearable extended reality appliance. 
For example, the first image sensor may be clipped or attached to the user's shirt or clothing. In some embodiments, the first image sensor may be located on an exterior portion of the wearable extended reality appliance, such that the image sensor may be positioned to capture first image data corresponding to the individual's head position.FIG.26illustrates an exemplary wearable extended reality appliance2610including a first image sensor2612. WhileFIG.26illustrates wearable extended reality appliance2610as a pair of glasses, wearable extended reality appliance2610may take on other forms (e.g., goggles) as described herein. First image sensor2612is shown inFIG.26as being located to one side of a frame portion of wearable extended reality appliance2610(e.g., the left side of the individual's head). In some embodiments, first image sensor2612may be located on other portions of wearable extended reality appliance2610without affecting the operation of first image sensor2612. For example, first image sensor2612may be located on the right side of wearable extended reality appliance2610relative to the individual's head. As another example, first image sensor2612may be located in the middle of wearable extended reality appliance2610such that first image sensor2612does not block the individual's vision through the lenses of wearable extended reality appliance2610. In some embodiments, more than one image sensor2612may be located on the exterior portion of wearable extended reality appliance2610. For example, a first image sensor may be located on the left side of the frame and a second image sensor may be located on the right side of the frame. As another example, a first image sensor may be located on the left side of the frame, a second image sensor may be located on the right side of the frame, and a third image sensor may be located in the middle of the frame. FIG.27illustrates an exemplary view from the perspective of the individual wearing the extended reality appliance (for example, as captured by first image sensor2612). Image2710is an image of the physical environment around the user and is captured from the perspective of the first image sensor (i.e., the perspective of the individual wearing the extended reality appliance). Image2710includes a depiction2712of the user's hands (and may include a portion of the user's arms) in the physical environment and a depiction of a computing device2714including an image sensor2716. It is understood that computing device2714and image sensor2716are provided for purposes of illustration, and that other devices including one or more image sensors may also be used, or the device including the second image sensor may not be depicted or may be only partly depicted in image2710. Some embodiments include receiving second image data from at least a second image sensor, the second image data reflecting a second perspective facing the individual. In some embodiments, the second image sensor may have similar structural and/or functional characteristics as the first image sensor described herein. The second image data reflecting the second perspective facing the individual wearing the extended reality appliance may be from a position in the physical environment such that the second image sensor faces the individual. 
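Purely as an organizational sketch (the field names and the pairing rule below are assumptions, not part of this disclosure), the incoming first and second image data might be tagged with the sensor that produced each frame and a capture timestamp so that later processing can pair frames captured from the two perspectives at roughly the same moment.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Frame:
    """One captured frame of image data, tagged with its source and capture time."""
    pixels: np.ndarray      # HxWx3 array from the image sensor
    source: str             # "first_sensor" (on the appliance) or "second_sensor" (facing the user)
    timestamp_s: float      # capture time in seconds

def pair_frames(first: List[Frame], second: List[Frame], tolerance_s: float = 0.02):
    """Pair first- and second-perspective frames whose timestamps are close enough."""
    pairs = []
    for f in first:
        closest = min(second, key=lambda s: abs(s.timestamp_s - f.timestamp_s), default=None)
        if closest is not None and abs(closest.timestamp_s - f.timestamp_s) <= tolerance_s:
            pairs.append((f, closest))
    return pairs

if __name__ == "__main__":
    blank = np.zeros((2, 2, 3), dtype=np.uint8)
    first = [Frame(blank, "first_sensor", 0.00), Frame(blank, "first_sensor", 0.04)]
    second = [Frame(blank, "second_sensor", 0.01), Frame(blank, "second_sensor", 0.05)]
    print(len(pair_frames(first, second)))  # -> 2 matched frame pairs
```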
In some embodiments, the second image sensor may be placed in any location in the physical environment of the individual (and within an imaging range of the second image sensor) such that the second image sensor may capture one or more images of the individual. In some embodiments, the second image sensor is a part of a computing device selectively connected to the wearable extended reality appliance. In some embodiments, the computing device may include an input device or an integrated computational interface device as described herein. The computing device may be selectively connected to the wearable extended reality appliance via a wired connection or a wireless connection as described herein. In some embodiments, the second image sensor may be considered to be “part” of the computing device when it is physically attached to, physically embedded in, or otherwise associated with the computing device. For example, the second image sensor may be physically attached to the computing device via a physical connection mechanism such as a clip, a bracket, or a snap-fit arrangement. As another example, the second image sensor may be physically attached to the computing device via an adhesive. As another example, the second image sensor may be embedded in a portion of a housing of the computing device. As another example, the second image sensor may be associated with the computing device by being located separate from the computing device (such as in a standalone device) and in wired or wireless communication with the computing device. FIG.26illustrates an exemplary computing device2620including a second image sensor2622. In some embodiments, computing device2620may include more than one second image sensor. In some embodiments, second image sensor2622may be included in a standalone device (i.e., not selectively connected to computing device2620or to the wearable extended reality appliance), but may be configured to communicate with computing device2620or the wearable extended reality appliance via either wired or wireless communication. For example, second image sensor2622may be included in a security camera in wireless communication with computing device2620. In some embodiments, if a plurality of second image sensors are used, the second image sensors may each be located in a separate device, and each separate device may be in wired or wireless communication with each other and/or with computing device2620. For example, one second image sensor may be located in computing device2620and another second image sensor may be located in a security camera separate from computing device2620. As another example, a plurality of security cameras may be used, each security camera including a separate second image sensor. In embodiments where the method is performed by a processor, the processor may be configured to receive the second image data from the second image sensor. In some embodiments, the second image data may be stored in a device that includes the second image sensor, such as the computing device or the separate device. In such embodiments, the second image data may be transmitted to the processor, for example, upon receiving a command from the processor or on a periodic basis. Some disclosed embodiments include obtaining image data from the computing device. In some embodiments, the image data may be obtained from an image sensor, such as a CCD or CMOS sensor located on or otherwise associated with the computing device.
For example, the image sensor372as shown inFIG.3may be employed in the computing device. The image data received or generated by the image sensor may be associated with the physical environment of the user and may include one or more still images, a series of still images, or video. FIG.28illustrates an exemplary view from the perspective of the second image sensor in the computing device, facing the individual wearing the extended reality appliance (i.e., from the second perspective). Image2810includes a depiction2812of the user in the physical environment and a depiction2814of the user's hands in the physical environment. As shown in image2810, while the user's hands may interact with a virtual object, the virtual object cannot be seen from the second perspective. FIG.29illustrates exemplary virtual objects, as seen from the perspective of the user of the wearable extended reality appliance. Image2910shows the virtual objects as seen in the extended reality environment. Image2910includes a virtual depiction2912of a puzzle cube, a virtual depiction2914of a volleyball, and a virtual depiction2916of a vase of flowers. As shown in image2910, the relative sizes of virtual depictions2912,2914, and2916may vary depending on the virtual distance the virtual object is from the user. The term “virtual distance,” as used herein, represents a distance between virtual objects displayed in the extended reality environment, or a distance between the user or the wearable extended reality appliance and a virtual object displayed in the extended reality environment. Similar to the relative sizes of objects in the physical environment, using perspective geometry, a virtual object near the user may appear larger than a virtual object in the background. For example, if the user is holding the puzzle cube, virtual depiction2912may appear larger than virtual depictions2914or2916. In some embodiments, the virtual objects may have “fixed” locations when the user is not interacting with the virtual object. As shown in image2910, the user is interacting with the puzzle cube and not the volleyball or the vase of flowers. If the user were to change the virtual object the user is interacting with, for example by virtually releasing the puzzle cube and virtually picking up the vase of flowers, image2910may be updated to reflect that the user is now interacting with the vase of flowers and not the puzzle cube. For example, the puzzle cube may be placed in the same location that the vase of flowers was in before the user virtually picked up the vase of flowers. In some embodiments, the virtual objects may change locations and/or appearance, even when the user is not interacting with the virtual object. Some embodiments include identifying in the first image data first physical hand movements interacting with the at least one virtual object from the first perspective. Referring toFIG.27, image2710(including the first image data) may be analyzed to identify user hand movements (i.e., the first physical hand movements) while interacting with the virtual object. For example, the first physical hand movements may be identified by performing an image recognition algorithm, a visual detection algorithm, or a visual recognition algorithm, such as a visual activity recognition algorithm or a visual gesture recognition algorithm. In some embodiments, the algorithm may include any machine learning algorithm described earlier. 
For example, a machine learning model may be trained using training examples to identify hand movements interacting with virtual objects in images and/or videos. An example of such a training example may include a sample image and/or a sample video of sample hands, together with a label indicating movements of the sample hands interacting with a sample virtual object. The trained machine learning model may be used to analyze the first image data and identify the first physical hand movements interacting with the at least one virtual object. In another example, the first image data may be analyzed to calculate a convolution of at least part of the first image data and thereby obtain a result value of the calculated convolution. Further, the identification of the first physical hand movements interacting with the at least one virtual object may be based on the result value of the calculated convolution. For example, in response to the result value of the calculated convolution being a first value, the first physical hand movements may be identified as interacting with the at least one virtual object, and in response to the result value of the calculated convolution being a second value, the first physical hand movements may be identified as not interacting with any virtual object. In embodiments where the method is performed by a processor, the processor may be configured to perform one or more of these algorithms to identify the first physical hand movements. In some embodiments, the algorithms to identify the first physical hand movements may be performed by a specialized processor in communication with the processor performing the method, for example, by a graphics processing unit (GPU), an application specific integrated circuit (ASIC), a digital signal processor (DSP), or a field programmable gate array (FPGA). Some embodiments include identifying in the second image data second physical hand movements interacting with the at least one virtual object from the second perspective. Referring toFIG.28, image2810may be analyzed to identify user hand movements (i.e., the second physical hand movements) while interacting with the virtual object. In some embodiments, the second physical hand movements may be identified in a similar manner as the first physical hand movements. In some embodiments, the first physical hand movements and the second physical hand movements may be the same physical hand movements, but from different perspectives (i.e., from the first perspective and the second perspective). In some embodiments, the first physical hand movements and the second physical hand movements may be different physical hand movements. For example, the first physical hand movements and the second physical hand movements may be captured at different moments in time. Some embodiments include analyzing at least one of the first image data or the second image data to determine an interaction with the at least one virtual object. Analyzing the first image data and/or the second image data to determine an interaction with the virtual object may include performing an image recognition algorithm, a visual detection algorithm, or a visual recognition algorithm, such as a visual activity recognition algorithm or a visual gesture recognition algorithm. In some embodiments, the algorithm may include any machine learning algorithm described earlier. For example, a machine learning model may be trained using training examples to determine interaction with virtual objects.
An example of such a training example may include a sample image and/or a sample video, together with a label indicating interaction with a sample virtual object. The trained machine learning model may be used to analyze the first image data and/or the second image data and identify the interaction with the at least one virtual object. In another example, the first image data may be analyzed to calculate a convolution of at least part of the first image data and thereby obtain a first result value of the calculated convolution. Further, the second image data may be analyzed to calculate a convolution of at least part of the second image data and thereby obtain a second result value of the calculated convolution. Further, the identification of the first physical hand movements interacting with the at least one virtual object may be based on the first result value of the calculated convolution and the second result value of the calculated convolution. For example, in response to a first combination of the first and second result values, an interaction with the at least one virtual object may be determined, and in response to a second combination of the first and second result values, no interaction with the at least one virtual object may be determined. In embodiments where the method is performed by a processor, the processor may be configured to perform the one or more algorithms to determine the interaction. In some embodiments, the algorithms to determine the interaction may be performed by a specialized processor in communication with the processor performing the method, for example, by a graphics processing unit (GPU), an application specific integrated circuit (ASIC), a digital signal processor (DSP), or a field programmable gate array (FPGA). In some embodiments, the analyzing may include analyzing both the first image data and the second image data. In some embodiments, the interaction may correspond only to the first physical hand movements. In some embodiments, the interaction may correspond only to the second physical hand movements. In some embodiments, the interaction may correspond to both the first physical hand movements and the second physical hand movements. The interaction with the virtual object may include any interaction the user may have with a virtual object in the extended reality environment. The scope of the possible interactions with the virtual object may mirror the scope of possible interactions an individual may have with a corresponding physical object in the physical environment. For example, if the user is holding the virtual puzzle cube, the possible interactions include any interaction the user may have with a physical puzzle cube in the physical environment, such as turning the entire puzzle cube in their hands, turning one portion of the puzzle cube with one hand while holding the rest of the puzzle cube with the other hand, picking up the puzzle cube, putting the puzzle cube on a surface, throwing the puzzle cube, or handing the puzzle cube to another person. As another example, if the user is holding a virtual volleyball, the possible interactions include any interaction the user may have with a physical volleyball in the physical environment, such as turning the volleyball in their hands, hitting the volleyball from a variety of different hand positions (e.g., serving, bumping, setting, or spiking), throwing the volleyball, or handing the volleyball to another person.
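As a hedged illustration of the convolution-based determination described above, the sketch below (using NumPy and SciPy; the kernel, threshold, and combination rule are assumptions rather than anything specified in this disclosure) reduces each of the first and second image data frames to a single convolution result value and combines the two values to decide whether an interaction with the virtual object is present.

```python
import numpy as np
from scipy.signal import convolve2d

# Assumed 3x3 kernel emphasizing local structure; a learned kernel could equally be used.
KERNEL = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)

def result_value(frame: np.ndarray) -> float:
    """Convolve a grayscale frame and reduce the output to a single result value."""
    response = convolve2d(frame, KERNEL, mode="same", boundary="symm")
    return float(np.abs(response).mean())

def interaction_detected(first_frame: np.ndarray, second_frame: np.ndarray,
                         threshold: float = 40.0) -> bool:
    """Combine the result values from both perspectives (here, a simple average compared
    against an assumed threshold) to determine whether the hand movements interact with
    the virtual object."""
    combined = 0.5 * (result_value(first_frame) + result_value(second_frame))
    return combined >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    first = rng.random((240, 320)) * 255.0    # stand-in for first image data (grayscale)
    second = rng.random((240, 320)) * 255.0   # stand-in for second image data (grayscale)
    print("interaction detected:", interaction_detected(first, second))
```

Any other reduction of the convolution output (a maximum, a histogram, a learned classifier over the response map) could replace the mean used here.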
As the user interacts with the virtual object in the extended reality environment, the appearance of the virtual object may change. For example, if the user is holding the virtual volleyball and rotates the volleyball away from themselves with both hands, it may appear to the user in the extended reality environment that both hands are rotating away from themselves while the volleyball is also rotating. In some embodiments, the changes in the extended reality environment may be detected by an input device as described herein, such as a pair of haptic gloves. To properly display these changes to the user, the presentation of the extended reality environment may be updated to reflect these changes. While such changes are occurring in the presentation of the extended reality environment, the first image sensor and the second image sensor may capture the first image data and the second image data of the user's hands rotating away from the user's body (i.e., the first physical hand movements and the second physical hand movements). In some embodiments, determining the interaction with the virtual object may include determining how the appearance of the virtual object is changed based on the user's interaction. For example, if the user is holding the virtual puzzle cube and the interaction is that the user turns the top portion of the puzzle cube in a counterclockwise direction, in the extended reality environment, the user would see the top portion of the puzzle cube turning in the counterclockwise direction. Some embodiments include rendering for display a representation of the at least one virtual object from the second perspective. So that the virtual object may be properly seen from the second perspective (i.e., as the virtual object may appear to an outside observer as if the outside observer is “looking into” the extended reality environment), it needs to be rendered (e.g., generated, drawn, illustrated, pictured, shown, represented, or presented) from the second perspective. In the extended reality environment, the virtual object may be rendered (e.g., generated as part of the presentation of the extended reality environment) from the user's perspective. The virtual object may therefore be shown (or generated for presentation to the user) from the second perspective (e.g., that of an external observer “looking into” the extended reality environment). For example, if the user is holding the virtual puzzle cube, the user may see a certain color combination facing in the user's direction in the extended reality environment. To render the puzzle cube from the second perspective (i.e., to render the face of the puzzle cube that the outside observer would see if the user were holding the puzzle cube in the physical environment), the rendering may include using ray casting algorithms, artificial intelligence (AI) algorithms, machine learning (ML) algorithms, 3D models of the puzzle cube, and/or information from the wearable extended reality device (e.g., what the user sees in the extended reality environment) about the puzzle cube. For example, the algorithms may use information about the virtual object to render the virtual object from any angle (i.e., from the first perspective or the second perspective). Then, as the user interacts with the virtual object, changing what the user sees in the extended reality environment (i.e., from the first perspective), the algorithm may correspondingly update how the virtual object appears from other angles (i.e., from the second perspective). 
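The rendering step described above could be realized in many ways; the following sketch, with entirely assumed coordinate conventions, simply places a hypothetical observer camera opposite the user, mirrored across the virtual object, and builds a look-at view matrix from which the object's 3D model could then be rasterized or ray-cast to obtain the second-perspective representation.

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a right-handed view matrix for a camera at `eye` looking at `target`."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = right, true_up, -forward
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def second_perspective_view(user_eye, object_center):
    """Place a hypothetical observer camera opposite the user, mirrored across the object."""
    user_eye = np.asarray(user_eye, dtype=float)
    object_center = np.asarray(object_center, dtype=float)
    observer_eye = object_center + (object_center - user_eye)   # reflect the user's position
    return look_at(observer_eye, object_center)

if __name__ == "__main__":
    # Assumed layout: the user's eye is 0.5 m behind the virtual puzzle cube along -Z.
    view = second_perspective_view(user_eye=(0.0, 1.6, -0.5), object_center=(0.0, 1.4, 0.0))
    cube_corner = np.array([0.05, 1.45, 0.05, 1.0])   # one corner of the cube (homogeneous)
    print("corner in the observer's camera space:", view @ cube_corner)
```

In a fuller system, the observer camera would instead be placed at the estimated pose of the second image sensor, so that the rendered object lines up with the second image data during melding.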
In some embodiments, the rendering may be based on stored views of the virtual object from various angles and may select a view that represents an opposite side of the virtual object from what the user is currently viewing in the extended reality environment. In some embodiments, the information about the virtual object may be stored in a database or other data storage that may be accessed as part of the rendering. Some embodiments include melding the rendered representation of the at least one virtual object from the second perspective with the second image data to generate a video of the individual interacting with the at least one virtual object from the second perspective. As noted above, the rendered representation of the virtual object from the second perspective is what the outside observer would see if they could “look into” the extended reality environment. The melding may include a process of combining the rendered virtual object from the second perspective with the second image data (for example, similar to a “green screen” or “chroma key” effect in television or movies where one image is layered or composited with a second image). In some examples, the melding may include image stitching and/or object blending algorithms. In some examples, the melding may include using a generative model to analyze the rendered representation of the at least one virtual object from the second perspective and the second image data to generate the video of the individual interacting with the at least one virtual object from the second perspective. In embodiments where the method is performed by a processor, the processor may be configured to perform the melding, such as by performing a chroma keying algorithm. In some embodiments, the melding may be performed by a specialized processor in communication with the processor performing the method, for example, by a graphics processing unit (GPU), an application specific integrated circuit (ASIC), a digital signal processor (DSP), or a field programmable gate array (FPGA). In some embodiments, generating the video of the individual interacting with the virtual object from the second perspective may be performed by combining several melded still images together. In some embodiments, generating the video may be performed by melding a video of the rendered representation of the virtual object from the second perspective with a video of the individual from the second perspective. In embodiments where the method is performed by a processor, the processor may be configured to generate the video. In some embodiments, the video may be generated by a specialized processor in communication with the processor performing the method, for example, by a graphics processing unit (GPU), an application specific integrated circuit (ASIC), a digital signal processor (DSP), or a field programmable gate array (FPGA). In some embodiments, the generated video may be stored for later playback. For example, the generated video may be stored on the computing device. As another example, the generated video may be stored on a separate storage remote from the wearable extended reality appliance and the computing device, such as on a cloud-based storage service. In some embodiments, the generated video may not be stored and may be displayed on a screen (e.g., a television, a monitor, a tablet, a mobile phone, a mobile device, or other display device). FIG.30illustrates an exemplary melded view from the perspective of the second image sensor.
As illustrated in the figure, melded image3010includes an individual wearing extended reality appliance3012and interacting with a virtual object3016. Melded image3010also includes a depiction3014of the user's hands, in the physical environment from the second perspective, holding a virtual object3016(shown as a puzzle cube). In some embodiments, melded image3010may include a still image, a series of one or more still images, or a video. By melding the rendered representation of the virtual object from the second perspective with the second image data (i.e., the video captured from the second perspective), the generated video may represent what the outside observer sees as the user interacts with the virtual object. In some embodiments, the identified first physical hand movements are associated with a gesture for causing a movement of the at least one virtual object. A gesture for causing a movement of the virtual object may include any hand motion that the user may make to move the virtual object in the extended reality environment. For example, the user may turn the virtual object (i.e., a movement of the virtual object) in their hands by rotating one or both of their hands in the direction that they wish to turn the virtual object (i.e., physical hand movements associated with a gesture). In some embodiments, the identified second physical hand movements may be associated with a gesture for causing the movement of the at least one virtual object. In some embodiments, both the identified first physical hand movements and the identified second physical hand movements may be associated with a gesture for causing the movement of the at least one virtual object. The gesture may cause movement of a virtual object as the result of the gesture being recognized through machine vision as having a particular function. For example, image analysis performed on hand motion may detect a type of rotation that corresponds in memory to a particular object movement, and that movement may then be translated to the associated virtual object. In some embodiments, rendering the representation of the at least one virtual object from the second perspective reflects the movement of the at least one virtual object. As the virtual object is moved in the extended reality environment, the rendered representation of the virtual object from the second perspective is updated to reflect the movement. The rendering to reflect the movement of the virtual object may be performed in a similar manner as described above. For example, as shown inFIG.30, if the user turns one portion of the virtual puzzle cube away from the user, the rendered representation of the virtual puzzle cube from the second perspective would show that the portion of the virtual puzzle cube is turned toward the outside observer. In some embodiments, the user may interact with the virtual object in ways other than by moving the virtual object. For example, the user interaction may include changing the virtual object's size by simultaneously moving both hands toward the virtual object to make the virtual object smaller or by simultaneously moving both hands away from the virtual object to make the virtual object larger. As another example, the user may change the orientation of the virtual object by turning it. As another example, the user may change the location of the virtual object in the extended reality environment by moving the virtual object from one portion of the extended reality environment to another portion of the extended reality environment. 
For example, the user may move the virtual object from the left side of the extended reality environment to the right side of the extended reality environment. As another example, the user may change the appearance of the virtual object by changing the color of the virtual object. In some embodiments, the user may change the appearance of the virtual object by using a user interface tool. For example, to change the color of a virtual object, the user may use a color picker tool. In some embodiments, the user may change information presented by the virtual object. For example, if the virtual object is a virtual screen containing a text document, the user may scroll through the text, may add text, or may delete text, thereby changing the information presented on the virtual screen. In such an example, the rendered representation of the virtual screen may include a “backwards” version of the text document such that the text orientation is the opposite of what the user sees in the extended reality environment (i.e., the rendered representation may appear to the outside observer as if the outside observer was looking through a transparent screen of text from behind). In some embodiments, the identified first physical hand movements are associated with a gesture for causing a modification to a visual appearance of a portion of a surface of the at least one virtual object. For example, the identified first physical hand movements may be associated with a predefined gesture (i.e., the user moves their hands in a predefined way or to a predefined location in the extended reality environment) to activate a user interface tool to enable the user to change a color of a portion of the surface of the virtual object. For example, if the user moves their left hand to the upper left corner of the extended reality environment, a user interface tool panel may be activated (i.e., appear in the extended reality environment) from which the user may select a tool with which they may interact with the virtual object. For example, the user may use a color picker tool to change the color of a virtual object. As another example, the user may use a size-changing tool to change the size of the virtual object. As another example, the user may use a text tool to add text to the virtual object. As another example, the user may use one or more drawing tools, such as a predefined shape tool (e.g., a square, a rectangle, or a circle) or a freehand drawing tool (e.g., a pencil, a marker, or a paintbrush), to draw on the virtual object. In some examples, the at least one virtual object may include a user interface, and the gesture for causing the modification to the visual appearance of the portion of the surface of the at least one virtual object may include at least one of a gesture for entering data into the user interface, a gesture for selecting an element of the user interface, a gesture for minimizing at least one element of the user interface, or a gesture for expanding at least one element of the user interface. In some embodiments, the portion of the surface is visible from the second perspective and is not visible from the first perspective. For example, if the virtual object is a volleyball, the user may change the color of one panel of the volleyball (i.e., a portion of the surface of the volleyball) and that one panel may be visible from the outside observer's perspective (i.e., the second perspective) but not from the user's perspective (i.e., the first perspective).
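One hedged way to picture the surface modification just described is to give the virtual object a per-face color map and track which faces are visible from each perspective. The face names and visibility sets below are illustrative assumptions for a simple cube held between the user and the outside observer.

```python
# Hypothetical per-face color map for a virtual cube held between the user and the observer.
face_colors = {
    "toward_user": "white",
    "toward_observer": "white",
    "top": "white",
    "bottom": "white",
    "left": "white",
    "right": "white",
}

# Assumed visibility: each perspective sees three faces of the cube.
VISIBLE_FROM_FIRST = {"toward_user", "top", "right"}        # the user's view
VISIBLE_FROM_SECOND = {"toward_observer", "top", "left"}    # the outside observer's view

def paint_face(face: str, color: str) -> None:
    """Modify the visual appearance of a portion of the object's surface."""
    face_colors[face] = color

def render(perspective_faces: set) -> dict:
    """Return only the painted faces that the given perspective can see."""
    return {face: face_colors[face] for face in perspective_faces}

if __name__ == "__main__":
    paint_face("toward_observer", "red")   # visible from the second perspective only
    print("user sees:    ", render(VISIBLE_FROM_FIRST))
    print("observer sees:", render(VISIBLE_FROM_SECOND))
```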
In some embodiments, rendering the representation of the at least one virtual object from the second perspective reflects the modification to the visual appearance of the surface of the at least one virtual object. The rendering to reflect the modification to the visual appearance of the surface of the virtual object may be performed in a similar manner as described above. The rendering would show the modified appearance of the surface of the virtual object as seen from the perspective of the outside observer. In some embodiments, the modification to the visual appearance of the portion of the surface of the virtual object may be visible to the user interacting with the virtual object in the extended reality environment but may not be visible from the second perspective (i.e., to the outside observer) in the generated video. For example, if the virtual object is a volleyball, the user may change the color of one panel of the volleyball (i.e., a portion of the surface of the volleyball) and that one panel may be visible from the user's perspective in the extended reality environment (i.e., the first perspective) but not from the outside observer's perspective (i.e., the second perspective). In some embodiments, the operations include determining a position of the at least one virtual object in the extended reality environment. In some embodiments, determining the position of the virtual object may be based on a location of the virtual object relative to a fixed location in the extended reality environment (for example, relative to the user's location in the extended reality environment). For example, in the extended reality environment, the virtual object may not be in the user's hands but may be located at a distance from the user (i.e., the user would have to reach to hold the virtual object). In some embodiments, determining the position of the virtual object may be based on a location of the virtual object relative to one or more other virtual objects. In some embodiments, the position of the virtual object may be determined based on a distance from a fixed location or on a distance from a predetermined coordinate position. For example, the extended reality environment may include an internal coordinate system and the position of a virtual object may be determined based on the location of the virtual object in that coordinate system. In some embodiments, the coordinate system may include a field-of-view of the extended reality environment, as described elsewhere in this disclosure. In some embodiments, the coordinate system may include an entirety of the extended reality environment, including portions of the extended reality environment outside the field-of-view. In some embodiments, rendering the representation of the at least one virtual object from the second perspective is based on the determined position. The rendering of the virtual object from the second perspective based on the determined position may be performed in a similar manner as described above. Continuing the above example, the virtual object may be rendered from the second perspective as being positioned at a distance from the user. For example, from the second perspective, the virtual object may appear to be closer to the outside observer than to the user (i.e., the virtual object may appear to be located between the outside observer and the user). In some embodiments, the determined position of the at least one virtual object includes a distance between the at least one virtual object and the individual.
For example, in the extended reality environment, the virtual object may appear to be 0.5 meters away from the individual. In some embodiments, the distance may be any distance and may be measurable in any units, such as, but not limited to, millimeters, centimeters, meters, inches, or feet. In some embodiments, rendering the representation of the at least one virtual object from the second perspective is based on the determined distance. The rendering of the virtual object from the second perspective based on the determined distance may be performed in a similar manner as described above. Since objects appear smaller at greater distances, the determined distance may impact the size of the object rendered. Similarly, the determined distance may also impact perspective, and the rendered perspective may reflect the determined distance. Continuing the above example, if the virtual object is 0.5 meters away from the individual in the extended reality environment, then from the second perspective, the virtual object may also be rendered to be 0.5 meters away from the individual. As another example, in the extended reality environment, the user may move the virtual object toward the user or away from the user. So, if the user moves the virtual object toward herself, the virtual object may appear to the user in the extended reality environment as getting larger, whereas from the second perspective, the virtual object may appear to the outside observer as getting smaller. Similarly, if the user moves the virtual object away from herself in the extended reality environment, the virtual object may appear to the user in the extended reality environment as getting smaller, whereas from the second perspective, the virtual object may appear to the outside observer as getting larger. In some embodiments, the determined position of the at least one virtual object includes a spatial orientation of the at least one virtual object within the extended reality environment. The spatial orientation of the virtual object within the extended reality environment relates to the position, attitude, inclination, and/or rotation of the virtual object in the extended reality environment relative to other objects in the extended reality environment, including the user. For example, if the extended reality environment includes multiple virtual objects in different locations, the relative orientation of each virtual object (i.e., the spatial orientation of each virtual object relative to each other) may be determined. As another example, the spatial orientation of a virtual object may be determined based on a predetermined coordinate position. For example, the extended reality environment may include an internal coordinate system and the spatial orientation of a virtual object may be determined based on the spatial orientation of the virtual object in that coordinate system. In some embodiments, the coordinate system may include a field-of-view of the extended reality environment, as described elsewhere in this disclosure. In some embodiments, the coordinate system may include an entirety of the extended reality environment, including portions of the extended reality environment outside the field-of-view. In some examples, the spatial orientation of the at least one virtual object may be selected to make a selected side of the at least one virtual object to face the user. 
For example, the selected side may include textual data, and the selection of the spatial orientation may make the textual data viewable (and in some cases readable) by the user. In some examples, the spatial orientation of the at least one virtual object may be selected to make a selected side of the at least one virtual object be directed to a selected direction in the extended reality environment. For example, a spatial orientation for virtual vase of flowers2916may be selected so that the opening of the virtual vase is facing up. In another example, a spatial orientation of a virtual arrow may be selected so that the virtual arrow points to a selected location and/or direction. In some embodiments, rendering the representation of the at least one virtual object from the second perspective is based on the determined spatial orientation. The rendering of the virtual object from the second perspective based on the determined spatial orientation may be performed in a similar manner as described above. Continuing the above example, the relative position of each virtual object as viewed in the extended reality environment may be maintained when the virtual objects are rendered from the second perspective. As another example, the position of each virtual object when rendered from the second perspective may be maintained based on the location and/or spatial orientation of the virtual object in a coordinate system as described above. Referring toFIG.29, image2910includes virtual depiction2912of a puzzle cube, virtual depiction2914of a volleyball, and virtual depiction2916of a vase of flowers. Image2910is from the first perspective and shows how virtual objects2912,2914, and2916may appear to a user while viewing the extended reality environment. As shown in image2910, the spatial orientation of the virtual objects is that the vase of flowers2916appears to the user's left, the puzzle cube2912appears in front of the user or in the user's hands (i.e., in the center of the extended reality environment), and the volleyball2914appears to the user's right. FIG.31is a melded image3110from the second perspective. Image3110includes a depiction3112of the user in the physical environment, a depiction3114of the user's hands in the physical environment holding a virtual object3116of a puzzle cube, a depiction3118of a volleyball, and a depiction3120of a vase of flowers. As can be seen by comparing image2910and image3110, the relative positions of the virtual objects (i.e., the spatial orientation) from the first perspective (as shown in image2910) are maintained in the second perspective (as shown in image3110). From the second perspective as shown in image3110, the spatial orientation of the volleyball3118appears to the outside observer's left, the puzzle cube3116appears in front of the user or in the user's hands (i.e., in the center of image3110), and the vase of flowers3120appears to the outside observer's right. In some embodiments, the at least one virtual object includes text presented by the wearable extended reality appliance on a side of the at least one virtual object that faces the individual. In some embodiments, the text may be presented as a layer on top of the virtual object such that part of the virtual object is occluded. In some embodiments, “text” as used herein may also include an image, a logo, a text message, instructions, icons, arrows, or an alert. In some embodiments, generating the video includes providing a representation of the text in the video. 
Text may include characters, words, sentences, lettering, symbols, or any other form of expression. A representation of text may include an illustration or presentation of the particular form of expression from an associated perspective. For example, if the virtual object is transparent or intended to be transparent (e.g., a virtual screen including a text document), the generated video may include a representation of the text appearing as a “backwards” image of what the user sees (i.e., the text may appear to the outside observer to be in the opposite orientation). For example, if the text appears to the user in the extended reality environment (i.e., from the first perspective) in a left-to-right orientation, the representation of the text in the generated video may appear to the outside observer (i.e., from the second perspective) in a right-to-left orientation. In some embodiments, the text may only be visible to the individual in the extended reality environment and may not be visible to the outside observer (i.e., may be visible from the first perspective and may not be visible from the second perspective). For example, if the virtual object is a solid object (e.g., a coffee mug) with text on one side facing the individual in the extended reality environment, the outside observer would not be able to see through the coffee mug to read the text from the second perspective. This is the same result as would occur in the physical environment with a physical coffee mug (i.e., in the physical environment, the outside observer cannot see through the physical coffee mug to read the text facing the individual). In some embodiments, rendering the representation of the at least one virtual object includes determining an opacity for the representation of the at least one virtual object. The opacity of the virtual object represents how transparent the virtual object is and whether the user can “see through” the virtual object to be able to observe the physical environment. In some embodiments, the opacity of the virtual object may be automatically adjusted based on detection of activity in the physical environment. For example, if the outside observer approaches the individual wearing the extended reality appliance, the opacity of one or more virtual objects (or of the entire extended reality environment) may be adjusted such that the individual can see the outside observer in the physical environment. For example, if the virtual object is a virtual screen including text, the opacity of the virtual screen may be determined such that the outside observer may see the text when rendered from the second perspective; i.e., the opacity may be determined to be low enough that the outside observer may see through the virtual object when rendered from the second perspective. In some embodiments, the opacity is determined to be less than 75% when the at least one virtual object obscures at least a portion of the individual from the second perspective. For example, if the virtual object is a solid object (i.e., not meant to be transparent), then the opacity of the virtual object when rendered from the second perspective may be reduced (e.g., to 75% or less) such that the individual may be visible “through” the virtual object even if the individual may not be fully visible. For example, the opacity of the virtual object may be reduced such that the outside observer may be able to see through the virtual object to see the individual. 
As another example, the opacity of the virtual object may be reduced such that the outside observer may be able to see through the virtual object to read text or see an image on a side of the virtual object that faces the individual in the extended reality environment. In some embodiments, the at least one virtual object contains private information and rendering the representation of the at least one virtual object includes obscuring the private information. Private information may include any data or representation designated as confidential to one or more persons, or otherwise restricted for viewing purposes. The rendering of the virtual object to obscure the private information may be performed in a similar manner as described above. For example, the virtual object may include a virtual screen with a text document and the text document may contain private information. In some embodiments, the fact that the virtual object contains private information may be indicated by a private information identifier, such as a flag or other type of identifier. If the private information identifier is detected (e.g., the private information identifier is set or otherwise indicates that the virtual object contains private information), then rendering the virtual object from the second perspective may include obscuring the private information. For example, the rendered virtual object may include “greeked” text (e.g., rendering of the text as unreadable symbols or lines) such that the private information is not readable from the second perspective. As another example, the opacity of the virtual object may be adjusted such that the outside observer cannot see through the virtual object from the second perspective to read the private information. As another example, the text may be obscured by other means, such as blurring or distorting the text, covering the text with an opaque box, or changing the color of the text to match the background. In any of these examples, the user in the extended reality environment would still be able to read the private information. In some embodiments, the at least one virtual object includes a first object in a position visible from the second perspective and a second object in a position hidden from the second perspective. For example, in the extended reality environment, the second virtual object may appear to be closer to the user and the first virtual object may appear to be behind the second virtual object. In some embodiments, the first virtual object may be larger than the second virtual object such that the first virtual object may be partially obscured by the second virtual object in the extended reality environment. From the second perspective, the first virtual object may be visible to the outside observer, but the second virtual object may be blocked from view by the first virtual object (e.g., if the first virtual object is larger than the second virtual object) such that the outside observer cannot see the second virtual object. In some embodiments, the generated video includes a representation of the first object from the second perspective in a first visual format and a representation of the second object from the second perspective in a second visual format, the second visual format differs from the first visual format and is indicative of the second object being in the position hidden from the second perspective. Generating the video may be performed in a similar manner as described earlier. 
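As a minimal sketch of the private-information obscuring described above (the flag name and the block-character substitution are assumptions; blurring, an opaque box, or color matching would serve equally well), readable characters can be replaced before the text is rendered into the second-perspective video, while the user's own view of the text remains unchanged.

```python
import re

def greek_text(text: str) -> str:
    """Replace every readable character with a block symbol so the layout survives
    but the content cannot be read from the second perspective."""
    return re.sub(r"\S", "\u2588", text)

def render_text_for_observer(text: str, contains_private_information: bool) -> str:
    """Return the text to composite into the second-perspective video.

    `contains_private_information` stands in for the private-information identifier
    (flag) described above; when set, the text is greeked before rendering.
    """
    return greek_text(text) if contains_private_information else text

if __name__ == "__main__":
    document = "Account 1234-5678\nBalance: $9,000"
    print(render_text_for_observer(document, contains_private_information=True))
    print(render_text_for_observer(document, contains_private_information=False))
```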
In some embodiments, different virtual objects may be rendered in different visual formats. A visual format may include parameters of how a virtual object is to be rendered, for example, object size, line width, line color, line style, object fill, object fill color, intensity, texture, or transparency/opacity. In some embodiments, the different visual formats may include an indicator or a parameter (e.g., a flag, a tag, or other indicator) in the visual format that the virtual object is in a hidden position. In some examples, the different visual formats may be indicative of the virtual object being in a hidden position. For example, a half transparent rendering of a normally non-transparent virtual object may be indicative of the virtual object being in a hidden position. For example, in the extended reality environment, the first virtual object may be larger than the second virtual object and may be behind the second virtual object such that from the second perspective the second virtual object may be hidden behind the first virtual object (i.e., the outside observer cannot see the second virtual object). To render the image from the second perspective such that the outside observer may see both the first virtual object and the second virtual object, the first virtual object may be rendered with a reduced opacity (i.e., the first visual format) and the second virtual object may be rendered with its “normal” opacity (i.e., the second visual format) such that the outside observer may see “through” the first virtual object to see the second virtual object. As another example, the first virtual object may be rendered in an outline representation (i.e., not “filled in” or as a “wireframe” in the first visual format) and the second virtual object may be rendered with its “normal” opacity (i.e., the second visual format) such that the outside observer may see “through” the first virtual object to see the second virtual object. In some embodiments, in the generated video the rendered representation of the at least one virtual object from the second perspective hides the physical hand and includes a virtual representation of the physical hand. Generating the video may be performed in a similar manner as described earlier. For example, the generated video may include a virtual representation of the user's hands instead of a representation of the user's hands in the physical environment. The virtual representation of the user's hands from the second perspective may be rendered in a similar manner as rendering the virtual object from the second perspective. In some embodiments, the virtual representation of the user's hands may be based on the first image data including the user's hands in the physical environment from the first perspective and the second image data including the user's hands in the physical environment from the second perspective. In some embodiments, the virtual representation of the user's hands from the second perspective may be generated using a machine learning algorithm, such as a generative adversarial network (GAN), a convolutional neural network (CNN), a recurrent neural network (RNN), or other machine learning algorithm as described earlier. In some embodiments, the generated video may include a virtual representation of the user's hands and a generated representation of the user's face without the wearable extended reality appliance.
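Tying the melding and visual-format ideas above together, the sketch below composites two rendered virtual-object layers over a frame of the second image data: the object in a hidden position is given a half-transparent format, and, as an assumed extra rule consistent with the opacity behavior described earlier, any layer flagged as obscuring the individual is capped below 75% opacity. The layer structure and all numeric values are illustrative assumptions.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Layer:
    pixels: np.ndarray            # HxWx3 rendering of one virtual object (second perspective)
    mask: np.ndarray              # HxW coverage in [0, 1]; 0 where the layer is empty
    opacity: float = 1.0          # visual-format opacity
    obscures_individual: bool = False

def effective_opacity(layer: Layer, cap_when_obscuring: float = 0.7) -> float:
    """Cap opacity (kept below 75%) when the object would hide part of the individual."""
    return min(layer.opacity, cap_when_obscuring) if layer.obscures_individual else layer.opacity

def meld(background: np.ndarray, layers) -> np.ndarray:
    """Composite the rendered layers over the second image data, back to front."""
    out = background.astype(float)
    for layer in layers:
        alpha = (layer.mask * effective_opacity(layer))[..., None]
        out = alpha * layer.pixels + (1.0 - alpha) * out
    return out.astype(background.dtype)

if __name__ == "__main__":
    h, w = 4, 6
    frame = np.full((h, w, 3), 200, dtype=np.uint8)      # stand-in for second image data
    blue = np.zeros((h, w, 3)); blue[..., 2] = 255.0     # rendering of the hidden object
    red = np.zeros((h, w, 3)); red[..., 0] = 255.0       # rendering of the front object
    mask_hidden = np.zeros((h, w)); mask_hidden[1:3, 2:4] = 1.0
    mask_front = np.zeros((h, w)); mask_front[1:3, 1:3] = 1.0
    hidden = Layer(blue, mask_hidden, opacity=0.5)              # hidden: half transparent format
    front = Layer(red, mask_front, obscures_individual=True)    # front: capped opacity
    print(meld(frame, [hidden, front])[:, :, 0].astype(int))
```

Per-frame compositing like this, repeated over the paired frames, is one way the still images could be combined into the generated video.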
In some embodiments, the operations include analyzing at least one of the first image data or the second image data to determine an absence of interaction with a particular virtual object. Analyzing the first image data or the second image data may be performed in a similar manner as described earlier. In some embodiments, in the extended reality environment, the user may have stopped interacting with the virtual object. For example, the user may place the virtual object on a surface or drop the virtual object. In some embodiments, when a user “drops” the virtual object in the extended reality environment, the virtual object may “float” in front of the user until the user moves the virtual object to a different location. In some embodiments, when a user “drops” the virtual object in the extended reality environment, the virtual object may automatically be placed in a predetermined location. In some embodiments, when the user stops interacting with the virtual object, the generated video excludes a representation of the particular virtual object. Generating the video may be performed in a similar manner as described above. For example, if the user stops interacting with the virtual object, the user may be able to see the virtual object in the extended reality environment, but the outside observer would no longer see the virtual object in the generated video. In some embodiments, however, when the user stops interacting with the virtual object, the generated video may include the representation of the particular virtual object. For example, the virtual object may “float” near the user from the second perspective. As another example, the virtual object may be automatically placed in a predetermined location, visible both in the extended reality environment and in the generated video from the second perspective. In some embodiments, the operations include rendering for display a representation of the extended reality environment from the second perspective. The rendering may be performed in a similar manner as described earlier. For example, the entire extended reality environment as seen by the user wearing the extended reality appliance may be rendered from the second perspective; i.e., the outside observer may see everything that the user sees, but from the second perspective, similar to what the outside observer would see if they wore an extended reality appliance and were viewing the same extended reality environment as the user. In some embodiments, the operations include generating an additional video of the individual in the extended reality environment interacting with the at least one virtual object from the second perspective. Generating the additional video may be performed in a similar manner as described earlier. In some embodiments, the additional video may include a complete representation of the extended reality environment melded with a representation of the user in the physical environment. For example, in the generated video, it may appear that the extended reality environment is placed between the outside observer and the individual in the physical environment. In some embodiments, the additional video may include a complete representation of the extended reality environment melded with a virtual representation of the individual in the extended reality environment. 
For example, in the generated video, it may appear to the outside observer as if the outside observer were wearing an extended reality appliance and viewing the same extended reality environment as the individual. In some embodiments, the operations include artificially deleting the wearable extended reality appliance from the second image data. Artificially deleting the wearable extended reality appliance from the second image data may enable the outside observer to see the user's entire face, as if the user were not wearing the extended reality appliance. In some embodiments, the artificially deleting may be performed in a similar manner as the rendering described earlier. In some embodiments, the operations include generating the video of the individual, without the wearable extended reality appliance, interacting with the at least one virtual object from the second perspective. Generating the video may be performed in a similar manner as described above. In some embodiments, an image of the individual's face from the second perspective may be replaced with a previously captured image of the individual's face taken while the individual was not wearing the extended reality appliance. The previously captured image of the individual's face may be retrieved from a storage location. For example, the storage location may be on the wearable extended reality appliance, on the computing device, or on a remote storage separate from the wearable extended reality appliance and the computing device. For example, an image similar to image 3010 as shown in FIG. 30 may be generated and may include showing the individual's full face. In some embodiments, an AI algorithm, such as any one of the AI or machine learning algorithms described earlier, may be used to render the image of the individual's full face from the second perspective without the wearable extended reality appliance. For example, the AI algorithm may combine the previously captured image of the individual's face with image 3010 to generate the new image from the second perspective. In some embodiments, the operations include causing the wearable extended reality appliance to present a preview of the video of the individual interacting with the at least one virtual object from the second perspective while the individual is interacting with the at least one virtual object. The preview video may be generated in a similar manner as generating the video described earlier. For example, the generated video may be previewed by the user in the extended reality environment, such as by displaying the preview of the generated video in a separate virtual window in the extended reality environment. In some embodiments, the operations include sharing the generated video with at least one other individual while the individual wearing the wearable extended reality appliance is interacting with the at least one virtual object. Generating the video may be performed in a similar manner as described earlier. For example, the operations may include automatically sending the generated video to another individual. In some embodiments, the generated video may be sent to another individual by any one or more of: an email message, a social media account message, a social media account post, a hyperlink or other type of link, or by displaying the generated video on a screen (e.g., a television, a monitor, a tablet, a mobile phone, a mobile device, or other display device). 
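One generic way to realize the "artificially deleting" of the appliance described above is classical image inpainting followed by pasting in a previously captured, appliance-free face image. The sketch below is only an illustration, not the disclosed method: it assumes a segmentation mask for the appliance region is produced elsewhere (for example, by a segmentation model), and the face bounding box is likewise assumed to come from an upstream detector.

```python
# Hedged sketch: remove the appliance region via inpainting, then paste a
# stored face crop. The appliance mask and face box are assumed inputs.
import cv2
import numpy as np

def remove_appliance(frame_bgr: np.ndarray, appliance_mask: np.ndarray) -> np.ndarray:
    """appliance_mask: single-channel uint8, 255 where the appliance covers the face."""
    return cv2.inpaint(frame_bgr, appliance_mask, 3, cv2.INPAINT_TELEA)

def paste_stored_face(frame_bgr: np.ndarray, stored_face_bgr: np.ndarray, face_box) -> np.ndarray:
    """Replace the face region with a previously captured, appliance-free face image."""
    x, y, w, h = face_box
    resized = cv2.resize(stored_face_bgr, (w, h))
    out = frame_bgr.copy()
    out[y:y + h, x:x + w] = resized
    return out
```

A learned approach (e.g., a generative model conditioned on the stored face image, as the passage above suggests) would typically blend the regions more seamlessly than this simple paste.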
In some embodiments, the individual wearing the wearable extended reality appliance may initiate the sharing of the generated video. For example, the individual may activate a user interface control in the extended reality environment to initiate the sharing. As another example, the individual may activate a physical control on the wearable extended reality appliance or on the computing device to initiate the sharing. In some embodiments, the outside observer may initiate the sharing of the generated video. For example, the outside observer may activate a physical control on the wearable extended reality appliance or on the computing device to initiate the sharing. In some embodiments, the operations include receiving input that an additional video of the individual interacting with the at least one virtual object from the first perspective is preferred. In some embodiments, the input may be received from the individual in the extended reality environment, for example, by the individual activating a user interface control in the extended reality environment. In some embodiments, the input may be received from the user by the user activating a physical control on the wearable extended reality appliance or on the computing device. In some embodiments, it may be easier to observe how the user is interacting with the virtual object from the first perspective than from the second perspective. For example, if the virtual object is a virtual screen including a text document and the user is typing text into the document, it may be easier for the outside observer to read the document from the first perspective than from the second perspective because in the first perspective the outside observer would see the text oriented in the same direction as the user. As another example, if the user is repairing a bicycle in the extended reality environment, it would be easier for the outside observer to understand how the user is repairing the bicycle if the outside observer was able to see the video from the first perspective. In some embodiments, the operations include melding a representation of the at least one virtual object from the first perspective with the first image data to generate the additional video of the individual interacting with the at least one virtual object from the first perspective. The melding may be performed in a similar manner as described earlier. In this embodiment, the additional video may be generated from the first perspective in a similar manner as generating the video from the second perspective as described earlier. While different image data may be used in generating the additional video (i.e., substituting the first image data for the second image data), the methods for melding and generating may be similar. For example, an image of the user's hands in the physical environment (i.e., the first image data) may be melded with the virtual object from the first perspective to generate the additional video. In some embodiments, the operations include switching from the video from the second perspective to the additional video from the first perspective. In some embodiments, the switching may be initiated by the user by activating a user interface control in the extended reality environment. In some embodiments, the switching may occur based on a predetermined action by the individual. For example, the individual may perform a predetermined hand gesture in the extended reality environment to activate the switching. 
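The perspective switching described above can be pictured as choosing, for each output frame, either the user-perspective frame or the observer-perspective frame according to the currently requested view. The sketch below is a simplified illustration; the "first"/"second" command strings and equal-length frame lists are assumptions for demonstration only.

```python
# Minimal sketch of switching the generated video between perspectives.
def compose_output(frames_first, frames_second, perspective_per_frame):
    """frames_first / frames_second: equal-length frame sequences;
    perspective_per_frame: "first" or "second" for each frame index."""
    output = []
    for f_first, f_second, which in zip(frames_first, frames_second, perspective_per_frame):
        output.append(f_first if which == "first" else f_second)
    return output

# Example: the video starts from the second perspective, then switches to the first.
print(compose_output(["f1a", "f1b"], ["f2a", "f2b"], ["second", "first"]))
# ['f2a', 'f1b']
```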
In some embodiments, the switching may be initiated by the user activating a physical control on the wearable extended reality appliance or on the computing device. In some embodiments, the video may begin from the second perspective and switch to being from the first perspective. In some embodiments, the video may switch between the first perspective and the second perspective multiple times. In some embodiments, a "split screen" video may be generated where one portion of the video (e.g., the left side) is video from the second perspective and another portion of the video (e.g., the right side) is video from the first perspective. In some embodiments, the input is received from the at least one other individual. For example, the other individual may be the outside observer. For example, the outside observer may enter a command or activate a physical control on the computing device selectively connected to the wearable extended reality appliance to initiate the switching. FIG. 32 is a flowchart of an exemplary method 3210 for generating videos of individuals interacting with virtual objects. The terms "generating," "videos," and "interacting" are used in a similar manner as described above. FIG. 32 is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. One or more operations of method 3210 may be performed by a processor associated with a wearable extended reality appliance. For example, a first processor may be located in the wearable extended reality appliance and may perform one or more operations of the method 3210. As another example, a second processor may be located in a computing device selectively connected to the wearable extended reality appliance, and the second processor may perform one or more operations of the method 3210. As another example, the first processor and the second processor may cooperate to perform one or more operations of the method 3210. The cooperation between the first processor and the second processor may include load balancing, work sharing, or other known mechanisms for dividing a workload between multiple processors. Method 3210 may include a step 3212 of generating a presentation of an extended reality environment including at least one virtual object. In some embodiments, the extended reality environment may be presented via the wearable extended reality appliance. The extended reality environment may be generated and presented in a similar manner as described above. Method 3210 may include a step 3214 of receiving first image data from at least a first image sensor. The terms "first image data" and "first image sensor" are used in a similar manner as described above. In some embodiments, the first image data may reflect a first perspective of an individual wearing the wearable extended reality appliance. In some embodiments, the first image sensor may be a part of the wearable extended reality appliance. Method 3210 may include a step 3216 of receiving second image data from at least a second image sensor. The terms "second image data" and "second image sensor" are used in a similar manner as described above. In some embodiments, the second image data may reflect a second perspective facing the individual wearing the wearable extended reality appliance. In some embodiments, the second image sensor may be a part of a computing device selectively connected to the wearable extended reality appliance. 
Method 3210 may include a step 3218 of identifying first physical hand movements in the first image data. The terms "first physical hand movements" and "identified" are used in a similar manner as described above. In some embodiments, the first physical hand movements may represent the individual interacting with the at least one virtual object from the first perspective. Method 3210 may include a step 3220 of identifying second physical hand movements in the second image data. The terms "second physical hand movements" and "identified" are used in a similar manner as described above. In some embodiments, the second physical hand movements may represent the individual interacting with the at least one virtual object from the second perspective. Method 3210 may include a step 3222 of analyzing the first image data or the second image data to determine an interaction with the at least one virtual object. The term "analyzing" is used in a similar manner as the term "analyzed" described above. In some embodiments, the analyzing may include analyzing both the first image data and the second image data. In some embodiments, the interaction may correspond only to the first physical hand movements. In some embodiments, the interaction may correspond only to the second physical hand movements. In some embodiments, the interaction may correspond to both the first physical hand movements and the second physical hand movements. Method 3210 may include a step 3224 of rendering the at least one virtual object for display from the second perspective. The term "rendering" is used in a similar manner as the term "rendered" described above. For example, if the user is holding a virtual puzzle cube, the user will see a certain color combination facing in the user's direction. To render the puzzle cube from the second perspective (i.e., the face of the puzzle cube that the outside observer would see), the rendering may include using artificial intelligence (AI) algorithms and/or machine learning (ML) algorithms as described above and/or information from the wearable extended reality appliance about the puzzle cube. Method 3210 may include a step 3226 of melding the rendered representation of the at least one virtual object from the second perspective with the second image data to generate a video of the individual interacting with the at least one virtual object from the second perspective. The term "melding" is used in a similar manner as the term "melded" described above. In some embodiments, instead of a video, a still melded image or a series of one or more still melded images may be generated. In an alternative embodiment, the generated video may be based on only the second image data and the rendered virtual object from the second perspective; i.e., the first image data may not be received or may not be analyzed. In such an alternative embodiment, the second image data from the second image sensor may be received. The second physical hand movements interacting with the at least one virtual object from the second perspective may be identified. The second image data may be analyzed to determine an interaction with the at least one virtual object. The at least one virtual object may be rendered from the second perspective and may be melded with the second image data to generate the video. It is noted that in this alternative embodiment, the generated video would be limited to the second perspective. Some embodiments may provide a system for generating videos of individuals interacting with virtual objects. 
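The melding in step 3226 can be pictured, in its simplest form, as alpha compositing: an RGBA render of the virtual object from the second perspective is blended over the camera frame from the second image sensor, frame by frame. The sketch below is only one possible, simplified realization; the array shapes and the assumption that the renderer outputs an RGBA image are illustrative.

```python
# Hedged sketch of melding a rendered virtual object with a camera frame.
import numpy as np

def meld(camera_bgr: np.ndarray, render_bgra: np.ndarray) -> np.ndarray:
    """camera_bgr: HxWx3 uint8 frame from the second image sensor;
    render_bgra: HxWx4 uint8 render of the virtual object with an alpha channel."""
    alpha = render_bgra[..., 3:4].astype(np.float32) / 255.0
    virtual = render_bgra[..., :3].astype(np.float32)
    blended = alpha * virtual + (1.0 - alpha) * camera_bgr.astype(np.float32)
    return blended.astype(np.uint8)

# Applying meld() to every frame pair yields the observer-perspective video;
# applying it to a single frame pair yields a still melded image.
```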
The system includes at least one processor programmed to cause a wearable extended reality appliance to generate a presentation of an extended reality environment including at least one virtual object; receive first image data from at least a first image sensor, the first image data reflecting a first perspective of an individual wearing the wearable extended reality appliance; receive second image data from at least a second image sensor, the second image data reflecting a second perspective facing the individual; identify in the first image data first physical hand movements interacting with the at least one virtual object from the first perspective; identify in the second image data second physical hand movements interacting with the at least one virtual object from the second perspective; analyze at least one of the first image data or the second image data to determine an interaction with the at least one virtual object; render for display a representation of the at least one virtual object from the second perspective; and meld the rendered representation of the at least one virtual object from the second perspective with the second image data to generate a video of the individual interacting with the at least one virtual object from the second perspective. For example, the system may include system 200 shown in FIG. 2. The at least one processor may include processing device 360 shown in FIG. 3 and/or processing device 460 shown in FIG. 4. The steps may be performed entirely by processing device 360, entirely by processing device 460, or jointly by processing device 360 and processing device 460. The cooperation between processing device 360 and processing device 460 may include load balancing, work sharing, or other known mechanisms for dividing a workload between multiple processing devices. Disclosed embodiments, including methods, systems, apparatuses, and non-transitory computer-readable media, may relate to enabling collaboration between physical writers and virtual writers, or between physical writers and virtual viewers. Some embodiments involve a non-transitory computer readable medium containing instructions for causing at least one processor to perform operations to enable collaboration between physical writers and virtual writers. The term "non-transitory computer readable medium" may be understood as described earlier. The term "instructions" may refer to program code instructions that may be executed by a computer processor. The instructions may be written in any type of computer programming language, such as an interpretive language (e.g., scripting languages such as HTML and JavaScript), a procedural or functional language (e.g., C or Pascal that may be compiled for converting to executable code), an object-oriented programming language (e.g., Java or Python), a logical programming language (e.g., Prolog or Answer Set Programming), or any other programming language. In some embodiments, the instructions may implement methods associated with machine learning, deep learning, artificial intelligence, digital image processing, and any other computer processing technique. The term "processor" may be understood as described earlier. For example, the at least one processor may be one or more of server 210 of FIG. 2, mobile communications device 206, processing device 360 of FIG. 3, processing device 460 of FIG. 4, processing device 560 of FIG. 5, and the instructions may be stored at any of memory devices 212, 311, 411, or 511, or a memory of mobile communications device 206. A physical writer may be any individual. 
A virtual writer may be any individual. A virtual viewer may be any individual. Collaboration between physical writers and virtual writers may include interactions between physical writers and virtual writers in extended reality environments. For example, disclosed embodiments may relate to one or more ways for individuals working in extended reality to add annotations to a physical surface (e.g., a physical document or whiteboard) concurrently edited in a physical space, even when the individuals are not located in the same physical space (e.g., physical room). Disclosed embodiments may involve augmenting virtual markings of a remote writer over tangible markings of an individual wearing an extended reality appliance. Collaboration between physical writers and virtual viewers may include interactions between physical writers and virtual viewers in extended reality environments. Some embodiments involve receiving image data representing a hand of a first physical writer holding a physical marking implement and engaging with a physical surface to create tangible markings, wherein the image data is received from an image sensor associated with a wearable extended reality appliance worn by the first physical writer. A physical marking implement may include, for example, a pen, pencil, piece of chalk, highlighter, marker, brush, or any other implement configured to create markings on a physical surface. The physical surface may be a surface of a physical object, which may include, for example, a notebook, whiteboard, desk, table, wall, window, touch pad, cup, mobile device, screen, shelf, machine, vehicle, door, chair, or any other physical item or object. In some embodiments, the physical surface is at least one of a whiteboard or a paper. In some embodiments, the physical surface is a compilation of pages. The tangible markings may include, for example, a letter, word, sentence, paragraph, text, line, arc, freeform, shape, symbol, figure, drawing, feature, sign, or any other indication on a physical surface. The first physical writer may be any individual and may wear a wearable extended reality appliance. In some examples, the wearable extended reality appliance may include an image sensor. In some examples, an image sensor separate from the wearable extended reality appliance may be placed in the environment of the first physical writer. The image sensor, whether part of or separate from the wearable extended reality appliance, may be configured to capture images of the scenes in front of the image sensor. For example, the image sensor may continuously or periodically capture image data. The image data may represent a hand of the first physical writer holding a physical marking implement and engaging with a physical surface to create tangible markings. In some examples, at least one processor associated with the wearable extended reality appliance may receive, from the image sensor, the captured image data. FIG. 33 is a schematic diagram illustrating use of an exemplary wearable extended reality appliance consistent with some embodiments of the present disclosure. With reference to FIG. 33, a first physical writer 3310 may be an individual. First physical writer 3310 may wear a wearable extended reality appliance 3312. A hand of first physical writer 3310 may hold a physical marking implement 3316 and may engage (for example, via the physical marking implement 3316) with a physical surface 3314 to create tangible markings. 
The physical marking implement 3316 may be, for example, a pen, a pencil, a piece of chalk, a highlighter, a marker, a brush, or any other apparatus configured to create markings on a surface. An example of the physical surface 3314 as shown in FIG. 33 may be a notebook. In some examples, the physical surface 3314 may be a whiteboard, a piece of paper, a wall, a window, a physical glass surface, a physical table top, or a surface of any other physical object as desired by a person of ordinary skill in the art. An image sensor, whether part of or separate from wearable extended reality appliance 3312, may capture image data representing the hand of first physical writer 3310 holding physical marking implement 3316 and engaging with physical surface 3314 to create tangible markings. At least one processor associated with wearable extended reality appliance 3312 may receive the captured image data from the image sensor. FIGS. 34, 35, 36, and 37 are schematic diagrams illustrating various use snapshots of an example system for virtual sharing of a physical surface consistent with some embodiments of the present disclosure. FIGS. 34 and 37 may illustrate one or more elements as described in connection with FIG. 33 from another perspective (e.g., the perspective of first physical writer 3310, the perspective of wearable extended reality appliance 3312, or the perspective of the image sensor that may be part of or separate from wearable extended reality appliance 3312). With reference to FIG. 34, a hand of first physical writer 3310 may hold physical marking implement 3316 and may engage with physical surface 3314 to create tangible markings 3410, 3412. Examples of tangible markings 3410, 3412 as shown in FIG. 34 may be two patterns of drawings. In some examples, any other type of tangible marking as desired may be created on physical surface 3314. An image sensor associated with (e.g., part of or separate from) wearable extended reality appliance 3312 may capture the scenes of creating tangible markings using physical marking implement 3316 on physical surface 3314, including, for example, the scene as shown in FIG. 34. Some embodiments involve transmitting information based on the image data to at least one computing device associated with at least one second virtual writer, to thereby enable the at least one second virtual writer to view the tangible markings created by the first physical writer. The at least one second virtual writer may include one or more individuals (e.g., different from the first physical writer). In some examples, the at least one second virtual writer may be present in one or more locations different from a location in which the first physical writer may be present. For example, the at least one second virtual writer and the first physical writer may be present in different rooms, buildings, cities, countries, or in different locations having any desired distance therebetween. The at least one computing device associated with the at least one second virtual writer may include any type of computing device that the at least one second virtual writer may use (e.g., for collaborating, interacting, or communicating with the first physical writer and/or the wearable extended reality appliance worn by the first physical writer). In some embodiments, the at least one computing device associated with the at least one second virtual writer includes at least one of another wearable extended reality appliance, a desktop computer, a laptop, a tablet, or a smartphone. 
In some embodiments, the at least one computing device associated with the at least one second virtual writer may include one or more computing devices based on virtualization and/or cloud computing technologies, such as virtual machines. The at least one computing device associated with the at least one second virtual writer may be located in proximity to or remote from the at least one second virtual writer and may be accessed by the at least one second virtual writer. At least one processor associated with the wearable extended reality appliance worn by the first physical writer may process the image data captured by the image sensor and may, based on the processing, determine information for transmitting to the at least one computing device associated with the at least one second virtual writer. In some examples, the information to be transmitted to the at least one computing device associated with the at least one second virtual writer may represent the scenes as captured by the image sensor. For example, the transmitted information may allow the at least one second virtual writer to view (via the at least one computing device) the scenes as viewed by the first physical writer and/or as captured by the image sensor. The processing of the image data may include, for example, converting the captured image data into a format suitable for transmission. Additionally or alternatively, the information to be transmitted may be determined in such a manner that the information may allow one or more aspects of the scenes captured by the image sensor to be presented to the at least one second virtual writer. The one or more aspects may include, for example, the tangible markings created by the first physical writer, the physical surface on which the tangible markings may be created, and/or any other feature of the captured scenes. For example, the processing of the captured image data may include analyzing the captured image data to extract features such as the tangible markings created by the first physical writer and/or the physical surface on which the tangible markings may be created. For example, at least one processor may analyze the image data to track the movement of the physical writing implement relative to the physical surface and may, based on the tracked movement, determine the created tangible markings. In some examples, at least one processor may determine the look of the physical surface based on captured image data where the hand of the first physical writer and/or the physical writing implement are not covering portions of the physical surface. Some embodiments involve analyzing the image data received from the image sensor to remove at least one of the hand or the physical marking implement from the image data to thereby create modified image data, and using the modified image data to enable the at least one second virtual writer to view the tangible markings created by the first physical writer without the at least one of the hand or the physical marking implement. For example, by deleting the representation of the at least one of the hand or the physical marking implement and using an inpainting algorithm to fill the deleted portions, at least one processor may generate the modified image data. When the image data includes multiple images of the physical surface (for example, multiple frames of a video of the physical surface), the deleted portions of one image may be filled based on pixel data corresponding to the deleted portions in other images. 
For example, image processing software may recognize hands and marking implements (e.g., using content aware processing), and may remove them from an image. Alternatively and equivalently, markings may be recognized and removed from an image containing one or more of a hand or a marking implement. In some examples, a generative machine learning model may be trained using training examples to remove depictions of hands and/or depictions of marking implements from images and/or videos. An example of such training example may include a sample image or video including a depiction of a sample hand and/or marking implement, together with a modified version of the sample image or video not including the depiction of the sample hand and/or marking implement. The trained generative machine learning model may be used to analyze the image data received from the image sensor to remove at least one of the hand or the physical marking implement from the image data to thereby create modified image data. In some examples, the image data may be analyzed to calculate a convolution of at least part of the image data and thereby obtain a result value of the calculated convolution. Further, the creation of the modified image data may be based on the result value of the calculated convolution. In one example, one or more pixels associated with a depiction of the at least one of the hand or the physical marking implement may be identified based on the result value of the calculated convolution. In another example, pixels values of pixels associated with a depiction of the at least one of the hand or the physical marking implement may be modified to new pixel values, and the new pixel values may be determined based on the result value of the calculated convolution. At least one processor associated with the wearable extended reality appliance worn by the first physical writer may transmit the information based on the image data to the at least one computing device associated with the at least one second virtual writer. The at least one computing device associated with the at least one second virtual writer may receive the information from the wearable extended reality appliance and may use the received information to cause display to the at least one second virtual writer, for example, of the tangible markings created by the first physical writer. For example, the at least one computing device may cause display of the tangible markings, for example, via one or more screens associated with the at least one computing device. In some examples, the at least one computing device may include a wearable extended reality appliance and may cause display of the tangible markings virtually via a display system of the wearable extended reality appliance, which may include, for example, an optical head-mounted display, a monocular head-mounted display, a binocular head-mounted display, a see-through head-mounted display, a helmet-mounted display, or any other type of device configured to show images to a user. In some examples, the tangible markings may be presented to the at least one second virtual writer with the physical surface. 
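The convolution-driven pixel selection mentioned above can be pictured as computing a convolution response over part of the image and using the result values to decide which pixels are treated as hand or implement and are therefore modified. The sketch below only illustrates the mechanics; the averaging kernel and the 0.5 threshold are arbitrary assumptions and would in practice be replaced by a kernel and decision rule suited to the actual detection task.

```python
# Illustrative only: derive a pixel mask from a convolution result value.
import numpy as np
from scipy.signal import convolve2d

def mask_from_convolution(gray: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """gray: HxW uint8 grayscale image. Returns a uint8 mask (255 = modify pixel)."""
    kernel = np.ones((5, 5), dtype=np.float32) / 25.0       # assumed kernel
    response = convolve2d(gray.astype(np.float32) / 255.0, kernel, mode="same")
    return (response > threshold).astype(np.uint8) * 255

# The resulting mask could then feed an inpainting step (as sketched earlier)
# to produce the modified image data without the hand or marking implement.
```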
In some examples, the image data may be analyzed to separate the tangible markings from the background physical surface, and the tangible markings may be presented to the at least one second virtual writer over a different surface, such as another physical surface in an environment of the at least one second virtual writer, a virtual surface in an extended reality environment of the at least one second virtual writer, or any other desired surface. Similarly, the tangible markings may be presented to the at least one second virtual viewer with the physical surface. With reference to FIG. 34, at least one processor associated with wearable extended reality appliance 3312 may transmit information based on captured image data to a computing device associated with a second virtual writer. The transmitted information may indicate, for example, the tangible markings 3410, 3412, and/or the physical surface 3314. In some examples, the hands of the first physical writer 3310 and/or the physical writing implement 3316 may not be indicated in the transmitted information. Additionally or alternatively, the hands of the first physical writer 3310 and/or the physical writing implement 3316 may be indicated in the transmitted information. With reference to FIG. 35, an example of the computing device 3510 associated with the second virtual writer that may receive the information transmitted from the at least one processor associated with wearable extended reality appliance 3312 is shown. Computing device 3510 may include a laptop computer. FIG. 35 shows that the second virtual writer (e.g., an individual) is using the computing device 3510 with two hands. The screen of computing device 3510 may, based on the received information, display a representation 3512 of the physical surface 3314 and display representations 3514, 3516 of the tangible markings 3410, 3412. The computing device 3510 may thus allow the second virtual writer or a virtual viewer to view, for example, the tangible markings 3410, 3412 created by the first physical writer 3310. In some examples, one or more visual indicators indicative of an identity of the first physical writer 3310 may be displayed in association with the representations 3512, 3514, and/or 3516. The one or more visual indicators may include, for example, an image 3520 of the first physical writer 3310, a textual indicator 3518 of the first physical writer 3310, or any other type of desired indication. In some examples, the first physical writer 3310 may be in a different location than the second virtual writer (or the virtual viewer), and a communication channel may be established between the wearable extended reality appliance 3312 and the computing device 3510 (e.g., for transmission of desired information), so that the first physical writer 3310 and the second virtual writer (or the virtual viewer) may collaborate based on virtual sharing of a physical surface as described herein. In some examples, the computing device 3510 may be any other type of computing device, such as a second wearable extended reality appliance. One or more of the elements 3512, 3514, 3516, 3518, and/or 3520 may be displayed virtually to the second virtual writer using a display system of the second wearable extended reality appliance. Some embodiments involve receiving from the at least one computing device annotation data representing additional markings in relative locations with respect to the tangible markings created by the first physical writer. 
For example, via a user interface of the at least one computing device on which the tangible markings created by the first physical writer may be displayed to the at least one second virtual writer, the at least one second virtual writer may input additional markings in locations relative to the displayed tangible markings. The user interface may include, for example, an application running on a laptop computer, a virtual surface presented by a wearable extended reality appliance, or any other interface via which the at least one second virtual writer may provide input. In some examples, the additional markings may be added onto the surface on which the tangible markings created by the first physical writer may be displayed by the at least one computing device. The additional markings may include, for example, a letter, word, sentence, paragraph, text, line, arc, freeform, shape, symbol, figure, drawing, annotation, feature, sign, or any other indication that may be input via a computing device. The additional markings may be generated using one or more of a keyboard, stylus, mouse, touch sensitive input, voice to text input, gesture command, or any other manner of adding indicia. In some examples, the additional markings may be associated with one or more particular items of the tangible markings created by the first physical writer and may be displayed in proximity to the one or more associated items. For example, an additional marking that the at least one second virtual writer may input may include a comment on a particular tangible marking created by the first physical writer. Such an additional marking may be displayed in proximity to the particular tangible marking to indicate the association therebetween. Based on the input of the additional markings, the at least one computing device may determine the relative locations of the additional markings with respect to the tangible markings created by the first physical writer (e.g., as the additional markings and the tangible markings are on a surface presented by the at least one computing device). In some examples, the relative locations may be encoded based on a coordinate system that moves with the tangible markings or with the physical surface, and therefore the locations may be relative to the tangible markings. In some examples, the relative locations may be encoded as a distance and a relative direction with respect to at least part of the tangible markings. The at least one computing device may transmit, to the wearable extended reality appliance worn by the first physical writer, annotation data representing the additional markings in relative locations with respect to the tangible markings created by the first physical writer. The annotation data may include, for example, representations of the additional markings and/or the relative locations of the additional markings with respect to the tangible markings created by the first physical writer. At least one processor associated with the wearable extended reality appliance worn by the first physical writer may receive the annotation data from the at least one computing device. With reference to FIG. 36, the second virtual writer may, via a user interface of the computing device 3510, input additional markings 3610. An example of the additional markings 3610 may include a comment ("Love this one") on the representation 3514 of the tangible marking 3410. 
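The distance-and-direction encoding of relative locations mentioned above can be pictured as a simple polar offset from an anchor point on the associated tangible marking. The sketch below is an illustration only; the anchor-point convention and the dictionary encoding are assumptions for demonstration.

```python
# Hedged sketch: encode/decode an annotation location relative to a marking anchor.
import math

def encode_relative(annotation_xy, marking_anchor_xy):
    dx = annotation_xy[0] - marking_anchor_xy[0]
    dy = annotation_xy[1] - marking_anchor_xy[1]
    return {"distance": math.hypot(dx, dy), "angle": math.atan2(dy, dx)}

def decode_relative(rel, marking_anchor_xy):
    x = marking_anchor_xy[0] + rel["distance"] * math.cos(rel["angle"])
    y = marking_anchor_xy[1] + rel["distance"] * math.sin(rel["angle"])
    return (x, y)

# Example: an annotation 10 units to the right of the marking's anchor survives the round trip.
rel = encode_relative((110, 200), (100, 200))
print(decode_relative(rel, (100, 200)))  # (110.0, 200.0)
```

Because the offset is stored relative to the marking (or to a coordinate system that moves with the physical surface), the annotation stays attached to the marking even as the surface moves in the camera view.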
The additional markings 3610 may include a circle around, and/or may be in proximity to, the representation 3514 of the tangible marking 3410, to indicate an association between the additional markings 3610 and the representation 3514 of the tangible marking 3410. The computing device 3510 may transmit, to the wearable extended reality appliance 3312, annotation data representing the additional markings 3610 in relative locations with respect to the representation 3514 of the tangible marking 3410. The wearable extended reality appliance 3312 may receive the annotation data from the computing device 3510. In some examples, one or more visual indicators indicative of an identity of the second virtual writer may be displayed in association with the additional markings 3610. The one or more visual indicators may include, for example, an image of the second virtual writer, a textual indicator 3612 of the second virtual writer, or any other type of desired indication. The computing device 3510 may additionally or alternatively transmit the one or more visual indicators to the wearable extended reality appliance 3312. Some embodiments involve, in response to receiving the annotation data, causing the wearable extended reality appliance to overlay the physical surface with virtual markings in the relative locations. At least one processor associated with the wearable extended reality appliance worn by the first physical writer may receive, from the at least one computing device associated with the at least one second virtual writer, the annotation data representing the additional markings in relative locations with respect to the tangible markings created by the first physical writer. In response to receiving the annotation data, the at least one processor associated with the wearable extended reality appliance may cause the wearable extended reality appliance to overlay the physical surface with the virtual markings in the relative locations. The virtual markings may correspond to the additional markings created by the at least one second virtual writer. The overlaying may occur using any known virtual reality or extended reality tool that causes virtual information to be displayed in a physical environment. Overlaying the physical surface with the virtual markings may use a display system of the wearable extended reality appliance, which may include, for example, an optical head-mounted display, a monocular head-mounted display, a binocular head-mounted display, a see-through head-mounted display, a helmet-mounted display, or any other type of device configured to show images to a user. In some embodiments, the annotation data includes cursor data associated with a pointing device. Some embodiments involve analyzing the cursor data to determine the relative locations of the additional markings. For example, the pointing device may be of the at least one computing device associated with the at least one second virtual writer. The at least one second virtual writer may use the pointing device to input the additional markings and to indicate the relative locations for the additional markings. The indicated relative locations for the additional markings may be recorded in the cursor data, which may be transmitted by the at least one computing device associated with the at least one second virtual writer to the wearable extended reality appliance worn by the first physical writer. 
The at least one processor associated with the wearable extended reality appliance worn by the first physical writer may analyze the cursor data to determine the relative locations of the additional markings. In some examples, causing the wearable extended reality appliance to overlay the physical surface with the virtual markings in the relative locations may include analyzing images captured using the image sensor associated with the wearable extended reality appliance to determine a position and/or an orientation of at least one of the physical surface or the tangible markings created by the first physical writer, and determining the locations to place the virtual markings based on the determined position and/or orientation. For example, the relative locations of the additional markings created by the at least one second virtual writer with respect to the tangible markings created by the first physical writer as presented by the at least one computing device associated with the at least one second virtual writer may be mapped onto the physical surface, so that the tangible markings and the additional markings may be presented to the first physical writer and the at least one second virtual writer in a similar way. This would enable collaboration between the first physical writer and the at least one second virtual writer by virtual sharing of the physical surface. With reference to FIG. 37, at least one processor associated with the wearable extended reality appliance 3312 may receive the annotation data from the computing device 3510 and may, in response, cause the wearable extended reality appliance 3312 to overlay the physical surface 3314 with virtual markings 3710 in the relative locations with respect to the tangible markings 3410, 3412. The virtual markings 3710 may correspond to the additional markings 3610 created by the second virtual writer. Based on the additional markings 3610 created by the second virtual writer, the virtual markings 3710 may include a comment ("Love this one") on the tangible marking 3410. Based on the additional markings 3610, the virtual markings 3710 may include a circle around, and/or may be in proximity to, the tangible marking 3410, to indicate an association between the virtual markings 3710 and the tangible marking 3410. In some examples, the wearable extended reality appliance 3312 may receive, from the computing device 3510, one or more visual indicators associated with the additional markings 3610. The one or more visual indicators may be indicative of an identity of the second virtual writer and may be displayed virtually by the wearable extended reality appliance 3312 in association with the virtual markings 3710 corresponding to the additional markings 3610. The one or more visual indicators may include, for example, an image of the second virtual writer, a textual indicator 3712 of the second virtual writer, or any other type of desired indication. In some embodiments, the computing device 3510 may transmit, to the wearable extended reality appliance 3312, cursor data associated with a pointing device (e.g., of the computing device 3510). For example, the second virtual writer may use the pointing device (which may control a cursor 3522 as shown in FIGS. 35 and 36) to input the additional markings 3610 and to indicate the relative locations for the additional markings 3610. The indicated relative locations for the additional markings 3610 may be recorded in the cursor data, which may be transmitted to the wearable extended reality appliance 3312. 
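The mapping of annotation locations onto the physical surface, once the surface's position and orientation have been determined, can be pictured as a perspective (homography) transform. The sketch below is one possible illustration; it assumes the four corners of the physical surface have already been detected in the headset camera frame (the corner detection itself is not shown), and that annotation coordinates arrive normalized to the shared surface used on the remote writer's computing device.

```python
# Hedged sketch: map normalized annotation points onto detected surface corners.
import cv2
import numpy as np

def place_virtual_markings(normalized_pts, surface_corners_px):
    """normalized_pts: Nx2 points in [0, 1] x [0, 1] on the shared surface;
    surface_corners_px: 4x2 pixel corners of the physical surface, ordered
    top-left, top-right, bottom-right, bottom-left."""
    src = np.float32([[0, 0], [1, 0], [1, 1], [0, 1]])
    dst = np.float32(surface_corners_px)
    H = cv2.getPerspectiveTransform(src, dst)
    pts = np.float32(normalized_pts).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Example: the center of the shared surface maps to the center of the detected page.
corners = [[100, 50], [500, 60], [520, 400], [90, 390]]
print(place_virtual_markings([[0.5, 0.5]], corners))
```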
The at least one processor associated with the wearable extended reality appliance 3312 may analyze the cursor data to determine the relative locations of the additional markings 3610. In some embodiments, the physical surface is a compilation of pages and the annotation data received from the at least one computing device associated with the at least one second virtual writer represents first virtual markings associated with a first page of the compilation and second virtual markings associated with a second page of the compilation. For example, the at least one second virtual writer may input additional markings specific to each of one or more pages of the compilation. Such specific markings may be different for different pages. The at least one computing device may transmit, to the wearable extended reality appliance worn by the first physical writer, the page-specific markings created by the at least one second virtual writer, for presenting as page-specific virtual markings by the wearable extended reality appliance. In some embodiments, the compilation is a notebook and the embodiments involve analyzing the image data (e.g., as captured by the image sensor associated with the wearable extended reality appliance worn by the first physical writer) to determine that the notebook is opened to the first page, and causing the wearable extended reality appliance to overlay the first virtual markings on the first page of the notebook and exclude overlaying the second virtual markings on the first page of the notebook. For example, at least one processor associated with the wearable extended reality appliance worn by the first physical writer may analyze specific characteristics of the pages of the notebook to identify particular pages. In some examples, the pages of the notebook may include page numbers, and the at least one processor may identify particular pages of the notebook based on the page numbers in the image data. In other examples, notebook pages may each contain a unique code to enable page identification. In further examples, images of each of one or more pages of the notebook, or other feature information representing each page, may be stored in memory, and the at least one processor may compare the stored information for each page with the captured information for a particular page in the image data, to identify the particular page. Based on identifying the page, the at least one processor may cause display of virtual markings specific to the identified page (e.g., in the annotation data), for example, by overlaying the specific virtual markings on the identified page of the notebook and not overlaying other virtual markings (e.g., in the annotation data) on the identified page of the notebook. Some embodiments involve analyzing the image data to determine when the first page is turned, and causing the first virtual markings to disappear in response to the determination that the first page is turned. For example, at least one processor associated with the wearable extended reality appliance worn by the first physical writer may use the image data captured by the image sensor associated with the wearable extended reality appliance to detect a gesture of the first physical writer turning the first page of the notebook. 
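The page-identification option based on stored feature information can be pictured as matching local image features of the currently visible page against stored reference images of each page. The sketch below is one possible, simplified realization using ORB features; the match-distance threshold and minimum match count are arbitrary assumptions, and a trained recognition model (as the disclosure also contemplates) could be used instead.

```python
# Hedged sketch: identify the currently visible notebook page by feature matching.
import cv2

def identify_page(captured_gray, stored_pages_gray, min_matches=25):
    """stored_pages_gray: dict mapping page number -> grayscale reference image.
    Returns the best-matching page number, or None if no page matches well."""
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, captured_desc = orb.detectAndCompute(captured_gray, None)
    if captured_desc is None:
        return None
    best_page, best_count = None, 0
    for page_no, reference in stored_pages_gray.items():
        _, ref_desc = orb.detectAndCompute(reference, None)
        if ref_desc is None:
            continue
        matches = matcher.match(captured_desc, ref_desc)
        good = [m for m in matches if m.distance < 50]   # assumed threshold
        if len(good) > best_count:
            best_page, best_count = page_no, len(good)
    return best_page if best_count >= min_matches else None
```

Running this per frame (or whenever page flipping is suspected) yields the current page number, which in turn determines whether the first virtual markings should be shown, hidden, or restored.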
Additionally or alternatively, the at least one processor may analyze the image data (e.g., periodically, continuously, or when page flipping is detected) to determine whether a currently showing page of the notebook is a particular page (e.g., the first page), based on the content (e.g., the tangible markings and/or a page number) of the currently showing page. For example, the at least one processor may determine that the first page is turned based on determining that the currently showing page of the notebook is changing from the content of the first page to the content of another page. In response to the determination that the first page is turned, the at least one processor may cause the first virtual markings to disappear. For example, a machine learning model may be trained using training examples to determine when a page is turned from images and/or videos. An example of such training example may include a sample image or a sample video, together with a label indicating whether the sample image or the sample video depicts a turn of a page. The trained machine learning model may be used to analyze the image data to determine when the first page is turned. Some embodiments involve analyzing the image data to determine when the first page is flipped back, and causing the first virtual markings to reappear in response to the determination that the first page is flipped back. For example, at least one processor associated with the wearable extended reality appliance worn by the first physical writer may use the image data captured by the image sensor associated with the wearable extended reality appliance to detect a gesture of the first physical writer flipping back to the first page of the notebook. Additionally or alternatively, the at least one processor may analyze the image data (e.g., periodically or continuously) to determine whether a currently showing page of the notebook is a particular page (e.g., the first page), based on the content (e.g., the tangible markings and/or a page number) of the currently showing page. For example, the at least one processor may determine that the first page is flipped back based on determining that the currently showing page of the notebook is changing to the content of the first page from the content of another page. In response to the determination that the first page is flipped back, the at least one processor may cause the first virtual markings to reappear. In some examples, a machine learning model may be trained using training examples to distinguish between different pages and recognize particular pages from images and/or videos. An example of such training example may include two sample images, each image depicting a sample page, together with label(s) indicating whether the two sample pages are the same page or different pages. The trained machine learning model may be used to analyze the image data and determine when the first page is flipped back (for example, when the first page reappears and a second page disappears). Some embodiments involve causing the wearable extended reality appliance to present a virtual representation of the second page with the second virtual markings away from the notebook. For example, at least one processor associated with the wearable extended reality appliance worn by the first physical writer may store in memory the content (e.g., the tangible markings and/or a page number) of each of one or more pages of the notebook, using captured image data when the notebook is showing various pages. 
When the notebook is opened to the first page (e.g., with the first virtual markings overlaying thereon), the at least one processor may cause the wearable extended reality appliance to present a virtual representation of the second page of the notebook with the second virtual markings (e.g., representing one or more additional markings created by the at least one second virtual writer for the second page of the notebook). The second virtual markings may be displayed together with the virtual representation of the second page of the notebook in a similar manner as the second virtual markings would have been displayed to overlay on the second page of the notebook. For example, the second virtual markings may be displayed on the surface of the second page in virtual representation, and/or may be displayed in locations relative to the virtual representation of the tangible markings of the second page (for example, the relative locations may be specified by the at least one second virtual writer). The virtual representation of the second page of the notebook with the second virtual markings may be displayed away from the notebook. For example, the virtual representation of the second page of the notebook with the second virtual markings may be displayed next to, or in any other desired location relative to, the notebook (which may be opened to the first page), so that the first page with the first virtual markings overlaying thereon and the virtual representation of the second page of the notebook with the second virtual markings may be displayed at the same time to the first physical writer. In some embodiments, the annotation data includes second image data representing a hand of an additional physical writer holding a second physical marking implement and engaging with a second physical surface to create second tangible markings. Some embodiments involve analyzing the second image data to determine the relative locations of the second tangible markings. For example, at least one processor associated with the wearable extended reality appliance worn by the first physical writer may transmit information based on the captured image data to a wearable extended reality appliance worn by the additional physical writer, to thereby enable the additional physical writer to view the tangible markings created by the first physical writer. The wearable extended reality appliance worn by the additional physical writer may, based on receiving the information, cause display of the tangible markings created by the first physical writer, for example, by overlaying the second physical surface with the tangible markings created by the first physical writer. The additional physical writer may use a second physical marking implement to create second tangible markings on the second physical surface in locations relative to the virtual representations, of the tangible markings created by the first physical writer, overlaying on the second physical surface. An image sensor associated with the wearable extended reality appliance worn by the additional physical writer may capture second image data representing a hand of the additional physical writer holding the second physical marking implement and engaging with the second physical surface to create the second tangible markings. The second image data may be transmitted to the wearable extended reality appliance worn by the first physical writer (e.g., in the annotation data). 
At least one processor associated with the wearable extended reality appliance worn by the first physical writer may analyze the second image data to determine the relative locations of the second tangible markings (e.g., with respect to the tangible markings created by the first physical writer) and may cause display of the second tangible markings, for example, by overlaying the first physical surface with the second tangible markings in the relative locations. In some embodiments, the at least one second virtual writer includes a plurality of virtual writers. Some embodiments involve causing the wearable extended reality appliance to present, in association with virtual markings made by a virtual writer of the plurality of virtual writers, a visual indicator indicative of an identity of the virtual writer. For example, the identity of the virtual writer may be transmitted to the wearable extended reality appliance worn by the first physical writer in connection with the data indicating the virtual markings made by the virtual writer. The visual indicator may be, for example, displayed in proximity to the virtual markings to indicate the association therebetween. In some examples, additional indications (e.g., an arrow, a linking line, or a dotted line) may be displayed to show the association between the visual indicator and the virtual markings. In some examples, virtual markings made by multiple virtual writers may be displayed by the wearable extended reality appliance worn by the first physical writer and, in association with virtual markings made by each of the multiple virtual writers, a visual indicator indicative of an identity of the corresponding virtual writer may be displayed. In some embodiments, the visual indicator includes at least one of an image of a virtual writer, a textual indicator of a virtual writer, or a symbol associated with a virtual writer. The visual indicator may additionally or alternatively include any other type of desired indication. In some examples, the visual indicator may be configured by the virtual writer. For example, the virtual writer may upload an image and enter a name for the visual indicator of the virtual writer. With reference to FIG. 37, one or more visual indicators (e.g., including the textual indicator 3712) indicative of an identity of the second virtual writer that created the additional markings 3610 may be displayed virtually by the wearable extended reality appliance 3312 in association with the virtual markings 3710 corresponding to the additional markings 3610. In some embodiments, transmitting the image data to the at least one computing device associated with the at least one second virtual writer includes transmitting the image data to a group of computing devices associated with a group of virtual writers. For example, at least one processor associated with the wearable extended reality appliance worn by the first physical writer may transmit the image data to the group of computing devices associated with the group of virtual writers. Some embodiments involve receiving input from the first physical writer designating one or more virtual writers of the group of virtual writers for participation, and displaying virtual markings associated with the one or more virtual writers of the group of virtual writers designated for participation while preventing display of virtual markings associated with others in the group of virtual writers not designated for participation. 
For example, at least one processor associated with the wearable extended reality appliance worn by the first physical writer may cause display of a listing of the group of virtual writers. The listing may allow the first physical writer to select the one or more virtual writers for designating for participation. The listing may include a menu, a list of entries, a number of tiles, or any other desired form. Based on receiving the first physical writer's designation of the one or more virtual writers for participation, the at least one processor associated with the wearable extended reality appliance worn by the first physical writer may cause display of virtual markings associated with the one or more virtual writers of the group of virtual writers designated for participation. The at least one processor associated with the wearable extended reality appliance worn by the first physical writer may not cause display of virtual markings associated with others in the group of virtual writers, who were not designated for participation by the first physical writer. In some embodiments, the at least one second virtual writer includes a plurality of virtual writers. Some embodiments involve causing the wearable extended reality appliance to display a listing of individuals permitted to view the tangible markings of the first physical writer. The listing of individuals may include, for example, one or more virtual writers of the plurality of virtual writers. The listing of individuals may be configured by the first physical writer. For example, the first physical writer may add individual(s) to, or remove individual(s) from, the listing of individuals. The listing may include a menu, a list of entries, a number of tiles, or any other desired form. Based on the listing of individuals, at least one processor associated with the wearable extended reality appliance worn by the first physical writer may transmit data indicating the tangible markings of the first physical writer to the listed individuals, so that those individuals may view the tangible markings of the first physical writer. Some embodiments involve transmitting additional data to the at least one computing device associated with the at least one second virtual writer, to thereby enable the at least one second virtual writer to view the tangible markings with a visual indicator indicative of an identity of the first physical writer associated with the tangible markings. For example, at least one processor associated with the wearable extended reality appliance worn by the first physical writer may determine the identity of the first physical writer and may transmit additional data indicating the identity of the first physical writer to the at least one computing device associated with the at least one second virtual writer. The determining of the identity of the first physical writer may be based on, for example, the first physical writer uploading an image and/or entering a name, symbol, or any other information associated with the first physical writer. Based on receiving the additional data, the at least one computing device associated with the at least one second virtual writer may cause display of the visual indicator indicative of the identity of the first physical writer. The visual indicator may be displayed in association with (e.g., in proximity to or next to) the tangible markings of the first physical writer as displayed using the at least one computing device associated with the at least one second virtual writer. 
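By way of a non-limiting illustration only, the participation designations and identity indicators described above might be organized as in the following Python sketch; the names (VirtualWriter, CollaborationSession, designate_participants, and the like) are assumptions introduced for illustration and do not correspond to any particular implementation of the disclosed embodiments.

    from dataclasses import dataclass

    @dataclass
    class VirtualWriter:
        writer_id: str
        name: str            # textual indicator usable as a visual indicator
        image_ref: str = ""  # optional image associated with the writer

    @dataclass
    class MarkingSet:
        writer: VirtualWriter
        strokes: list        # virtual markings received from this writer

    class CollaborationSession:
        def __init__(self, group: list):
            self.group = {w.writer_id: w for w in group}
            self.participants = set()   # writers designated for participation
            self.markings = []          # all marking sets received so far

        def designate_participants(self, writer_ids: list) -> None:
            # Input from the first physical writer selecting writers from the listing.
            self.participants = {w for w in writer_ids if w in self.group}

        def receive_markings(self, writer_id: str, strokes: list) -> None:
            self.markings.append(MarkingSet(self.group[writer_id], strokes))

        def markings_to_display(self) -> list:
            # Display only markings from designated participants, each paired with
            # a visual indicator (here, the writer's name) identifying its author.
            return [(m, m.writer.name) for m in self.markings
                    if m.writer.writer_id in self.participants]

    if __name__ == "__main__":
        alice = VirtualWriter("w1", "Alice")
        bob = VirtualWriter("w2", "Bob")
        session = CollaborationSession([alice, bob])
        session.designate_participants(["w1"])
        session.receive_markings("w1", [(0, 0), (1, 1)])
        session.receive_markings("w2", [(2, 2), (3, 3)])
        print(session.markings_to_display())   # only Alice's markings are shown

In this sketch, markings from writers not designated for participation are simply excluded from the display list, which mirrors the preventing-display behavior described above.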
In some examples, additional indications (e.g., an arrow, a linking line, or a dotted line) may be displayed to show the association between the visual indicator and the displayed tangible markings. In some examples, the visual indicator may include at least one of an image of the first physical writer, a textual indicator of the first physical writer, a symbol associated with the first physical writer, and/or any other type of desired indication. With reference toFIG.35, the computing device3510may receive the additional data indicating the identity of the first physical writer3310from at least one processor associated with the wearable extended reality appliance3312and may, based on the additional data, cause display of one or more visual indicators indicative of an identity of the first physical writer3310. The one or more visual indicators indicative of an identity of the first physical writer3310may be displayed in association with the representations3512,3514, and/or3516, allowing the second virtual writer to view them. The one or more visual indicators may include, for example, an image3520of the first physical writer3310, a textual indicator3518of the first physical writer3310, or any other type of desired indication. The one or more visual indicators may indicate, for example, that the representations3512,3514, and/or3516are associated with (e.g., created by, used by, or belonging to) the first physical writer3310. Some embodiments involve, after overlaying the physical surface with the virtual markings, receiving input for causing a modification in the virtual markings. Some embodiments involve, in response to receiving the input, causing the wearable extended reality appliance to modify the virtual markings. For example, at least one processor associated with the wearable extended reality appliance worn by the first physical writer may overlay the physical surface with the virtual markings and may thereafter receive an input for causing the modification in the virtual markings. The input may be received from the first physical writer and/or the at least one second virtual writer. For example, the first physical writer may provide the input via an input device of the wearable extended reality appliance. In some embodiments, the input includes additional image data received from the image sensor associated with the wearable extended reality appliance worn by the first physical writer. Some embodiments involve analyzing the additional image data to identify a gesture indicating the modification in the virtual markings. The gesture may include, for example, any finger or hand motion, such as a drag, a pinch, a spread, a swipe, a tap, a pointing, a scroll, a rotate, a flick, a touch, a zoom-in, a zoom-out, a thumb-up, a thumb-down, a touch-and-hold, or any other action of a finger or hand. The at least one processor associated with the wearable extended reality appliance worn by the first physical writer may use image analysis algorithms to identify the gesture directed to the virtual markings in the additional image data. In some examples, the at least one second virtual writer may provide the input (e.g., via an input device or using a gesture as captured by an image sensor) to the at least one computing device associated with the at least one second virtual writer, and the at least one computing device may transmit, to the wearable extended reality appliance worn by the first physical writer, data of the input.
In some embodiments, the modification includes at least one of a deletion of at least a portion of the virtual markings, changing a size of at least a portion of the virtual markings, changing a color of at least a portion of the virtual markings, or changing a location of at least a portion of the virtual markings. In some examples, the modification may include changing a texture of at least a portion of the virtual markings, changing the text in the virtual markings, changing the shape of at least a portion of the virtual markings, and/or changing an orientation of at least a portion of the virtual markings. Additionally or alternatively, the modification may include any other type of desired change to at least a portion of the virtual markings. Some embodiments involve a system for enabling collaboration between physical writers and virtual writers, the system including at least one processor programmed to: receive image data representing a hand of a first physical writer holding a physical marking implement and engaging with a physical surface to create tangible markings, wherein the image data is received from an image sensor associated with a wearable extended reality appliance worn by the first physical writer; transmit information based on the image data to at least one computing device associated with at least one second virtual writer, to thereby enable the at least one second virtual writer to view the tangible markings created by the first physical writer; receive from the at least one computing device annotation data representing additional markings in relative locations with respect to the tangible markings created by the first physical writer; and in response to receiving the annotation data, cause the wearable extended reality appliance to overlay the physical surface with virtual markings in the relative locations. Some embodiments involve a method for enabling collaboration between physical writers and virtual writers, the method including: receiving image data representing a hand of a first physical writer holding a physical marking implement and engaging with a physical surface to create tangible markings, wherein the image data is received from an image sensor associated with a wearable extended reality appliance worn by the first physical writer; transmitting information based on the image data to at least one computing device associated with at least one second virtual writer, to thereby enable the at least one second virtual writer to view the tangible markings created by the first physical writer; receiving from the at least one computing device annotation data representing additional markings in relative locations with respect to the tangible markings created by the first physical writer; and in response to receiving the annotation data, causing the wearable extended reality appliance to overlay the physical surface with virtual markings in the relative locations. FIG.38is a flowchart illustrating an exemplary process3800for virtual sharing of a physical surface consistent with some embodiments of the present disclosure. To the extent details of the process were previously discussed, all of those details may not be repeated below to avoid unnecessary repetition.
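As a non-limiting sketch of the operations recited above (and of the flow described next with reference toFIG.38), the following Python example strings the four operations together; the transport and rendering hooks (send_to_virtual_writer, receive_annotations) and the Annotation fields are illustrative assumptions rather than required interfaces of the disclosed embodiments.

    from dataclasses import dataclass

    @dataclass
    class Annotation:
        anchor_id: str     # tangible marking the annotation is positioned against
        dx: float          # offsets relative to that tangible marking
        dy: float
        stroke: list       # points of the additional marking

    def run_sharing_flow(captured_frame: bytes,
                         send_to_virtual_writer,
                         receive_annotations) -> list:
        # 1. Receive image data from the image sensor (here, already captured).
        image_data = captured_frame

        # 2. Transmit information based on the image data so the virtual writer
        #    can view the tangible markings.
        send_to_virtual_writer({"tangible_markings_image": image_data})

        # 3. Receive annotation data describing additional markings in relative
        #    locations with respect to the tangible markings.
        annotations = receive_annotations()

        # 4. Produce overlay instructions placing virtual markings at the
        #    relative locations, for display by the wearable appliance.
        return [{"anchor": a.anchor_id, "offset": (a.dx, a.dy), "stroke": a.stroke}
                for a in annotations]

    if __name__ == "__main__":
        outbox = []
        demo = [Annotation("word_3", 2.0, 0.5, [(0, 0), (1, 1)])]
        overlays = run_sharing_flow(b"frame", outbox.append, lambda: demo)
        print(overlays)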
With reference toFIG.38, in step3810, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to receive image data representing a hand of a first physical writer holding a physical marking implement and engaging with a physical surface to create tangible markings, wherein the image data may be received from an image sensor associated with a wearable extended reality appliance (WER-Appliance) worn by the first physical writer. In step3812, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to transmit information based on the image data to at least one computing device associated with at least one second virtual writer, to thereby enable the at least one second virtual writer to view the tangible markings created by the first physical writer. In step3814, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to receive from the at least one computing device annotation data representing additional markings in relative locations with respect to the tangible markings created by the first physical writer. In step3816, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to, in response to receiving the annotation data, cause the wearable extended reality appliance to overlay the physical surface with virtual markings in the relative locations. A physical audio system is located in a physical space, for example, by mounting a speaker on a wall in a room. In this example, the speaker is “tied to” the location in the room where the speaker is mounted (for example, a corner of the room) such that a person in the room will be able to determine where the speaker is located when she hears sounds emanating from the speaker. Similarly, a virtual audio system may be tied to physical spaces. For example, a virtual speaker may be “mounted” or otherwise placed in a corner of a room in the physical environment. When a user with a wearable extended reality appliance enters the room, it may appear to the user that sounds they hear are emanating from the corner of the room where the virtual speaker is mounted or placed. Similar to adjusting the audio settings for a physical speaker (for example, volume, bass, or treble), the user of the wearable extended reality appliance in an area of the virtual audio system may change and save audio settings (for example, volume, bass, or treble) for later use when the user returns to the area or when another user arrives in the location of the virtual audio system. Disclosed embodiments may include methods, systems, and non-transitory computer readable media for facilitating tying a virtual speaker to a physical space. It is to be understood that this disclosure is intended to cover methods, systems, and non-transitory computer readable media, and any detail described, even if described in connection with only one of them, is intended as a disclosure of the methods, systems, and non-transitory computer readable media. It is noted that as used herein, the terms “physical space” and “physical environment” are understood to have similar meanings and may be used interchangeably. Some disclosed embodiments may be implemented via a non-transitory computer readable medium containing instructions for performing the operations of a method. 
In some embodiments, the method may be implemented on a system that includes at least one processor configured to perform the operations of the method. In some embodiments, the method may be implemented by one or more processors associated with the wearable extended reality appliance. For example, a first processor may be located in the wearable extended reality appliance and may perform one or more operations of the method. As another example, a second processor may be located in a computing device (e.g., an integrated computational interface device) selectively connected to the wearable extended reality appliance, and the second processor may perform one or more operations of the method. As another example, the first processor and the second processor may cooperate to perform one or more operations of the method. The cooperation between the first processor and the second processor may include load balancing, work sharing, or other known mechanisms for dividing a workload between multiple processors. Some embodiments involve a non-transitory computer readable medium containing instructions for causing at least one processor to perform operations to tie at least one virtual speaker to a physical space. The terms “non-transitory computer readable medium,” “processor,” and “instructions” may be understood as described elsewhere in this disclosure. A “virtual speaker” refers to a location from which it may be perceived that sounds are emitted, in an absence of a physical speaker at that location. For example, a user may wear headphones that emit sound waves simulating sound as it would be heard if a physical speaker were located at a location where no speaker is physically located. Or sound waves emitted from non-wearable physical speakers in a space may be tuned to cause an impression that sound is being emitted from locations other than from the physical speakers. Tying the at least one virtual speaker to the physical space is similar to how a physical speaker is tied to a physical space (for example, by mounting the speaker in a corner of a room). For example, the virtual speaker may be virtually “placed” in a corner of a room (i.e., the virtual speaker is “tied” to the corner of the room) or in any other location in the room. When a user with a wearable extended reality appliance enters the room, it will appear to the user that sounds are emanating from the location in the physical space where the virtual speaker is located (e.g., the corner of the room). Some embodiments involve receiving, via a wireless network, a first indication that a first wearable extended reality appliance is located in an area associated with a virtual speaker. The term “wearable extended reality appliance” may be understood as described elsewhere in this disclosure. As used herein, the phrase “an area associated with a virtual speaker” is an area in a physical space from which sound is perceived to emanate in an absence of a speaker in that area. For example, the area may include a room in the physical space or a portion of a room in the physical space. As another example, the area may include a predetermined distance (measured in any units, such as centimeters, meters, inches, or feet) as measured from the location of the virtual speaker in the physical space. In some embodiments, the virtual speaker may be an omnidirectional speaker, and the area may include the predetermined distance in any direction from the virtual speaker (e.g., a predetermined radius around the omnidirectional speaker). 
In some embodiments, the virtual speaker may be a directional speaker, and the area may include the predetermined distance in the direction of the directional speaker (i.e., the predetermined distance in the direction which the directional speaker is aimed). In some embodiments, the area of the virtual speaker may be determined based on a user's location. For example, a virtual speaker may be defined a particular distance from the user at, for example, a particular angle from a reference axis through the user. Or, the space may be defined by a virtual coordinate system and the speaker placed at or in an area of a particular set of coordinates (e.g., x, y or x, y, z). In some embodiments, the first wearable extended reality appliance may include Global Positioning System (GPS) and/or an indoor localization functionality to determine the location of the first wearable extended reality appliance in the physical space. The first wearable extended reality appliance may include Wi-Fi, Bluetooth®, or other wireless communication functionality as described elsewhere in this disclosure which may be used to determine the location of the first wearable extended reality appliance in the physical space. In some embodiments, the physical space may include a device (e.g., an input device, such as an integrated computational interface device as described elsewhere in this disclosure) configured to receive wireless signals (e.g., GPS, Wi-Fi, Bluetooth®, or other wireless signal) from the first wearable extended reality appliance which may be used to determine when the first wearable extended reality appliance enters the physical space. The first indication may include a wireless signal from the GPS, Wi-Fi, Bluetooth®, or other wireless communication functionality included in the first wearable extended reality appliance. Some embodiments involve transmitting to the first wearable extended reality appliance first data corresponding to first sounds associated with the virtual speaker, to thereby enable a first user of the first wearable extended reality appliance to hear the first sounds during a first time period, wherein the first sounds correspond to first settings of the virtual speaker. The first wearable extended reality appliance may include (or otherwise have associated with it) headphones and/or speakers as described elsewhere in this disclosure. The first data may include data such as digital signals representing sounds in a data format that may be transmitted to and received by the first wearable extended reality appliance. For example, the data format may be an audio data format such as an uncompressed format (e.g., WAV or AU), a lossless compression format (e.g., Windows Media Audio (WMA) Lossless), or a lossy compression format (e.g., WMA Lossy). The phrase “first time period” as used herein is a time period or duration in which the first user is located in the area associated with the virtual speaker. The first settings of the virtual speaker may include any settings that may adjust the output of the virtual speaker, such as volume, bass, treble, balance (e.g., left and right balance), or an equalizer (e.g., specific frequency adjustments on a frequency band or range). In some embodiments, the first settings of the virtual speaker may be the settings of the virtual speaker that exist when the first user enters the area associated with the virtual speaker. 
In such embodiments, the virtual speaker settings may be associated with the virtual speaker and not with the first user or with the first wearable extended reality appliance, meaning that the virtual speaker settings will be the same regardless of which user or which wearable extended reality appliance is associated with the first indication. In some embodiments, the first settings may correspond to the first wearable extended reality appliance and the virtual speaker may be automatically adjusted to the first settings when the first indication is received. The first indication may include virtual speaker settings preferred by the first user and the settings may be stored in the first wearable extended reality appliance. The first indication may include a “trigger” (e.g., a signal or an instruction) to the virtual speaker to retrieve settings associated with the first wearable extended reality appliance from an external storage, such as an integrated computational interface device or a cloud-based storage. FIG.39illustrates an area in a physical space where a first user with a first wearable extended reality appliance is listening to audio at first settings of a virtual speaker. A physical space3910includes a virtual speaker3912tied to a location in physical space3910. As shown inFIG.39, virtual speaker3912is located in one corner of physical space3910. It is noted that the system, method, and non-transitory computer readable medium will perform in a similar manner if virtual speaker3912is in a different location of physical space3910, and as described below, virtual speaker3912may be moved to different locations in physical space3910. A first user3914wearing a first wearable extended reality appliance3916is located in physical space3910and is able to hear first sounds associated with virtual speaker3912at first settings of virtual speaker3912(represented inFIG.39by three curved lines).FIG.39represents the first time period as described elsewhere in this disclosure, during which first user3914listens to the first sounds at the first settings of virtual speaker3912. Some embodiments involve receiving input associated with the first wearable extended reality appliance during the first time period, wherein the received input is indicative of second settings for the virtual speaker. The input may be received via a physical control located on an exterior portion of the first wearable extended reality appliance, such as a button, a knob, a dial, a switch, touchpad or a slider. Alternatively or additionally, the input may be received via a virtual user interface element in the extended reality environment. For example, the first user may activate a user interface element by invoking a command or by a predefined hand gesture in the extended reality environment. The input may be received as the result of interaction with a virtual control, such as a virtual button, knob, switch, touchpad, or slider, displayed in the user's field of view. Other examples of virtual user interface controls include a radio button, a checkbox, a dial, or a numerical entry field. In some embodiments, the input may be received via a voice command spoken by the first user. For example, the input may be received from a device paired with the first wearable extended reality appliance, such as the input device as described elsewhere in this disclosure. 
For example, the device may include one or more physical controls, such as a button, a knob, a dial, a switch, or a slider, or may include a user interface with one or more user interface elements or controls. In some embodiments, the input may be received via a physical input device that is physically separated from the first wearable extended reality appliance, such as a physical keyboard, a physical touchpad, a physical touchscreen, or a physical computer mouse. The input may be a signal resulting from an environmental change detected by a sensor associated with the wearable extended reality appliance. For example, an image sensor may detect that a meeting has begun, which may correspond to second desired settings for the virtual speaker (e.g., lower volume). Similarly, the input may be a sound signal from a microphone that picks up increased ambient noise (e.g., from a nearby construction site) corresponding to a setting of increased volume. Alternatively or additionally, the input may be a gesture by a user, reflecting the user's intent to adopt a second sound setting (e.g., more bass after the first user determines that a particular soundtrack playing in the area of the virtual speaker sounds better with increased bass). In some embodiments, the second settings may include any settings that may adjust the output of the virtual speaker, similar to the first settings, such as volume, bass, treble, balance (e.g., left and right balance), or an equalizer (e.g., specific frequency adjustments on a frequency band or range). In some examples, the second settings may include a new location or new orientation for the virtual speaker. Thus, the first user may be able to change any settings of the virtual speaker. In some embodiments, the first user may only be permitted to change certain settings of the virtual speaker. For example, the first user may only be able to adjust the volume. In such embodiments, the other settings of the virtual speaker (i.e., settings other than the volume) may be “locked” by the system or the method. For example, if the first user uses a virtual user interface to change the settings, only the settings that the first user is permitted to change may be displayed. For example, if the first user is only permitted to change the volume, then only a volume control may be displayed. Controls related to other settings may either not be displayed or may be presented in a “greyed out” manner such that while the first user may see the controls related to the other settings, such controls are not enabled. Some embodiments involve transmitting to the first wearable extended reality appliance second data corresponding to second sounds associated with the virtual speaker, to thereby enable the first user of the first wearable extended reality appliance to hear the second sounds during a second time period, wherein the second sounds correspond to the second settings of the virtual speaker. The second data may include data in a data format that may be transmitted to and received by the first wearable extended reality appliance, similar to the first data as described elsewhere in this disclosure. The second data may include data such as digital signals representing sounds in a data format that may be transmitted to and received by the first wearable extended reality appliance. 
In some examples, the second time period may be a time period or duration in which the first user is located in the area associated with the virtual speaker and is listening to sounds at the second settings of the virtual speaker. In some examples, the second time period may be a time period after the input indicative of second settings for the virtual speaker is received and/or after the first time period has ended. In some implementations, the second sounds may be the same as the first sounds, but at the second settings (e.g., at a higher volume level). FIG.40illustrates the area in the physical space where the first user with the first wearable extended reality appliance is listening to audio at second settings of the virtual speaker. A physical space4010includes a virtual speaker4012tied to a location in physical space4010. As shown inFIG.40, virtual speaker4012is located in one corner of physical space4010. A first user4014wearing a first wearable extended reality appliance4016is located in physical space4010and is able to hear second sounds associated with virtual speaker4012at second settings of virtual speaker4012(represented inFIG.40by five curved lines).FIG.40represents the second time period as described elsewhere in this disclosure, after first user4014has changed the settings of virtual speaker4012to the second settings. Some embodiments involve after determining that the first user and the first wearable extended reality appliance left the area associated with the virtual speaker, receiving, via the wireless network, a second indication that a second wearable extended reality appliance is located in the area associated with the virtual speaker. As used herein, the phrase “left the area associated with the virtual speaker” means that the first user and the first wearable extended reality appliance are no longer physically present in a physical environment where the virtual speaker is virtually located. For example, the first user may have left a particular room, a portion of the room, or has moved more than the predetermined distance from the location of the virtual speaker in the physical space. The second wearable extended reality appliance may be constructed in a similar manner and operate in a similar way as the first wearable extended reality appliance as described elsewhere in this disclosure. The second indication may be similar to the first indication as described elsewhere in this disclosure and may be used to determine when the second wearable extended reality appliance enters the physical space. The second indication may include a wireless signal from the GPS, Wi-Fi, Bluetooth®, or other wireless communication functionality included in the second wearable extended reality appliance and may be used to determine the location of the second wearable extended reality appliance in the physical space. Some embodiments involve transmitting to the second wearable extended reality appliance third data corresponding to third sounds associated with the virtual speaker, to thereby enable a second user of the second wearable extended reality appliance to hear the third sounds during a third time period, wherein the third sounds correspond to the second settings of the virtual speaker. The third data may include data in a format that may be transmitted to and received by the second wearable extended reality appliance, similar to the first data as described elsewhere in this disclosure. 
Additionally or alternatively, the third data may include data such as digital signals representing sounds in a data format that may be transmitted to and received by the second wearable extended reality appliance. The third time period may correspond to a period or duration in which the second user is located in the area associated with the virtual speaker and is listening to sounds at the second settings of the virtual speaker. The third sounds may be the same as the second sounds, but heard by the second user of the second wearable extended reality appliance. Thus, for example, after the first user adopts the second settings, when the second user enters the area of the virtual speaker, the second user is exposed to the second sound settings. While, in some embodiments, the second sound settings alter a sound characteristic, in other embodiments, the second sound setting may additionally or alternatively change the underlying substance of the sound. For example, the original sound may correspond to a first song, and the second sound setting may correspond to a second song, different from the first song. In such an example, when the second user enters the area of the virtual speaker, the second user may be exposed to the second song, as opposed to the original first song. FIG.41illustrates the area in the physical space where a second user with a second wearable extended reality appliance is listening to audio at second settings of the virtual speaker. A physical space4110includes a virtual speaker4112tied to a location in physical space4110. As shown inFIG.41, virtual speaker4112is located in one corner of physical space4110. A second user4114wearing a second wearable extended reality appliance4116is located in physical space4110and is able to hear third sounds associated with virtual speaker4112at second settings of virtual speaker4112(represented inFIG.41by five curved lines).FIG.41represents the third time period as described elsewhere in this disclosure, during which second user4114listens to third sounds from virtual speaker4112corresponding to the second settings. In some embodiments, the first wearable extended reality appliance is the second wearable extended reality appliance. For example, the second user may be the same person as the first user; e.g., the first user left the area associated with the virtual speaker and then later returned to the area. As another example, the second user may be a different person using the first wearable extended reality appliance; e.g., the first user and the second user share the same wearable extended reality appliance. In some embodiments, the first wearable extended reality appliance differs from the second wearable extended reality appliance. For example, when the first and second users differ, the second user may enter the area associated with the virtual speaker wearing the second wearable extended reality appliance. As another example, the second user may be the same person as the first user who may have switched from using the first wearable extended reality appliance to using the second wearable extended reality appliance. In some embodiments, the third data corresponding to the third sounds is transmitted to a plurality of additional wearable extended reality appliances while the first wearable extended reality appliance that played the first sounds during the first time period and the second sounds during the second time period has left the area associated with the virtual speaker.
For example, multiple users other than the first user may have entered the area associated with the virtual speaker after the first user has left the area (i.e., is absent from the area). Each one of the plurality of additional wearable extended reality appliances may be associated with a different one of the multiple users in the area and may be constructed in a similar manner and operate in a similar way as the first wearable extended reality appliance as described elsewhere in this disclosure. An indication from each of the plurality of additional wearable extended reality appliances may be received, to indicate that each of the plurality of additional wearable extended reality appliances is located in the area associated with the virtual speaker. The indication from each of the plurality of additional wearable extended reality appliances may include a wireless signal from the GPS, Wi-Fi, Bluetooth®, or other wireless communication functionality included in the additional wearable extended reality appliance and may be used to determine the location of the additional wearable extended reality appliance in the physical space. In some embodiments, the received input indicative of the second settings causes the second sounds and the third sounds to be played at a specific volume level differing from an original volume level associated with the first settings. For example, the first user may adjust the volume level from the original volume level of the first settings to the specific volume level of the second settings. This adjustment may be made separately or in addition to any one or more of the bass, treble, balance, or equalizer settings. The first user may hear the second sounds at the specific volume level, while the second user may hear the third sounds at the same specific volume level, meaning that the volume settings are the same for the first user and the second user. In some embodiments, the first user may hear the second sounds at the specific volume level, while the second user may hear the third sounds at a different specific volume level, meaning that the volume settings are different for the first user and the second user. In some embodiments, the received input indicative of the second settings causes the second sounds and the third sounds to be played at a specific bass level differing from an original bass level associated with the first settings. For example, the first user may adjust the bass level from the original bass level of the first settings to the specific bass level of the second settings. This adjustment may be made separately or in addition to any one or more of the volume, treble, balance, or equalizer settings. In some embodiments, the received input indicative of the second settings causes the second sounds and the third sounds to be played at a specific treble level differing from an original treble level associated with the first settings. For example, the first user may adjust the treble level from the original treble level of the first settings to the specific treble level of the second settings. This adjustment may be made separately or in addition to any one or more of the volume, bass, balance, or equalizer settings. In some embodiments, the received input indicative of the second settings causes the second sounds and the third sounds to be played at specific settings differing from the original values associated with the first settings. 
For example, the first user may adjust the balance or the equalizer settings from the original balance or equalizer settings of the first settings to the specific balance or equalizer settings of the second settings. These adjustments may be made separately or in addition to any one or more of the volume, bass, or treble settings. In some embodiments, the received input indicative of the second settings causes the second sounds and the third sounds to play specific audio content differing from original audio content associated with the first settings. For example, the first settings of the virtual speaker may remain unchanged while the specific audio content is being played. In other words, the first settings of the virtual speaker stay the same even if the content changes; i.e., the virtual speaker settings are not tied to the content being played. In some embodiments, the received input indicative of the second settings causes the second sounds and the third sounds to play a specific music genre differing from an original music genre associated with the first settings. For example, the first settings of the virtual speaker may remain unchanged while the specific music genre is being played. In other words, the first settings of the virtual speaker stay the same even if the music genre changes; i.e., the virtual speaker settings are not tied to the music genre being played. In some embodiments, the received input indicative of the second settings causes the second sounds and the third sounds to play content from a specific audio channel differing from an original audio channel associated with the first settings. For example, the first settings of the virtual speaker may remain unchanged while the specific audio channel is being played. In other words, the first settings of the virtual speaker stay the same even if the audio channel changes; i.e., the virtual speaker settings are not tied to the audio channel being played. In some examples, the specific audio channel and the original audio channel may correspond to different radio stations. In some examples, the specific audio channel and the original audio channel may correspond to different playlists. In some examples, the specific audio channel and the original audio channel may correspond to different audio sources. In some embodiments, the virtual speaker settings may correspond to the content received by the wearable extended reality appliance. In some embodiments, the virtual speaker settings may correspond to the physical space or to an extended reality environment associated with the wearable extended reality appliance. For example, if the physical space or the extended reality environment represents a classical music hall, the virtual speaker settings may be adjusted to represent being in a classical music hall with predetermined settings for bass, treble, balance, and equalizer while the user may be permitted to adjust only the volume. As another example, if the physical space or the extended reality environment represents a rock concert at a large stadium, the virtual speaker settings may be adjusted to represent being in a large stadium with predetermined settings for bass, treble, balance, and equalizer while the user may be permitted to adjust only the volume. 
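One possible, non-limiting way to model a virtual speaker whose settings are tied to the speaker itself (and therefore persist for later users), separately from the audio content or channel being played, is sketched below; the class and field names are assumptions made for illustration only.

    from dataclasses import dataclass, field

    @dataclass
    class SpeakerSettings:
        volume: int = 5
        bass: int = 0
        treble: int = 0
        balance: int = 0
        equalizer: dict = field(default_factory=dict)   # per-band adjustments

    @dataclass
    class VirtualSpeaker:
        location: tuple          # position in the physical space
        settings: SpeakerSettings = field(default_factory=SpeakerSettings)
        audio_channel: str = "default_playlist"

        def apply_input(self, changes: dict) -> None:
            # Settings are stored with the speaker itself, so they persist after
            # the user who changed them leaves the area.
            for key, value in changes.items():
                if hasattr(self.settings, key):
                    setattr(self.settings, key, value)
                elif key == "audio_channel":
                    self.audio_channel = value

    if __name__ == "__main__":
        speaker = VirtualSpeaker(location=(0.0, 0.0, 2.4))
        speaker.apply_input({"volume": 8, "audio_channel": "jazz_station"})
        # A second appliance entering the area later is served these same settings.
        print(speaker.settings.volume, speaker.audio_channel)

Because the settings live with the speaker rather than with any particular appliance, a second wearable extended reality appliance entering the area would, in this sketch, be served the same second settings and the currently selected audio channel.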
In some embodiments, the received input indicative of the second settings causes the virtual speaker to change virtual location within the area, wherein the changed virtual location differs from a virtual location associated with the virtual speaker during the first time period, and wherein the changed virtual location is thereafter associated with the second time period and the third time period. For example, the settings may permit the first user to change the location of the virtual speaker within the physical space. For example, if the virtual speaker is initially in a position next to a door in the physical space, the first user may change the position of the virtual speaker to be next to a window in the physical space. In some embodiments, the first user may be able to change the location of the virtual speaker by selecting a location in the physical space from a predetermined list of locations in the physical space. For example, the predetermined list of locations may be provided to the first user via a user interface element in the extended reality environment. In some embodiments, the first user may be able to change the location of the virtual speaker by selecting a location in the physical space from a rendering of the physical space presented in the extended reality environment. For example, the virtual speaker may be rendered as a virtual object in the extended reality environment and the rendering in the extended reality environment may correspond to the area associated with the virtual speaker. The first user may be able to move the virtual speaker in the extended reality environment like any other virtual object. In some embodiments, the first user may only be able to place the virtual speaker in predetermined locations. For example, as the first user moves the virtual speaker in the extended reality environment, the first user may be permitted to place the virtual speaker in a predetermined location. If the first user attempts to place the virtual speaker in a location that is not permitted, the first user may not be able to "release" the virtual speaker to place it in that location. FIG.42illustrates the area in the physical space where the first user with the first wearable extended reality appliance is listening to audio at the first settings of the virtual speaker and a location of the virtual speaker in the area in the physical space has changed. A physical space4210includes a virtual speaker4212originally tied to a first location in physical space4210(shown in dashed outline).FIG.42represents the first time period as described elsewhere in this disclosure, after a first user4216has changed the location of the virtual speaker from the first location shown by4212to a second location for virtual speaker4214(shown in solid outline). As shown inFIG.42, the virtual speaker is moved from one corner of physical space4210to an opposite corner of physical space4210. The first user4216wearing a first wearable extended reality appliance4218is located in physical space4210and is able to hear second sounds associated with virtual speaker4214at first settings of virtual speaker4214(represented inFIG.42by three curved lines). In some embodiments, in addition to moving the virtual speaker from the first location (indicated by reference number4212) to the second location (indicated by reference number4214), first user4216may also adjust other settings of virtual speaker4214, such as volume, bass, treble, balance, or an equalizer as described elsewhere in this disclosure.
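A minimal, non-limiting sketch of restricting relocation of the virtual speaker to predetermined locations might look as follows; the location names and coordinates are hypothetical and serve only to illustrate rejecting a placement that is not permitted.

    ALLOWED_LOCATIONS = {
        "corner_by_door": (0.0, 0.0, 2.4),
        "corner_by_window": (4.0, 3.0, 2.4),
        "above_desk": (2.0, 1.5, 2.1),
    }

    def move_virtual_speaker(speaker_state: dict, requested: str) -> dict:
        # The speaker may only be "released" at a permitted predetermined location;
        # otherwise the move is rejected and the previous location is kept.
        if requested in ALLOWED_LOCATIONS:
            speaker_state = dict(speaker_state,
                                 location_name=requested,
                                 location=ALLOWED_LOCATIONS[requested])
        return speaker_state

    if __name__ == "__main__":
        state = {"location_name": "corner_by_door",
                 "location": ALLOWED_LOCATIONS["corner_by_door"]}
        state = move_virtual_speaker(state, "corner_by_window")   # permitted
        state = move_virtual_speaker(state, "middle_of_hallway")  # not permitted
        print(state["location_name"])   # corner_by_window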
In some embodiments, the virtual speaker is a directional virtual speaker. A directional speaker projects sound in a specific direction (e.g., a narrow sound beam), such that a user in a path of the sound beam can hear the sound, but other people in the physical environment near the user and out of the path of the sound beam cannot hear the sound. Thus, sound signals presented to a user may be tuned to simulate the output of a directional speaker, rendering the virtual speaker directional. In some embodiments, the received input indicative of the second settings causes the virtual speaker to change orientation with respect to a physical space, wherein the changed orientation differs from an original orientation associated with the virtual speaker during the first time period. The settings may permit the first user to change the orientation of the virtual speaker by tuning the audio output to simulate a changed location and/or a changed facing direction of the virtual speaker. For example, the first user may select the facing direction of the virtual speaker, which in turn tunes the audio output to simulate the selected orientation (e.g., rotated left, right, up, down, or any combination thereof). In some embodiments, the changed orientation of the virtual speaker is thereafter associated with the second time period and the third time period. After the first user changes the orientation of the virtual speaker, the changed orientation may carry over into the second time period and the third time period. FIG.43illustrates an area in a physical space with a directional virtual speaker positioned in a first orientation. A physical space4310includes a directional virtual speaker4312projecting a directional sound beam4314in a first orientation. It is noted that the length of the lines used to illustrate directional sound beam4314is only indicative of the orientation of directional sound beam4314and does not define a "length" of directional sound beam4314(i.e., how far a user may be from directional virtual speaker4312and still be able to hear sounds from directional virtual speaker4312).FIG.43represents the first time period as described elsewhere in this disclosure, during which a first user listens to the first sounds at the first settings of directional virtual speaker4312. FIG.44illustrates the area in the physical space with the directional virtual speaker positioned in a second orientation. A physical space4410includes a directional virtual speaker4412projecting a directional sound beam4414in a second orientation. It is noted that the length of the lines used to illustrate directional sound beam4414is only indicative of the orientation of directional sound beam4414and does not define a "length" of directional sound beam4414(i.e., how far a user may be from directional virtual speaker4412and still be able to hear sounds from directional virtual speaker4412).FIG.44represents the second time period and the third time period as described elsewhere in this disclosure, after the first user has changed the orientation of the directional virtual speaker and during which a first user or a second user listens to the second sounds or the third sounds at the second settings of directional virtual speaker4412.
In addition to changing the orientation of the directional virtual speaker from the first orientation (shown inFIG.43) to the second orientation (shown inFIG.44), the first user may also adjust other settings of directional virtual speaker4412, such as volume, bass, treble, balance, or an equalizer as described elsewhere in this disclosure. In some embodiments, the received input indicative of the second settings causes a change in a size of a sound zone associated with the virtual speaker, the changed size of the sound zone differing from an original size of a sound zone associated with the virtual speaker during the first time period. As used herein, the term “sound zone” refers to how far a sound can be heard from the virtual speaker. The larger the sound zone, the farther from the virtual speaker the sound can be heard. Similarly, the size of the sound zone may refer to the contours of the sound zone. Thus, the received input may tune the audio output to specify locations in an area where sounds may or may not be heard. In some embodiments, the sound zone may be based on a distance from the virtual speaker's location in the physical space. In some embodiments, the sound zone may be of any size (e.g., a width of the sound zone), based on characteristics of the virtual speaker. For example, if the virtual speaker is a directional speaker, the sound zone may be narrow. As another example, if the virtual speaker is an omnidirectional speaker, the sound zone may be a predetermined radius around the virtual speaker measured in any direction, with the virtual speaker being at a center of the sound zone. As another example, the sound zone may have a conical shape with an origin point being the location of the virtual speaker. In yet another example, the sound zone may be based on a physical area, such as a room, an apartment, an office complex, an elevator, or a floor. Because the speaker is a virtual speaker, the physical limitations of volume from a physical speaker (i.e., the farther the listener is from the speaker, the lower the volume appears to be to the listener) are not relevant, meaning that the volume of the virtual speaker perceived by the first user of the first wearable extended reality appliance may be the same anywhere within the sound zone. For example, the first user may set a range of X meters for the sound zone in which the virtual speaker is active. Once the first user moves outside of the range (e.g., more than X meters), the first user will not be able to hear the sounds. In some embodiments, at least one characteristic of sounds associated with a virtual speaker may be uniform across the sound zone. In some embodiments, at least one characteristic of sounds associated with a virtual speaker may vary across the sound zone. In some embodiments, one or more characteristics of sounds associated with a virtual speaker may be uniform across the sound zone while other one or more characteristics of sounds associated with a virtual speaker may vary across the sound zone. For example, volume may be uniform across the sound zone, while virtual source direction may vary across the sound zone. In some embodiments, the changed size of the sound zone is thereafter associated with the second time period and the third time period. After the first user changes the size of the sound zone, the changed sound zone size carries over into the second time period and the third time period. 
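The sound-zone behavior described above may be illustrated, under simplifying assumptions, with the following sketch that tests whether a user's position falls within an omnidirectional zone of a given radius or within a directional (conical) zone aimed along the speaker's facing direction; the geometry shown is one possible choice, not a required one, and the function and parameter names are assumptions.

    import math

    def in_sound_zone(user_pos, speaker_pos, zone_radius,
                      facing=None, cone_half_angle_deg=None):
        # Vector from the virtual speaker to the user, and its length.
        dx = [u - s for u, s in zip(user_pos, speaker_pos)]
        distance = math.sqrt(sum(d * d for d in dx))
        if distance > zone_radius:
            return False
        if facing is None or distance == 0.0:
            # Omnidirectional sound zone: being within the radius is sufficient.
            return True
        # Directional (conical) sound zone: the user must also lie within the
        # cone aimed along the speaker's facing direction.
        cos_angle = (sum(d * f for d, f in zip(dx, facing))
                     / (distance * math.sqrt(sum(f * f for f in facing))))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
        return angle <= cone_half_angle_deg

    if __name__ == "__main__":
        speaker = (0.0, 0.0, 2.4)
        print(in_sound_zone((1.0, 0.5, 1.7), speaker, zone_radius=3.0))
        print(in_sound_zone((1.0, 0.0, 2.4), speaker, zone_radius=3.0,
                            facing=(1.0, 0.0, 0.0), cone_half_angle_deg=20.0))

Consistent with the discussion above, this sketch treats sound characteristics such as volume as independent of the listener's distance inside the zone; only membership in the zone is computed.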
FIG.45illustrates an area in a physical space with a virtual speaker having a sound zone of a first size and a user with a wearable extended reality appliance is located in the sound zone. A physical space4510includes a virtual speaker4512projecting a sound zone4514of an original size. Sound zone4514is shown having a conical shape with an origin point being the location of virtual speaker4512. Sound zone4514may have other shapes, such as round, circular, spherical, or based on confines of the physical space where the virtual speaker is located. As noted above, the virtual speaker is not subject to the same limitations as a physical speaker, so the sound zone may be defined to have any shape. It is noted that the length of the dashed lines used to illustrate sound zone4514may indicate the original size of the sound zone, i.e., how far a first user4516may be from virtual speaker4512and still be able to hear sounds from virtual speaker4512.FIG.45represents the first time period as described elsewhere in this disclosure, during which the first user listens to the first sounds at the first settings of virtual speaker4512at the original size of the sound zone4514. FIG.46illustrates an area in a physical space with a virtual speaker having a sound zone of a second size and the user with the wearable extended reality appliance is located outside the sound zone. A physical space4610includes a virtual speaker4612projecting a sound zone4614of a changed size. Sound zone4614is shown having a conical shape with an origin point being the location of virtual speaker4612. Sound zone4614may have other shapes, such as round, circular, spherical, or based on confines of the physical space where the virtual speaker is located. As noted above, the virtual speaker is not subject to the same limitations as a physical speaker, so the sound zone may be defined to have any shape. It is noted that the length of the dashed lines used to illustrate sound zone4614may indicate the changed size of the sound zone, i.e., how far a first user4618may be from virtual speaker4612and still be able to hear sounds from virtual speaker4612. As shown inFIG.46, first user4618is outside of changed size sound zone4614, as indicated by dashed line4616and is thus unable to hear sounds from virtual speaker4612. If first user4618moves closer to virtual speaker4612so that first user4618is inside the changed size sound zone4614(i.e., on the side of dashed line4616closer to virtual speaker4612), then first user4618would be able to hear the sounds from virtual speaker4612. FIG.46represents the second time period and the third time period as described elsewhere in this disclosure, after the first user has changed the size of the sound zone of the virtual speaker and during which a first user or a second user listens to the second sounds or the third sounds at the second settings of virtual speaker4612. In some embodiments, in addition to changing the size of the sound zone of virtual speaker from the original size (shown inFIG.45) to the changed size (shown inFIG.46), the first user may also adjust other settings of virtual speaker4612, such as volume, bass, treble, balance, or an equalizer as described elsewhere in this disclosure. Some embodiments involve receiving an additional indication during the first time period that an additional wearable extended reality appliance is located in the area associated with the virtual speaker. 
The additional indication may be received in a similar manner as the first indication and may include similar information as the first indication as described elsewhere in this disclosure. Additionally or alternatively, the additional indication may include a wireless signal from the GPS, Wi-Fi, Bluetooth®, or other wireless communication functionality included in the additional wearable extended reality appliance to determine the location of the additional wearable extended reality appliance in the physical space. In some embodiments, the additional wearable extended reality appliance may be constructed in a similar manner and operate in a similar way as the first wearable extended reality appliance as described elsewhere in this disclosure. Some embodiments involve transmitting to the additional wearable extended reality appliance fourth data associated with the virtual speaker to thereby enable the additional wearable extended reality appliance to present sounds corresponding to the first settings during the first time period and the second time period. The fourth data may include data in a format that may be transmitted to and received by the additional wearable extended reality appliance, similar to the first data as described elsewhere in this disclosure. The first settings are the same settings of the virtual speaker as used in connection with the first sounds (e.g., the same settings for volume, bass, treble, balance, and equalizer). The user of the additional wearable extended reality appliance may hear fourth sounds corresponding to the fourth data at the first settings of the virtual speaker, for example, at the original volume level of the first settings. Some other embodiments involve transmitting to the additional wearable extended reality appliance fourth data associated with the virtual speaker to thereby enable the additional wearable extended reality appliance to present sounds corresponding to the first settings during the first time period, and transmitting to the additional wearable extended reality appliance fifth data associated with the virtual speaker to thereby enable the additional wearable extended reality appliance to present sounds corresponding to the second settings during the second time period. In some embodiments, an additional user (with the additional wearable extended reality appliance) may be in the same physical space as the first user. For example, the first user and the additional user may be listening to the same content at the first settings of the virtual speaker (i.e., the fourth data may be the same as the first data). Alternatively, the first user and the additional user may be listening to different content at the first settings of the virtual speaker (i.e., the fourth data may be different from the first data). FIG.47illustrates an area in a physical space where a first user with a first wearable extended reality appliance and a second user with a second wearable extended reality appliance are listening to audio at first settings of a virtual speaker. A physical space4710includes a virtual speaker4712tied to a location in physical space4710. A first user4714wearing a first wearable extended reality appliance4716is located in physical space4710and is able to hear first sounds associated with virtual speaker4712at first settings of virtual speaker4712(represented inFIG.47by three curved lines). 
A second user4718wearing an additional wearable extended reality appliance4720is located in physical space4710and is able to hear fourth sounds associated with virtual speaker4712at first settings of virtual speaker4712(represented inFIG.47by three curved lines). As shown inFIG.47, both first user4714and second user4718may hear sounds from virtual speaker4712(first sounds and fourth sounds, respectively) at the first settings of virtual speaker4712. Some embodiments involve receiving an additional indication during the first time period that an additional wearable extended reality appliance is located in the area associated with the virtual speaker. The additional indication may be received in a similar manner as the first indication and may include similar information as the first indication as described elsewhere in this disclosure. For example, the additional indication may include a signal that may be used by a device (e.g., input device, integrated computational interface device, or computing device) to determine that the additional wearable extended reality appliance is located in the area associated with the virtual speaker. The additional wearable extended reality appliance may be constructed in a similar manner and operate in a similar way as the first wearable extended reality appliance as described elsewhere in this disclosure. Some embodiments involve transmitting to the additional wearable extended reality appliance fourth data associated with the virtual speaker to thereby enable the additional wearable extended reality appliance to present sounds corresponding to the second settings during the second time period. The fourth data may include data in a format that may be transmitted to and received by the additional wearable extended reality appliance, similar to the first data as described elsewhere in this disclosure. The fourth data may include data such as digital signals representing sounds in a data format that may be transmitted to and received by the additional wearable extended reality appliance. In some embodiments, an additional user (with the additional wearable extended reality appliance) may be in the same physical space as the first user. For example, the first user and the additional user may be listening to the same content at the second settings of the virtual speaker (i.e., the fourth data may be the same as the first data). Alternatively, the first user and the additional user may be listening to different content at the second settings of the virtual speaker (i.e., the fourth data may be different from the first data). Some embodiments involve obtaining information associated with the second indication. For example, the information associated with the second indication may include information about the second wearable extended reality appliance, such as a device identifier, or information about the second user of the second wearable extended reality appliance, such as a user identifier. For example, the device identifier and the user identifier may include an alphanumeric string including an identifier code associated with the second wearable extended reality appliance and/or the second user. In other examples, the information associated with the second indication may include information about the second wearable extended reality appliance, such as a type, settings, a location, an orientation, an activity indicator, and so forth.
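As a non-limiting sketch only, the information obtained in connection with such an indication might be carried in a record similar to the following; all class and field names are hypothetical and are not drawn from this disclosure.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AppliancePresenceIndication:
    """Information that might accompany an indication that a wearable
    extended reality appliance is located in the area of a virtual speaker."""
    device_id: str                      # alphanumeric identifier of the appliance
    user_id: Optional[str] = None       # alphanumeric identifier of the wearer
    appliance_type: Optional[str] = None
    location: Optional[tuple] = None    # (x, y, z) position in the physical space
    orientation: Optional[tuple] = None # e.g., yaw/pitch/roll of the appliance
    activity: Optional[str] = None      # e.g., "walking", "seated", "in_call"
    settings: dict = field(default_factory=dict)

indication = AppliancePresenceIndication(
    device_id="WXR-0042",
    user_id="user-7f3a",
    appliance_type="headset",
    location=(2.4, 0.0, 1.7),
    activity="walking",
)
print(indication.device_id, indication.user_id)
```

A record of this kind could then be used to look up sound playing rules keyed to the user identifier or device identifier, as discussed below.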
Some embodiments involve accessing a plurality of sound playing rules defining virtual speaker settings, based on the obtained information. The plurality of sound playing rules may be stored on the wearable extended reality appliance or may be stored on a device separate from the wearable extended reality appliance, for example, on an input device or on a cloud-based server in communication with the wearable extended reality appliance. The plurality of sound playing rules may be accessed by the wearable extended reality appliance. In some embodiments, the plurality of sound playing rules may be accessed by a device located in the physical space and in communication with the wearable extended reality appliance. In some embodiments, one or more of the plurality of sound playing rules may be associated with a device identifier or a user identifier. For example, a parent may create one or more sound playing rules for a child and associate those sound playing rules with a user identifier corresponding to the child such that when the child wears the wearable extended reality appliance, the sound playing rules associated with the child are implemented. For example, the sound playing rule for the child may be that the child is not able to adjust the settings of the virtual speaker or that the child may only raise the volume setting to a predetermined maximum (which may be a lower value than a maximum possible volume setting). As another example, the sound playing rule for the child may be that the child is limited to playing certain types of content (e.g., age-appropriate content as determined by the rule creator). As used herein, the term “implemented” in connection with the sound playing rules means that the sound playing rules are used by the wearable extended reality appliance to control the settings associated with the virtual speaker. In some embodiments, one or more of the plurality of sound playing rules may be associated with an activity associated with the second wearable extended reality appliance. For example, for some activities, the sound playing rules may include a maximum volume. In some embodiments, one or more of the plurality of sound playing rules may be associated with a type of the second wearable extended reality appliance, for example due to hardware limitations of the second wearable extended reality appliance. Some embodiments involve determining that an existence of the second wearable extended reality appliance in the area associated with the virtual speaker corresponds to a specific sound playing rule. The existence of a second wearable extended reality appliance in the area associated with a virtual speaker may be determined by signals sent by that appliance to either the first appliance or to a central controller. Those signals may be based on a location system such as GPS, Wi-Fi, or Bluetooth® related location systems. Additionally or alternatively, the existence may be determined by image data captured by either the second wearable extended reality appliance or another wearable extended reality appliance. Based on the determination of the existence of the second wearable extended reality appliance in a particular area, a correspondence to a specific sound playing rule may be determined. For example, the second indication may indicate that the second wearable extended reality appliance is in the area associated with the virtual speaker. 
As described elsewhere in this disclosure, the second indication may also include an identifier associated with the second wearable extended reality appliance. In some embodiments, the second indication may include a wireless signal from the GPS, Wi-Fi, Bluetooth®, or other wireless communication functionality included in the second wearable extended reality appliance and may be used to determine the location of the second wearable extended reality appliance in the physical space. In some embodiments, one or more sound playing rules associated with the second wearable extended reality appliance may be determined based on the second indication. For example, if the physical space is a museum and the audio is associated with an exhibit in the physical space, the sound playing rule may be that only the volume setting may be changed by a user and when the second wearable extended reality appliance enters the area associated with the virtual speaker, the volume setting may be adjusted to an initial value (e.g., first settings) which may be the same for all users. Then, once the user is in the physical space, the user may adjust the volume setting. Some embodiments involve implementing the specific sound playing rule to determine actual settings of the third sounds heard by the second user during the third time period. The specific sound playing rule may include setting information for the virtual speaker, and implementing the specific sound playing rule may change the virtual speaker settings to match the settings indicated by the specific sound playing rule. The specific sound playing rule may be implemented by the second wearable extended reality appliance by controlling the settings (e.g., the second settings) associated with the virtual speaker. An input device associated with the second wearable extended reality appliance may implement the specific sound playing rule. In some embodiments, the specific sound playing rule is that a specific setting change only affects devices that initiate the specific setting change. In this situation, when a specific sound playing rule is set via a particular device, the rule may only impact the device through which the rule was set. For example, if the specific sound playing rule is associated with a particular wearable extended reality appliance (e.g., by association via an identifier of the wearable extended reality appliance), then the change in the virtual speaker settings may only apply to that particular wearable extended reality appliance. In some embodiments, the obtained information associated with the second indication includes information about a time of day. The time of day may be a current time in a time zone where the wearable extended reality appliance is located. Some embodiments involve determining whether the condition of the specific sound playing rule is met based on the information about the time of day. If the specific sound playing rule includes a condition, the condition may be based on a current time of day. For example, a condition might be that before 5 pm a virtual speaker plays hard rock music; after 5 pm, the music genre switches to light rock; and after 8 pm, mood music kicks in. By way of another example, if the wearable extended reality appliance includes a physical speaker, the specific sound playing rule may limit the volume of the physical speaker if it is later than 11:00 PM. In some embodiments, the time of day condition is one of two or more conditions of the specific sound playing rule.
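A minimal, non-limiting sketch of how such a rule and its condition might be represented and applied is shown below; the class name, field names, and threshold values are hypothetical and are not drawn from this disclosure.

```python
from dataclasses import dataclass
from datetime import time
from typing import Callable, Optional

@dataclass
class SoundPlayingRule:
    applies_to_user: Optional[str]        # user identifier the rule is tied to, or None for all users
    condition: Callable[[dict], bool]     # e.g., a time-of-day test over context data
    max_volume: Optional[float] = None    # cap imposed when the rule applies
    allow_setting_changes: bool = True

def apply_rules(rules, context, requested_settings):
    """Clamp requested virtual-speaker settings according to matching rules."""
    settings = dict(requested_settings)
    for rule in rules:
        if rule.applies_to_user not in (None, context.get("user_id")):
            continue
        if not rule.condition(context):
            continue
        if not rule.allow_setting_changes:
            return context["current_settings"]   # this user may not adjust the speaker
        if rule.max_volume is not None:
            settings["volume"] = min(settings["volume"], rule.max_volume)
    return settings

# A parent-created rule: after 11:00 PM the child's volume is capped at 0.3.
late_night_cap = SoundPlayingRule(
    applies_to_user="child-01",
    condition=lambda ctx: ctx["time_of_day"] >= time(23, 0),
    max_volume=0.3,
)
context = {"user_id": "child-01", "time_of_day": time(23, 30),
           "current_settings": {"volume": 0.5}}
print(apply_rules([late_night_cap], context, {"volume": 0.9}))  # {'volume': 0.3}
```

Here a parent-created rule caps a child's volume late at night; a rule with allow_setting_changes set to False would instead leave the current settings untouched.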
For example, the sound playing rule may include a time of day condition and a predetermined distance condition (e.g., that the wearable extended reality appliance be located within a predetermined distance of the virtual speaker). Some embodiments include transmitting to the second wearable extended reality appliance the third data corresponding to the third sounds that correspond to the second settings of the virtual speaker when a condition of the specific sound playing rule is met. The sound playing rule may include a condition and the sound playing rule may be implemented (i.e., the third sounds are played using the second settings of the virtual speaker) if the condition is met. For example, a condition may include that the user is located within a predetermined distance (measured in any unit of measurement, such as meters, centimeters, feet, or inches) of the location of the virtual speaker in the physical space. If, for example, the physical space is a museum with multiple exhibits and each exhibit has its own virtual speaker associated with it, the sound playing rule may be that the user of the wearable extended reality appliance needs to be within a predetermined distance of the virtual speaker before the audio associated with the exhibit begins playing. In such circumstances, the user would need to be relatively close to the exhibit (within the predetermined distance) before the associated audio started playing, which avoids confusing the user with audio associated with a different exhibit. Some embodiments include transmitting to the second wearable extended reality appliance fourth data corresponding to fourth sounds that correspond to the first settings of the virtual speaker when the condition of the specific sound playing rule is unmet. The sound playing rule may include a condition and the sound playing rule may be not implemented (i.e., the fourth sounds are played using the first settings of the virtual speaker) if the condition is not met. Continuing the example of when the physical space is a museum, if the user walks between different rooms in the museum, the sound playing rule may be that music (i.e., fourth sounds) is played while the user is walking and continues to play until the user is located within a sound zone of a virtual speaker associated with an exhibit in the museum. In this example, the sound playing rule may be associated with the exhibit the user is walking toward and the condition may be that the user be within a predetermined distance of the exhibit (i.e., the condition is met) before the audio associated with the exhibit starts playing, otherwise music is played (i.e., the condition is not met). In some embodiments, audio data captured using at least one audio sensor included in the second wearable extended reality appliance may be analyzed to detect sounds in the physical environment of the second wearable extended reality appliance (such as ambient noise, a person speaking, music, and so forth). Further, in response to the detection of the sounds in the physical environment of the second wearable extended reality appliance, actual settings of the third sounds heard by the second user during the third time period may be determined, for example, based on characteristics of the sounds in the physical environment of the second wearable extended reality appliance. For example, in response to high ambient noise levels, a volume of the third sounds may be increased.
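The following non-limiting sketch illustrates the ambient-noise adjustment just described, together with a simple ducking step when speech is detected nearby; the speaker-identity refinements discussed next could further tune the ducking factor. The function name and numeric thresholds are hypothetical and are not drawn from this disclosure.

```python
def adjust_volume_for_environment(base_volume: float,
                                  ambient_noise_db: float,
                                  speech_detected: bool) -> float:
    """Return an adjusted volume for the third sounds based on analyzed audio data.

    High ambient noise raises the playback volume; detected speech in the
    wearer's environment ducks it so conversation is not drowned out.
    """
    volume = base_volume
    if ambient_noise_db > 70.0:        # loud environment: boost playback
        volume = min(1.0, volume + 0.2)
    if speech_detected:                # someone is talking nearby: duck playback
        volume *= 0.5
    return round(volume, 2)

print(adjust_volume_for_environment(0.6, ambient_noise_db=78.0, speech_detected=False))  # 0.8
print(adjust_volume_for_environment(0.6, ambient_noise_db=50.0, speech_detected=True))   # 0.3
```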
In another example, in response to a detection of a person speaking in the physical environment of the second wearable extended reality appliance, a volume of the third sounds may be decreased. In some examples, the audio data may be analyzed using a voice recognition algorithm to determine whether the person speaking in the physical environment of the second wearable extended reality appliance is the second user. When the person speaking is the second user, the volume of the third sounds may be decreased, while when the person speaking is not the second user, the volume of the third sounds may be increased, may be unmodified, or may be decreased less than when the person speaking is the second user. In some examples, the audio data may be analyzed to determine whether the person speaking in the physical environment of the second wearable extended reality appliance is speaking to the second user. When the person is speaking to the second user, the volume of the third sounds may be decreased, while when the person is not speaking to the second user, the volume of the third sounds may be increased, may be unmodified, or may be decreased less than when the person is speaking to the second user. For example, a machine learning model (such as a classification model) may be trained using training examples to determine from audio whether people are speaking to the second user. An example of such training example may include a sample audio, together with a label indicating whether the sample audio includes speech directed to the second user. The trained machine learning model may be used to analyze the audio data captured using at least one audio sensor included in the second wearable extended reality appliance and determine whether the person speaking in the physical environment of the second wearable extended reality appliance is speaking to the second user. In some embodiments, additional data associated with other sounds for presentation to the second user during the third time period may be received, the other sounds may not be associated with the virtual speaker. For example, the other sounds may be associated with a different virtual speaker, may be associated with an app, may be associated with a virtual object that is not a virtual speaker, and so forth. Further, the additional data may be analyzed to determine actual settings of the third sounds heard by the second user during the third time period. For example, the determination of the actual settings may be based on a category associated with the other sounds, may be based on a volume associated with the other sounds, may be based on a position of a virtual source associated with the other sounds, and so forth. FIG.48is a flowchart of an exemplary method4810for tying a virtual speaker to a physical space.FIG.48is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. One or more operations of method4810may be performed by a processor associated with a wearable extended reality appliance. For example, a first processor may be located in the wearable extended reality appliance and may perform one or more operations of the method4810. As another example, a second processor may be located in a computing device selectively connected to the wearable extended reality appliance, and the second processor may perform one or more operations of the method4810. 
As another example, the first processor and the second processor may cooperate to perform one or more operations of the method4810. The cooperation between the first processor and the second processor may include load balancing, work sharing, or other known mechanisms for dividing a workload between multiple processors. Method4810includes a step4812of receiving, for example via a wireless network, a first indication that a first wearable extended reality appliance is located in an area associated with a virtual speaker. The terms “first indication,” “first wearable extended reality appliance,” “virtual speaker,” and “area associated with the virtual speaker” have a similar meaning as described elsewhere in this disclosure. In some embodiments, the physical space may include a device configured to receive wireless signals (e.g., an input device, such as an integrated computational interface device as described elsewhere in this disclosure) from the first wearable extended reality appliance which may be used to determine when the first wearable extended reality appliance enters the physical space. The first indication may include a wireless signal from GPS, Wi-Fi, Bluetooth®, or other wireless communication functionality included in the first wearable extended reality appliance. Method4810includes a step4814of transmitting to the first wearable extended reality appliance first data corresponding to first sounds associated with the virtual speaker, to thereby enable a first user of the first wearable extended reality appliance to hear the first sounds during a first time period, wherein the first sounds correspond to first settings of the virtual speaker. In some embodiments, the first wearable extended reality appliance may include headphones and/or speakers as described elsewhere in this disclosure. The terms “first data,” “first sounds,” and “first time period” have a similar meaning as described elsewhere in this disclosure. Method4810includes a step4816of receiving input associated with the first wearable extended reality appliance during the first time period, wherein the received input is indicative of second settings for the virtual speaker. The terms “input” and “second settings” have a similar meaning as described elsewhere in this disclosure. In some embodiments, the input may be received in a manner similar as described elsewhere in this disclosure. Method4810includes a step4818of transmitting to the first wearable extended reality appliance second data corresponding to second sounds associated with the virtual speaker, to thereby enable the first user of the first wearable extended reality appliance to hear the second sounds during a second time period, wherein the second sounds correspond to the second settings of the virtual speaker. The terms “second data,” “second sounds,” and “second time period” have a similar meaning as described elsewhere in this disclosure. Method4810includes a step4820of after determining that the first user and the first wearable extended reality appliance left the area associated with the virtual speaker, receiving via the wireless network, a second indication that a second wearable extended reality appliance is located in the area associated with the virtual speaker. The terms “left the area associated with the virtual speaker,” “second indication,” and “second wearable extended reality appliance” have a similar meaning as described elsewhere in this disclosure.
Method4810includes a step4822of transmitting to the second wearable extended reality appliance third data corresponding to third sounds associated with the virtual speaker, to thereby enable a second user of the second wearable extended reality appliance to hear the third sounds during a third time period, wherein the third sounds correspond to the second settings of the virtual speaker. The terms “third data,” “third sounds,” and “third time period” have a similar meaning as described elsewhere in this disclosure. Some embodiments provide a system for tying at least one virtual speaker to a physical space. The system includes at least one processor programmed to receive, via a wireless network, a first indication that a first wearable extended reality appliance is located in an area associated with a virtual speaker; transmit to the first wearable extended reality appliance first data corresponding to first sounds associated with the virtual speaker, to thereby enable a first user of the first wearable extended reality appliance to hear the first sounds during a first time period, wherein the first sounds correspond to first settings of the virtual speaker; receive input associated with the first wearable extended reality appliance during the first time period, wherein the received input is indicative of second settings for the virtual speaker; transmit to the first wearable extended reality appliance second data corresponding to second sounds associated with the virtual speaker, to thereby enable the first user of the first wearable extended reality appliance to hear the second sounds during a second time period, wherein the second sounds correspond to the second settings of the virtual speaker; after determining that the first user and the first wearable extended reality appliance left the area associated with the virtual speaker, receive via the wireless network, a second indication that a second wearable extended reality appliance is located in the area associated with the virtual speaker; and transmit to the second wearable extended reality appliance third data corresponding to third sounds associated with the virtual speaker, to thereby enable a second user of the second wearable extended reality appliance to hear the third sounds during a third time period, wherein the third sounds correspond to the second settings of the virtual speaker. For example, the system may include system200shown inFIG.2. The at least one processor may include processing device360shown inFIG.3and/or processing device460shown inFIG.4. The steps may be performed entirely by processing device360, entirely by processing device460, or jointly by processing device360and processing device460. The cooperation between processing device360and processing device460may include load balancing, work sharing, or other known mechanisms for dividing a workload between multiple processing devices. When using a wearable extended reality appliance to view and/or interact with virtual objects, some virtual objects may be in a field of view while some other virtual objects may be outside the field of view, as explained in more detail below. As the field of view changes (for example, due to head movements of the user or due to changes to display parameters) or the positions of the virtual objects change, virtual objects may enter and/or exit the field of view. Virtual objects may change due to many different triggers, as described in more detail below.
When a virtual object changes while the virtual object is outside the field of view, it may take a while before the user looks in the direction of the virtual object and notices the change. Therefore, it is desirable to provide a notification indicative of the change that is noticeable by the user even when the virtual object is outside the field of view. However, when the virtual object is in the field of view, the change is immediately and directly noticeable, and providing a supplementary notification indicative of the change may cause clutter or an undesired abundance of notifications. Disclosed embodiments, including methods, systems, apparatuses, and non-transitory computer-readable media, may relate to initiating location-driven sensory prompts reflecting changes to virtual objects. Some embodiments involve a non-transitory computer readable medium containing instructions for performing operations configured to initiate location-driven sensory prompts reflecting changes to virtual objects. The term “non-transitory computer readable medium” may be understood as described earlier. The term “instructions” may refer to program code instructions that may be executed by a computer processor. The instructions may be written in any type of computer programming language, such as an interpretive language (e.g., scripting languages such as HTML and JavaScript), a procedural or functional language (e.g., C or Pascal that may be compiled for converting to executable code), object-oriented programming language (e.g., Java or Python), logical programming language (e.g., Prolog or Answer Set Programming), or any other programming language. In some embodiments, the instructions may implement methods associated with machine learning, deep learning, artificial intelligence, digital image processing, and any other computer processing technique. The term “processor” may be understood as described earlier. For example, the at least one processor may be one or more of server210ofFIG.2, mobile communications device206, processing device360ofFIG.3, processing device460ofFIG.4, processing device560ofFIG.5, and the instructions may be stored at any of memory devices212,311,411, or511, or a memory of mobile communications device206. A virtual object may refer to a visual representation rendered by a computing device (e.g., a wearable extended reality appliance) and configured to represent an object. A virtual object may include, for example, an inanimate virtual object, an animate virtual object, virtual furniture, a virtual decorative object, a virtual widget, a virtual screen, or any other type of virtual representation of any object or feature. In some examples, a virtual object may be associated with a communications application, a news application, a gaming application, a timing application, a word-processing application, a data-processing application, a presentation application, a reading application, a browsing application, a messaging application, or any other type of application. A change to a virtual object may include any type of modification, alteration, variation, adjustment, rearrangement, reordering, adaptation, reconstruction, transformation, or revision to the virtual object.
The change to the virtual object may include a change to any aspect of the virtual object, including the appearance of the virtual object, the associated content of the virtual object, the associated functions of the virtual object, the status of the virtual object, the state of the virtual object, the associated data of the virtual object, or any other feature of the virtual object. In some embodiments, the change to the virtual object may include, for example, an incoming message, a received notification, a news update, an occurrence of an event, a request for user action, a received advertisement, a trigger for displaying a user interface, or any other update associated with the virtual object. In some examples, the change to the virtual object may include any action, function, and/or data directed towards the virtual object. A sensory prompt may refer to any indication that may be configured to be sensed by an individual. A sensory prompt may relate to any sense of an individual, such as sight, smell, touch, taste, hearing, or any other ability of an individual to gather information. In some examples, a sensory prompt may be used to provide a notification to a user. For example, a sensory prompt may include a visual notification, an audible notification, or a tactile notification. A computing device may cause a sensory prompt to be generated (e.g., via one or more output devices, such as a screen, a speaker, or a vibrator), based on one or more triggering events, such as a change to a virtual object rendered by the computing device. Initiating location-driven sensory prompts reflecting changes to virtual objects may include causing a sensory prompt reflecting a change to a virtual object based on a location associated with the virtual object. For example, a change to a virtual object may trigger different sensory prompts if the virtual object is located in different locations. The different locations of the virtual object that may cause different sensory prompts may be, for example, differentiated based on a field of view of a wearable extended reality appliance that may cause display of the virtual object. Disclosed embodiments may include, for example, detecting when a change happens to a virtual object outside a current field of view of the wearable extended reality appliance. Based on identifying the change, the wearable extended reality appliance may provide a sensory prompt to notify and/or inform the user of the change to the virtual object. The sensory prompt may be different from a sensory prompt that may be triggered by a change to the virtual object if the virtual object is within the field of view of the wearable extended reality appliance. Some embodiments involve enabling interaction with a virtual object located in an extended reality environment associated with a wearable extended reality appliance. The term “extended reality environment” may be understood as described earlier. The virtual object (also described earlier) may be located in any desired location in the extended reality environment. For example, the virtual object may be displayed (e.g., via the wearable extended reality appliance) as being placed on a physical wall, as being placed on a virtual wall rendered by the wearable extended reality appliance, as being placed on a virtual whiteboard rendered by the wearable extended reality appliance, as being placed (e.g., floating) in a space without being connected to other objects (either physical or virtual) in the space, or in any other desired location. 
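As a non-limiting illustration, the placement of a virtual object in the extended reality environment might be recorded in a structure similar to the following; all names and values are hypothetical and are not drawn from this disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VirtualObjectPlacement:
    """Where a virtual object is located in the extended reality environment."""
    object_id: str
    anchor: str                               # "physical_wall", "virtual_wall", "virtual_whiteboard", or "floating"
    position: tuple                           # (x, y, z) in environment coordinates
    anchor_surface_id: Optional[str] = None   # set when anchored to a surface

email_widget = VirtualObjectPlacement(
    object_id="email-widget-4920",
    anchor="physical_wall",
    position=(1.2, 1.5, 3.0),
    anchor_surface_id="wall-north",
)
print(email_widget.anchor, email_widget.position)
```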
Interaction with the virtual object may refer to any action from a user to the virtual object or from the virtual object to a user. Interaction with the virtual object may include, for example, any action of a user that may interface the virtual object, such as an instruction input by a user to the virtual object, a command provided by a user to the virtual object, a gesture of a user directed to the virtual object, or any other input that may be provided by a user to the virtual object. The action of a user that may interface with the virtual object may be via an input device of a wearable extended reality appliance that may cause display of the virtual object. Additionally or alternatively, interaction with the virtual object may include any action of the virtual object that may interface with a user, such as an output image of the virtual object for a user, an output sound of the virtual object for a user, output text of the virtual object for a user, or any other output that may be provided by the virtual object to a user. The action of the virtual object that may interface a user may be via an output device of a wearable extended reality appliance that may cause display of the virtual object. At least one processor associated with a wearable extended reality appliance may, for example, cause display of a virtual object located in an extended reality environment associated with the wearable extended reality appliance and enable interaction with the virtual object located in the extended reality environment associated with the wearable extended reality appliance. For example, the at least one processor may activate input devices of the wearable extended reality appliance and/or output devices of the wearable extended reality appliance, for a user to interact with the virtual object. Additionally or alternatively, the at least one processor may configure parameters, settings, functions, and/or instructions of a system of the wearable extended reality appliance, to allow a user to interact with the virtual object. FIGS.49,50,51, and52are schematic diagrams illustrating various use snapshots of an example system for initiating sensory prompts for changes based on a field of view consistent with some embodiments of the present disclosure. With reference toFIG.49, a user4910may wear a wearable extended reality appliance4912. The wearable extended reality appliance4912may provide an extended reality environment4914to the user4910. The wearable extended reality appliance4912may cause display of one or more virtual objects4918,4920,4922in the extended reality environment4914. An example of the virtual object4918may be a virtual screen. An example of the virtual object4920may be an icon or widget for an email application. An example of the virtual object4922may be an icon or widget for a clock application. At least one processor associated with the wearable extended reality appliance4912may enable the user4910to interact with the virtual objects4918,4920,4922located in the extended reality environment4914. Some embodiments involve receiving data reflecting a change associated with the virtual object. A change associated with a virtual object may include any type of modification, alteration, variation, adjustment, rearrangement, reordering, adaptation, reconstruction, transformation, or revision associated with the virtual object. 
The change associated with the virtual object may include a change to any aspect of the virtual object, including the appearance of the virtual object, the associated content of the virtual object, the associated functions of the virtual object, the status of the virtual object, the state of the virtual object, the associated data of the virtual object, or any other feature of the virtual object. In some embodiments, the change associated with the virtual object may include, for example, an incoming message, a received notification, a news update, an occurrence of an event, a request for user action, a received advertisement, a trigger for displaying a user interface, or any other update associated with the virtual object. In some examples, the change associated with the virtual object may include any action, function, and/or data directed towards the virtual object. At least one processor associated with the wearable extended reality appliance may, for example, receive data reflecting the change associated with the virtual object. In some examples, the data may be received from another computing device. In some examples, the data may be received from an application, a function, and/or any other entity running on the wearable extended reality appliance. In some examples, the data may be received from an application, a function, and/or any other entity associated with the virtual object. In some examples, the data may be generated by and/or received from the virtual object. Additionally or alternatively, the data may be received from any other desired entity. In some embodiments, the virtual object is associated with a communications application and the change associated with the virtual object involves at least one of an incoming message or a received notification. A communications application may include, for example, an email application, a social network application, an instant messaging application, a phone application, or any other application that may be configured to communicate with another entity. The virtual object may include an icon, a widget, a symbol, a window, or any other user interface for the communications application. At least one processor associated with the wearable extended reality appliance may implement the communications application, and may receive (e.g., from another computing device or any other desired entity) data indicating an incoming message for the communications application and/or a notification for the communications application. The incoming message and/or the notification may include, for example, an email, a post, a text message, a phone call or voice message, or any other information that may be received by the communications application. In some embodiments, the virtual object is associated with a news application and the change associated with the virtual object involves a news update. A news application may refer to any application that may be configured to provide information of current events via one or more of various media (e.g., broadcasting or electronic communication). The virtual object may include an icon, a widget, a symbol, a window, or any other user interface for the news application. At least one processor associated with the wearable extended reality appliance may implement the news application, and may receive (e.g., from another computing device or any other desired entity) data indicating a news update for the news application. 
The news update may include, for example, a real-time news feed, a periodic news update, a triggered news transmission, or any other information associated with news. In some embodiments, the virtual object is associated with a gaming application and the change associated with the virtual object involves an occurrence of an event in the gaming application. A gaming application may refer to any application that may be configured to provide a video game, a computer game, an electronic game, and/or any other user interaction. A virtual object may include an icon, a widget, a symbol, a window, or any other user interface for the gaming application. At least one processor associated with the wearable extended reality appliance may implement the gaming application, and may receive (e.g., from another computing device, the gaming application, or any other desired entity) data indicating an occurrence of an event in the gaming application. An event in the gaming application may include, for example, a message in a game (e.g., from another player), a mission in a game, a notification in a game, a status change in a game, or any other information associated with a game. In some embodiments, the change associated with the virtual object is unscheduled and the data reflecting the change associated with the virtual object is received from a remote server. A remote server may include any computing device that may be configured to transmit information. The remote server and the wearable extended reality appliance may be located in different rooms, in different buildings, in different cities, in different states, in different countries, or in two locations having any desired distance therebetween. The change associated with the virtual object may be not appointed, assigned, or designated for a configured time. For example, the remote server may transmit, to the wearable extended reality appliance, the data reflecting the change associated with the virtual object based on the occurrence of an event that may be not subject to a configured schedule (e.g., an individual sending an email, an individual sending a text message, or a real-time news update). In some examples, the change associated with the virtual object may be unscheduled. For example, the data reflecting the change associated with the virtual object may be received from another local software application, from a local sensor, from a remote software application, from a remote sensor, from a remote processing device, and so forth. Additionally or alternatively, the change associated with the virtual object may be scheduled (e.g., expiration of a timer for a clock application associated with the virtual object, an upcoming scheduled event for a calendar application associated with the virtual object, or a scheduled transmission of an email for an email application associated with the virtual object). The data reflecting the change associated with the virtual object may be received from an application associated with (e.g., underlying, supporting, or corresponding to) the virtual object or from any other desired entity. With reference toFIG.49, at least one processor associated with the wearable extended reality appliance4912may, for example, receive data reflecting a change associated with the virtual object4920. The data reflecting the change associated with the virtual object4920may indicate, for example, that a new email is received by the email application for which the virtual object4920may be the icon or widget. 
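As a non-limiting sketch, the data reflecting a change associated with a virtual object might be represented as follows; all field names and values are hypothetical and are not drawn from this disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VirtualObjectChange:
    """Data reflecting a change associated with a virtual object."""
    object_id: str                 # which virtual object the change concerns
    change_type: str               # e.g., "incoming_message", "news_update", "game_event"
    payload: dict                  # application-specific details of the change
    scheduled: bool = False        # False for unscheduled changes (e.g., a new email)
    source: Optional[str] = None   # e.g., "remote_server", "local_timer"

# An unscheduled change pushed by a remote server: a new email for the mail widget.
change = VirtualObjectChange(
    object_id="email-widget-4920",
    change_type="incoming_message",
    payload={"unread_count": 12},
    scheduled=False,
    source="remote_server",
)
print(change.change_type, change.payload["unread_count"])
```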
Some embodiments involve determining whether the virtual object is within a field of view of the wearable extended reality appliance or is outside the field of view of the wearable extended reality appliance. A field of view may refer to a spatial extent that may be observed or detected at any given moment. For example, a field of view of an entity as an observer or detector may include a solid angle via which the entity may be sensitive to radiation (e.g., visible light, infrared light, or other optical signals). In some examples, a field of view may include an angle of view. A field of view may be measured horizontally, vertically, diagonally, or in any other desired manner. A field of view of the wearable extended reality appliance may refer to, for example, a portion, of the extended reality environment, associated with a display system of the wearable extended reality appliance at a given moment. The display system of the wearable extended reality appliance may include, for example, an optical head-mounted display, a monocular head-mounted display, a binocular head-mounted display, a see-through head-mounted display, a helmet-mounted display, or any other type of device configured to show images to a user. The portion of the extended reality environment may include a region or space where virtual content may be displayed by the display system of the wearable extended reality appliance at a given moment. Additionally or alternatively, the field of view of the wearable extended reality appliance may refer to a regional or spatial extent to which virtual content may be displayed by the display system of the wearable extended reality appliance at a given moment. In some examples, from the perspective of a user wearing the wearable extended reality appliance, the field of view of the wearable extended reality appliance may be a solid angle via which the user may view, perceive, observe, detect, and/or be sensitive to virtual content as displayed (e.g., projected or radiated) by the display system of the wearable extended reality appliance (e.g., at a given moment). At least one processor associated with the wearable extended reality appliance may determine whether the virtual object located in the extended reality environment is within the field of view of the wearable extended reality appliance or is outside the field of view of the wearable extended reality appliance, for example, using a ray casting algorithm, using a rasterization algorithm, using a ray tracking algorithm, and so forth. For example, a user wearing the wearable extended reality appliance may be not able to view the virtual object if the virtual object is outside the field of view of the wearable extended reality appliance, as the virtual object may be not displayed by the display system of the wearable extended reality appliance at the given moment. A user wearing the wearable extended reality appliance may be able to view the virtual object if the virtual object is within the field of view of the wearable extended reality appliance, as the virtual object may be displayed by the display system of the wearable extended reality appliance at the given moment. 
The determination of whether the virtual object is within or outside the field of view of the wearable extended reality appliance may be made, for example, at or around the time of the change associated with the virtual object, at or around the time of initiating a sensory prompt for the change (e.g., the first or second sensory prompt as described herein), at or around a time between the change and the initiating of the sensory prompt, at or around a selected time before the change, and/or at or around any other desired time for making the determination. In some examples, the determination of whether the virtual object is within or outside the field of view of the wearable extended reality appliance may be made in response to the receiving of the data reflecting the change associated with the virtual object. In some examples, the determination of whether the virtual object is within or outside the field of view of the wearable extended reality appliance may be made in preparation for the initiating of a sensory prompt for the change associated with the virtual object. The at least one processor associated with the wearable extended reality appliance may determine that the virtual object is within the field of view of the wearable extended reality appliance, for example, if a certain percentage larger than zero percent (e.g., 0.1 percent, 1 percent, 20 percent, 50 percent, or 100 percent) of the virtual object is within the field of view of the wearable extended reality appliance. The at least one processor may determine that the virtual object is outside the field of view of the wearable extended reality appliance, for example, if the virtual object does not have the certain percentage within the field of view of the wearable extended reality appliance. In some examples, the at least one processor associated with the wearable extended reality appliance may determine whether the virtual object is within or outside the field of view of the wearable extended reality appliance based on determining whether a particular point on the virtual object is within or outside the field of view of the wearable extended reality appliance. Additionally or alternatively, the at least one processor associated with the wearable extended reality appliance may determine whether the virtual object is within or outside the field of view of the wearable extended reality appliance in any other desired manner. With reference toFIG.49, a field of view4916of the wearable extended reality appliance4912may be associated with a display system of the wearable extended reality appliance4912. The field of view4916may move and/or rotate as the display system moves and/or rotates. At a given moment, the display system may cause display, to the user4910, of virtual content within the field of view4916and may not cause display, to the user4910, of virtual content outside the field of view4916. The field of view4916may be a region or space in the extended reality environment4914and/or may be a solid angle from a point of observation by the user4910(e.g., the eye(s) of the user4910and/or a point in the area of or nearby the eye(s) of the user4910). At least one processor associated with the wearable extended reality appliance4912may determine whether the virtual object4920is within the field of view4916or is outside the field of view4916, for example, based on (e.g., in response to) the receiving of the data reflecting the change associated with the virtual object4920. 
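By way of a non-limiting illustration of the particular-point approach mentioned above, the following sketch tests whether a representative point of a virtual object lies inside a field of view modeled as horizontal and vertical half-angles around the viewing direction; the function name, parameters, and angle values are hypothetical and are not drawn from this disclosure.

```python
import math

def point_in_field_of_view(eye_pos, view_dir, point,
                           horizontal_fov_deg=90.0, vertical_fov_deg=60.0) -> bool:
    """Check whether a representative point of a virtual object falls inside the
    appliance's field of view, modeled as angular extents around the viewing
    direction (view_dir is assumed to be a unit vector)."""
    # Vector from the point of observation to the virtual object's point.
    to_point = [p - e for p, e in zip(point, eye_pos)]
    norm = math.sqrt(sum(c * c for c in to_point))
    if norm == 0:
        return True
    to_point = [c / norm for c in to_point]
    # Horizontal angle (x/z plane) and vertical angle (elevation) relative to view_dir.
    yaw_view = math.atan2(view_dir[0], view_dir[2])
    yaw_point = math.atan2(to_point[0], to_point[2])
    pitch_view = math.asin(view_dir[1])
    pitch_point = math.asin(to_point[1])
    d_yaw = math.degrees(abs((yaw_point - yaw_view + math.pi) % (2 * math.pi) - math.pi))
    d_pitch = math.degrees(abs(pitch_point - pitch_view))
    return d_yaw <= horizontal_fov_deg / 2 and d_pitch <= vertical_fov_deg / 2

eye = (0.0, 0.0, 0.0)
forward = (0.0, 0.0, 1.0)                                      # looking along +z
print(point_in_field_of_view(eye, forward, (0.5, 0.0, 2.0)))   # True: roughly ahead
print(point_in_field_of_view(eye, forward, (0.0, 0.0, -2.0)))  # False: behind the user
```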
In some examples, with reference toFIG.49, the at least one processor associated with the wearable extended reality appliance4912may determine that the virtual object4920is within the field of view4916. In some examples, with reference toFIG.51(which may illustrate one or more elements as described in connection withFIG.49), the at least one processor associated with the wearable extended reality appliance4912may determine that the virtual object4920is outside the field of view4916. Some embodiments involve causing the wearable extended reality appliance to initiate a first sensory prompt indicative of the change associated with the virtual object when the virtual object is determined to be within the field of view. A sensory prompt may refer to any indication that may be configured to be sensed by an individual. A sensory prompt may relate to any sense of an individual, such as sight, smell, touch, taste, hearing, or any other ability of an individual to gather information. In some examples, a sensory prompt may be used to provide a notification to a user. For example, a sensory prompt may include a visual notification, an audible notification, or a tactile notification. A computing device may cause a sensory prompt to be generated (e.g., via one or more output devices, such as a screen, a speaker, or a vibrator), based on one or more triggering events, such as a change associated with a virtual object rendered by the computing device. At least one processor associated with the wearable extended reality appliance may cause the wearable extended reality appliance to initiate a first sensory prompt indicative of the change associated with the virtual object located in the extended reality environment when the virtual object is determined to be within the field of view of the wearable extended reality appliance. Initiation of the first sensory prompt may be based on receipt of data reflecting the change associated with the virtual object and/or may be based on a determination that the virtual object is within the field of view of the wearable extended reality appliance. The first sensory prompt may include, for example, a notification (e.g., visual, audible, or tactile) that may indicate the change associated with the virtual object. The first sensory prompt may include, for example, a popup notification on a physical or virtual screen, a change of the appearance of the virtual object (e.g., adding or changing a mark, a red dot, a number, or any other indication on the virtual object), a popup virtual object as a notification, or any other desired indication. The first sensory prompt may include, for example, a preview of at least a portion of the content of the change associated with the virtual object, a summary of the content of the change associated with the virtual object, an indication of the existence of the change associated with the virtual object, and/or an indication with any desired level of detail of the change associated with the virtual object. With reference toFIG.50(which may illustrate a use snapshot based on the examples as described in connection withFIG.49), at least one processor associated with the wearable extended reality appliance4912may cause the wearable extended reality appliance4912to initiate a first sensory prompt indicative of the change associated with the virtual object4920when the virtual object4920is determined to be within the field of view4916. 
The initiating of the first sensory prompt may be based on (e.g., in response to) the receiving of the data reflecting the change associated with the virtual object4920and/or the determination that the virtual object4920is within the field of view4916. The first sensory prompt may include a change of the appearance of the virtual object4920as displayed by the wearable extended reality appliance4912. For example, appearing on the virtual object4920, a number indicating a quantity of emails received by the email application for which the virtual object4920may be the icon or widget may change from “11” (as shown inFIG.49) to “12” (as shown inFIG.50). The first sensory prompt (e.g., the change of the number from “11” to “12”) may indicate that a new email is received by the email application for which the virtual object4920may be the icon or widget. Some embodiments involve causing the wearable extended reality appliance to initiate a second sensory prompt indicative of the change associated with the virtual object when the virtual object is determined to be outside the field of view. In some embodiments, the second sensory prompt differs from the first sensory prompt. At least one processor associated with the wearable extended reality appliance may cause the wearable extended reality appliance to initiate a second sensory prompt indicative of the change associated with the virtual object located in the extended reality environment when the virtual object is determined to be outside the field of view of the wearable extended reality appliance. The initiation of the second sensory prompt may be based on the receipt of data reflecting a change associated with the virtual object and/or may be based on the determination that the virtual object is outside the field of view of the wearable extended reality appliance. The second sensory prompt may include, for example, a notification (e.g., visual, audible, or tactile) that may indicate the change associated with the virtual object. Additionally or alternatively, the second sensory prompt may include, for example, a popup notification on a physical or virtual screen, a change of the appearance of the virtual object (e.g., adding or changing a mark, a red dot, a number, or any other indication on the virtual object), a popup virtual object as a notification, or any other desired indication. The second sensory prompt may further include, for example, a preview of at least a portion of the content of the change associated with the virtual object, a summary of the content of the change associated with the virtual object, an indication of the existence of the change associated with the virtual object, and/or an indication with any desired level of detail of the change associated with the virtual object. The second sensory prompt may be different from the first sensory prompt (e.g., for the same change associated with the virtual object or for the same received data reflecting the change associated with the virtual object). For example, for the change associated with the virtual object or the received data reflecting the change, the at least one processor associated with the wearable extended reality appliance may initiate the first sensory prompt or the second sensory prompt based on whether the virtual object is within or outside the field of view of the wearable extended reality appliance. 
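A minimal, non-limiting sketch of this selection between the two prompts is shown below; the prompt contents chosen here (a badge update when the object is in view, an enlargement plus a notification sound when it is not) are hypothetical examples and are not drawn from this disclosure.

```python
def initiate_sensory_prompt(object_in_view: bool, change_summary: str) -> dict:
    """Choose a sensory prompt for a change to a virtual object, depending on
    whether the object is currently within the field of view."""
    if object_in_view:
        # First sensory prompt: a subtle in-place visual cue (e.g., badge update).
        return {"type": "visual", "action": "update_badge", "detail": change_summary}
    # Second sensory prompt: a more salient combination, since the object is not visible.
    return {
        "type": ["visual", "audible"],
        "action": ["enlarge_object", "play_notification_sound"],
        "detail": change_summary,
    }

print(initiate_sensory_prompt(True, "1 new email"))
print(initiate_sensory_prompt(False, "1 new email"))
```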
The first sensory prompt and the second sensory prompt may be different in terms of a type or category of a sensory prompt, a quantity of sensory prompting items, a degree of sensory prompting effect (e.g., disturbance), content of a sensory prompt, or any other aspect of a sensory prompt. For example, the first sensory prompt may be a visual notification, and the second sensory prompt may be an audible notification. As another example, the first sensory prompt may include one item of visual notification (e.g., a change of the appearance of the virtual object), and the second sensory prompt may include two or more items of visual notification (e.g., a popup notification window and a change of the appearance and/or location of the virtual object). As another example, the first sensory prompt may be an audible notification with a lower degree of loudness, and the second sensory prompt may be an audible notification with a higher degree of loudness. As another example, the first sensory prompt may be a visual notification with first content (e.g., a change of the appearance of the virtual object), and the second sensory prompt may be a visual notification with second content (e.g., a popup notification window with a preview of the content of the change associated with the virtual object). Additionally or alternatively, the first sensory prompt and the second sensory prompt may be different in any other desired manner. With reference toFIG.52(which may illustrate a use snapshot based on the examples as described in connection withFIG.51), at least one processor associated with the wearable extended reality appliance4912may cause the wearable extended reality appliance4912to initiate a second sensory prompt indicative of the change associated with the virtual object4920when the virtual object4920is determined to be outside the field of view4916(as shown inFIG.51). The initiating of the second sensory prompt may be based on (e.g., in response to) the receiving of the data reflecting the change associated with the virtual object4920and/or the determination that the virtual object4920is outside the field of view4916(as shown inFIG.51). With reference toFIG.52, the second sensory prompt may include a change of the size of the virtual object4920. For example, the size of the virtual object4920may be expanded (e.g., by 150 percent, by 200 percent, by 300 percent, or to any other desired degree). In some examples, expanding the size of the virtual object4920may make at least a portion of the virtual object4920enter the field of view4916, and thus cause at least the portion to be displayed by the wearable extended reality appliance4912to the user4910. Additionally or alternatively, the second sensory prompt may include a change of the appearance of the virtual object4920. For example, on the virtual object4920, a number indicating a quantity of emails received by the email application for which the virtual object4920may be the icon or widget may change from “11” (as shown inFIG.51) to “12” (as shown inFIG.52). Additionally or alternatively, the second sensory prompt may include an audible notification. For example, at least one processor associated with the wearable extended reality appliance4912may cause an audible notification5210to be generated (e.g., using one or more speakers associated with the wearable extended reality appliance4912). The audible notification5210may sound as originating from the current location of the virtual object4920. 
The audible notification5210may include, for example, a beep, a tone, an audio segment, an audio message, an audio associated with the change associated with the virtual object4920, or any other desired audio. The second sensory prompt (e.g., including the expanding of the size of the virtual object4920, the change of the number from “11” to “12,” and the audible notification5210) may indicate that a new email is received by the email application for which the virtual object4920may be the icon or widget. The second sensory prompt (e.g., including the expanding of the size of the virtual object4920, the change of the number from “11” to “12,” and the audible notification5210) may differ from the first sensory prompt (e.g., including the change of the number from “11” to “12”). In some embodiments, the first sensory prompt includes at least one of a visual notification, an audible notification, or a tactile notification, and the second sensory prompt includes at least two of a visual notification, an audible notification, or a tactile notification. A visual notification may include any indication that may be viewed by a user. An audible notification may include any indication that may be heard by a user. A tactile notification may include any indication that may be perceived by a user with the sense of touch. In some examples, the second sensory prompt may include a larger quantity of notifications than the first sensory prompt. In some examples, the second sensory prompt may include a same quantity of notifications as the first sensory prompt. In some examples, the second sensory prompt may include a smaller quantity of notifications than the first sensory prompt. In some embodiments, the second sensory prompt causes the virtual object to move, and the first sensory prompt causes the virtual object to change its appearance without moving. For example, the change of the appearance of the virtual object without moving (e.g., associated with the first sensory prompt) may include, for example, adding or changing a mark, a red dot, a number, or any other indication on the virtual object, without changing the location of the virtual object in the extended reality environment. Causing the virtual object to move (e.g., associated with the second sensory prompt) may include, for example, changing the location of the virtual object in the extended reality environment. In the second sensory prompt, the virtual object may be caused to move in any desired direction. In some embodiments, causing the virtual object to move includes causing the virtual object to temporarily appear in the field of view of the wearable extended reality appliance. For example, the virtual object may move in a direction towards the field of view of the wearable extended reality appliance, across the boundary of the field of view, and into the field of view. In some examples, the virtual object may move into the field of view along a surface on which the virtual object may be placed (e.g., a physical or virtual wall, a physical or virtual whiteboard, or any other desired surface). Additionally or alternatively, the virtual object may move in a direction, towards the field of view, that may connect the location of the virtual object and a point, on the boundary of the field of view, that may be in proximity to (e.g., closest as measured in the space of the extended reality environment, or closest as measured on a particular surface in the extended reality environment) the location of the virtual object.
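By way of a non-limiting illustration only, the following sketch (in Python, assuming simplified two-dimensional coordinates on the shared surface and hypothetical inset and dwell values) shows one possible way the closest point on the field-of-view boundary could be computed and the virtual object temporarily moved just inside the field of view, as described above:

# Hypothetical sketch; coordinates, inset, and dwell time are illustrative only.
def clamp(value: float, low: float, high: float) -> float:
    return max(low, min(high, value))

def closest_point_on_fov_boundary(obj_xy, fov_rect):
    """fov_rect = (x_min, y_min, x_max, y_max) of the field of view on the shared surface."""
    x_min, y_min, x_max, y_max = fov_rect
    return (clamp(obj_xy[0], x_min, x_max), clamp(obj_xy[1], y_min, y_max))

def move_temporarily_into_view(obj_xy, fov_rect, inset=0.05, dwell_seconds=2.0):
    """Target a position just inside the nearest boundary point and a temporary dwell time."""
    bx, by = closest_point_on_fov_boundary(obj_xy, fov_rect)
    x_min, y_min, x_max, y_max = fov_rect
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    # Nudge the boundary point slightly toward the interior of the field of view.
    tx = bx + inset * (1 if cx > bx else -1 if cx < bx else 0)
    ty = by + inset * (1 if cy > by else -1 if cy < by else 0)
    return (tx, ty), dwell_seconds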
In some examples, the virtual object may move to a location, in the field of view, that may be in proximity to the boundary of the field of view (e.g., be in proximity to the point on the boundary). In some examples, the virtual object may move to any desired location in the field of view. The virtual object may stay at the location in the field of view for a temporary period of time (e.g., 0.5 seconds, 1 second, 2 seconds, 3 seconds, 5 seconds, 10 seconds, or any other desired time). Additionally or alternatively, the virtual object may stay at the location in the field of view until a user interacts with the virtual object. In some embodiments, when the field of view of the wearable extended reality appliance includes a virtual screen, the second sensory prompt causes a popup notification to be displayed on the virtual screen, and the first sensory prompt causes the virtual object to change its appearance in an absence of a popup notification on the virtual screen. A virtual screen may include a virtual object that may resemble a physical screen. The virtual screen may be rendered by the wearable extended reality appliance and may be displayed in the field of view of the wearable extended reality appliance. The popup notification (e.g., associated with the second sensory prompt) may include, for example, any indication that may, in response to a triggering event, promptly appear in the foreground of the visual interface rendered by the wearable extended reality appliance. The popup notification may appear in any desired location on the virtual screen. In some examples, the popup notification may indicate information of the change associated with the virtual object, such as a preview of the content of the change associated with the virtual object, a summary of the content of the change associated with the virtual object, the existence of the change associated with the virtual object, or any other desired information. The change of the appearance of the virtual object (e.g., associated with the first sensory prompt) may include, for example, adding or changing a mark, a red dot, a number, or any other indication on the virtual object. In connection with the change of the appearance of the virtual object (e.g., associated with the first sensory prompt), the wearable extended reality appliance may not cause display of a popup notification on the virtual screen. In some embodiments, the second sensory prompt is indicative of a location of the virtual object outside the field of view of the wearable extended reality appliance. For example, the second sensory prompt may be indicative of a direction between the location of the virtual object and the field of view or a position or area within the field of view (e.g., by associating the second sensory prompt with an edge or location of the field of view closest to the location of the virtual object, by associating the second sensory prompt with a motion directed away from the location of the virtual object, by associating the second sensory prompt with motion directed towards the location of the virtual object, and/or in any other desired manner). In some examples, the second sensory prompt may be indicative of a distance between the location of the virtual object and the field of view or a position or area within the field of view. 
For example, one or more aspects (e.g., the intensity, the level of loudness, or the level of detail of the content) of the second sensory prompt may be configured based on (e.g., proportional to or inversely proportional to) the distance. In some embodiments, the second sensory prompt includes an audible output configured to appear as originating from the location of the virtual object outside the field of view of the wearable extended reality appliance. An audible output may include, for example, any indication that may be heard by a user. At least one processor associated with the wearable extended reality appliance may cause the audible output to be generated using a method that may create a directional audible perspective. For example, stereophonic sound may be used to generate the audible output of the second sensory prompt, so that a user of the wearable extended reality appliance may perceive the audible output as originating from the location of the virtual object outside the field of view of the wearable extended reality appliance. In some examples, the first sensory prompt may not include an audible output configured to appear as originating from the location of the virtual object. In some examples, both the first sensory prompt and the second sensory prompt may include audible outputs configured to appear as originating from the location of the virtual object, and the audible output of the first sensory prompt may differ from the audible output of the second sensory prompt in at least one of a tone, a volume, a pitch, a duration, or any other aspect of an audible output. For example, the audible output of the second sensory prompt may be more intense than the audible output of the first sensory prompt. In some embodiments, the second sensory prompt includes a visual output configured to appear as originating from the location of the virtual object outside the field of view of the wearable extended reality appliance. A visual output may include, for example, any indication that may be viewed by a user. The visual output may include, for example, ripples originating from the location of the virtual object and entering the field of view, a moving arrow or object originating from the location of the virtual object and entering the field of view, or any other visual indication that may be displayed in the field of view as originating from the location of the virtual object. In some examples, the motion of the visual output may be along a surface on which the virtual object may be placed. In some examples, the motion of the visual output may be along a path that may be associated with proximity (e.g., closest) between the location of the virtual object and the field of view. Additionally or alternatively, the motion of the visual output may be determined in any other desired manner. In some examples, the first sensory prompt may not include a visual output configured to appear as originating from the location of the virtual object. In some examples, both the first sensory prompt and the second sensory prompt may include visual outputs configured to appear as originating from the location of the virtual object, and the visual output of the first sensory prompt may visually differ from the visual output of the second sensory prompt. For example, the visual output of the second sensory prompt may be remote from the location of the virtual object while the visual output of the first sensory prompt may be local to the location of the virtual object. 
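By way of a non-limiting illustration only, the following sketch (in Python, with a hypothetical constant-power panning model and distance roll-off that are not taken from the disclosure) shows one possible way the audible output of the second sensory prompt could be made to appear as originating from the object's location, with loudness scaled by the distance between the object and the field of view:

# Hypothetical sketch; the panning model and distance roll-off are illustrative only.
import math

def stereo_gains(azimuth_deg: float, distance: float, base_volume: float = 0.8):
    """Constant-power panning from the object's azimuth; loudness falls off with distance."""
    pan = max(-1.0, min(1.0, azimuth_deg / 90.0))       # -1 = far left, +1 = far right
    angle = (pan + 1.0) * math.pi / 4.0                 # 0 .. pi/2
    volume = base_volume / (1.0 + distance)             # simple distance roll-off
    return volume * math.cos(angle), volume * math.sin(angle)  # (left gain, right gain)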
In another example, the visual output of the second sensory prompt may move in the extended reality environment while the visual output of the first sensory prompt may be local to a specific area of the extended reality environment (such as the area of the virtual object). In yet another example, the visual output of the second sensory prompt may include one graphical indication, while the visual output of the first sensory prompt may include a different graphical indication. In an additional example, the visual output of the second sensory prompt may include textual information, while the visual output of the first sensory prompt may include different textual information. Some embodiments involve estimating an importance of the change associated with the virtual object, and determining a degree of disturbance corresponding to the second sensory prompt based on the estimated importance of the change. For example, at least one processor associated with the wearable extended reality appliance may estimate the importance of the change associated with the virtual object based on one or more of various factors, such as a message sender associated with the change, an indicator of importance in a message associated with the change, a date and time of the change, or any other relevant factor. In some examples, to estimate the importance of the change, the at least one processor may analyze the content of the change, for example, by using natural language processing algorithms, voice recognition algorithms, or any other desired method. The at least one processor may determine a degree of disturbance corresponding to the second sensory prompt based on the estimated importance of the change. The degree of disturbance corresponding to the second sensory prompt may include, for example, a degree of loudness of an audible output, a displayed size of a visual output, a vibration amplitude of a tactile output, an amount of the content of an output, or any other measurement of interruption of the second sensory prompt. In some examples, the degree of disturbance may be proportional to the estimated importance of the change. Some embodiments involve receiving image data; analyzing the image data to determine an activity of a user of the wearable extended reality appliance; estimating a relevancy level of the change associated with the virtual object based on the determined activity of the user; and determining a degree of disturbance corresponding to the second sensory prompt based on the relevancy level of the change. In some examples, image data may be captured using an image sensor included in or separate from the wearable extended reality appliance. The activity of the user may be a physical activity of the user. The analysis of the image data may include usage of a visual activity recognition algorithm or a gesture recognition algorithm. Such algorithms may determine degrees/amounts of bodily movement in image data by identifying a body or parts thereof in pixels and measuring pixel movement. In some examples, the physical activity may include an interaction with a physical object. An object recognition algorithm may be used to identify a type of the physical object, and the relevancy level may be determined based on the type of the physical object (e.g., based on an affinity between the type of the physical object and the virtual object or an affinity between the type of the physical object and the change associated with the virtual object). 
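By way of a non-limiting illustration only, the following sketch (in Python, with hypothetical signals and weights) shows one possible way the importance of a change could be estimated and mapped proportionally onto a degree of disturbance for the second sensory prompt, as described above:

# Hypothetical sketch; the importance signals, weights, and output fields are illustrative only.
def estimate_importance(change: dict) -> float:
    """Combine a few illustrative signals into a 0..1 importance score."""
    score = 0.0
    if change.get("sender") in change.get("priority_senders", []):
        score += 0.4
    if change.get("flagged_important"):
        score += 0.4
    if change.get("mentions_user"):
        score += 0.2
    return min(score, 1.0)

def degree_of_disturbance(importance: float) -> dict:
    """Map the estimated importance proportionally onto loudness and visual prominence."""
    return {
        "volume": importance,                 # louder for more important changes
        "popup_size_level": round(3 * importance),
        "show_content_preview": importance >= 0.5,
    }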
In some examples, the physical activity of the user may include an interaction with a second virtual object using hand gestures. The relevancy level may be determined based on the second virtual object (e.g., based on an affinity between the second virtual object and the virtual object or an affinity between the second virtual object and the change associated with the virtual object). The degree of disturbance corresponding to the second sensory prompt may include, for example, a degree of loudness of an audible output, a displayed size of a visual output, a vibration amplitude of a tactile output, an amount of the content of an output, or any other measurement of interruption of the second sensory prompt. In some examples, the degree of disturbance may be proportional to the relevancy level of the change. For example, if the physical object or the second virtual object is of a work type (e.g., a physical book, a physical file, a word processing program, a spreadsheet program, or a presentation program), and the virtual object or the change associated with the virtual object is of a gaming type (e.g., a video game), the relevancy level of the change may be determined to be low. If the physical object or the second virtual object is of a work type (e.g., a physical book, a physical file, a word processing application, a spreadsheet application, or a presentation application), and the virtual object or the change associated with the virtual object is of a work type (e.g., an email application), the relevancy level of the change may be determined to be high. Some embodiments involve accessing a group of rules associating degrees of disturbance with degrees of virtual object changes, determining that the change associated with the virtual object corresponds to a specific rule of the group of rules, and implementing the specific rule to set a degree of disturbance corresponding to the second sensory prompt. For example, at least one processor associated with the wearable extended reality appliance may store the group of rules in a memory associated with the wearable extended reality appliance. The at least one processor may access the group of rules, for example, in preparation for initiating a sensory prompt (e.g., the second sensory prompt) for the change associated with the virtual object. The degree of disturbance corresponding to the second sensory prompt may include, for example, a degree of loudness of an audible output, a displayed size of a visual output, a vibration amplitude of a tactile output, an amount of the content of an output, or any other measurement of interruption of the second sensory prompt. The group of rules may map various virtual object changes (and/or the degrees of the changes) to corresponding degrees of disturbance of a sensory prompt. For example, each rule in the group of rules may indicate a virtual object change (and/or a degree of the change) and a corresponding degree of disturbance of a sensory prompt for the change. The group of rules may be configured, for example, by a user, an administrator, or any other desired entity. Based on accessing the group of rules, the at least one processor may, for example, search in the group of rules using an identifier of the change associated with the virtual object as a search key. The at least one processor may identify the specific rule based on the searching, and may implement the specific rule to set the degree of disturbance corresponding to the second sensory prompt. 
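By way of a non-limiting illustration only, the following sketch (in Python, with hypothetical rule keys and values) shows one possible form of a group of rules associating virtual object changes with degrees of disturbance, searched using a change identifier as the key:

# Hypothetical sketch; the rule keys, values, and default are illustrative only.
DISTURBANCE_RULES = {
    ("email_app", "new_message"): {"volume": 0.3, "popup": False},
    ("email_app", "new_urgent_message"): {"volume": 0.8, "popup": True},
    ("calendar_app", "meeting_starting"): {"volume": 1.0, "popup": True},
}
DEFAULT_RULE = {"volume": 0.2, "popup": False}

def set_disturbance(change_identifier: tuple) -> dict:
    """Search the group of rules using the change identifier as a key; fall back to a default."""
    return DISTURBANCE_RULES.get(change_identifier, DEFAULT_RULE)

# Example: an urgent email maps to a louder prompt with a popup.
rule = set_disturbance(("email_app", "new_urgent_message"))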
The degree of disturbance corresponding to the second sensory prompt may be set to, for example, the degree of disturbance as indicated in the specific rule. Some embodiments involve halting the second sensory prompt upon detection of a trigger. For example, at least one processor associated with the extended reality appliance may halt (e.g., stop, suspend, or end) the second sensory prompt based on detecting a trigger. The trigger may include, for example, user interaction with and/or user attention to the virtual object, the change associated with the virtual object, and/or the second sensory prompt, a command to halt the second sensory prompt, a command to mute notifications, an expiration of a timer for displaying the second sensory prompt, or any other desired event. In some embodiments, detection of the trigger includes identifying entry of the virtual object into the field of view of the wearable extended reality appliance. For example, based on the second sensory prompt, a user of the wearable extended reality appliance may direct the user's attention to the virtual object, for example, by moving and/or rotating the wearable extended reality appliance so that the field of view of the wearable extended reality appliance may cover the location of the virtual object and the virtual object may enter into the field of view. At least one processor associated with the wearable extended reality appliance may detect the entry of the virtual object into the field of view and may, based on the detected entry, halt the second sensory prompt. Some embodiments involve analyzing input received from a sensor associated with the wearable extended reality appliance to detect the trigger for halting the second sensory prompt. The sensor associated with the wearable extended reality appliance may include, for example, an image sensor, an eye-tracking sensor, a sensor for tracking head-motion, a sensor included in or associated with an input device, or any other desired device for receiving information from a user. At least one processor associated with the wearable extended reality appliance may analyze input received from the sensor to detect the trigger for halting the second sensory prompt. The analysis of the input to detect the trigger may include, for example, detecting user interaction with and/or user attention to the virtual object, the change associated with the virtual object, and/or the second sensory prompt, detecting a command to halt the second sensory prompt, detecting a command to mute notifications, or detecting any other desired event as the trigger. Some embodiments involve, after the wearable extended reality appliance initiated the second sensory prompt, upon entry of the virtual object into the field of view of the wearable extended reality appliance, causing the wearable extended reality appliance to initiate the first sensory prompt indicative of the change associated with the virtual object. For example, based on the second sensory prompt, a user of the wearable extended reality appliance may direct the user's attention to the virtual object, for example, by moving and/or rotating the wearable extended reality appliance so that the field of view of the wearable extended reality appliance may cover the location of the virtual object and the virtual object may enter into the field of view. 
At least one processor associated with the wearable extended reality appliance may detect the entry of the virtual object into the field of view and may, based on the detected entry, cause the wearable extended reality appliance to initiate the first sensory prompt indicative of the change associated with the virtual object (and may halt the second sensory prompt). For example, the second sensory prompt may include a displayed virtual arrow towards the location of the virtual object, or a displayed illustration (e.g., a preview visual notification) of the change associated with the virtual object (e.g., a received email), and the first sensory prompt may include a change of the appearance of the virtual object (e.g., a change of a number, shown on the virtual object, indicating the quantity of received emails). Some embodiments involve receiving real-time movement data associated with the wearable extended reality appliance; analyzing the real-time movement data to determine a prospective entrance of the virtual object into the field of view of the wearable extended reality appliance; and in response to the determined prospective entrance of the virtual object into the field of view of the wearable extended reality appliance, withholding causing the wearable extended reality appliance to initiate the second sensory prompt. The real-time movement data may indicate movement of the wearable extended reality appliance and/or a component or element of the wearable extended reality appliance. The real-time movement data may include, for example, data captured using an inertia sensor (such as an accelerometer and/or a gyroscope) included in or separate from the wearable extended reality appliance. Additionally or alternatively, the real-time movement data may be obtained by analyzing images captured using an image sensor included in or separate from the wearable extended reality appliance (e.g., using an ego-motion algorithm). In some examples, the real-time movement data may be captured repeatedly, continuously, or periodically using one or more of various desired sensors and may be processed in real-time. At least one processor associated with the wearable extended reality appliance may analyze the real-time movement data to determine the prospective entrance of the virtual object into the field of view of the wearable extended reality appliance. The at least one processor may determine that a current movement may cause a prospective entrance of the virtual object into the field of view, for example, if the virtual object may be predicted to enter the field of view within a selected time period (e.g., 0.05 seconds, 0.1 seconds, 0.2 seconds, 0.3 seconds, or any other desired time) from the current time. The at least one processor may make the determination, for example, based on the current position and/or orientation of the wearable extended reality appliance and a current speed and direction of moving and/or rotating of the wearable extended reality appliance. The at least one processor may make the determination (e.g., repeatedly, continuously, or periodically), for example, when the virtual object may be currently outside the field of view. In some examples, the at least one processor may use historical data to predict whether there may be a prospective entrance of the virtual object into the field of view. 
For example, the at least one processor may store parameter information (e.g., the position of the wearable extended reality appliance, the orientation of the wearable extended reality appliance, the speed of moving and/or rotating of the wearable extended reality appliance, the direction of moving and/or rotating of the wearable extended reality appliance, and/or any other relevant information) during a time period before a detected actual entrance of the virtual object into the field of view, and may compare the stored parameter information with currently gathered parameter information (e.g., for the time period before the current time). Based on the comparison, the at least one processor may determine (e.g., predict) a prospective entrance of the virtual object into the field of view, for example, if a degree of similarity between the stored parameter information and the currently gathered parameter information (e.g., a confidence score) satisfies (e.g., meets or exceeds) a configured or selected threshold level of similarity. Additionally or alternatively, the at least one processor may determine (e.g., predict) a prospective entrance of the virtual object into the field of view in any other desired manner. In response to the determined prospective entrance of the virtual object into the field of view of the wearable extended reality appliance, the at least one processor associated with the wearable extended reality appliance may withhold causing the wearable extended reality appliance to initiate the second sensory prompt. The withholding may last for any desired time period (e.g., 0.05 seconds, 0.1 seconds, 0.2 seconds, 0.3 seconds, or any other desired time). In some examples, the withholding may last for a time period that may be same as or may approximate the time period within which the virtual object may be predicted (e.g., by the at least one processor) to enter the field of view. If the virtual object does not enter the field of view during the withholding time period, the at least one processor may, based on the expiration of the withholding time period, cause the wearable extended reality appliance to initiate the second sensory prompt. If the virtual object enters the field of view during the withholding time period, the at least one processor may, based on the entrance of the virtual object into the field of view, cause the wearable extended reality appliance to initiate the first sensory prompt. Some embodiments involve a system for initiating location-driven sensory prompts reflecting changes to virtual objects, the system comprising at least one processor programmed to: enable interaction with a virtual object located in an extended reality environment associated with a wearable extended reality appliance; receive data reflecting a change associated with the virtual object; determine whether the virtual object is within a field of view of the wearable extended reality appliance or is outside the field of view of the wearable extended reality appliance; cause the wearable extended reality appliance to initiate a first sensory prompt indicative of the change associated with the virtual object when the virtual object is determined to be within the field of view; and cause the wearable extended reality appliance to initiate a second sensory prompt indicative of the change associated with the virtual object when the virtual object is determined to be outside the field of view, wherein the second sensory prompt differs from the first sensory prompt. 
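By way of a non-limiting illustration only, the following sketch (in Python, using a simplified angular model and a hypothetical prediction horizon) shows one possible way a prospective entrance of the virtual object into the field of view might be predicted from current head motion, with the second sensory prompt withheld while that entrance is expected:

# Hypothetical sketch; the angular model, horizon, and returned strings are illustrative only.
def predicted_entry_time(angle_to_object_deg: float, half_fov_deg: float,
                         angular_speed_deg_s: float):
    """Seconds until the object is expected to cross into the field of view, or None."""
    gap = angle_to_object_deg - half_fov_deg      # angular distance outside the view
    if gap <= 0:
        return 0.0                                # already inside the field of view
    if angular_speed_deg_s <= 0:
        return None                               # not rotating toward the object
    return gap / angular_speed_deg_s

def choose_action(angle_to_object_deg, half_fov_deg, angular_speed_deg_s,
                  horizon_s: float = 0.2) -> str:
    eta = predicted_entry_time(angle_to_object_deg, half_fov_deg, angular_speed_deg_s)
    if eta is not None and eta <= horizon_s:
        # Withhold the second prompt; if the object actually enters during the
        # withholding period, the first prompt is initiated instead.
        return "withhold second sensory prompt"
    return "initiate second sensory prompt"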
Some embodiments involve a method for initiating location-driven sensory prompts reflecting changes to virtual objects, the method comprising: enabling interaction with a virtual object located in an extended reality environment associated with a wearable extended reality appliance; receiving data reflecting a change associated with the virtual object; determining whether the virtual object is within a field of view of the wearable extended reality appliance or is outside the field of view of the wearable extended reality appliance; causing the wearable extended reality appliance to initiate a first sensory prompt indicative of the change associated with the virtual object when the virtual object is determined to be within the field of view; and causing the wearable extended reality appliance to initiate a second sensory prompt indicative of the change associated with the virtual object when the virtual object is determined to be outside the field of view, wherein the second sensory prompt differs from the first sensory prompt. FIG.53is a flowchart illustrating an exemplary process5300for initiating sensory prompts for changes based on a field of view consistent with some embodiments of the present disclosure. With reference toFIG.53, in step5310, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to enable interaction with a virtual object located in an extended reality environment associated with a wearable extended reality appliance. In step5312, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to receive data reflecting a change associated with the virtual object. In step5314, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to determine whether the virtual object is within a field of view of the wearable extended reality appliance or is outside the field of view of the wearable extended reality appliance. In step5316, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to cause the wearable extended reality appliance to initiate a first sensory prompt indicative of the change associated with the virtual object when the virtual object is determined to be within the field of view. In step5318, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to cause the wearable extended reality appliance to initiate a second sensory prompt indicative of the change associated with the virtual object when the virtual object is determined to be outside the field of view, wherein the second sensory prompt differs from the first sensory prompt. Various embodiments may be described with reference to a system, method, apparatuses, and/or computer readable medium for performing or implementing operations for selectively controlling a display of digital objects. It is intended that the disclosure of one is a disclosure of all. For example, it is to be understood that the disclosure of one or more processes embodied in a non-transitory computer-readable medium, as described herein, may also constitute a disclosure of methods implemented by the computer readable medium, as well as systems and/or apparatuses for implementing processes embodied in the non-transitory computer-readable medium, for example, via at least one processor. 
Thus, in some embodiments, a non-transitory computer readable medium contains instructions that when executed by at least one processor cause the at least one processor to perform operations for selectively controlling a display of digital objects. Some aspects of such processes may occur electronically over a network that may be wired, wireless, or both. Other aspects of such processes may occur using non-electronic means. In the broadest sense, the processes disclosed herein are not limited to particular physical and/or electronic instrumentalities; rather, they may be accomplished using any number of differing instrumentalities. The term “non-transitory computer-readable medium” may be understood as described earlier. The term “instructions” may refer to program code instructions that may be executed by a computer processor, for example, software instructions, computer programs, computer code, executable instructions, source code, machine instructions, machine language programs, or any other type of directions for a computing device. The instructions may be written in any type of computer programming language, such as an interpretive language (e.g., scripting languages such as HTML and JavaScript), a procedural or functional language (e.g., C or Pascal that may be compiled for converting to executable code), object-oriented programming language (e.g., Java or Python), logical programming language (e.g., Prolog or Answer Set Programming), or any other programming language. In some embodiments, the instructions may implement methods associated with machine learning, deep learning, artificial intelligence, digital image processing, and any other computer processing technique. In some embodiments, the instructions contained in the non-transitory computer-readable medium may include (e.g., embody) various processes for selectively controlling a display of digital objects via a physical display of a computing device such as, for example, a wearable extended reality appliance, including generating a plurality of digital objects for display, determining a usage status of a wearable extended reality appliance, selecting a display mode of a computing device, determining to display and/or not display digital objects, outputting digital objects for presentation, presenting digital objects, causing at least one digital object to appear and/or disappear from display, identifying a change in a usage status of a wearable extended reality appliance, updating a display mode selection, revising a presentation of digital objects, and/or any process related to controlling a display of digital objects based on a usage of a wearable extended reality appliance, as described herein. As used herein, a “computing device” includes any electronic component or group of components for manipulating data. Examples of computing devices include wearable extended reality or virtual reality appliances, personal computers, laptops, servers, tablets, smart phones, smart watches, or any other device that includes at least one processor. At least one processor may be configured to execute instructions contained in the non-transitory computer-readable medium to cause various processes to be performed for implementing operations for selectively controlling a display of digital objects, as described herein. The term processor may be understood as described earlier. 
For example, the at least one processor may be one or more of server210ofFIG.2, mobile communications device206, processing device360ofFIG.3, processing device460ofFIG.4, processing device560ofFIG.5, and the instructions may be stored at any of memory devices212,311,411, or511, or a memory of mobile device206. Disclosed embodiments may relate to operations for selectively controlling display of digital objects. As used herein, the term “digital objects” may include, or otherwise denote, any type of data representation or visual presentation generated by and/or presented by at least one computer or processing device. Digital objects may include, for example, any data or media (e.g., alphanumerical text, image data, audio data, video data) formatted for presenting information to a user via, for example, an interface of an electronic device. “Display of digital objects” includes the presentation of digital objects to a user via one or more presentation devices. “Operations for selectively controlling display of digital objects” may include one or more acts of regulating, integrating, implementing, presenting, manipulating, and/or changing a presentation of at least one digital object or group of digital objects. These digital objects may be capable of being presented to a user via at least one appropriate physical display and/or extended reality appliance at or for a particular time. Moreover, the digital objects may be presented to a user in response to at least one action, option, and/or environment that may be distinguishable from some other action, option, and/or environment. For example, operations for selectively controlling, or otherwise implementing selective control over, a display of digital objects may relate to determining, regulating, managing, changing, and/or otherwise affecting the location, arrangement, appearance, status, accessibility, and/or overall presentation of any number of digital objects via a physical display and/or extended reality appliance, such as a wearable extended reality appliance. In some embodiments, operations for selectively controlling a display of digital objects may be performed within and/or between a real environment, a virtual environment, or real and virtual combined environments for displaying digital objects to a user. In some embodiments, the manner in which digital objects are displayed to a user may relate to the location, arrangement, appearance, status, accessibility, and/or overall presentation of a digital object or group of digital objects as presented to a user via an extended reality appliance or physical display and/or combination of an extended reality appliance and/or physical display. In some embodiments, the manner in which digital objects are displayed may be selectively controlled to change at or for a particular time in response to at least one action, option, and/or environment. The display of digital objects may occur in a real environment via at least one physical display for presenting digital content. In other examples, the display of digital objects may occur in an extended reality environment, such as an augmented reality environment or a mixed reality environment, via a physical display of a computing device and/or a wearable extended reality appliance.
For example, a wearable extended reality appliance may be configured to enable a user of the wearable extended reality appliance to view the overall display of digital objects across multiple displays including a physical display for presenting digital content and an extended reality appliance for presenting virtual digital content. Digital objects may be displayed to a user via at least one physical display of a computing device and/or a wearable extended reality appliance. In some embodiments, the wearable extended reality appliance may be in communication with the at least one computing device. In some embodiments, digital objects may include, for example, at least one application, widget, document, cursor, menu, option in a menu, at least one icon which may activate a script for causing an action associated with the particular digital object associated with the icon and/or otherwise linked to related programs or applications, and/or any other data representation or visual presentation displayed, or configured for display, via a physical display and/or via an extended reality appliance. In some embodiments, the digital objects may include real digital objects and/or virtual digital objects. As described herein, a real digital object may relate to any digital object displayed to a user via at least one physical display of a computing device. A virtual digital object may relate to any digital object displayed to a user via a wearable extended reality appliance. For example, at a particular time, one digital object may be presented as a virtual digital object via a wearable extended reality appliance, as a real digital object via a physical display, or as a virtual digital object and a real digital object simultaneously via an extended reality appliance and a physical display. In some embodiments, a real digital object may include any graphic two-dimensional digital content, graphic three-dimensional digital content, inanimate digital content, animate digital content configured to change over time or in response to triggers, and/or any other digital content configured to be displayed to a user via a physical display. For example, real digital objects displayed via a physical display may include a document, a widget inside a menu bar, and images. In some embodiments, a virtual digital object may include any inanimate virtual content, animate virtual content configured to change over time or in response to triggers, virtual two-dimensional content, virtual three-dimensional content, a virtual constructive or destructive overlay over a portion of a physical and/or digital environment or over a physical and/or real digital object, a virtual addition to a physical and/or digital environment or to a physical and/or real digital object, and/or any other digital content configured to be displayed to a user via a wearable extended reality appliance. For example, virtual digital objects displayed via an extended reality appliance may include a virtual document, virtual widgets inside a virtual menu bar, a virtual workspace, and a realistic three-dimensional rendition of an image. In some embodiments, a user may be able to interact with the digital objects, including real digital objects and/or virtual digital objects, presented via the physical display and/or the wearable extended reality appliance. In some embodiments, at least one real digital object may be related, linked, associated, or otherwise correspond to at least one virtual digital object, or vice versa. 
For example, the at least one real digital object and the at least one virtual digital object may share at least one common feature and/or function. In one example, interaction with at least one virtual digital object may affect at least one related, linked, or associated real digital object, and vice versa. In another example, interaction with at least one virtual digital object may not affect a related, linked, or associated real digital object, and vice versa. In some examples, at least one real digital object may be converted to at least one virtual digital object, or vice versa, within an extended reality environment at or for a particular time in response to at least one action, option, and/or environment. Computations provided by a computing device may include arithmetic and/or logic operations with or without human intervention. For example, a computing device may include one or more input devices, processing devices for processing data instructions, output devices, and/or storage devices for data storage and retention. In some embodiments, a computing device may relate to a standalone unit and/or a combination of related or interconnected units. In some embodiments, the computing device may be directly or indirectly connected to a physical display, and/or may be a part of the physical display. Additionally, or alternatively, the computing device may be directly or indirectly connected to a wearable extended reality appliance and/or may be a part of the wearable extended reality appliance. In some embodiments, the computing device may enable a user to interact with one or more digital objects within an extended reality environment via a wearable extended reality appliance and/or via another device in communication with the computing device and/or with the wearable extended reality appliance. The computing device may be capable of selectively controlling a display of one or more digital objects, consistent with some embodiments of the present disclosure. In one example, the computing device may be configured to generate some or all of the digital objects for display via the physical display and/or the wearable extended reality appliance. For example, a computing device may include a laptop computer, a desktop computer, a smartphone, a wearable computer such as a smartwatch, and a tablet computer. As used herein, the term “generating a plurality of digital objects for display with use of a computing device” includes constructing and/or rendering of any number of digital objects for presentation via a computing device. In some embodiments, the computing device may be configured to generate one or more digital objects for display based on received and/or processed digital signals and/or any other form of data received and/or stored by the computing device. For example, digital signals and/or data processed by the computing device may be used to present digital content, including real digital objects, to a user via a physical display. Additionally, or alternatively, digital signals and/or data processed by the computing device may be used to present virtual digital content, including virtual digital objects, to a user via a wearable extended reality appliance. In one example, digital signals and/or data may indicate an appropriate position and/or angle of a viewpoint of a digital object such that digital content may be generated for display to the user at a particular position and/or angle within a particular environment (e.g., a real environment or extended reality environment).
In another example, digital signals and/or data may indicate an appropriate presentation and/or appearance of a digital object such that the digital object has a particular presentation and/or appearance. By way of example,FIG.54illustrates one non-limiting example of a plurality of digital objects presented to a user within an extended reality environment, consistent with some embodiments of the present disclosure.FIG.54is a representation of just one embodiment, and it is to be understood that some illustrated elements and/or features might be omitted, and others added within the scope of this disclosure. As shown, a user5410is wearing a wearable extended reality appliance5411and sitting behind table5413supporting a keyboard5414, mouse5415, and computing device5416having physical display5417. The computing device5416may be a desktop computer, and its physical display5417may be configured to display digital content to user5410, for example, real digital objects5418A,5418B, and5418C. Real digital objects5418A and5418B are programs open on the physical display5417and real digital object5418C may be a cursor for interacting with digital objects displayed via the physical display5417and controllable using mouse5415. While physical display5417of computing device5416is depicted as the display of a desktop computer, it is to be understood that, in some embodiments, the physical display may relate to any display or combination of physical displays configured to display real digital objects to user5410. Some non-limiting examples of such physical displays may include a physical display of a laptop computer, a physical display of a tablet, a physical display of a smartphone, a physical display of a television, and so forth. In some examples, a physical display may be or include a device converting digital signals and/or analog signals to perceptible light patterns. Additionally, while keyboard5414and mouse5415are depicted here as a wireless keyboard and a wireless mouse connected to computing device5416having physical display5417, it is to be understood that the computing device may be indirectly or directly connected to, or in communication with, any number of peripheral devices. As shown, wearable extended reality appliance5411may be a pair of smart glasses. The wearable extended reality appliance5411may be connected via a wire to keyboard5414which may be in communication with the computing device5416. Wearable extended reality appliance5411may be configured to display virtual digital content to user5410within an extended reality environment viewable through wearable extended reality appliance5411. For example, virtual digital object5419A and virtual digital object5419B are displayed to user5410via wearable extended reality appliance5411. From the perspective of user5410, virtual digital object5419A is displayed next to physical display5417of computing device5416and virtual digital object5419B is displayed on table5413. Some embodiments involve generating a plurality of digital objects for display in connection with use of a computing device. In one example, generating a digital object for display may include selecting and/or generating one or more visuals associated with the digital object. In another example, generating a digital object for display may include selecting and/or generating textual data for display in association with the digital object.
In another example, generating a digital object for display may include selecting at least one of color, texture, size, position, orientation, illumination, intensity, or opacity for the display of at least part of the virtual object. In some examples, generating a digital object for display may include rendering the virtual object, for example using a ray casting algorithm, using a ray tracing algorithm, using a rasterization algorithm, and so forth. For example, the digital object may be rendered for display using a single display, using a stereo display, and so forth. For example, the digital object may be rendered from geometrical information associated with the virtual object, from a three-dimensional model associated with the virtual object, from a two-dimensional model associated with the virtual object, from visuals associated with the virtual object, from textual information associated with the virtual object, and so forth. In some embodiments, the computing device is operable in a first display mode and in a second display mode. The term “display mode” may include a configuration or manner of operation. For example, particular display parameters may apply to a particular mode. The particular mode may be applied to a real and/or extended reality environment at a particular time or for a particular duration. For example, a display parameter may include any characteristic capable of defining or classifying the manner in which any number of digital objects are displayed (e.g., location, arrangement, appearance, status, accessibility, and/or overall presentation) to a user via a physical display and/or extended reality appliance. In some embodiments, the display mode in which the computing device operates may be based on a particular type of one or more of the digital objects (e.g., real digital objects and/or virtual digital objects). In one example, the computing device may be configured to operate in a display mode capable of presenting real digital objects for display via at least one physical display. In another example, the computing device may be configured to operate in a display mode capable of presenting virtual digital objects for display via a wearable extended reality appliance. In yet another example, the computing device may be configured to operate in another display mode capable of simultaneously presenting digital content for display via at least one physical display and virtual digital content for display via a wearable extended reality appliance. As used herein, the term “operable” refers to an ability to work or perform. For example, a computing device may perform in multiple ways and may be switchable between modes of operation. Switching, for example, may alter the way, manner, type, or content of what is presented. In one display mode, content may be displayed to the user via a physical display and in another mode, the content may be presented via a wearable extended reality appliance. Or, by way of another example, the computing device may be configured to switch between a first display mode for presenting digital objects via the physical display and a second display mode for presenting digital objects via the physical display and/or the wearable extended reality appliance. Additionally, or alternatively, in some embodiments, the computing device may be configured to operate in, and switch between, display modes that are different from the first display mode and the second display mode.
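By way of a non-limiting illustration only, the following sketch (in Python, with hypothetical names and a usage-based trigger) shows one possible representation of a computing device operable in a first display mode and a second display mode, with switching between them:

# Hypothetical sketch; the mode names and the switching trigger are illustrative only.
from enum import Enum, auto

class DisplayMode(Enum):
    FIRST = auto()    # all digital objects presented via the physical display
    SECOND = auto()   # objects split between the physical display and the appliance

class DisplayController:
    def __init__(self):
        self.mode = DisplayMode.FIRST

    def select_mode(self, appliance_in_use: bool) -> DisplayMode:
        """Switch modes, here keyed to whether the wearable appliance is in use."""
        self.mode = DisplayMode.SECOND if appliance_in_use else DisplayMode.FIRST
        return self.mode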
In some embodiments, the computing device may be configured to switch between display modes in real time or near real time in response to at least one action, option, and/or environment. According to some embodiments, when the computing device is in the first display mode, the plurality of digital objects are displayed via a physical display connected to the computing device. In a general sense, the term “displayed via a physical display” may relate to the presentation of digital content, including real digital objects, to a user via a physical display which the user may perceive and/or interact with in a real environment and/or an extended reality environment. In some examples, a “physical display” may relate to any type of device or system that is directly or indirectly connected to a computing device and configured to present inanimate and/or animate graphic two-dimensional and/or three-dimensional digital content to a user. In some examples, a “physical display” may relate to any type of device or system that is configured to present, based on received digital and/or analog signals, graphical information perceptible without using extended reality equipment, such as a wearable extended reality appliance. In some examples, digital objects presented via the physical display (e.g., real digital objects) are perceived as objects positioned at the physical location of the physical display, while digital objects presented via the wearable extended reality appliance (e.g., virtual digital objects) are perceived (e.g., due to optical illusion) as objects positioned away of the wearable extended reality appliance. In some examples, a physical display may relate to a non-transparent display, while the wearable extended reality appliance may include one or more transparent displays and may use the one or more transparent displays to present virtual digital objects in an optical illusion causing the virtual digital objects to appear at a select position in the environment away from the wearable extended reality appliance. In some examples, a physical display may relate to a non-transparent display, while the wearable extended reality appliance may include one or more projectors and may use the one or more projectors to present virtual digital objects in an optical illusion causing the virtual digital objects to appear at a select position in the environment away from the wearable extended reality appliance. In one embodiment, a physical display may include a computer screen, laptop screen, tablet screen, smartphone screen, projector screen, and/or any physical device capable of presenting digital content, such as a plurality of real digital objects, to a user. The physical display may include one physical display or a combination of physical displays in which at least one physical display is in direct or indirect communication with at least another physical display. Additionally, or alternatively, the physical display may include a combination of discrete physical displays that are not in communication with one another. As used herein, the “first display mode” may relate to the mode of operation of the computing device in which the computing device may be configured to generate and/or output digital content, such as a plurality of real digital objects, for display to a user via at least one physical display in communication with the computing device. 
In some examples, the visual presentation of at least one real digital object generated by the computing device in the first display mode may be produced in at least one confined region of the physical display. For example, when the computing device is operating in the first display mode, real digital objects may be generated and/or presented to a user via the physical display within any number of discrete and/or connected subsets of space within the entire space of the physical display. In some embodiments, when the computing device is operating in the first display mode, the plurality of digital objects are not displayed to the user via the wearable extended reality environment. By way of a non-limiting example,FIG.55Aillustrates a user5512of a computing device5514operating in a first display mode5510. User5512is shown sitting in front of a computing device5514operating in a first display mode5510. The computing device5514includes physical display5515, keyboard5516, and mouse5517. The physical display5515of computing device5514is configured to display digital content, such as a plurality of real digital objects5519A to5519C, to user5512. In this non-limiting embodiment, computing device5514is depicted as a laptop, but as described above, computing device5514may be any type of device or combination of devices capable of operating in a first display mode and a second display mode. Moreover, while physical display5515of computing device5514is depicted here as a laptop screen, it is to be understood that a computing device5514may be indirectly or directly connected to, or in communication with, any physical display or combination of physical displays configured to display digital content, such as a plurality of real digital objects, to the user5512. Additionally, while keyboard5516and mouse5517are depicted here as a keypad and trackpad built into computing device5514, it is to be understood that the computing device may be indirectly or directly connected to, or in communication with, any number of peripheral devices including a wireless keyboard, a wireless mouse, and a wearable extended reality appliance. The digital content displayed to user5512by physical display5515of computing device5514operating in the first display mode5510includes, for example, a cursor5518A and a plurality of real digital objects5519A to5519C. Additionally, it is to be understood that cursor5518A may constitute a digital object, such as a real digital object, as described herein. In the first display mode, cursor5518A may move anywhere within physical display5515and may interact with any digital content displayed therein, such as real digital object5519A, real digital object5519B, real digital object5519C, and/or any group or sub-group of real digital objects. For example, user5512may interact with applications or widgets, such as real digital objects5519A, real digital object5519B, and real digital object5519C, displayed in physical display5515using keyboard5516and/or mouse5517. According to some embodiments, when the computing device is in the second display mode, some of the plurality of digital objects are displayed via the physical display, and at least one other of the plurality of digital objects is displayed via a wearable extended reality appliance. 
In a general sense, the term “displayed via a wearable extended reality appliance” may relate to the presentation of virtual digital content, including virtual digital objects described above, to a user which the user may perceive, and/or interact with, in an extended reality environment via the wearable extended reality appliance. In one example, textual content entered using a keyboard (for example, using a physical keyboard, using a virtual keyboard, etc.) may be presented via the wearable extended reality appliance in real time as the textual content is typed. In another example, a virtual cursor may be presented via the wearable extended reality appliance, and the virtual cursor may be controlled by a pointing device (such as a physical pointing device, a virtual pointing device, a computer mouse, a joystick, a touchpad, a physical touch controller, and so forth). In yet another example, virtual displays, including one or more windows of a graphical user interface operating system, may be presented via the wearable extended reality appliance. In some embodiments, virtual digital objects generated by the computing device may be displayed via a wearable extended reality appliance in at least one virtual region of the extended reality environment. For example, virtual digital objects may be generated and/or presented to a user via the wearable extended reality appliance within any number of discrete and/or connected subsets of space within the entire space of the extended reality environment. A subset of space may relate to a two-dimensional or three-dimensional space within the extended reality environment that may be fixed relative to a particular physical object and/or digital object, fixed relative to a part of the extended reality appliance, or not fixed relative to any particular physical object, digital object, or part of the extended reality appliance. In some embodiments, virtual digital objects generated by the computing device may be displayed via the wearable extended reality appliance such that the dimensional orientation of at least one virtual digital object within the extended reality environment is different from another virtual digital object, as viewed from the perspective of the user. For example, the perceived dimensional orientation of any one subset of space including at least one virtual digital object may be different from, or similar to, the perceived dimensional orientation of any other subset of space including at least another virtual digital object within the entire space of an extended reality environment. In one example, at least two virtual digital objects may appear to exist within the same plane of the extended reality environment. In another example, at least one virtual digital object may appear to exist in a first plane of the extended reality environment, and at least another virtual digital object may appear to exist in a second plane of the extended reality environment that may intersect, or be parallel to, the first plane of the extended reality environment. As used herein, the term “wearable extended reality appliance” may be understood as described earlier. In some embodiments, the wearable extended reality appliance may be directly or indirectly in communication with the computing device and may include one wearable extended reality appliance or a combination of wearable extended reality appliances. 
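As a purely illustrative sketch (the plane-and-offset representation below is an assumption, not the disclosed implementation), the perceived position of a virtual digital object within one plane of the extended reality environment might be computed from a plane origin and two in-plane directions, so that two objects can share a plane or occupy parallel or intersecting planes as described above.

# Illustrative sketch only; the plane/offset representation is an assumption for clarity.
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]


def add(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])


def scale(v: Vec3, s: float) -> Vec3:
    return (v[0] * s, v[1] * s, v[2] * s)


@dataclass
class VirtualPlane:
    origin: Vec3   # a point on the plane, in environment coordinates
    basis_u: Vec3  # first in-plane direction (unit vector)
    basis_v: Vec3  # second in-plane direction (unit vector)


def place_on_plane(plane: VirtualPlane, u: float, v: float) -> Vec3:
    """Perceived position of a virtual digital object offset (u, v) within its plane."""
    return add(plane.origin, add(scale(plane.basis_u, u), scale(plane.basis_v, v)))


desk_plane = VirtualPlane(origin=(0.0, 0.7, -0.5), basis_u=(1.0, 0.0, 0.0), basis_v=(0.0, 0.0, 1.0))
print(place_on_plane(desk_plane, 0.3, -0.1))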
As used herein, the “second display mode” may relate to the mode of operation of the computing device in which the computing device may generate and/or output digital content, such as a plurality of digital objects, for display to the user via at least one physical display in communication with the computing device and/or at least one wearable extended reality appliance. In some embodiments, in the second display mode, the wearable extended reality appliance may present digital content that may also be capable of being presented via the physical display. Additionally, or alternatively, the wearable extended reality appliance may present digital content that may not be capable of being presented via the physical display. In some embodiments, in the second display mode, the wearable extended reality appliance may display at least one virtual digital object mimicking and/or extending the functionality of at least one real digital object displayed, or previously displayed, via the physical display. Additionally, or alternatively, the wearable extended reality appliance may display at least one virtual digital object that is not related to the functionality of at least one real digital object displayed, or previously displayed, via the physical display. By way of a non-limiting example,FIG.55Billustrates a user5512of a wearable extended reality appliance5513and a computing device5514operating in a second display mode5511. The computing device5514illustrated herein is consistent with the computing device5514operating in the first display mode5510illustrated inFIG.55A. As shown, the wearable extended reality appliance5513is in wireless communication with the computing device5514and is configured to display virtual digital content to the user5512when the computing device is in the second display mode5511. Additionally, the wearable extended reality appliance5513is configured to enable the user5512to view digital content presented via the physical display5515of the computing device5514through the wearable extended reality appliance5513. For illustration purposes, the wearable extended reality appliance5513is depicted here as a pair of smart glasses, but as described above, wearable extended reality appliance5513may be any type of head-mounted device used for presenting an extended reality to user5512when computing device5514is in the second display mode5511. As shown, the digital objects displayed to user5512when computing device5514is in the second display mode5511include real digital objects presented by physical display5515of computing device5514and virtual digital objects presented by wearable extended reality appliance5513in communication with computing device5514. The real digital objects presented by physical display5515of computing device5514in the second display mode5511include real digital object5519B and real digital object5519C. Real digital object5519B and real digital object5519C correspond in function and appearance to some of the plurality of digital objects presented to the user5512in the first display mode5510, as illustrated inFIG.55A. The virtual digital objects presented by wearable extended reality appliance5513include virtual digital object5520and virtual digital object5521. The virtual objects in the extended reality environment, as viewed from the perspective of user5512, are depicted as two discrete virtual regions including virtual region5522A and virtual region5522B. 
The virtual regions5522A and5522B of the extended reality environment have been artificially imposed in this illustration to represent virtual content presented via wearable extended reality appliance5513from the perspective of user5512. Virtual digital object5520is presented to user5512within virtual region5522A away from and next to physical display5515such that virtual digital object5520appears to float at a fixed location to the right of physical display5515from the perspective of user5512. Virtual digital object5521is presented to user5512within virtual region5522B in a region away from physical display5515corresponding to a surface to the left of computing device5514such that virtual digital object5521appears to be positioned at a fixed location on a surface to the left of physical display5515of computing device5514from the perspective of user5512. The virtual digital content presented by wearable extended reality appliance5513may also include virtual cursor5518B controllable via the mouse5517and/or wearable extended reality appliance5513. It is to be understood that, in some embodiments, virtual cursor5518B may constitute a virtual digital object, as described herein, when the computing device is in the second display mode5511. However, a cursor may also be presented as a real digital object via the physical display5515in some embodiments when the computing device is in the second display mode5511. For example, the cursor may move anywhere within physical display5515as cursor5518A and/or anywhere within the extended reality environment via wearable extended reality appliance5513as virtual cursor5518B. Additionally, the cursor may interact with any digital objects contained within the extended reality environment including any real digital objects as cursor5518A and/or as virtual cursor5518B and/or any virtual digital objects as virtual cursor5518B. In the second display mode5511, user5512may interact with any digital objects presented via wearable extended reality appliance5513and/or the physical display5515. In one example, virtual cursor5518B may move over a real digital object presented within physical display5515and drag the real digital object out of physical display5515into the extended reality environment, for example to virtual region5522A. In another example, virtual cursor5518B may move anywhere within the extended reality environment, including virtual regions5522A and5522B, and may interact with, virtual digital object5520or virtual digital object5521. In yet another example, virtual cursor5518B may move on all available surfaces (e.g., virtual region5522B or any other identifiable physical surface) or on selected surfaces in the extended reality environment. Additionally, or alternatively, in the second display mode5511, user5512may interact with any one of real digital objects5519B or5519C or virtual digital objects5520or5521using hand gestures and/or eye gestures recognized by the wearable extended reality appliance5513and/or any sensor (e.g., a camera) in communication with computing device5514. In the second display mode, some of the plurality of digital objects presented to the user in the first display mode may be presented to the user via a physical display in communication with the computing device. The digital objects being presented via a physical display may be similar to the real digital objects generated by the same computing device in the first display mode. 
Additionally, in the second display mode, at least one other of the plurality of digital objects presented to the user in the first display mode may be presented to the user via the wearable extended reality appliance. The at least one other of the plurality of digital objects generated by the computing device in the second display mode and presented via the wearable extended reality appliance may include a virtual digital object or virtual digital objects consistent with the virtual digital objects described above. In some examples, the at least one other of the plurality of digital objects may be presented via the wearable extended reality appliance over the physical display and/or away of the physical display. In some examples, the at least one other of the plurality of digital objects may be presented via the wearable extended reality appliance as a virtual digital object in a fixed location relative to at least one particular physical object, such as a desk, wall, digital device, or any physical object having at least one recognizable surface or boundary. The location of the particular physical object itself may be fixed or not fixed relative to the entire space of the extended reality environment. In one example, none of the at least one other of the plurality of digital objects is displayed via the physical screen. For example, the computing device operating in the second display mode may be configured to present a plurality of digital objects such that none of at least one virtual digital object presented via the wearable extended reality appliance may correspond to, and/or be displayed as, the real digital objects presented via the physical display in the second display mode. In another example, the at least one other of the plurality of digital objects is displayed via the physical screen while being displayed via the wearable extended reality appliance. For example, the computing device operating in the second display mode may be configured to present a plurality of digital objects such that at least one virtual digital object presented via the wearable extended reality appliance may correspond to, and/or be displayed as, at least one real digital object presented via the physical display in the second display mode. In yet another example, a first digital object of the at least one other of the plurality of digital objects is displayed via the physical screen while being displayed via the wearable extended reality appliance, and a second digital object of the at least one other of the plurality of digital objects is not displayed via the physical screen. For example, the computing device operating in the second display mode may be configured to present a plurality of digital objects such that a first digital object may be presented as a real digital object via the physical display while also being presented as a virtual digital object via the wearable extended reality appliance, and a second digital object may be presented as a virtual digital object via the wearable extended reality appliance but not presented as a real digital object via the physical display. By way of a non-limiting example, one of the digital objects (real digital object5519A) illustrated inFIG.55Ais displayed via physical display5515as a weather widget in the first display mode5510. When computing device5514is operating in the second display mode5511illustrated inFIG.55B, real digital object5519A is no longer presented to user5512via physical display5515. 
Rather, when computing device5514is operating in the second display mode5511illustrated inFIG.55B, the digital object is presented as virtual digital object5520to user5512(in virtual region5522A) via wearable extended reality appliance5513. In another example, when computing device5514is operating in the second display mode5511, a digital object (e.g., a weather widget) may be displayed via physical display5515while also being presented to user5512(in virtual region5522B) via wearable extended reality appliance5513such that a real digital object and virtual digital object are simultaneously displayed to user5512within the extended reality environment. In yet another example, when computing device5514is operating in the second display mode5511, a digital object (e.g., a weather widget) may be simultaneously displayed via physical display5515and in the extended reality environment via wearable extended reality appliance5513while another digital object (e.g., cursor5518A), previously displayed in the first display mode5510, is displayed via wearable extended reality appliance5513(in virtual region5522B) as virtual digital object5518B, but not via physical display5515. In some embodiments, in the second display mode, the wearable extended reality appliance may display at least one additional digital object being excluded from display via the physical display in the first display mode. In a general sense, the “term excluded from display via the physical display” may relate to any digital object that is virtually presented via the wearable extended reality appliance but not presented, or not configured to be presented, via the physical display in communication with the computing device. As used herein, the term “one additional digital object” may relate to at least one digital object that is presented as a virtual digital object in the second display mode via the wearable extended reality appliance that is not displayed, or was not previously displayed, as at least one real digital object in the first display mode via the physical display. In one example, the wearable extended reality appliance may display at least one virtual cursor located outside the physical boundaries of the physical display. In another example, the wearable extended reality appliance may display at least one visual element that resides outside the physical boundaries of the physical display. For example, the wearable extended reality appliance may be configured to display one or more of a virtual digital object for controlling at least one function of the computing device, a virtual digital object which may activate a script for causing an action associated with a particular digital object, and/or any other data representation or visual presentation displayed, or configured for display, via a wearable extended reality appliance. In one non-limiting example, a visual element that resides outside the physical boundaries of the physical display may include a two-dimensional (e.g., simplified) object, such as a clock on the wall or a virtual controller in communication with the computing device, or displayed as a three-dimensional life-like object, such as a plant on a desk. By way of a non-limiting example,FIG.55Billustrates virtual digital object5521(a volume controller) that is displayed via wearable extended reality appliance5513(in virtual region5522B) when computing device5514is operating in the second display mode5511. 
Virtual digital object5521is not functionally related to any real digital object displayed via physical display5515in the second display mode5511illustrated inFIG.55B. Additionally, a digital object corresponding to virtual digital object5521is excluded from display via physical display5515in the first display mode5510. For example, there is no digital object illustrated inFIG.55Athat is displayed via physical display5515that corresponds to virtual digital object5521, as shown inFIG.55B. In the example shown, virtual digital object5521is configured to be interacted with only as a virtual digital object and not configured to be displayed via physical display5515in the first display mode5510. In some embodiments, the at least one other of the plurality of digital objects has a first visual appearance when presented by the physical display in the first display mode and has a second visual appearance when presented by the wearable extended reality appliance in the second display mode. As used herein, the term “visual appearance” may relate to the arrangement, layout, and/or overall presentation of a digital object as displayed. For example, such an appearance may differ based on whether presented on the physical display or the wearable extended reality appliance. In one example, a particular digital object displayed via the physical display may have a first visual appearance when presented via a physical display in the first display mode and a second visual appearance that is different from the first visual appearance when presented virtually in the second display mode via a wearable extended reality appliance. For example, a graphical user interface of a program for viewing and editing a document presented via a wearable extended reality appliance in the second display mode may be different from the same program presented via a physical display in the first display mode. In another example, the visual appearance of a widget for checking emails may appear in a simplified version when presented via the physical display in the first display mode and may appear as an expanded version with more functionality when presented via the wearable extended reality appliance when the computing device is operating in the second display mode. In another example, a real digital object displayed via a physical display in the first display mode may have a visual appearance that is similar in at least one respect, or in all respects, to a virtual digital object displayed via the wearable extended reality appliance in the second display mode. By way of a non-limiting example,FIG.55Billustrates virtual digital object5520(relating to an application for checking the weather) in the second display mode5511, which functionally corresponds to real digital object5519A displayed in the first display mode5510ofFIG.55A. The digital object is displayed as an icon (e.g., real digital object5519A) when presented via physical display5515in the first display mode5510and is presented as an open application (e.g., virtual digital object5520) when displayed via wearable extended reality appliance5513(in virtual region5522A) when the computing device is operating in the second display mode5511. As shown, virtual digital object5520has a first visual appearance when presented via the physical display5515in the first display mode5510that is different from a second visual appearance when presented via the wearable extended reality appliance5513in the second display mode5511. 
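A minimal, hypothetical sketch of how a per-mode visual appearance could be recorded follows; the appearance labels and dictionary layout are illustrative assumptions only, not the disclosed implementation.

# Illustrative sketch only; the appearance descriptors are assumptions, not disclosed formats.
from enum import Enum, auto


class DisplayMode(Enum):
    FIRST = auto()
    SECOND = auto()


# Per-object visual appearance, keyed by the display mode in which the object is presented.
APPEARANCES = {
    "weather_widget": {DisplayMode.FIRST: "icon",                  # compact icon on the physical display
                       DisplayMode.SECOND: "open_application"},    # expanded view via the appliance
    "email_widget":   {DisplayMode.FIRST: "simplified_list",
                       DisplayMode.SECOND: "expanded_with_preview"},
}


def visual_appearance(object_name: str, mode: DisplayMode) -> str:
    """Return the appearance variant used when the object is presented in the given mode."""
    return APPEARANCES[object_name][mode]


print(visual_appearance("weather_widget", DisplayMode.SECOND))  # -> "open_application"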
In some embodiments, the location of a particular digital object may depend upon the location of some particular physical object. In other embodiments, the location of a particular digital object may not depend upon the location of some particular physical object. In some embodiments, when the computing device is operating in the first display mode, a location of a particular digital object of the at least one other of the plurality of digital objects is independent of a location of a particular physical object. The term “independent of a location of a particular physical object” may refer to a particular digital object's absence of spatial dependence on or relationship to a location of a particular physical object at a particular time and/or in a particular display mode. For example, a location of a particular digital object, such as a real digital object, presented to the user via the physical display may not depend on, or otherwise rely upon, the location of some physical object outside of the physical display. In one non-limiting example, the location of a real digital object (e.g., a widget) displayed via a physical display of the computing device in the first display mode may remain within the same discrete subset of space of the physical display, or may otherwise remain unaffected, if a distance between a particular physical object (e.g., a chair, table, or plant) changes relative to some point of reference, such as the physical display. In some embodiments, when the computing device is operating in the second display mode, the location of the particular digital object depends on the location of the particular physical object. The term “depends on the location of the particular physical object” may refer to a particular digital object's spatial dependence on or relationship to a location of a particular physical object at a particular time and/or in a particular display mode. In some embodiments, a location of a particular digital object, such as a virtual digital object, presented to the user via the wearable extended reality appliance in the second display mode may depend on, or otherwise rely upon, the location of some physical object outside of the physical display. In some examples, the particular virtual digital object may be docked to or near the particular physical object. For example, when a virtual digital object is docked to a physical object, the virtual digital object may stay in the same location as the physical object and may move with the physical object. Additionally, or alternatively, when a virtual digital object is docked near a physical object, the virtual digital object may stay at a location that is within a fixed distance or range relative to the physical object and may move with the physical object such that the digital object remains within a fixed distance or range relative to the physical object. When a virtual digital object is docked to a physical object, at least one point of the virtual digital object may stay fixed relative to at least one point on the physical object such that a particular position and/or angle of the virtual digital object may change with a position and/or angle of the physical object. In another example, the particular virtual digital object may be configured to move to and/or from the particular physical object. 
For example, a virtual digital object may be configured to move within the extended reality environment from a location that is a first distance (e.g., longer distance) from the particular physical object to a location that is a second distance (e.g., shorter distance) from the particular physical object, and vice versa. Additionally, or alternatively, a real digital object may be configured to move out of display via the physical display and onto display via the wearable extended reality appliance within the extended reality environment (e.g., as a virtual digital object) toward the particular physical object, and vice versa. In another example, the particular virtual digital object may be located so that it does not hide, or otherwise obstruct, the particular physical object. In yet another example, the particular virtual digital object may be located so that it is not hidden, or otherwise obstructed, by the particular physical object. For example, the location of a particular virtual digital object (e.g., a widget for a computer application) may be configured to move relative to a particular physical object (e.g., any physical object within the extended reality environment such as a desk, chair, or peripheral device to the computing device) such that the virtual digital object does not obstruct the user's view of the physical object and/or the user's view of the virtual digital object. If the particular physical object is at a location that would obstruct the user's view of the particular virtual digital object, the virtual digital object may be configured to move to a new location relative to the physical object so as to remain visible to the user. Additionally, or alternatively, if the particular virtual digital object is at a location that would obstruct the user's view of the particular physical object, the virtual digital object may be configured to move to a new location relative to the physical object so as not to block the user's view of the physical object. By way of a non-limiting example,FIG.56is a schematic illustration of an example of a plurality of digital objects presented to a user in a second display mode, consistent with some embodiments of the present disclosure.FIG.56illustrates a computing device operating in a second display mode, the computing device including physical display5614and keyboard5611. The physical display5614of the computing device is configured to display digital content, such as a plurality of real digital objects5615, to user5612. For illustration purposes, physical display5614of the computing device is depicted here as a computer monitor configured to display digital content to the user5612. Wearable extended reality appliance5613is in wireless communication with the computing device and is configured to display virtual digital content, such as virtual digital objects, to user5612when the computing device is in the second display mode. The virtual digital content displayed to user5612includes virtual digital objects5616, virtual digital object5618, and virtual digital object5619. When the computing device is operating in the second display mode, the locations of virtual digital objects5616, virtual digital object5618, and virtual digital object5619depend on the locations of particular physical objects. As shown, the location of virtual digital objects5616depends on the location of physical display5614. Virtual digital objects5616are contained in a virtual region that is locked to physical display5614at a distance away from physical display5614.
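The following sketch illustrates, under simplifying assumptions (a fixed offset and a bounding-sphere test standing in for real occlusion handling), how a virtual digital object docked to a physical object might track that object without hiding it; none of the names below are drawn from the disclosure.

# Illustrative sketch only; a simple bounding-sphere test stands in for real occlusion handling.
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]


def add(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])


def sub(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])


def norm(v: Vec3) -> float:
    return math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)


def docked_position(anchor_position: Vec3, offset: Vec3, anchor_radius: float,
                    margin: float = 0.05) -> Vec3:
    """Keep a virtual object at a fixed offset from its physical anchor.

    If the offset would place the object inside the anchor's bounding sphere
    (hiding the anchor), push it outward so both remain visible.
    """
    candidate = add(anchor_position, offset)
    separation = sub(candidate, anchor_position)
    distance = norm(separation)
    if distance >= anchor_radius + margin:
        return candidate
    if distance == 0.0:
        separation, distance = (1.0, 0.0, 0.0), 1.0  # arbitrary direction if exactly overlapping
    factor = (anchor_radius + margin) / distance
    return add(anchor_position, tuple(c * factor for c in separation))


# As the physical anchor (e.g., a keyboard or phone) moves, recompute the docked position each frame.
print(docked_position(anchor_position=(0.4, 0.0, -0.3), offset=(0.02, 0.0, 0.0), anchor_radius=0.1))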
Virtual digital objects5616are also at a fixed position within the virtual region that is locked to physical display5614. In this example, the physical display5614acts as a physical object. In another example, virtual digital objects5616may be locked at a distance from at least one of the real digital objects5615displayed via the physical display5614. As another example, the location of virtual digital object5618is docked on top of keyboard5611and depends on the location of keyboard5611. The location of virtual digital object5618depends on the location of keyboard5611at least because one point of virtual digital object5618is fixed relative to at least one point on keyboard5611such that a location of virtual digital object5618will stay in the same location as keyboard5611and may move with keyboard5611. As another example, when the computing device is operating in the second display mode, the location of virtual digital object5619depends on the location of physical object5617. For example, virtual digital object5619is located at a set distance from physical object5617so that it does not hide the physical object5617from user5612. In this example, physical object5617is a mobile communication device; however, as discussed above, it is to be understood that a physical object may include any other physical object having at least one recognizable surface or boundary. In one example, virtual digital object5619may be configured to move with, move to, and/or move from physical object5617. In another example, the position of virtual digital object5619relative to physical object5617may change in response to an action of physical object5617(e.g., receiving a text message). Some embodiments involve determining a usage status of the wearable extended reality appliance. The term “usage status” may relate to a state of use, condition for use, and/or suitability for use of the wearable extended reality appliance described above at any point in time. In some embodiments, a usage status of the wearable extended reality appliance may include a power status of the wearable extended reality appliance (e.g., on or off), an engagement status of the user of the wearable extended reality appliance (e.g., whether the user is interacting with the digital content or has interacted with the digital content recently), a connection status of the wearable extended reality appliance (e.g., connected or disconnected to a computing device), a battery status of the wearable extended reality appliance (e.g., full battery, partial battery, low battery, or no battery), a hardware and/or software status of the wearable extended reality appliance (e.g., operating normally or abnormally), and/or any identifiable measure, or combination of measures, of the wearable extended reality appliance's state, condition, and/or suitability for use. As used herein, the determination of the usage status of the wearable extended reality appliance may be based on information related to past and/or present usage, operation, and/or general utilization of a particular computing device, the wearable extended reality appliance, and/or another device used in connection with the particular computing device and/or the wearable extended reality appliance. The determination of the usage status of the wearable extended reality appliance may occur at any given point of time (e.g., at startup of the computing device and/or the wearable extended reality appliance) or over any period of time. 
Any form of data and/or input received by, processed by, and/or stored by at least one computing device that is related to the wearable extended reality appliance's state, condition, and/or suitability for use may be utilized to determine a usage status of the wearable extended reality appliance. In one example, the usage status of the wearable extended reality appliance may be determined based on at least one form of data stored in the computing device and/or the wearable extended reality appliance relating to past usage of the wearable extended reality appliance (e.g., predicted behavior and/or preferences of the wearer of the wearable extended reality appliance). For example, if the user has a tendency to wear the wearable extended reality appliance when they are finished using the wearable extended reality appliance, a usage status of the wearable extended reality appliance may indicate that the user is no longer interacting with the digital content after some amount of time based on data related to prior use and/or user preferences. In another example, the usage status of the wearable extended reality appliance may be determined based on at least one input received by the computing device and/or the wearable extended reality appliance, indicating the wearable extended reality appliance is ready for use. For example, a webcam may capture image data indicating the user is wearing the wearable extended reality appliance. Additionally, or alternatively, a sensor on the computing device or the wearable extended reality appliance may indicate that the user is wearing the wearable extended reality appliance or that the wearable extended reality appliance has sufficient battery for use within the extended reality environment. In one embodiment, the usage status of the wearable extended reality appliance is determined based on data indicating when the wearable extended reality appliance is active. As used herein, the term “active” may refer to an activity status of the wearable extended reality appliance indicating that the wearable extended reality appliance and/or the user of the wearable extended reality appliance is engaged or ready to engage or interact with the extended reality environment. In some embodiments, data indicating when the wearable extended reality appliance is active may relate to any information used by the computing device and/or wearable extended reality appliance to determine or analyze the activity of the wearable extended reality appliance and/or the user to determine if the wearable extended reality appliance is ready for use, and/or the user is ready to use the wearable extended reality appliance, in the extended reality environment. In one example, the wearable extended reality appliance may be active when the power status of the wearable extended reality appliance is “on.” The wearable extended reality appliance may be turned on by pressing a power button on the wearable extended reality appliance such that power is delivered to individual components of the wearable extended reality appliance. In another example, the wearable extended reality appliance may be active when the user of the wearable extended reality appliance is engaged with the extended reality environment or when digital content is ready for display within the extended reality environment via a wearable extended reality appliance. 
The user of the wearable extended reality appliance may be engaged with the wearable extended reality appliance when it is turned on, when the user is ready to use the wearable extended reality appliance (e.g., computing device is on and the user is near the physical display), and/or when the user is currently using the wearable extended reality appliance. In yet another example, the wearable extended reality appliance may be active when the wearable extended reality appliance is connected to, or otherwise in communication with, a computing device. Alternatively, the wearable extended reality appliance may be inactive when the wearable extended reality appliance is disconnected from the computing device. In another embodiment, the usage status of the wearable extended reality appliance is determined based on data indicating when the wearable extended reality appliance is physically connected through a wire to a port of the computing device. As used herein, the term “physically connected” may be used to refer to a connection status of the wearable extended reality appliance in which the wearable extended reality appliance is attached via a wire to a computing device and/or to at least one peripheral device of the computing device including a keyboard, a mouse, or a monitor. When the wearable extended reality appliance is physically connected to the computing device through a wire to a port (e.g., a port capable of facilitating the transmission of data related to the usage status of the wearable extended reality appliance) of the computing device, the wearable extended reality appliance may be configured to transmit information directly or indirectly to the computing device. In one example, the wearable extended reality appliance may be configured to charge and transmit information to the computing device simultaneously. In another example, the wearable extended reality appliance may be physically connected to a wired keyboard connectable to a computing device via at least one input of the computing device. In another example, the wearable extended reality appliance may be physically connected to a wireless keyboard in communication with the computing device. In yet another example, the wearable extended reality appliance may be physically connected to a physical display in communication with the computing device. In another embodiment, the usage status of the wearable extended reality appliance is determined based on input from a sensor indicating when the wearable extended reality appliance is worn. As used herein, the term “sensor” may relate to any device in communication with the wearable extended reality appliance and/or the computing device configured to detect and/or measure a property associated with the user, the user's action, the user's environment, and/or a property associated with the wearable extended reality appliance. In one example, sensor data may be based on information captured using one or more sensors of an input device in communication with the computing device. In another example, sensor data may be based on information captured using one or more sensors of the extended reality appliance. In yet another example, sensor data may be based on information captured using a combination of one or more sensors of an input device in communication with the computing device and one or more sensors of the extended reality appliance.
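As a non-limiting illustration, several of the indicators discussed above (power, a wired or wireless connection, a worn signal from a sensor, battery level, and recency of interaction) might be collapsed into a coarse usage status as in the following hypothetical Python sketch; the field names, thresholds, and status labels are assumptions introduced here for clarity.

# Illustrative sketch only; the signal names and thresholds are assumptions for clarity.
from dataclasses import dataclass


@dataclass
class ApplianceSignals:
    powered_on: bool
    connected: bool                # physical cable to a port, or an established wireless channel
    worn: bool                     # e.g., from a proximity or electrical-impedance sensor
    battery_level: float           # 0.0 .. 1.0
    seconds_since_interaction: float


def usage_status(signals: ApplianceSignals,
                 min_battery: float = 0.05,
                 idle_limit_s: float = 300.0) -> str:
    """Collapse several indicators into a coarse usage status for display-mode selection."""
    if not signals.powered_on or not signals.connected:
        return "inactive"
    if signals.battery_level < min_battery:
        return "inactive"
    if signals.worn and signals.seconds_since_interaction < idle_limit_s:
        return "active"
    return "idle"


print(usage_status(ApplianceSignals(True, True, True, 0.8, 12.0)))  # -> "active"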
In some embodiments, the sensor may include one or more image sensors (e.g., configured to capture images and/or videos of a user of the appliance or of an environment of the user), one or more motion sensors (such as an accelerometer, a gyroscope, a magnetometer, etc.), one or more positioning sensors (such as GPS, outdoor positioning sensor, indoor positioning sensor, etc.), one or more temperature sensors (e.g., configured to measure the temperature of at least part of the appliance and/or of the environment), one or more contact sensors, one or more proximity sensors (e.g., configured to detect whether the appliance is currently worn), one or more electrical impedance sensors (e.g., configured to measure electrical impedance of the user), one or more eye tracking sensors, such as gaze detectors, optical trackers, electric potential trackers (e.g., electrooculogram (EOG) sensors), video-based eye-trackers, infra-red/near infra-red sensors, passive light sensors, or any other technology capable of determining a usage status of the wearable extended reality appliance. The computing device may use input data (e.g., stimulus, response, command, and/or instruction targeted to a processing device) from at least one sensor to determine the usage status of the wearable extended reality appliance. For example, an input may be received by the at least one processor via an input interface (e.g., input interface430ofFIG.4and/or input interface330ofFIG.3), by a sensor associated with the wearable extended reality appliance (e.g., sensor interface470or370), by a different computing device communicatively coupled to the wearable extended reality appliance (e.g., mobile device206and/or remote processing unit208ofFIG.1), or any other source of input, for example, a camera (e.g., as gesture input or input relating to the usage of the wearable extended reality appliance). As used herein, “input from a sensor indicating when the wearable extended reality appliance is worn” may relate to any sensor data indicating that the wearable extended reality appliance is currently on, donned by, or a part of a user as a wearable electronic device such that the wearable extended reality appliance is capable of presenting an extended reality to the user. In one example, a proximity sensor, or combination of proximity sensors, connected to a physical display may provide sensor data indicating a presence of the wearable extended reality appliance in proximity to the physical display. For example, data from proximity sensors, or a combination of proximity sensors may be used to determine that the wearable extended reality appliance is in a position relative to the computing device or a peripheral device that is indicative of the wearable extended reality appliance being worn. In another example, an electrical impedance sensor and/or a motion sensor included in the wearable extended reality appliance may provide sensor data indicating the user is ready to engage or interact with the extended reality environment via the wearable extended reality appliance. In another embodiment, the usage status of the wearable extended reality appliance is determined based on image data captured using an image sensor. The term “image sensor” may include any instrument or group of instruments capable of converting rays of light (e.g., photons) into electrical signals. Examples of image sensors include CCD and CMOS arrays. Other types of image sensors include Lidar and radar sensors. 
In some examples, the image sensor may be included in the extended reality appliance, in the computing device, in an input device, and/or in an environment of the user. In one example, the image sensor may be in or on a laptop or computer monitor in communication with the computing device such as an integrated, built-in, or standalone webcam. In another example, the image sensor may be a part of the extended reality appliance. As used herein, the term “image data” may relate to any data captured by one or more image sensors and may be understood as described earlier. At least one processor may be configured to determine the usage status of the wearable extended reality appliance based on image data from any combination of signals emitted and/or reflected off physical objects in the extended reality environment, data stored in memory (e.g., for the location of stationary objects), predicted behavior and/or preferences of the wearer of the wearable extended reality appliance, ambient conditions (e.g., light, sound, dust), and any other criterion for determining a relative position of the wearable extended reality appliance and/or physical objects in the extended reality environment. The signals may include any combination of image data and/or IR signals detected by a camera (e.g., image sensor472ofFIG.4), position, location, and orientation data acquired by an IMU and/or GPS unit (e.g., motion sensor473), ultrasound, radio (e.g., Wi-Fi, Bluetooth, Zigbee, RFID) detected via suitable sensors (e.g., other sensors475). In one example, the image data may indicate the proximity of the user and whether the user is wearing the wearable extended reality appliance. In another example, the image data may indicate the position of the user relative to the wearable extended reality appliance. In another example, the image data may indicate the position of the user and/or the wearable extended reality appliance relative to the physical display. In some examples, the image data captured using the image sensor may be analyzed to determine the usage status of the wearable extended reality appliance. For example, a machine learning model (such as a classification model) may be trained using training examples to determine usage statuses of wearable extended reality appliances from images and/or videos. An example of such training example may include a sample image and/or sample video associated with a sample wearable extended reality appliance, together with a label indicating the usage status of the sample wearable extended reality appliance. The trained machine learning model may be used to analyze the image data captured using the image sensor and determine the usage status of the wearable extended reality appliance. In some examples, at least part of the image data may be analyzed to calculate a convolution of the at least part of the image data and thereby obtain a result value of the calculated convolution. Further, in response to the result value of the calculated convolution being a first value, one usage status of the wearable extended reality appliance may be determined, and in response to the result value of the calculated convolution being a second value, another usage status of the wearable extended reality appliance may be determined. 
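The convolution-based branching described above might, in a simplified and purely illustrative form, look like the following sketch, where a single kernel response stands in for a trained classification model; the kernel, threshold, and status labels are placeholders, not values drawn from the disclosure.

# Illustrative sketch only; a single kernel response stands in for the trained model described above.
import numpy as np


def convolution_result(image_patch: np.ndarray, kernel: np.ndarray) -> float:
    """Result value for one image region (sum of elementwise products with the kernel)."""
    return float(np.sum(image_patch * kernel))


def usage_status_from_image(image_patch: np.ndarray,
                            kernel: np.ndarray,
                            threshold: float = 0.0) -> str:
    """Map the convolution result value onto one of two usage statuses."""
    result = convolution_result(image_patch, kernel)
    return "worn" if result > threshold else "not_worn"


# A trained classifier would supply the kernel(s) and threshold; random values are placeholders here.
rng = np.random.default_rng(0)
patch = rng.random((8, 8))
kernel = rng.standard_normal((8, 8))
print(usage_status_from_image(patch, kernel))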
In some examples, the image data may be analyzed to determine a type of environment of the wearable extended reality appliance, for example using scene recognition algorithms, and the usage status of the wearable extended reality appliance may be determined based on the type of the environment of the wearable extended reality appliance. In some examples, the image data may be analyzed to detect objects in the environment of the wearable extended reality appliance, for example using object detection algorithms, and the usage status of the wearable extended reality appliance may be determined based on the objects in the environment of the wearable extended reality appliance. In some examples, the image data may be analyzed to detect activities in the environment of the wearable extended reality appliance, for example using event detection algorithms, and the usage status of the wearable extended reality appliance may be determined based on the activities in the environment of the wearable extended reality appliance. In another embodiment, the usage status of the wearable extended reality appliance is determined based on data indicating when a communication channel is established between the computing device and the wearable extended reality appliance. The term “communication channel” includes any single or group of wired or wireless pathways or other medium over which data or information exchanges may occur. Such channels may permit the transport of data and/or information signals from one or more transmitters to one or more receivers. In one example, a wired transmission medium may relate to a wired communication channel configured to transport data and/or information between the computing device and the wearable extended reality appliance. In another example, a wireless transmission medium may relate to a wireless communication channel configured to transport data and/or information from between the computing device and the wearable extended reality appliance via at least one wireless network. For example, one or more components of the wearable extended reality appliance and/or computing device may communicate directly through a dedicated communication network including BLUETOOTH™, BLUETOOTH LE™ (BLE), Wi-Fi, near field communications (NFC), and/or any other suitable communication methods that provide a medium for exchanging data and/or information between the wearable extended reality appliance and the computing device. As used herein, a communication channel is “established” between the computing device and the wearable extended reality appliance when the computing device is connected to the wearable extended reality appliance and able to transmit information to and/or receive information from the wearable extended reality appliance via the communication channel. The term “data indicating when a communication channel is established” may relate to any information used by the computing device and/or wearable extended reality appliance to determine the connection status of the wearable extended reality appliance (e.g., connected or disconnected to the computing device). In one example, a connection status of the wearable extended reality appliance may indicate that a communication channel is established (e.g., exchange of data or information is possible) between the computing device and the wearable extended reality appliance. 
In another example, a connection status of the wearable extended reality appliance may indicate that a communication channel is not established (e.g., exchange of data or information is not possible) between the computing device and the wearable extended reality appliance. In another embodiment, the usage status of the wearable extended reality appliance is determined based on data indicative of a battery status of the wearable extended reality appliance. As used herein, the “battery status” may refer to the amount of battery life remaining in the wearable extended reality appliance and/or the time it will take to discharge and/or charge the wearable extended reality appliance if connected to a power source. In some examples, data indicative of a battery status of the wearable extended reality appliance may relate to any information used by the computing device and/or wearable extended reality appliance to determine the battery status of the wearable extended reality appliance and/or determine if the wearable extended reality appliance is suitable for use in the extended reality environment. In one example, when the battery status indicates the battery is full or partially charged, the wearable extended reality appliance may be suitable for use in the extended reality environment. In another example, when the battery status indicates the battery is low or depleted, the wearable extended reality appliance may not be suitable for use in the extended reality environment. Some embodiments involve selecting a display mode based on the usage status of the wearable extended reality appliance. The term “selecting a display mode” may refer to picking or choosing a display mode for displaying digital objects from among a plurality of display modes, as display modes are described earlier. For example, the computing device may select a first display mode or a second display mode in view of the determined usage status (e.g., a first usage status or a second usage status). In some embodiments, the usage status of a wearable extended reality appliance may inform the determination of the display mode in which certain digital objects are displayed for presentation to a user via a physical display of a computing device and/or a wearable extended reality appliance. In one example, a first usage status of the wearable extended reality appliance may inform the computing device that the wearable extended reality appliance is unsuitable for use in the extended reality environment at or for a particular time. When the usage status of the wearable extended reality appliance indicates that the wearable extended reality appliance is in a first usage status, the computing device may select a first display mode in which digital objects are displayed via the physical display. By way of a non-limiting example, turning toFIG.55A, user5512is shown not wearing a wearable extended reality appliance. Because user5512is not wearing a wearable extended reality appliance, data from at least one sensor of a wearable extended reality appliance and/or at least one image sensor of computing device5514indicates the wearable extended reality appliance is not being worn by user5512. In view of data indicating the wearable extended reality appliance is not being worn by user5512, a processor of computing device5514determines the usage status of the wearable extended reality appliance is a first usage status.
When the usage status of a wearable extended reality appliance is a first usage status, the at least one processor selects the first display mode5510for displaying real digital objects5519A to5519C to user5512via physical display5515of computing device5514. In another example, a second usage status of the wearable extended reality appliance may inform the computing device that the wearable extended reality appliance is suitable for use in an extended reality environment at or for a particular time. When the usage status of the wearable extended reality appliance indicates that the wearable extended reality appliance is in a second usage status, the computing device may select a second display mode in which virtual digital objects are displayed via the wearable extended reality appliance. In some embodiments, the computing device may be configured to switch between different selected display modes in real time or near real time in response to a change in usage status of the wearable extended reality appliance. By way of a non-limiting example, turning toFIG.55B, user5512is shown wearing wearable extended reality appliance5513. Because user5512is wearing wearable extended reality appliance5513, data from at least one sensor of the wearable extended reality appliance5513and/or at least one image sensor of computing device5514indicates wearable extended reality appliance5513is being worn by user5512. Additionally, wearable extended reality appliance5513is shown to be physically connected to computing device5514and a communication channel is established between computing device5514and wearable extended reality appliance5513. Because wearable extended reality appliance5513is physically connected to and in communication with the computing device5514, data received by the computing device indicates wearable extended reality appliance5513is in condition for use. In view of data indicating that wearable extended reality appliance5513is being worn by user5512, is connected to computing device5514, and is in communication with computing device5514, a processor of computing device5514determines the usage status of the wearable extended reality appliance is a second usage status. When the usage status of wearable extended reality appliance5513is a second usage status, the at least one processor selects the second display mode5511for displaying real digital objects5519B and5519C to user5512via physical display5515and virtual digital objects5520and5521via wearable extended reality appliance5513. Some embodiments involve, in response to the display mode selection, outputting for presentation the plurality of digital objects in a manner consistent with the selected display mode. The term “outputting for presentation” relates to a transmission of signals to cause digital content or virtual digital content to be presented for viewing by a user via a physical display or via a wearable extended reality appliance. In some embodiments, the outputting of digital objects for presentation may occur after a display mode has been selected and/or after a display mode has changed. As used herein, “a manner consistent with the selected display mode” may refer to the way in which real digital objects and/or virtual digital objects may be displayed, shown, or caused to appear for view by a user according to the method specified by the selected display mode. For example, when the first display mode is selected, the computing device may output the plurality of digital objects for display via the physical display.
In another example, when the second display mode is selected, the computing device may output some of the plurality of digital objects for display via the physical display and at least one other of the plurality of digital objects for display via the wearable extended reality appliance. By way of a non-limiting example, turning toFIG.55A, when the first display mode5510is selected, at least one processor of the computing device5514outputs for presentation the plurality of digital objects in a manner consistent with the first display mode5510. Here, the plurality of digital objects includes real digital objects5519A to5519C to be displayed to user5512via physical display5515of computing device5514. Turning toFIG.55B, when the second display mode5511is selected, at least one processor of the computing device5514outputs for presentation the plurality of digital objects in a manner consistent with the second display mode5511. Here, the plurality of digital objects includes real digital objects5519B and5519C to be displayed to user5512via physical display5515and virtual digital objects5520and5521to be displayed to user5512via wearable extended reality appliance5513. In some embodiments, when the selected display mode is the second display mode, outputting for presentation the plurality of digital objects includes causing the at least one other of the plurality of digital objects to be displayed via the wearable extended reality appliance while the some of the plurality of digital objects are concurrently displayed via the physical display. “At least one other” of the plurality of digital objects are generated by the computing device in the second display mode and presented via the wearable extended reality appliance. The at least one other of the plurality of digital objects includes a virtual digital object or virtual digital objects consistent with the virtual digital objects described above. As used herein, the term “concurrently displayed” refers to the simultaneous display of real digital objects to the user via the physical display and virtual digital objects to the user via the wearable extended reality appliance. For example, in the second display mode, the at least one other of the plurality of digital objects (i.e., at least one virtual digital object) may be displayed to the user via the wearable extended reality appliance while some of the plurality of digital objects (i.e., some of the real digital objects) are displayed to the user via the physical display at the same time. By way of a non-limiting example, turning toFIG.55B, when the second display mode5511is selected, at least one processor of computing device5514outputs for presentation the plurality of digital objects in a manner consistent with the second display mode5511. Here, the plurality of digital objects includes real digital objects5519B and5519C to be displayed to user5512via physical display5515and virtual digital objects5520and5521to be displayed to user5512via wearable extended reality appliance5513. At least one other of the plurality of digital objects (e.g., virtual digital object5520) is displayed via wearable extended reality appliance5513while the some of the plurality of digital objects (e.g., real digital object5519B and real digital object5519C) are concurrently displayed via the physical display5515. 
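By way of a non-limiting illustration, selecting a display mode from the usage status and then outputting the plurality of digital objects in a manner consistent with that mode might be sketched as follows in Python; the flags controlling mirroring and appliance-only presentation, and the status labels, are hypothetical assumptions rather than disclosed interfaces.

# Illustrative sketch only; function and field names are assumptions, not the disclosed interfaces.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class DigitalObject:
    name: str
    move_to_appliance: bool = False  # in the second mode, present via the wearable appliance
    mirror_on_both: bool = False     # present concurrently on the display and via the appliance
    appliance_only: bool = False     # e.g., a virtual volume controller excluded from the screen


def select_display_mode(appliance_usage_status: str) -> str:
    """Pick the first display mode unless the appliance is worn, connected, and ready for use."""
    return "second" if appliance_usage_status == "active" else "first"


def output_for_presentation(mode: str, objects: List[DigitalObject]) -> Dict[str, List[str]]:
    screen: List[str] = []
    appliance: List[str] = []
    for obj in objects:
        if mode == "first":
            if not obj.appliance_only:
                screen.append(obj.name)  # everything else stays on the physical display
            continue
        if obj.mirror_on_both:
            screen.append(obj.name)
            appliance.append(obj.name)   # same object, concurrently on both
        elif obj.appliance_only or obj.move_to_appliance:
            appliance.append(obj.name)
        else:
            screen.append(obj.name)
    return {"physical_display": screen, "wearable_appliance": appliance}


objects = [DigitalObject("weather", move_to_appliance=True), DigitalObject("document"),
           DigitalObject("settings", mirror_on_both=True),
           DigitalObject("volume_controller", appliance_only=True)]
print(output_for_presentation(select_display_mode("active"), objects))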
In another embodiment, when the selected display mode is the second display mode, outputting for presentation the plurality of digital objects includes presenting at least one digital object concurrently via the wearable extended reality appliance and via the physical display. For example, in the second display mode, at least one digital object may be displayed to the user via the wearable extended reality appliance and via the physical display at the same time. The at least one digital object may be presented to the user as a virtual digital object via the wearable extended reality appliance and concurrently presented to the user as a real digital object via the physical display. By way of a non-limiting example, turning toFIG.56, when the second display mode is selected, at least one processor of the computing device outputs for presentation the plurality of digital objects including real digital objects5615to be displayed to user5612via physical display5614and virtual digital objects5616, virtual digital object5618, and virtual digital object5619to be displayed to user5612via wearable extended reality appliance5613. Here, a digital object for controlling computer settings is concurrently displayed via wearable extended reality appliance5613(e.g., as virtual digital object5618) and via the physical display5614(e.g., as one of real digital objects5615). Some embodiments involve determining to display, in the second display mode, the at least one other of the plurality of digital objects via the wearable extended reality appliance. As used herein, “determining to display in the second display mode” may refer to a decision or causation to present particular digital objects, or groups of digital objects, to the user in the second display mode. In some embodiments, determining whether to display in the second display mode may be based on data and/or information related to past and/or present usage of the wearable extended reality appliance and/or any other peripheral device in communication with the computing device. A peripheral device may include any input and/or output device or devices directly or indirectly connected to, or otherwise in communication with, a computing device and/or the extended reality appliance. In some embodiments, determining to display in the second display mode may be based on data and/or information related to a specific digital object and/or groups of digital objects configured to be presented to a user. For example, information related to the usage, form, and/or function of a specific digital object, or groups of digital objects, may be utilized to determine which digital object, or groups of digital objects, are to be displayed for presentation via a physical display of a computing device and which of the digital objects are to be displayed via the wearable extended reality appliance. In one embodiment, the determining to display, in the second display mode, the at least one other of the plurality of digital objects via the wearable extended reality appliance is based on user input. The term “user input” may refer to any information and/or data that is sent by the user and received by the computing device for processing. The user may transmit input data and/or information from a variety of input devices, for example, a keyboard, a mouse, a touch pad, a touch screen, one or more buttons, a joystick, a microphone, an image sensor, and any other device configured to detect physical or virtual input. 
The received input may be in the form of at least one of text, sounds, speech, hand gestures, body gestures, tactile information, and any other type of physical or virtual input generated by the user. In some embodiments, the input received from the user may be used to determine which of the digital objects to display via the physical display and which to display via the wearable extended reality appliance. In some embodiments, the user's input or inputs may be detected and converted into virtual interactions with the extended reality environment, thereby enabling the user to select, or otherwise interact with, digital objects within the extended reality environment. For example, a user may drag and drop a digital object from a real environment to an extended reality environment, and vice versa. In one example, when the computing device is in the second display mode, the user may select, or otherwise interact with, at least one digital object via hand gestures. For example, the user may drag a real digital object displayed via a physical display from the physical display to the wearable extended reality appliance such that the digital object is displayed as a virtual digital object. In another example, when the computing device is in the second display mode, the user may select, or otherwise interact with, at least one digital object via eye gestures. For example, the user may drag a virtual digital object displayed via the wearable extended reality appliance from the extended reality environment to the physical display such that the digital object is displayed as a real digital object. In yet another example, when the computing device is in the second display mode, the user may select, or otherwise interact with, at least one digital object via a cursor by using a computer mouse. For example, the user may drag and drop real digital objects and/or virtual digital objects between the physical display and the wearable extended reality appliance within the extended reality environment. In one embodiment, the determining to display, in the second display mode, the at least one other of the plurality of digital objects via the wearable extended reality appliance is based on a type of input device connected to the computing device. As used herein, the term “input device” may refer to any device, or combination of devices, configured to provide information and/or data to the computing device for processing. The input device may include any of the above-described input devices configured to detect a physical and/or digital input. As used herein, the term “type of input device” may refer to the category of input device, or input devices, in communication with a computing device. In some embodiments, the category of input device may relate to the use of certain input devices with a particular computing device (e.g., a wireless keyboard vs. a wired keyboard) and/or in a particular environment (e.g., at a user's home office space vs. work office space). In some embodiments, the category of input device, or input devices, may inform which digital object or digital objects are displayed via the wearable extended reality appliance. 
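The drag-and-drop interactions described above could be handled, for example, by updating a routing table whenever a drag gesture ends over one surface or the other. The DragEvent structure and the surface labels in this sketch are hypothetical and are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class DragEvent:
    object_id: str
    drop_target: str  # assumption: "physical_display" or "xr_environment"

def apply_user_input(routing: dict, event: DragEvent) -> dict:
    """Update which surface each digital object is shown on (illustrative only)."""
    if event.drop_target == "xr_environment":
        routing[event.object_id] = "wearable_xr_appliance"  # becomes a virtual digital object
    elif event.drop_target == "physical_display":
        routing[event.object_id] = "physical_display"       # becomes a real digital object
    return routing

# Example: the user drags a document from the screen into the XR environment.
routing = {"document": "physical_display", "clock_widget": "wearable_xr_appliance"}
routing = apply_user_input(routing, DragEvent("document", "xr_environment"))
print(routing)
```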
In one embodiment, input devices used in conjunction with one particular computing device (e.g., a desktop computer) may relate to one category of input devices (e.g., devices used at a user's work) and input devices used in conjunction with another particular computing device (e.g., a laptop) may relate to another category of input devices (e.g., devices used at a user's home). For example, an input device (e.g., a wired mouse) used in conjunction with the desktop computer may cause at least one particular digital object to be presented to the user via the wearable extended reality appliance, and an input device (e.g., an integrated trackpad) used in conjunction with the laptop computer may cause another particular digital object to be presented to the user via the wearable extended reality appliance. In another embodiment, at least one input device used with a particular computing device's operation in one setting (e.g., the user's home network) may relate to one category of input devices and at least another input device used with the particular computing device's operation in another setting (e.g., the user's office network) may relate to another category of input devices. For example, when a user is using a laptop on their home network, at least one particular digital object may be presented to the user via the wearable extended reality appliance, and when a user is using the laptop on their work network, at least another particular digital object may be presented to the user via the wearable extended reality appliance. In another example, when a user is using a laptop on their home network, at least one particular digital object having a first appearance may be presented to the user via the wearable extended reality appliance, and when a user is using the laptop on their work network, the at least one particular digital object having a second appearance may be presented to the user via the wearable extended reality appliance. Additionally, or alternatively, the type of input device may refer to the unique characteristics of a particular input device in communication with a computing device. In some embodiments, the unique characteristics of an input device, or input devices, in communication with the computing device may inform which digital object or digital objects are displayed via the wearable extended reality appliance. In one example, in the second display mode, when a first input device (e.g., a wired keyboard) is connected to the computing device, at least one of a first group of digital objects may be presented to the user via the wearable extended reality appliance. In another example, in the second display mode, when a second input device (e.g., a wireless keyboard) is connected to the computing device, at least one of a second group of digital objects may be presented to the user via the wearable extended reality appliance. In yet another example, in the second display mode, when the second input device and a third input device (e.g., a wireless mouse) are connected, at least one of a second group of digital objects and/or a third group of digital objects may be presented to the user via the wearable extended reality appliance. In one embodiment, determining to display, in the second display mode, the at least one other of the plurality of digital objects via the wearable extended reality appliance is based on past user actions. 
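One way to picture the device-type-based behavior described above is a lookup from the set of connected input devices to the group of digital objects offered via the wearable extended reality appliance. The device names and groupings below are invented for illustration and carry no significance beyond that.

```python
# Hypothetical mapping from connected input-device types to digital-object groups
# that may be presented via the wearable extended reality appliance in the second mode.
DEVICE_GROUPS = {
    "wired_keyboard": {"document_editor", "spreadsheet"},
    "wireless_keyboard": {"document_editor", "messaging"},
    "wireless_mouse": {"drawing_canvas"},
    "integrated_trackpad": {"media_player"},
}

def objects_for_connected_devices(connected_devices):
    """Union of the object groups associated with every connected device."""
    selected = set()
    for device in connected_devices:
        selected |= DEVICE_GROUPS.get(device, set())
    return selected

# Example: a wireless keyboard and a wireless mouse are connected.
print(objects_for_connected_devices({"wireless_keyboard", "wireless_mouse"}))
```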
The term “past user actions” may relate to stored information and/or data corresponding to a user's former interactions with the extended reality environment. In some embodiments, a user's past interactions with the extended reality environment may include a user's prior virtual digital object selection, a user's interactions with digital objects in particular settings and/or at particular times, privacy levels associated with different virtual digital objects, the relationship between virtual digital objects and physical objects, the relationship between virtual digital objects and real digital objects, the user's preferences, the user's past behavior, and any other information and/or data associated with a user's past usage within an extended reality environment. In one example, a user's past actions may relate to the last time the user docked the at least one digital object to a particular physical object. Because of the past action concerning the docking of the at least one digital object, it may be determined to display the digital object via a wearable extended reality appliance (e.g., docked to the particular physical object) and/or to display the digital object via a physical display. In another example, a user's past actions may relate to the last digital object a user was interacting with in a particular setting. Because of the past action concerning the last digital object a user was interacting with in a particular setting, it may be determined to display the digital object via a wearable extended reality appliance (e.g., virtually displaying a digital object relative to one physical object within a work setting) and/or to display the digital object via a physical display (e.g., not virtually displaying a digital object relative to one physical object within a home setting and only displaying the digital object via a physical display). In another example, a user's past actions may relate to the last digital object a user was interacting with at a particular time. Because of the past action concerning the last digital object a user was interacting with at a particular time, it may be determined to display the digital object via a wearable extended reality appliance (e.g., if a digital object is routinely opened for display via a wearable extended reality appliance at a first time of the day) and/or to display the digital object via a physical display (e.g., if the same digital object is routinely opened for display via a physical display at a second time of the day). In another example, a user's past actions may relate to display preferences corresponding to certain virtual digital objects. Because of the past action concerning display preferences corresponding to certain virtual digital objects, it may be determined to display the certain digital object via a wearable extended reality appliance and/or a physical display (e.g., when they are commonly opened when other digital objects are open). In one embodiment, determining to display, in the second display mode the at least one other of the plurality of digital objects via the wearable extended reality appliance is based on at least one predefined rule. The term “predefined rule” includes any predetermined condition that serves as a trigger for object display. In some embodiments, a predefined rule may delineate parameters for what digital objects to display, or not display, as virtual digital objects via the wearable extended reality appliance when the computing device is in the second display mode. 
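Past-user-action-based routing might be realized as a query over a stored usage history, as sketched below. The record fields and the "most recent matching context wins" policy are assumptions chosen only to make the idea concrete.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PastAction:
    object_id: str
    surface: str      # "physical_display" or "wearable_xr_appliance"
    setting: str      # assumed context label, e.g., "home" or "work"
    hour_of_day: int  # 0-23

def surface_from_history(history: List[PastAction], object_id: str,
                         setting: str, hour_of_day: int) -> Optional[str]:
    """Return the surface most recently used for this object in a similar context."""
    for action in reversed(history):  # newest entries last
        if (action.object_id == object_id
                and action.setting == setting
                and abs(action.hour_of_day - hour_of_day) <= 1):
            return action.surface
    return None  # no matching past action; fall back to other heuristics

# Example: the clock widget was last opened virtually at home in the morning.
history = [PastAction("clock_widget", "wearable_xr_appliance", "home", 8)]
print(surface_from_history(history, "clock_widget", "home", 9))
```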
In some embodiments, at least one predefined rule may be triggered at or for a particular time in response to information and/or data corresponding to at least one action, option, and/or environment that may be distinguishable from some other action, option, and/or environment. For example, at least one predefined rule may be defined such that a weather widget must be presented virtually. In another example, at least one predefined rule may be defined such that a text editing application must be presented virtually to allow a user to edit in a magnified view. In some embodiments, multiple predefined rules may be triggered simultaneously or over a period of time. For example, at least one predefined rule may be defined such that when mathematical equations are present on a physical display, a calculator and a notepad are presented virtually in an extended reality environment via the wearable extended reality appliance while other virtual digital objects are caused to disappear from the extended reality environment. In another embodiment, a predefined rule may call for a particular group of digital objects to be displayed, or not displayed, as virtual digital objects, in response to user input. For example, at least one predefined rule may be defined such that when a user opens an application using a mouse as an input, the application is displayed to the user via the physical display, and when a user opens an application using hand gestures as an input, the application is displayed to the user via the wearable extended reality appliance. In some embodiments, a predefined rule may call for a particular group of digital objects to be displayed, or not displayed, as virtual digital objects, in response to certain operating conditions of the computing device and/or the wearable extended reality appliance. For example, at least one predefined rule may be defined such that when the battery of the wearable extended reality appliance and/or the computing device is low (e.g., below 20%), a first subset of applications may be available for display via the wearable extended reality appliance, and when the battery of the wearable extended reality appliance and/or the computing device is very low (e.g., below 10%), a second subset of applications that is smaller than the first subset of applications may be available for display via the wearable extended reality appliance. In yet another embodiment, a predefined rule may define the manner in which certain virtual digital objects are presented to the user. For example, at least one predefined rule may be defined such that when a particular digital object, such as a clock widget, is displayed virtually, the clock widget is displayed at a particular location on a user's wall. Some embodiments involve identifying a change in the usage status of the wearable extended reality appliance. In one example, while the plurality of digital objects are presented in a manner consistent with the first display mode, the at least one processor may perform operations for identifying a change in the usage status of the wearable extended reality appliance from a first usage status corresponding to the first display mode to a second usage status corresponding to the second display mode. In another example, while the plurality of digital objects are presented in a manner consistent with the second display mode, the at least one processor may perform operations for identifying a change in the usage status of the wearable extended reality appliance. 
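Predefined rules of the kind described above can be thought of as conditions evaluated against the current context that add, force, or restrict the set of virtually presented objects. The specific rules, context fields, and battery threshold in this sketch are illustrative assumptions, not rules prescribed by the disclosure.

```python
ALWAYS_VIRTUAL = {"weather_widget"}               # rule: always presented virtually
LOW_BATTERY_SUBSET = {"weather_widget", "clock"}  # rule: only these when battery is low

def virtual_objects_from_rules(candidates: set, context: dict) -> set:
    """Apply simple predefined rules to decide which objects are shown virtually."""
    selected = set(candidates) | ALWAYS_VIRTUAL
    if "equation" in context.get("visible_content", set()):
        selected |= {"calculator", "notepad"}     # rule: math on screen adds tools
    if context.get("battery", 1.0) < 0.20:
        selected &= LOW_BATTERY_SUBSET            # rule: low battery restricts the set
    return selected

# Example: equations are visible on the physical display and battery is healthy.
print(virtual_objects_from_rules(
    {"clock"}, {"visible_content": {"equation"}, "battery": 0.8}))
```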
As used herein, a “change in the usage status” of the wearable extended reality appliance may relate to any identifiable adjustment, modification, revision, shift, transition, or variation in a state of use, condition for use, and/or suitability for use of the wearable extended reality appliance. For example, an identifiable change in the usage status may include a change from one particular usage status (e.g., a first usage status of a high battery level) of the wearable extended reality appliance to another usage status (e.g., a second usage status of a low battery level) or vice-versa that is inconsistent with a current display mode. In another example, an identifiable change in the usage status may relate to a change in the user's degree of interaction with digital content (e.g., the user has not interacted with digital content in the past 5 minutes). In another example, an identifiable change in the usage status may relate to a change in the connection status of the wearable extended reality appliance (e.g., connecting the wearable extended reality appliance to a computing device). In one example, a change in the usage status of the wearable extended reality appliance may relate to a change from the first usage status corresponding to the first display mode to a second usage status corresponding to the second display mode. In another example, a change in the usage status of the wearable extended reality appliance may relate to a change from a second usage status corresponding to the second display mode to a first usage status corresponding to the first display mode. In yet another example, a change in the usage status of the wearable extended reality appliance may relate to a change from a second usage status corresponding to the second display mode to a third usage status corresponding to a display mode that is different from the first display mode and the second display mode. As used herein, “identifying a change in the usage status” may include sensing, detecting, or receiving an indication that a usage status, as previously described, has changed. The identification may be conducted in a manner similar to the processes for determining a usage status of the wearable extended reality appliance described above. In one embodiment, any identifiable measure, or combination of measures, of the wearable extended reality appliance's state, condition, and/or suitability for use may be utilized to identify a change in the usage status of the wearable extended reality appliance. For example, a change in the usage status of the wearable extended reality appliance may be detected in response to battery voltage, temperature of operating components, ambient light and/or sound conditions, the location of physical objects, and/or any other factor related to the wearable extended reality appliance's usage. In one embodiment, the change in the usage status of the wearable extended reality appliance may be identified based on any form of data and/or information received by, processed by, and/or stored by at least one computing device that is related to a change in the wearable extended reality appliance's state, condition, and/or suitability for use. In another example, the change in the usage status of the wearable extended reality appliance may be identified based on at least one input received by the computing device and/or the wearable extended reality appliance indicating the wearable extended reality appliance is appropriate for use or no longer appropriate for use. 
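Identifying a change in the usage status can be reduced to re-deriving the current status and comparing it with the previously known one. In the sketch below, read_current_status is an assumed callable standing in for whatever combination of sensor, connectivity, and activity checks an implementation uses; the status labels are likewise illustrative.

```python
from typing import Callable, Optional

def detect_status_change(
    previous_status: str,
    read_current_status: Callable[[], str],
) -> Optional[str]:
    """Return the new usage status if it differs from the previously known one.

    `read_current_status` re-evaluates the appliance's state (e.g., worn,
    connected, active, battery level) and returns a status label such as
    "first" or "second".
    """
    current = read_current_status()
    return current if current != previous_status else None

# Example: the previously known status was "second"; a re-check now reports "first".
print(detect_status_change("second", lambda: "first"))  # -> "first"
```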
Some embodiments involve updating the display mode selection and/or revising the presentation of the plurality of digital objects in response to an identified change of the usage status of the wearable extended reality appliance. “Updating the display mode selection” may relate to a change to the display mode in response to an identified change to the usage status of the wearable extended reality appliance. In some embodiments, the particular display mode in which the computing device was operating prior to the identified change in usage status of the wearable extended reality appliance may be updated to another display mode that is consistent with the present usage status. In some embodiments, when the plurality of digital objects are presented in a manner consistent with the second display mode and a change in the usage status of the wearable extended reality appliance is identified, in response to the identified change in the usage status, the at least one processor may perform operations for updating the display mode selection from the second display mode to the first display mode. For example, in response to an identified change from the second usage status corresponding to the second display mode to a first usage status, the display mode may be updated from the second display mode to the first display mode. In another example, in response to an identified change from the first usage status corresponding to the first display mode to a second usage status, the display mode may be updated from the first display mode to the second display mode. By way of a non-limiting example, referring back toFIGS.55A and55B, computing device5514is configured to analyze input signals indicating when wearable extended reality appliance5513is in a first usage status or a second usage status. When the wearable extended reality appliance is in a first usage status (e.g., not being worn by user5512), as illustrated inFIG.55A, the plurality of digital objects (e.g., real digital objects5519A to5519C and cursor5518A) are presented to user5512via physical display5515in a manner consistent with the first display mode5510. When wearable extended reality appliance5513is in a second usage status (e.g., worn by user5512), as illustrated inFIG.55B, the plurality of digital objects (e.g., real digital objects5519B and5519C, virtual cursor5518B, virtual digital object5520, and virtual digital object5521) are presented to user5512in a manner consistent with the second display mode5511. As shown, real digital objects5519B and5519C are presented via the physical display5515. Virtual cursor5518B, virtual digital object5520, and virtual digital object5521are presented via wearable extended reality appliance5513. When computing device5514identifies the usage status of the wearable extended reality appliance5513has changed from a first usage status (as shown inFIG.55A) to a second usage status (as shown inFIG.55B), the display mode selection is updated. In response to the identified change in the usage status, computing device5514updates the display mode selection from a first display mode5510(as shown inFIG.55A) to a second display mode5511(as shown inFIG.55B). Additionally, when computing device5514identifies the usage status of the wearable extended reality appliance5513has changed from a second usage status (as shown inFIG.55B) to a first usage status (as shown inFIG.55A), the display mode selection is updated. 
In response to the identified change in the usage status, computing device5514updates the display mode selection from a second display mode5511(as shown inFIG.55B) to a first display mode5510(as shown inFIG.55A). Some embodiments involve automatically revising the presentation of the plurality of digital objects in response to an identified change in usage status and/or in response to an updated display mode selection of the wearable extended reality appliance. In one embodiment, when a change from the first usage status to the second usage status is identified, in response to the change in the usage status, the at least one processor may perform operations for automatically revising the presentation of the plurality of digital objects to be consistent with the second display mode. In another embodiment, when the display mode is updated from the second display mode to the first display mode, in response to the updated display mode selection, the at least one processor may perform operations for automatically revising the presentation of the plurality of digital objects to be consistent with the first display mode. As used herein, “revising the presentation of the plurality of digital objects” may relate to a change in the display of the plurality of digital objects to the user of the physical display and/or the wearable extended reality appliance. In some embodiments, the presentation of the plurality of digital objects may be revised in response to the updated display mode. As used herein, the term “automatically” may refer to a change, in real time or near real time, to the presentation of the plurality of digital objects in response to the identified change in the usage status, and/or a resulting change to the display mode in view of the identified change in the usage status, that is made without human intervention. In some embodiments, when the presentation of the plurality of digital objects is revised to be consistent with the second display mode, automatically revising the presentation of the plurality of digital objects may include causing a first digital object from the plurality of digital objects to disappear from the physical display, causing the first digital object to be presented via the wearable extended reality appliance, and/or causing an additional digital object excluded from the plurality of digital objects to be presented via the wearable extended reality appliance. In some embodiments, when the presentation of the plurality of digital objects is revised to be consistent with the first display mode, automatically revising the presentation of the plurality of digital objects may include causing a first digital object and a second digital object previously presented via the wearable extended reality appliance to reappear on the physical display, and/or causing a third digital object previously presented via the wearable extended reality appliance to disappear. By way of a non-limiting example, referring toFIGS.55A and55B, the computing device5514is configured to determine whether the usage status of the wearable extended reality appliance5513has changed from a first usage status (as shown inFIG.55A) to a second usage status (as shown inFIG.55B). 
When the computing device5514identifies a change from the first usage status to the second usage status (e.g., user5512puts on wearable extended reality appliance5513and wirelessly connects the wearable extended reality appliance5513to the computing device5514), computing device5514automatically revises the presentation of the plurality of digital objects to be consistent with the second display mode5511(as shown inFIG.55B). When the presentation of the plurality of digital objects is revised to be consistent with the second display mode5511, the automatically revised presentation of the plurality of digital objects includes causing a first digital object (real digital object5519A inFIG.55A) from the plurality of digital objects to disappear from the physical display5515and causing the first digital object (virtual digital object5520inFIG.55B) to be presented via the wearable extended reality appliance5513. Additionally, the automatically revised presentation of the plurality of digital objects includes causing an additional digital object (virtual digital object5521), excluded from the plurality of digital objects shown inFIG.55A, to be presented via the wearable extended reality appliance5513inFIG.55B. In another example, the computing device5514is configured to determine whether the usage status of the wearable extended reality appliance5513has changed from a second usage status (as shown inFIG.55B) to a first usage status (as shown inFIG.55A). When the computing device5514identifies a change from the second usage status to the first usage status (e.g., user5512takes off wearable extended reality appliance5513), computing device5514automatically revises the presentation of the plurality of digital objects to be consistent with the first display mode5510(as shown inFIG.55A). When the presentation of the plurality of digital objects is revised to be consistent with the first display mode5510, the automatically revised presentation of the plurality of digital objects includes causing a first digital object (virtual digital object5520inFIG.55B), previously presented via the wearable extended reality appliance5513, to reappear on the physical display as real digital object5519A inFIG.55A. Additionally, the automatically revised presentation of the plurality of digital objects excludes an additional digital object (virtual digital object5521) previously presented via the wearable extended reality appliance5513from display via physical display5515, such that virtual digital object5521is caused to disappear from display to user5512. Some embodiments involve a system for selectively controlling display of digital objects, the system comprising at least one processor programmed to: generate a plurality of digital objects for display in connection with use of a computing device operable in a first display mode and in a second display mode, wherein in the first display mode, the plurality of digital objects are displayed via a physical display connected to the computing device, and in the second display mode, some of the plurality of digital objects are displayed via the physical display and at least one other of the plurality of digital objects is displayed via a wearable extended reality appliance; determine a usage status of the wearable extended reality appliance; select a display mode based on the usage status of the wearable extended reality appliance; and in response to the display mode selection, output for presentation the plurality of digital objects in a manner consistent with the selected display mode. 
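The automatic revision of the presentation can be viewed as a diff between the object placement implied by the previous display mode and the placement implied by the new one. The helper below is a sketch under that assumption; the object ids and surface labels are illustrative only.

```python
def revise_presentation(old_placement: dict, new_placement: dict) -> list:
    """List the hide/show operations needed to move from one placement to another.

    Placements map object ids to "physical_display" or "wearable_xr_appliance";
    an object absent from a placement is simply not shown in that mode.
    """
    changes = []
    for obj in set(old_placement) | set(new_placement):
        before, after = old_placement.get(obj), new_placement.get(obj)
        if before == after:
            continue
        if before is not None:
            changes.append(("hide", obj, before))
        if after is not None:
            changes.append(("show", obj, after))
    return changes

# Example: switching to the second mode moves the document to the appliance and
# adds a widget that was not shown before; the reverse diff undoes it automatically.
old = {"document": "physical_display", "browser": "physical_display"}
new = {"document": "wearable_xr_appliance", "browser": "physical_display",
       "extra_widget": "wearable_xr_appliance"}
print(revise_presentation(old, new))
```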
FIG.57illustrates a flowchart of an example process5700for selectively controlling a display of digital objects, consistent with some embodiments of the present disclosure. In some embodiments, process5700may be performed by at least one processor (e.g., one or more of server210ofFIG.2, mobile communications device206, processing device360ofFIG.3, processing device460ofFIG.4, processing device560ofFIG.5) to perform operations or functions described herein. In some embodiments, some aspects of process5700may be implemented as software (e.g., program codes or instructions) that are stored in a memory (e.g., any of memory devices212,311,411, or511, or a memory of mobile device206) or a non-transitory computer readable medium. In some embodiments, some aspects of process5700may be implemented as hardware (e.g., a specific-purpose circuit). In some embodiments, process5700may be implemented as a combination of software and hardware. Referring toFIG.57, process5700may include a step5710of generating a plurality of digital objects for display in connection with use of a computing device operable in a first display mode and in a second display mode, wherein in the first display mode, the plurality of digital objects are displayed via a physical display connected to the computing device, and in the second display mode, some of the plurality of digital objects are displayed via the physical display, and at least one other of the plurality of digital objects is displayed via a wearable extended reality appliance. Process5700may include a step5712of determining a usage status of the wearable extended reality appliance. In some embodiments, the usage status of the wearable extended reality appliance may be determined after the plurality of digital objects are generated for display in connection with use of a computing device by step5710. By way of example,FIG.58illustrates one non-limiting example of a process5800for determining a usage status of a wearable extended reality appliance.FIG.58is an exemplary representation of just one embodiment, and it is to be understood that some illustrated features might be omitted, and others added within the scope of this disclosure. Process5800may include a step5810of initiating determination of a usage status (e.g., a first usage status or a second usage status) of the wearable extended reality appliance. A processor (e.g., one or more of server210ofFIG.2, mobile communications device206, processing device360ofFIG.3, processing device460ofFIG.4, processing device560ofFIG.5) may determine the usage status, for example, using data indicating when the wearable extended reality appliance is active, when a communication channel is established, or when the wearable extended reality appliance is worn by the user, consistent with some embodiments of the present disclosure. Process5800may include a step5812of determining whether the wearable extended reality appliance is active, as described above. In one example, when the at least one processor determines that the wearable extended reality appliance is not active (e.g., the wearable extended reality appliance is off), the at least one processor may determine at step5816that the usage status of the wearable extended reality appliance is a first usage status. Alternatively, when the at least one processor determines that the wearable extended reality appliance is active (e.g., the wearable extended reality appliance is on), the at least one processor may proceed to step5813. 
In another embodiment, when the at least one processor determines that the wearable extended reality appliance is active, the at least one processor may determine the usage status of the wearable extended reality appliance is a second usage status. Process5800may include a step5813of determining whether a communication channel is established between the computing device and the wearable extended reality appliance, as described above. In one example, when the at least one processor determines that a communication channel is not established between the computing device and the wearable extended reality appliance (e.g., the wearable extended reality appliance is disconnected from the computing device), the at least one processor may determine at step5816the usage status of the wearable extended reality appliance is the first usage status. Alternatively, when the at least one processor determines that a communication channel is established between the computing device and the wearable extended reality appliance (e.g., the wearable extended reality appliance is connected to the computing device), the at least one processor may proceed to step5814. In another embodiment, when the at least one processor determines that a communication channel is established between the computing device and the wearable extended reality appliance, the at least one processor may determine the usage status of the wearable extended reality appliance is the second usage status. Process5800may include a step5814of determining whether the wearable extended reality appliance is worn by the user, as described above. In one example, when the at least one processor determines that the wearable extended reality appliance is not worn by the user (e.g., a proximity sensor detects the wearable extended reality appliance is not in proximity to the computing device and/or an image sensor detects the wearable extended reality appliance is not properly worn), the at least one processor may determine at step5816the usage status of the wearable extended reality appliance is the first usage status. Alternatively, when the at least one processor determines that the wearable extended reality appliance is worn by the user (e.g., a proximity sensor detects the wearable extended reality appliance is in proximity to the computing device and/or an image sensor detects the wearable extended reality appliance is properly worn), the at least one processor may determine at step5818the usage status of the wearable extended reality appliance is a second usage status. Referring back toFIG.57, process5700may include a step5714of selecting a display mode based on the usage status of the wearable extended reality appliance. In some embodiments, the display mode may be selected after a usage status of the wearable extended reality appliance is determined by step5712. By way of example,FIG.59illustrates one non-limiting example of a process5900for selecting a display mode of a wearable extended reality appliance based on a usage status of the wearable extended reality appliance.FIG.59is an exemplary representation of just one embodiment, and it is to be understood that some illustrated features might be omitted, and others added within the scope of this disclosure. Process5900may include a step5910of initiating determination of a display mode selection based on a determined usage status of the wearable extended reality appliance, consistent with some embodiments of the present disclosure. 
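Process5800as described (steps5812through5818) reduces to a short decision chain, sketched below in Python. The boolean inputs are assumed stand-ins for the activity, communication-channel, and worn checks performed by the at least one processor; any failed check yields the first usage status.

```python
def determine_usage_status(is_active: bool,
                           channel_established: bool,
                           is_worn: bool) -> str:
    """Sketch of process 5800: any failed check yields the first usage status."""
    if not is_active:            # step 5812: appliance is not active
        return "first"
    if not channel_established:  # step 5813: no communication channel
        return "first"
    if not is_worn:              # step 5814: not worn by the user
        return "first"
    return "second"              # step 5818: active, connected, and worn

assert determine_usage_status(True, True, True) == "second"
assert determine_usage_status(True, False, True) == "first"
```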
A processor (e.g., one or more of server210ofFIG.2, mobile communications device206, processing device360ofFIG.3, processing device460ofFIG.4, processing device560ofFIG.5) may determine the display mode (e.g., a first display mode or second display mode) based on a determination of whether the usage status of the wearable extended reality appliance is a first usage status or a second usage status. Process5900may include a step5912of determining whether the usage status of the wearable extended reality appliance is a first usage status, consistent with some embodiments of the present disclosure. In one example, when the at least one processor determines that the wearable extended reality appliance is in the first usage status, the at least one processor may select at step5914the first display mode of the computing device. When the at least one processor determines that the wearable extended reality appliance is not in the first usage status, the at least one processor may proceed to step5916. Process5900may include a step5916of determining whether the usage status of the wearable extended reality appliance is a second usage status, consistent with some embodiments of the present disclosure. In one example, when the at least one processor determines that the wearable extended reality appliance is in the second usage status, the at least one processor may select at step5918the second display mode of the computing device. In one embodiment, when the at least one processor determines that the wearable extended reality appliance is not in the second usage status at step5916, the at least one processor may retry step5910. In another embodiment, the at least one processor may determine whether the usage status of the wearable extended reality appliance is a second usage status prior to determining whether the usage status of the wearable extended reality appliance is a first usage status. Referring back toFIG.57, process5700may include a step5716of, in response to the display mode selection, outputting for presentation the plurality of digital objects in a manner consistent with the selected display mode. In one embodiment, the plurality of digital objects may be output for presentation after a display mode of the wearable extended reality appliance is selected by step5714. By way of example,FIG.59illustrates one non-limiting example of a process5900for selecting a display mode of a wearable extended reality appliance and outputting for presentation the plurality of digital objects in a manner consistent with the selected display mode, consistent with some embodiments of the present disclosure. Process5900may include a step5914of selecting the first display mode of the computing device based on the first usage status of the wearable extended reality appliance, as described above. When the at least one processor selects the first display mode at step5914, the at least one processor outputs for presentation the plurality of digital objects in a manner consistent with the first display mode at step5915. Process5900may include a step5918of selecting the second display mode of the computing device based on the second usage status of the wearable extended reality appliance, as described above. When the at least one processor selects the second display mode at step5918, the at least one processor outputs for presentation the plurality of digital objects in a manner consistent with the second display mode at step5919. 
FIG.60is a flowchart illustrating an exemplary process6000for determining to display certain digital objects in the second display mode via a wearable extended reality appliance when the wearable extended reality appliance is in a second usage status, consistent with some embodiments of the present disclosure.FIG.60is an exemplary representation of just one embodiment, and it is to be understood that some illustrated features might be omitted, and others added within the scope of this disclosure. With reference to step6010ofFIG.60, instructions contained in a non-transitory computer-readable medium when executed by at least one processor may cause the at least one processor to analyze input signals and stored data and/or information when the wearable extended reality appliance is in the second usage status to determine which digital objects are to be displayed in the second display mode, consistent with some embodiments of the present disclosure. For example, the at least one processor may analyze user input signals captured by at least one sensor in communication with a computing device and/or wearable extended reality appliance. In step6012, the at least one processor may be caused to access database6014related to the display of a plurality of digital objects for presentation in the second display mode. While only one database6014is depicted herein for illustrative purposes, it is to be understood that the referenced data and/or information shown therein may be contained in and/or across any number of databases. In one example, the at least one processor may access data and/or information stored in database6014related to at least one user input6011, described above. Based on data and/or information related to the at least one user input6011, the at least one processor may determine which of the plurality of digital objects to display via the wearable extended reality appliance in the second display mode at step6016. In another example, the at least one processor may access data and/or information stored in database6014related to at least one past user action6013, described above. Based on data and/or information related to the at least one past user action6013, the at least one processor may determine which of the plurality of digital objects to display via the wearable extended reality appliance in the second display mode at step6016. In another example, the at least one processor may access data and/or information related to at least one predefined rule6015, described above. Based on data and/or information related to the at least one predefined rule6015, the at least one processor may determine which of the plurality of digital objects to display via the wearable extended reality appliance in the second display mode at step6016. In another example, the at least one processor may access data and/or information stored in database6014related to at least one type of input device connected6017, described above. Based on data and/or information related to the at least one type of input device connected6017, the at least one processor may determine which of the plurality of digital objects to display via the wearable extended reality appliance in the second display mode at step6016. 
In yet another example, the at least one processor may access data and/or information stored in database6014related to any combination of user input6011, past user actions6013, predefined rules6015, and/or type of input device connected6017to determine which of the plurality of digital objects are to be displayed via the wearable extended reality appliance in the second display mode at step6016. FIG.61is a flowchart illustrating a process6100for identifying a change in a usage status of a wearable extended reality appliance and revising the presentation of a plurality of digital objects in response to the identified change in the usage status of the wearable extended reality appliance, consistent with some embodiments of the present disclosure.FIG.61is an exemplary representation of just one embodiment, and it is to be understood that some illustrated features might be omitted, and others added within the scope of this disclosure. With reference to step6110ofFIG.61, instructions contained in a non-transitory computer-readable medium when executed by at least one processor may cause the at least one processor to identify a change in the usage status of the wearable extended reality appliance. In step6112, the at least one processor may be caused to determine whether the usage status of the wearable extended reality appliance has changed from a first usage status to a second usage status, as described above. When the at least one processor identifies a change from the first usage status to the second usage status at step6112, the at least one processor may be caused to revise the presentation of the plurality of digital objects to be consistent with the second display mode at step6114. When the at least one processor determines that the usage status has not changed from the first usage status to the second usage status, the at least one processor may proceed to step6116. In step6116, the at least one processor may be caused to determine whether the usage status of the wearable extended reality appliance has changed from a second usage status to a first usage status, as described above. When the at least one processor identifies a change from the second usage status to the first usage status at step6116, the at least one processor may be caused to revise the presentation of the plurality of digital objects to be consistent with the first display mode at step6118. In another embodiment, the at least one processor may be caused to make the determination at step6116at the same time as and/or before the determination at step6112. Implementation of the method and system of the present disclosure may involve performing or completing certain selected tasks or steps manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of preferred embodiments of the method and system of the present disclosure, several selected steps may be implemented by hardware (HW) or by software (SW) on any operating system of any firmware, or by a combination thereof. For example, as hardware, selected steps of the disclosure could be implemented as a chip or a circuit. As software or algorithm, selected steps of the disclosure could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In any case, selected steps of the method and system of the disclosure could be described as being performed by a data processor, such as a computing device for executing a plurality of instructions. 
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. While certain features of the described implementations have been illustrated as described herein, many modifications, substitutions, changes and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the scope of the implementations. It should be understood that they have been presented by way of example only, not limitation, and various changes in form and details may be made. Any portion of the apparatus and/or methods described herein may be combined in any combination, except mutually exclusive combinations. The implementations described herein can include various combinations and/or sub-combinations of the functions, components and/or features of the different implementations described. The foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limited to the precise forms or embodiments disclosed. Modifications and adaptations of the embodiments will be apparent from consideration of the specification and practice of the disclosed embodiments. For example, the described implementations include hardware and software, but systems and methods consistent with the present disclosure may be implemented as hardware alone. It is appreciated that the above-described embodiments can be implemented by hardware, or software (program codes), or a combination of hardware and software. If implemented by software, it can be stored in the above-described computer-readable media. The software, when executed by the processor can perform the disclosed methods. 
The computing units and other functional units described in the present disclosure can be implemented by hardware, or software, or a combination of hardware and software. One of ordinary skill in the art will also understand that multiple ones of the above-described modules/units can be combined as one module or unit, and each of the above-described modules/units can be further divided into a plurality of sub-modules or sub-units. The block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer hardware or software products according to various example embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, segment, or portion of code, which includes one or more executable instructions for implementing the specified logical functions. It should be understood that in some alternative implementations, functions indicated in a block may occur out of order noted in the figures. For example, two blocks shown in succession may be executed or implemented substantially concurrently, or two blocks may sometimes be executed in reverse order, depending upon the functionality involved. Some blocks may also be omitted. It should also be understood that each block of the block diagrams, and combination of the blocks, may be implemented by special purpose hardware-based systems that perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions. In the foregoing specification, embodiments have been described with reference to numerous specific details that can vary from implementation to implementation. Certain adaptations and modifications of the described embodiments can be made. Other embodiments can be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as example only, with a true scope and spirit of the invention being indicated by the following claims. It is also intended that the sequence of steps shown in figures are only for illustrative purposes and are not intended to be limited to any particular sequence of steps. As such, those skilled in the art can appreciate that these steps can be performed in a different order while implementing the same method. It will be appreciated that the embodiments of the present disclosure are not limited to the exact construction that has been described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from the scope thereof. And other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed embodiments disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosed embodiments being indicated by the following claims. Moreover, while illustrative embodiments have been described herein, the scope includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations or alterations based on the present disclosure. 
The elements in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application. These examples are to be construed as non-exclusive. Further, the steps of the disclosed methods can be modified in any manner, including by reordering steps or inserting or deleting steps. It is intended, therefore, that the specification and examples be considered as exemplary only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents. | 839,388 |
11861062 | DETAILED DESCRIPTION The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. The use of optical see-through head-mounted displays (OST-HMDs) for augmented reality and/or mixed reality applications has increased significantly in recent years. However, one particular area that has remained challenging for these systems is display calibration. For example, because augmented and mixed reality applications are designed to visualize virtual objects in a real-world environment, displaying the virtual objects according to a correct pose and alignment is important for proper user experience in augmented and/or mixed reality applications (e.g., surgical navigation). Proper calibration is also needed to correctly display virtual content anchored to the real-world in order to create a realistic augmented and/or mixed reality experience for other applications, such as training, gaming, and/or the like. However, existing methods intended to improve HMD calibration suffer from various drawbacks. For example, OST-HMD calibration procedures are generally designed to compute a transformation function that allows virtual objects and real-world objects to be represented in a common coordinate system. One technique that can be used to compute such a transformation function is based on a Single-Point Active Alignment Method (SPAAM) in which a user wearing the OST-HMD is tasked with aligning a virtual feature (e.g., a crosshair) displayed on a screen of the OST-HMD with a corresponding real-world feature (e.g., a particular point on a real-world object). When the user is satisfied that the virtual feature and the corresponding real-world feature are aligned from a perspective of the user, the user confirms the alignment using some form of interaction. In order to produce a transformation function with reasonable accuracy, this alignment task is typically performed multiple times (e.g., until a threshold quantity of alignment measurements are obtained, a threshold quantity of repetitions are performed, and/or the like). Existing methods for calibrating an OST-HMD tend to rely upon mouse clicks, button presses, hand motion, voice commands, and/or the like to register the interaction used to indicate that the virtual feature and the corresponding real-world feature are aligned from the perspective of the user. However, these input modalities suffer from lack of accuracy because the user has to redirect focus from the augmented and/or mixed reality to the input device used to register the interaction, which may lead to inaccurate alignment measurements due to subtle changes in position. Furthermore, even in the case of a speech command, generating the speech command may impart subtle vibrations, head and/or facial movements, and/or the like, which could undermine the accuracy of the alignment measurement. These input modalities also suffer from unnecessary external dependencies (e.g., input devices) and limited applicability (e.g., the user must be able to use one or both hands and/or produce recognizable speech). 
Consequently, in addition to decreasing accuracy, calibration methods that rely upon mouse clicks, button presses, hand motion, voice commands, and/or the like to register the interaction may decrease a target audience for OST-HMDs (e.g., by excluding individuals who may be unable to use their hands and/or produce voice commands, medical personnel working in sterile environments, users who are working with machinery, and/or the like). Some implementations described herein relate to calibrating an OST-HMD according to an approach that uses a voluntary blink as an interaction mechanism for indicating that the virtual feature and the corresponding real-world feature are aligned from the perspective of the user. In some implementations, the voluntary blink can be used as the interaction mechanism in any suitable calibration procedure, which may include calibration procedures based on a Single-Point Active Alignment Method (SPAAM) technique, a Multi-Point Active Alignment Method (MPAAM) technique, a technique that provides a mapping between three-dimensional points in a real-world coordinate system and corresponding two-dimensional points on a screen of the OST-HMD, a technique that provides a mapping between three-dimensional points in a real-world coordinate system and three-dimensional points in a display space of the OST-HMD, and/or the like. More particularly, in some implementations, the OST-HMD may include or be coupled to an eye tracking device that can track information related to a gaze of the user wearing the OST-HMD. Accordingly, the information related to the gaze of the user may be used to detect that the user performed a voluntary blink, which may be distinguished from spontaneous blinks or reflex blinks. For example, the voluntary blink may last a duration that satisfies a threshold value (e.g., longer than ˜200 milliseconds based on an average blink lasting a duration of about 100-150 milliseconds). In another example, the voluntary blink may be detected based on the user closing one or more eyes in a given pattern (e.g., a double blink, multiple blinks that occur within a threshold time of one another, closing a left eye and then a right eye, and/or the like). In still another example, the voluntary blink may be detected using one or more machine learning models that can classify blink and/or gaze information as a voluntary blink, a spontaneous blink, or a reflex blink. In this way, during the calibration procedure, the user does not have to use a hand, mouse, speech, and/or the like to register the interaction indicating that the virtual feature and the real feature are aligned, minimizing dependencies and/or externalities to only the eyes that can blink to register the interaction. Furthermore, in this way, the blink-based interaction may increase calibration accuracy by allowing the user to concentrate and focus attention on the alignment task, whereas existing methods that involve the user moving his or her hand, clicking a mouse, and/or performing other actions introduce error into the calibration. Moreover, because the blink-based interaction mechanism does not rely upon input devices, hand motion, and/or voice (i.e., the mere ability to blink suffices to perform the calibration), the blink-based interaction mechanism may significantly increase a potential OST-HMD target audience. 
For example, the blink-based interaction mechanism can be performed by individuals who cannot use their hands (e.g., disabled users, including stroke victims whose ability to move is limited to blinking), users who may not have hands (e.g., due to amputation or a birth defect), medical personnel who are working in a sterile environment and therefore cannot touch calibration objects and/or input devices, users involved in critical tasks working with machinery or doing maintenance, and/or the like. FIGS.1A-1Bare diagrams of one or more example implementations100,110described herein.FIGS.1A-1Bshow schematic views of misalignment between a tracking space (e.g., a real-world coordinate space tracked by a positional tracking device) and a display space (e.g., a display coordinate space in which an OST-HMD renders a three-dimensional virtual scene that may include one or more virtual objects). As mentioned above, because augmented and/or mixed reality applications are designed to visualize virtual objects in reality, a correct pose and alignment of the virtual objects to be rendered is important for a proper user experience. For example, if the OST-HMD in example implementations100,110is not calibrated with the positional tracking device, the misalignment between the tracking space and the display space will cause an unrealistic and/or undesirable augmented and/or mixed reality experience. Accordingly, in some implementations, a calibration procedure may be used to compute a transformation function that enables the OST-HMD to represent virtual objects in the same coordinate system as real-world objects (e.g., by aligning the display coordinate space in which the OST-HMD renders the virtual objects with the real-world coordinate space tracked by the positional tracking device). For example, given a real-world cube and a virtual cube to be overlaid on the real-world cube, the transformation function may be used to move, warp, and/or otherwise adjust a rendering of the virtual cube such that the virtual cube and the real-world cube are aligned. In another example, given a real-world cup and a virtual lid to be placed on the real-world cup, the transformation function may be used to move, warp, or otherwise adjust the rendering of the virtual lid in the display space to remain aligned with the top of the real-world cup. Moreover, as described in further detail herein, the calibration procedure can be used to compute the transformation function using a head-anchored tracking device (e.g., as shown inFIG.1A) and/or a world-anchored tracking device (e.g., as shown inFIG.1B). As shown inFIG.1A, example implementation100may include an OST-HMD (e.g., a stereoscopic OST-HMD) that can generate a three-dimensional image in the display space by presenting a pair of two-dimensional perspective images of a three-dimensional virtual scene (e.g., a first perspective image for a left eye and a second perspective image for a right eye). For example, in some implementations, the OST-HMD may present the pair of two-dimensional perspective images from two slightly different viewing positions using a left eye projection operator and a right eye projection operator. As further shown inFIG.1A, example implementation100may include a head-anchored tracking device that can perform “inside-out” positional tracking. For example, in some implementations, the head-anchored tracking device may be a front-facing camera embedded in the OST-HMD, a camera and/or other sensors rigidly mounted on the OST-HMD, and/or the like. 
In some implementations, the head-mounted tracking device may have a similar line of sight as the user wearing the OST-HMD and may generally track a position of the OST-HMD in a three-dimensional Euclidean tracking space. For example, when the OST-HMD moves, the head-mounted tracking device may readjust the tracked position of the OST-HMD (e.g., based on translational movements, changes in pitch, roll, and yaw, and/or the like). In some implementations, the head-mounted tracking device may further track three-dimensional positions of real-world objects based on observed features of the surrounding environment using a marker-based tracking algorithm that offers simplicity and robustness, and/or a marker-free (or marker-less) tracking algorithm that offers better user experience. For example, when the marker-based tracking algorithm is used, real-world objects may have one or more fiducial markers (e.g., primitive shapes such as points, squares, circles, and/or the like) that are designed to be easily detected and serve as reference points for the head-mounted tracking device. Additionally, or alternatively, the head-mounted tracking device may perform inside-out positional tracking using infrared (IR) markers and a camera that is sensitive to IR light. When using the marker-free tracking algorithm, the head-mounted tracking device may use distinctive characteristics that originally exist in the real-world environment to determine position and orientation. Relative to a world-anchored tracking device, the head-mounted tracking device may have a reduced size, consume less power, and/or have a smaller computational cost. As shown inFIG.1B, example implementation110may also include an OST-HMD (e.g., a stereoscopic OST-HMD) that can generate a three-dimensional image by presenting a pair of perspective images of a three-dimensional scene from two slightly different viewing positions using a left eye projection operator and a right eye projection operator. As further shown inFIG.1B, example implementation110may include a world-anchored tracking device that can perform “outside-in” positional tracking. For example, in some implementations, the world-anchored tracking device may be a reflective marker tracking device, an electromagnetic sensing tracking device, a projective light-based tracking device, and/or the like. In the world-anchored tracking device, a tracking coordinate system may have a fixed pose with respect to a real-world coordinate system. Accordingly, in some implementations, the world-anchored tracking device may use spatial mapping capabilities associated with the OST-HMD (e.g., based on a simultaneous localization and mapping (SLAM) technique) to obtain a pose of the OST-HMD with respect to the world-anchored tracking device. Additionally, or alternatively, one or more markers (e.g., fiducial markers) may be attached to the OST-HMD and used to track the pose of the OST-HMD. In some implementations, as noted above, the world-anchored tracking device may perform outside-in positional tracking to trace three-dimensional scene coordinates of real-world objects (e.g., the OST-HMD and/or other real-world objects in the surrounding environment). The world-anchored tracking device may include one or more cameras and/or other sensors that are placed in a stationary location and oriented towards the tracked real-world object(s) that are allowed to move freely around an area defined by intersecting visual ranges of the cameras. 
Like the head-anchored tracking device in implementation100, the world-anchored tracking device may track real-world objects that have a set of markers (e.g., fiducial markers, infrared (IR) markers, and/or the like) and/or marker-free real-world objects. Relative to the head-anchored tracking device, the world-anchored tracking device can potentially be more accurate because the world-anchored tracking device may not have the same constraints as the head-anchored tracking device with respect to size, power consumption, computational resources, type of technology used, and/or the like. Furthermore, when the OST-HMD has self-localization and/or spatial mapping capabilities that can provide the pose of the OST-HMD, the world-anchored tracking device may be able to track real-world objects in the tracking space even when there is no direct line of sight from the camera of the OST-HMD to the tracked real-world objects. In some implementations, as mentioned above, in implementations100,110, a calibration procedure may be used to compute a transformation function that enables the OST-HMD to represent virtual objects in the same coordinate system as real-world objects (e.g., by aligning the display space with the tracking space). For example, the transformation function may be computed using a set of measurements gathered using the head-anchored tracking device, the world-anchored tracking device, and/or the like. More particularly, the calibration procedure may use the set of measurements to compute a transformation function that provides a mapping between three-dimensional points in a real-world coordinate system (e.g., the tracking space) and corresponding points in a three-dimensional virtual environment (e.g., the display space). Accordingly, the transformation function may be applied to default internal projection operators that the OST-HMD uses to generate a three-dimensional image based on two-dimensional perspective images that are presented for each of the user's eye, resulting in corrected projection operators that effectively adjust the default internal projection operators to correct misalignments in visualizing virtual objects in the display space with respect to real-world objects in the tracking space. As indicated above,FIGS.1A-1Bare provided merely as one or more examples. Other examples may differ from what is described with regard toFIGS.1A-1B. FIGS.2A-2Bare diagrams of one or more example implementations200described herein. More particularly, in implementation(s)200, a calibration platform may perform a calibration procedure to solve a transformation function T(⋅) that provides a mapping between three-dimensional points in a real-world coordinate system tracked by a positional tracking device and corresponding points in a three-dimensional virtual scene visualized by an HMD (e.g., an OST-HMD) worn by a user. However, the calibration procedure described herein is for illustration purposes only. Accordingly, as noted above, the blink-based interaction mechanism can additionally, or alternatively, be used in other suitable calibration procedures, which may include calibration procedures based on a Single-Point Active Alignment Method (SPAAM) technique, a Multi-Point Active Alignment Method (MPAAM) technique, a technique that maps three-dimensional points in a real-world coordinate system to corresponding two-dimensional points on a screen of the OST-HMD, and/or the like. For example, given the points q1, . . . 
, q_n in the real-world coordinate system, the calibration platform may compute the transformation function T(⋅) to map the points q_1, . . . , q_n in the real-world coordinate system to corresponding points p_1, . . . , p_n in a display coordinate system as follows: p_i = T(q_i), i = 1, . . . , n. In some implementations, the calibration procedure may be performed based on an assumption that both p_i and q_i ∈ ℝ³ (e.g., p_i and q_i are both elements of a three-dimensional coordinate space). Accordingly, the calibration platform may estimate T based on a set of measurements (or observations) in the form of (q_i, p_i) for i = 1, . . . , n. More specifically, the measurement of q_i may be obtained using the positional tracking device, while p_i is pre-defined and visualized on the OST-HMD worn by the user. Accordingly, with the calculated transformation function T(⋅), a point in the real-world coordinate system tracked by the positional tracking device can be mapped to a corresponding point in the display coordinate system. In this way, the transformation function may correct misalignments of real objects with virtual counterparts in the user's eyes such that virtual objects and real-world objects can be represented in a common coordinate system. As shown inFIG.2A, and by reference number210, a calibration object with one or more fiducial markers may be given to a user wearing the OST-HMD. For example, in some implementations, the calibration object may be a cube with different fiducial markers attached on faces of the cube to aid the positional tracking device in tracking the real-world coordinates of the calibration object. Additionally, or alternatively, each face of the calibration object may have a different color to make an alignment task more intuitive. In some implementations, using a cube with fiducial markers and colored faces as the calibration object may provide the user with additional depth cues utilizing unique depth cue characteristics of three-dimensional visualization. As further shown inFIG.2A, and by reference number220, the calibration object provided to the user may additionally, or alternatively, be an asymmetrical calibration object. In this way, the asymmetrical calibration object may not have ambiguous corners (as in a cube), which may obviate a need to use different colors on the faces of the calibration object. In this way, the asymmetrical calibration object can be used in a monochromatic setting. Furthermore, the calibration object may have a stem and/or another suitable member to be held by the user, which may reduce the effects of poor alignment, hand tremor, and/or the like. As shown inFIG.2A, and by reference number230, the calibration platform may provide, to the OST-HMD worn by the user, a virtual image to be overlaid on the calibration object for the calibration procedure. For example, the virtual image may be a three-dimensional virtual object that has the same shape as the calibration object and/or additional visual characteristics that aid the user in aligning the calibration object and the virtual object (e.g., colored faces that match the colored faces of the calibration object). Additionally, or alternatively, the virtual image may have a feature (e.g., a crosshair) that the user is tasked with aligning with one or more features on the calibration object. As shown inFIG.2A, and by reference number240, the OST-HMD may display the virtual image in a field of view of the user wearing the OST-HMD. 
In some implementations, the virtual image may be displayed in a location that is not correctly aligned with the calibration object. As shown inFIG.2A, and by reference number250, the calibration platform may instruct the user (e.g., via an automated voice command or screen prompt) to align one or more features of the virtual image with one or more corresponding features in the real-world (e.g., a corner, an edge, and/or another suitable feature of the calibration object). In particular, the user may move the calibration object around in space, shift the user's head and/or body position, and/or the like until the user is satisfied that the calibration object and the virtual object are aligned in the user's view. For example, the implementation(s)200shown inFIG.2Aillustrate a multi-point alignment example in which the user is instructed to align multiple features (e.g., five corners) of the calibration object and multiple corresponding features of the virtual object. Additionally, or alternatively, the multi-point alignment example may instruct the user to align more than five points on the calibration object and the virtual object (e.g., the user could be instructed to align all seven corners of the cube that are visible). Additionally, or alternatively, the multi-point alignment example may instruct the user to align fewer than five points on the calibration object and the virtual object, as three (3) non-collinear and/or non-coplanar points in space are generally sufficient to fully determine a pose of a three-dimensional object. However, using more than three points (e.g., five in the illustrated example) may make the alignment task easier, provide the user with a better depth cue, and/or reduce a quantity of repetitions to be performed to obtain a threshold quantity of alignment measurements for computing the transformation function. Additionally, or alternatively, implementation(s)200may utilize a single point alignment in which the user is instructed to align only one feature (e.g., one corner) of the calibration object and the virtual object. In this way, the positional tracking device may measure the three-dimensional position of only the one feature, which may reduce the burden associated with each alignment repetition. However, the single point alignment may need more repetitions to obtain the threshold quantity of alignment measurements, which can lead to inaccuracy due to user fatigue. As shown inFIG.2A, and by reference number260, a voluntary blink may be detected using information obtained by an eye tracking device. More particularly, in some implementations, the eye tracking device may be integrated into or otherwise coupled to the OST-HMD in a location that enables the eye tracking device to obtain information relating to a gaze of the user. For example, the eye tracking device may include one or more projectors that can create a pattern of infrared or near-infrared light on the user's eyes and one or more cameras that capture high-resolution images of the user's eyes and the pattern created thereon. In some implementations, the eye tracking device may employ one or more algorithms (e.g., machine learning algorithms, image processing algorithms, mathematical algorithms, and/or the like) to determine a position and gaze point of the user's eyes. Furthermore, in some implementations, the high-resolution images of the user's eyes and the pattern created thereon may be used to detect one or more blinks based on the gaze information. 
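As a rough illustration of this step, the sketch below segments a per-eye openness signal (such as might be derived from the eye tracking images described above) into discrete blink events with measured durations. The sample format, the openness scale, and the `closed_threshold` value are assumptions made for illustration rather than properties of any particular eye tracking device.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class BlinkEvent:
    start_s: float      # time the eye closed (seconds)
    end_s: float        # time the eye reopened (seconds)
    duration_s: float   # end_s - start_s

def extract_blink_events(samples: List[Tuple[float, float]],
                         closed_threshold: float = 0.2) -> List[BlinkEvent]:
    """Segment a time-stamped eye-openness signal into blink events.

    `samples` is a list of (timestamp_seconds, openness) pairs, where openness
    is assumed to be ~1.0 for a fully open eye and ~0.0 for a closed eye.
    An openness value below `closed_threshold` is treated as "eye closed".
    """
    events: List[BlinkEvent] = []
    closed_since = None
    for t, openness in samples:
        if openness < closed_threshold and closed_since is None:
            closed_since = t                      # eye just closed
        elif openness >= closed_threshold and closed_since is not None:
            events.append(BlinkEvent(closed_since, t, t - closed_since))
            closed_since = None                   # eye reopened
    return events
```

The events produced by such a segmentation step are then classified, as discussed next, to decide whether a given blink was intended as a calibration input.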
The one or more blinks may be analyzed to determine whether the one or more blinks were voluntary (and thus reflect an intent to register an interaction indicating that the virtual and real-world objects are aligned) or instead a spontaneous blink or reflex blink. For example, in some implementations, the one or more blinks may be determined to be voluntary based on the blink(s) lasting a duration and/or having a peak velocity that satisfies a threshold value (e.g., an amount of time and/or a velocity that is sufficiently longer and/or slower relative to spontaneous blinks and/or reflex blinks). In general, spontaneous blinks occur without any external stimuli and/or internal effort, while reflex blinks typically occur in response to external stimuli (e.g., tactile contact with a cornea, an eyelash, an eyelid, an eyebrow, and/or the like, optical stimuli such as a sudden bright light, loud noises and/or other auditory stimuli, and/or the like). Accordingly, voluntary blinks, spontaneous blinks, and reflex blinks may generally have different average durations, peak velocities, and/or other characteristics. Accordingly, in some implementations, the threshold value may be defined as a quantity of time longer than a typical duration of a spontaneous blink and/or a reflex blink, a peak velocity that is slower than a typical peak velocity of a spontaneous blink and/or a reflex blink, and/or the like. Additionally, or alternatively, the one or more blinks may be determined to be voluntary based on a frequency of the blink(s) satisfying a threshold value. For example, in some implementations, a typical eye blink rate (EBR) and/or EBR associated with the specific user wearing the OST-HMD may be determined and a value of the EBR may be used to determine the threshold value. In this way, a double blink, a rapid succession of multiple blinks, and/or the like can be used to register the voluntary blink. Additionally, or alternatively, the one or more blinks may be determined to be voluntary based on the one or more blinks occurring according to a given pattern (e.g., closing only one eye and subsequently closing only the other eye). Additionally, or alternatively, the one or more blinks may be determined to be voluntary using one or more machine learning models that are trained to determine whether gaze information indicates that an eye blink is voluntary, spontaneous, or reflex. For example, blinking is an essential bodily function that all humans perform involuntarily to help spread tears across the eyes, remove irritants, keep the eyes lubricated, and/or the like. Accordingly, there may be various parameters (e.g., blink rate, blink speed or duration, blink patterns) that are similar across an entire population and/or specific to a given user or class of users. For example, blink rates, blink speeds, and/or the like can be affected by elements such as fatigue, eye injury, medication, disease, and/or the like. Accordingly, in some implementations, the one or more machine learning models may take various parameters as input to classify one or more blinks as voluntary, spontaneous, or reflex blinks. For example, one input parameter may be a medication that the user wearing the OST-HMD is taking, which may affect the user's blink rate. In another example, because users may become fatigued after several repetitions performing the alignment task, one or more threshold values may be adjusted based on the number of alignment repetitions that the user has performed. 
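Before any machine learning model is involved, the threshold-based rules described above can serve as a simple baseline classifier. The sketch below is one such baseline, assuming the blink duration, peak closure velocity, and gap to the previous blink are already available; every numeric threshold here is an illustrative assumption (and, per the discussion above, might be adapted per user or per session), not a measured constant.

```python
from typing import Optional

def is_voluntary_blink(duration_s: float,
                       peak_velocity: Optional[float] = None,
                       prev_blink_gap_s: Optional[float] = None,
                       min_duration_s: float = 0.2,
                       max_peak_velocity: float = 100.0,
                       double_blink_window_s: float = 0.5) -> bool:
    """Classify a detected blink as a deliberate (voluntary) calibration input.

    Illustrative rules based on the heuristics described above:
      - min_duration_s: voluntary blinks tend to last longer than the
        ~100-150 ms typical of a spontaneous blink.
      - max_peak_velocity: voluntary blinks tend to close more slowly; the
        value and its units depend on what the eye tracker reports.
      - double_blink_window_s: two blinks starting within this window are
        treated as an intentional "double blink" pattern.
    """
    if duration_s >= min_duration_s:
        return True
    if peak_velocity is not None and peak_velocity <= max_peak_velocity:
        return True
    if prev_blink_gap_s is not None and prev_blink_gap_s <= double_blink_window_s:
        return True
    return False
```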
In another example, the parameters input to the machine learning models may include information related to the virtual images shown to the user (e.g., to discard a blink that may occur if and/or when the virtual image is suddenly displayed in a manner that triggers a reflex blink). Accordingly, in some implementations, the one or more blinks may be appropriately classified as voluntary, spontaneous, or a reflex based on one or more parameters relating to the environment in which the alignment task is performed, the virtual images displayed via the OST-HMD, typical blink frequencies and/or durations for spontaneous and/or reflex blinks (e.g., for a population of users, the specific user wearing the OST-HMD, a category of users that includes the user wearing the OST-HMD, and/or the like), and/or the like. As shown inFIG.2A, and by reference number270, the positional tracking device may provide one or more alignment measurements to the calibration platform based on the gaze information tracked by the eye tracking device indicating that the user performed the voluntary blink(s) to register an interaction indicating that the appropriate feature(s) on the calibration object and the virtual object appear to be aligned in the user's view. In some implementations, the virtual object may then appear in another location in the field of view of the user and the user may be instructed to perform the alignment task again. This process may be repeated until the threshold number of alignment measurements are obtained (e.g., ˜20 measurements, which may be obtained in just four repetitions in the five-point alignment example). In some implementations, the virtual object may be displayed in a different location for each repetition, to cover as much of an area within the user's reach as possible. In this way, the alignment measurements used to compute the transformation function may be more balanced and less biased toward any given geometrical location. As shown inFIG.2A, and by reference number280, the calibration platform may compute the transformation function based on the threshold quantity of alignment measurements provided by the positional tracking device. More specifically, as mentioned above, the calibration platform may estimate the transformation function (T) based on a set of measurements (or observations) in the form of (qi, pi) for i=1, . . . , n, where the measurement of qiis a three-dimensional point obtained from the positional tracking device (e.g., a three-dimensional position of a point on the calibration object to be aligned with a corresponding point on the virtual object), while piis pre-defined and visualized on the OST-HMD (e.g., a three-dimensional position at which the corresponding point on the virtual object is visualized). In some implementations, the transformation function computed by the calibration platform may be a linear transformation function. For example, because the aim is to find a transformation between a three-dimensional coordinate system associated with the positional tracking device and a three-dimensional display coordinate system associated with a display space of the OST-HMD, the calibration platform may compute the transformation function as an affine transformation with 12 unknown parameters, as the transformation between coordinate systems is affine. Additionally, or alternatively, the calibration platform may solve for a general case where the transformation is a perspective transformation with 15 unknown parameters (excluding an arbitrary scale parameter). 
Additionally, or alternatively, because fewer unknown parameters require fewer calibration alignments and thus can considerably reduce the burden on the user, the calibration platform may compute the transformation function as an isometric transformation that has 6 unknown parameters. For example, a 3D-to-3D rigid transformation may generally be represented as:

$$\hat{p}_i = [T]_{4\times4} \cdot \hat{q}_i$$

More specifically, the mathematical representation of the affine, perspective, and isometric transformations may be as follows:

i) Affine Transformation:

$$\hat{p}_i = [T_A]_{4\times4} \cdot \hat{q}_i, \qquad T_A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

where the first three rows of T_A are arbitrary.

ii) Perspective Transformation:

$$\hat{p}_i = [T_P]_{4\times4} \cdot \hat{q}_i, \qquad T_P = \begin{bmatrix} p_{11} & p_{12} & p_{13} & p_{14} \\ p_{21} & p_{22} & p_{23} & p_{24} \\ p_{31} & p_{32} & p_{33} & p_{34} \\ p_{41} & p_{42} & p_{43} & p_{44} \end{bmatrix}$$

where both $\hat{p}_i$ and $\hat{q}_i$ are represented in normalized homogeneous coordinates and T_P is an arbitrary 4×4 matrix with 15 unknown parameters (excluding an arbitrary scale).

iii) Isometric Transformation:

$$\hat{p}_i = [T_I]_{4\times4} \cdot \hat{q}_i, \qquad T_I = \begin{bmatrix} r_{11} & r_{12} & r_{13} & l_1 \\ r_{21} & r_{22} & r_{23} & l_2 \\ r_{31} & r_{32} & r_{33} & l_3 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

where T_I is composed of a 3×3 orthonormal matrix {r_{i,j}} representing rotation, and a 3×1 translational vector $\vec{l}$.

In some implementations, to solve the calibration problem, the calibration platform may compute a transformation function T that minimizes a reprojection error of the set of alignment measurements (E_reproj), which is represented as follows:

$$E_{reproj} = \frac{\sum_{i=1}^{n} \left( p_i - T(q_i) \right)^2}{n}$$

In some implementations, the calibration platform may calculate the affine transformation and/or the perspective transformation using a Direct Linear Transformation (DLT) algorithm, with an objective of minimizing a total algebraic error. For the isometric transformation, the problem is equal to registration of two rigid three-dimensional point sets, whereby an absolute orientation method may be used with an objective of minimizing a least-square error of the registration. In some implementations, as mentioned above, the OST-HMD generates a three-dimensional image by presenting two two-dimensional perspective images, one for each eye, of a three-dimensional scene from two slightly different viewing positions. Each two-dimensional perspective image has a respective projection operator (P). In some implementations, the projection operator P may be represented by a 3×4 matrix and the OST-HMD may have a default configuration and preset internal projection matrices that can be represented as follows:

Left Eye Default: $[P_L^D]_{3\times4}$, Right Eye Default: $[P_R^D]_{3\times4}$

In some implementations, the computed transformation, T, may be applied to each of the left eye default projection operator and the right eye default projection operator, which may result in the following effective left eye and right eye projection matrices in implementations where the projection operator P is represented by a 3×4 matrix:

$$[P_L^E]_{3\times4} = [P_L^D]_{3\times4} \cdot [T]_{4\times4}, \qquad [P_R^E]_{3\times4} = [P_R^D]_{3\times4} \cdot [T]_{4\times4}$$

Accordingly, the computed transformation, T, may effectively adjust the default internal projection operators used in the OST-HMD to correct misalignments in visualizing virtual objects with respect to a real scene. In other words, the computed elements in the effective projection operators may adjust an original or default calibration associated with the OST-HMD (e.g., with respect to aspect ratio, focal length, extrinsic transformation, and/or the like). In general, a 3×4 projection matrix contains eleven degrees of freedom (e.g., six for camera extrinsics and five for camera intrinsics). 
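As a minimal sketch of the estimation step for the affine case only, the code below fits the 12 unknown affine parameters to the (q_i, p_i) measurements by linear least squares (in the spirit of the DLT approach mentioned above), evaluates the reprojection error, and composes the result with a default 3×4 projection operator. It assumes NumPy and assumes the measurements are available as two (n, 3) arrays; the function names are hypothetical, and the perspective (15-parameter DLT) and isometric (absolute orientation) variants are omitted for brevity.

```python
import numpy as np

def estimate_affine_transform(q_points: np.ndarray, p_points: np.ndarray) -> np.ndarray:
    """Estimate a 4x4 affine transform T such that p_i ~ T @ [q_i, 1].

    q_points, p_points: (n, 3) arrays of corresponding 3D points, where q_i
    comes from the positional tracking device and p_i is the point visualized
    on the display. With 12 unknowns, at least 4 non-coplanar correspondences
    are needed; additional measurements are absorbed in a least-squares sense.
    """
    n = q_points.shape[0]
    q_h = np.hstack([q_points, np.ones((n, 1))])         # (n, 4) homogeneous points
    # Solve q_h @ X = p_points for X (4x3); X.T is the top 3x4 block of T_A.
    X, *_ = np.linalg.lstsq(q_h, p_points, rcond=None)
    T = np.eye(4)
    T[:3, :] = X.T                                        # last row stays [0, 0, 0, 1]
    return T

def reprojection_error(T: np.ndarray, q_points: np.ndarray, p_points: np.ndarray) -> float:
    """Mean squared distance between p_i and T(q_i), as in E_reproj above."""
    n = q_points.shape[0]
    q_h = np.hstack([q_points, np.ones((n, 1))])
    mapped = (T @ q_h.T).T[:, :3]
    return float(np.sum((p_points - mapped) ** 2) / n)

def apply_to_projection(P_default: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Compose a default 3x4 projection operator with the 4x4 correction T,
    mirroring [P^E] = [P^D] . [T] for each eye."""
    return P_default @ T
```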
For stereo visualization, one common approach is to use the same projection matrix for both eyes, except with a translation (obtained using the interpupillary distance) along one coordinate direction, for a total of twelve degrees of freedom. Accordingly, while the different types of transformations (e.g., isometric, affine, and perspective) may all be 3D-3D transformations (i.e., each transformation takes a 3D point as an input and produces a 3D point as an output), the different types of transformations may vary in the number of degrees of freedom and the manner and/or extent to which the transformations can adjust the default projection matrices. For example, the isometric transformation is more constrained with six (6) parameters to estimate and maintains the dimensions (e.g., distances and angles) of the virtual objects to be displayed and merely changes the pose (e.g., the position and/or orientation) of the virtual objects. The affine transformation is less constrained than the isometric transformation with twelve (12) parameters to estimate and preserves parallel lines while warping (e.g., stretching and/or shearing) the shape of the virtual objects. The perspective transformation is the least constrained with fifteen (15) parameters to estimate and preserves collinearity, cross-ratio, order of points, and/or the like. In some implementations, the calibration platform may analyze an error that results from each of the affine, perspective, and isometric transformations to determine which one represents a most accurate model. For example, because the alignment task is performed by a human user and thus is prone to error (e.g., because of fatigue due to having to perform multiple repetitions, hand tremor, and/or the like), a Random Sample Consensus (RANSAC) algorithm may be used to find the most accurate transformation and reject outliers based on the reprojection error Ereprojof the set of measurements. In some implementations, as shown inFIG.2B, the calibration procedure may be performed independently of any internal features of the OST-HMD. For example, as shown inFIG.2B, different OST-HMDs may use different display technologies, firmware packages, software development kits (SDKs), internal projection operators, and sensor configurations (e.g., built-in sensors, interfaces to external sensors, and/or the like). In particular, the calibration procedure may treat the OST-HMD like a blackbox, using data from a positional tracking system as an input and a visualization of a virtual three-dimensional object in the eyes of an observer (e.g., a user wearing the OST-HMD) as an output. For example, the OST-HMDs may enable the calibration platform to create a three-dimensional visualization of virtual content in a three-dimensional projective virtual space in front of the user's eyes and/or provide access to a final three-dimensional visualization of virtual content. Furthermore, the calibration platform may have access to the data gathered using the positional tracking device. 
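The outlier-rejection step mentioned above might look roughly like the sketch below, which wraps the hypothetical estimate_affine_transform() from the earlier sketch in a RANSAC-style loop: fit a candidate transform to a random minimal subset, count measurements whose residual falls below a tolerance, and refit on the largest inlier set. The iteration count and tolerance are illustrative assumptions, and the same loop could wrap the perspective or isometric estimators instead, with only the model-fitting call changing.

```python
import numpy as np

def ransac_affine(q_points: np.ndarray, p_points: np.ndarray,
                  iterations: int = 200, inlier_tol: float = 0.01,
                  min_samples: int = 4, seed: int = 0) -> np.ndarray:
    """Fit the affine correction while rejecting badly aligned measurements.

    inlier_tol is expressed in the tracker's units (e.g., meters) and, like
    the other numeric values, is purely illustrative. Relies on
    estimate_affine_transform() from the preceding sketch.
    """
    rng = np.random.default_rng(seed)
    n = q_points.shape[0]
    q_h = np.hstack([q_points, np.ones((n, 1))])
    best_inliers = None
    for _ in range(iterations):
        subset = rng.choice(n, size=min_samples, replace=False)
        T = estimate_affine_transform(q_points[subset], p_points[subset])
        residuals = np.linalg.norm(p_points - (T @ q_h.T).T[:, :3], axis=1)
        inliers = np.flatnonzero(residuals < inlier_tol)
        if best_inliers is None or inliers.size > best_inliers.size:
            best_inliers = inliers
    if best_inliers is None or best_inliers.size < min_samples:
        best_inliers = np.arange(n)   # too few inliers found: fall back to all measurements
    return estimate_affine_transform(q_points[best_inliers], p_points[best_inliers])
```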
Accordingly, regardless of a level of access (if any) to internal settings associated with the OST-HMD, the calibration may use the three-dimensional representation of virtual content and the data gathered using the positional tracking device to compute a 3D-to-3D projection that corrects misalignments between real objects and virtual counterparts in the user's eyes (e.g., a 3D-to-3D projection from a three-dimensional Euclidean tracking space tracked by the positional tracking device to a three-dimensional projective virtual space perceived by the user). In this way, regardless of any intermediate processes used in the OST-HMD to create the virtual scene, the calibration procedure may correct the final alignment in the three-dimensional perceived scene, which is what matters for the user and affects the augmented and/or mixed reality experience. Accordingly, from this perspective, the projection computed by the calibration platform is from a three-dimensional real world to a three-dimensional space rather than two planar screens, whereby a mapping model used by the calibration platform becomes a 3D-3D registration procedure (e.g., representing information three-dimensionally in space rather than two-dimensionally within a screen coordinate system). As indicated above,FIGS.2A-2Bare provided merely as one or more examples. Other examples may differ from what is described with regard toFIGS.2A-2B. FIG.3is a diagram of an example implementation300described herein.FIG.3shows a setup for performing the calibration procedure described in further detail above with a head-anchored tracking device configured to perform inside-out positional tracking. As shown inFIG.3, implementation300may use a front-facing camera embedded in an HMD as the head-anchored tracking device. Additionally, or alternatively, the head-anchored tracking device may be a camera and/or other suitable positional tracking device that is rigidly mounted or otherwise fixed to the HMD. As shown inFIG.3, a user wearing the HMD holds a real calibration object having one or more fiducial markers. As further shown inFIG.3, coordinate systems of the positional tracking device (or camera), the calibration object, and the HMD are respectively represented as {C}, {O}, and {H}. Because the camera is fixed to the HMD, an extrinsic geometric transformation between the camera and the HMD (G_HC) is fixed. The point on the calibration object to be aligned with the virtual object is fixed at q_O with respect to the coordinate system of {O}. The corresponding point on the virtual object, visualized in the user's view, is at p in the coordinate system associated with the HMD {H}. In some implementations, the positional tracking device may determine a pose of the tracked object (G_CO), which may eventually yield point sets {q_i | q_i = G_CO,i · q_O, i = 1, . . . , n} and {p_i | i = 1, . . . , n} that can be used to compute the transformation function T, as described above. In some implementations, the pose of the tracked object (G_CO) that eventually yields the point sets may be determined based on a time when the user performs a voluntary blink to indicate that the points on the calibration object and the virtual object appear to be aligned in the user's view (e.g., within a threshold amount of time before and/or after the user performs the voluntary blink). As indicated above,FIG.3is provided merely as one or more examples. Other examples may differ from what is described with regard toFIG.3. 
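A small sketch of how one (q_i, p_i) pair might be recorded when a voluntary blink is detected in this head-anchored setup follows; it assumes the tracked pose G_CO is available as a 4×4 homogeneous NumPy matrix sampled at (or near) the blink time, and the function and argument names are illustrative rather than taken from any particular tracking API.

```python
import numpy as np

def record_alignment_measurement(G_CO_at_blink: np.ndarray,
                                 q_O: np.ndarray,
                                 p_visualized: np.ndarray,
                                 measurements: list) -> None:
    """Record one (q_i, p_i) pair at the moment a voluntary blink is detected.

    G_CO_at_blink: 4x4 pose of the calibration object in the camera/tracker
                   frame {C}, sampled at the blink time.
    q_O:           3-vector, the alignment point fixed in the object frame {O}.
    p_visualized:  3-vector, where the corresponding virtual feature was drawn
                   in the HMD frame {H}.
    """
    q_O_h = np.append(np.asarray(q_O, dtype=float), 1.0)   # homogeneous coordinates
    q_i = (G_CO_at_blink @ q_O_h)[:3]                       # q_i = G_CO,i . q_O
    measurements.append((q_i, np.asarray(p_visualized, dtype=float)))
```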
FIG.4is a diagram of an example implementation400described herein.FIG.4shows example views from a perspective of the user when the user is performing the calibration procedure described in further detail above with the head-anchored tracking device configured to perform inside-out positional tracking. As shown inFIG.4, and by reference number410, the virtual object and the real calibration object may be misaligned in the user's view prior to calibration. As shown inFIG.4, and by reference number420, the user may be instructed to align one or more features (e.g., corner points) on the calibration object with one or more corresponding features on a virtual target. For example, a first virtual target may be displayed at a first location within the user's field of view. The user may move the calibration object around in space, move a head position and/or a body position, and/or the like until the user is satisfied that the real feature(s) on the calibration object are aligned with the corresponding feature(s) on the first virtual target. In some implementations, the user may perform a voluntary blink to indicate that the real feature(s) on the calibration object appear to be aligned with the corresponding feature(s) on the first virtual target within the user's view. The voluntary blink may be detected using information obtained by an eye tracking device that can track information relating to a gaze of the user. The head-mounted tracking device may provide a measurement indicating a three-dimensional position of the real feature(s) on the calibration object, which may be recorded along with the corresponding feature(s) on the first virtual target. As further shown inFIG.4, a second virtual target may then be displayed at a different location in the user's field of view and the user may again be instructed to align the one or more features on the calibration object with the second virtual target. This process may be repeated until a set of measurements having a threshold quantity of points have been collected (e.g., ˜20 repetitions for ˜20 measurements in a single point calibration procedure, ˜4 repetitions for ˜20 measurements in a five-point calibration procedure, and/or the like). In some implementations, the calibration platform may compute the transformation function based on the set of measurements, as described above. As shown inFIG.4, and by reference number430, the transformation function can be applied to internal projection operators that the HMD uses to present a two-dimensional image for each eye such that the virtual object is superimposed on the real calibration object. As indicated above,FIG.4is provided merely as one or more examples. Other examples may differ from what is described with regard toFIG.4. FIGS.5A-5Bare diagrams of one or more example implementations500,510described herein.FIGS.5A-5Bshow a setup for performing the calibration procedure described in further detail above with a world-anchored tracking device configured to perform outside-in positional tracking. As shown inFIG.5A, in implementation500, coordinate systems associated with the world-anchored tracking device, the calibration object, the HMD, and the real-world may be respectively represented as {E}, {O}, {H}, and {W}. 
A difference between the setup using the head-anchored tracking device (e.g., as shown inFIG.3) and implementation500is that the transformation G_HC between the coordinate system of the head-anchored tracking device {C} and the coordinate system of the HMD {H} is fixed, which is not the case for the world-anchored tracking device. Rather, for the world-anchored tracking device, the transformation G_HE between the coordinate system of the world-anchored tracking device {E} and the coordinate system of the HMD {H} is expressed as G_HE = G_WH^{-1} · G_WE. Because the world-anchored tracking device is stationary, and therefore does not change a pose within the environment, G_WE is fixed. Therefore, the calibration platform may obtain an additional component to maintain and update the transformation G_WH between the real-world and the HMD {H} such that the transformation G_HE between the positional tracking device and the HMD can be determined. For example, in some implementations, a SLAM-based spatial mapping capability of the HMD may be used to complete a transformation chain from the tracked calibration object to the user's view. In other words, the transformation from the world-anchored tracking device to the calibration object may be determined because the calibration object is tracked and the external world-anchored tracking device is fixed in the world. The spatial mapping capabilities may provide and update the pose of the HMD with respect to the world, which may close the transformation chain and permit the calibration platform to determine the pose of the calibration object relative to the HMD. In this way, direct line of sight between the camera of the HMD and the calibration object is not needed, as long as the user can see the calibration object to perform the alignment task and the world-anchored tracking device remains in the same fixed position in the world. Accordingly, the HMD spatial mapping may not be reliant on the calibration object, but rather on self-localizing within the environment (e.g., with respect to large features such as walls). Alternatively, if the HMD does not have spatial mapping and/or self-localization capabilities, another method may be used to maintain and update the transformation G_WH between the real-world and the HMD {H} (e.g., using the world-anchored tracking device and mounting fiducial markers to also track the HMD). As shown inFIG.5B, in implementation510, the setup based on the world-anchored tracking device may use a calibration object attached to a frame composed from passive spherical markers. These spherical markers may be tracked by the world-anchored tracking device and used to determine the three-dimensional position of one or more points on the calibration object that are to be aligned with a virtual object. In some implementations, the three-dimensional position of the one or more points on the calibration object may be determined based on a time when the user performs a voluntary blink to indicate that the points on the calibration object and the corresponding points on the virtual object appear to be aligned in the user's view. As indicated above,FIGS.5A-5Bare provided merely as one or more examples. Other examples may differ from what is described with regard toFIGS.5A-5B. 
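The transformation chain for the world-anchored case can be sketched as below, assuming all poses are available as 4×4 homogeneous NumPy matrices (G_WH from the HMD's self-localization, G_WE as the fixed tracker pose); the function names are illustrative.

```python
import numpy as np

def tracker_to_hmd_transform(G_WH: np.ndarray, G_WE: np.ndarray) -> np.ndarray:
    """Compose the transform from the world-anchored tracker frame {E} to the
    HMD frame {H}: G_HE = G_WH^{-1} . G_WE.

    G_WH: 4x4 pose of the HMD in the world frame {W} (e.g., from SLAM-based
          self-localization), updated as the user moves.
    G_WE: 4x4 pose of the stationary world-anchored tracker in {W} (fixed).
    """
    return np.linalg.inv(G_WH) @ G_WE

def point_in_hmd_frame(G_HE: np.ndarray, q_E: np.ndarray) -> np.ndarray:
    """Map a 3D point measured in the tracker frame {E} into the HMD frame {H}."""
    return (G_HE @ np.append(np.asarray(q_E, dtype=float), 1.0))[:3]
```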
FIG.6is a diagram of an example implementation600described herein.FIG.6shows example views from a perspective of the user when the user is performing the calibration procedure described in further detail above with the world-anchored tracking device configured to perform outside-in positional tracking. As shown inFIG.6, and by reference number610, the virtual object and the real calibration object may be misaligned in the user's view prior to calibration. As shown inFIG.6, and by reference number620, the user may be instructed to align one or more features (e.g., corner points) on the calibration object with one or more corresponding features on a virtual target. For example, a first virtual target may be displayed at a first location within the user's field of view. The user may move the calibration object around in space, move a head position and/or a body position, and/or the like until the user is satisfied that the real feature(s) on the calibration object are aligned with the corresponding feature(s) on the first virtual target. In some implementations, the user may perform a voluntary blink to indicate that the real feature(s) on the calibration object appear to be aligned with the corresponding feature(s) on the first virtual target within the user's view. The voluntary blink may be detected using information obtained by an eye tracking device that can track information relating to a gaze of the user. The head-mounted tracking device may provide a measurement indicating a three-dimensional position of the real feature(s) on the calibration object, which may be recorded along with the corresponding feature(s) on the first virtual target. As further shown inFIG.6, a second virtual target may then be displayed at a different location in the user's field of view and the user may again be instructed to align the one or more features on the calibration object with the second virtual target. This process may be repeated until a set of measurements having a threshold quantity of points have been collected (e.g., ˜20 repetitions for ˜20 measurements in a single point calibration procedure, ˜4 repetitions for ˜20 measurements in a five-point calibration procedure, and/or the like). In some implementations, the calibration platform may compute the transformation function based on the set of measurements, as described above. As shown inFIG.6, and by reference number630, the transformation function can be applied to internal projection operators that the HMD uses to present a two-dimensional image for each eye such that the virtual object is superimposed on the real calibration object. As indicated above,FIG.6is provided merely as one or more examples. Other examples may differ from what is described with regard toFIG.6. FIG.7is a diagram of an example implementation700described herein.FIG.7shows an example calibration setup that may be designed to reduce error that may be caused by poor alignment (e.g., due to hand tremor, user fatigue, and/or the like). Furthermore, the calibration setup shown inFIG.7may be used in combination with the blink-based interaction mechanism to provide a substantially hands-free calibration setup. As shown inFIG.7, and by reference number710, the real calibration object(s) may be stabilized (e.g., by mounting or otherwise fixing the calibration object(s) to a rigid and/or flexible stand). In this way, the user does not have to hold the calibration object(s), which may reduce or eliminate error due to hand tremor. 
As further shown inFIG.7, and by reference number720, the virtual object may then be displayed relative to one or more of the real calibration objects. Accordingly, because the user is not holding the calibration object(s), the user may move around the real-world environment, shift a head and/or body position, and/or the like, and verify proper alignment from multiple viewpoints. In this way, more accurate and reliable alignment measurements may be obtained because the user is not holding the calibration object and possibly imparting small movements to the calibration object during alignment. Furthermore, in this way, a distance from the calibration object to the HMD is not limited by the user's arm length (e.g., allowing the virtual object to be placed at other three-dimensional locations that are not limited to the space within the user's reach). As shown inFIG.7, and by reference number730, the user may align the calibration object with the virtual object and perform a voluntary blink when the virtual object and the calibration object appear to be aligned in the user's view. In some implementations, based on the voluntary blink, the calibration platform may record the three-dimensional position of the point(s) on the real calibration object that are to be aligned with the corresponding point(s) on the virtual object. As indicated above,FIG.7is provided merely as one or more examples. Other examples may differ from what is described with regard toFIG.7. FIG.8is a diagram of an example environment800in which systems and/or methods described herein may be implemented. As shown inFIG.8, environment800may include a display device810, an eye tracking device820, a positional tracking device830, a calibration platform840, and a network850. Devices of environment800may interconnect via wired connections, wireless connections, or a combination of wired and wireless connections. Display device810includes any display that is capable of presenting imaging provided by calibration platform840. Display device810may include technologies such as liquid crystal display (LCDs) devices, light-emitting diode (LED) display devices, plasma display devices, wearable display devices (e.g., head-mounted display devices), handheld display devices, and/or the like. For example, in some implementations, display device810may include or be part of a wearable display device such as an optical see-through head-mounted display (OST-HMD) device, a video see-through head-mounted display (VST-HMD) device, and/or the like. Additionally, or alternatively, display device810may be a non-wearable display device, such as a handheld computer, a tablet computer, and/or the like. In some implementations, display device810may be a stereoscopic or three-dimensional display device. Eye tracking device820includes one or more devices capable of receiving, generating, processing, and/or providing information related to a gaze of a user. For example, eye tracking device820may include one or more projectors that create a pattern of infrared or near-infrared light on a user's eyes and one or more cameras that capture high-resolution images of the user's eyes and the pattern created thereon. In some implementations, eye tracking device820may employ one or more algorithms (e.g., machine learning algorithms, image processing algorithms, mathematical algorithms, and/or the like) to determine a position and gaze point of the user's eyes. 
Furthermore, in some implementations, the high-resolution images of the user's eyes and the pattern created thereon may be used to detect blink information (e.g., where the user closes one or more eyes). In some implementations, eye tracking device820may be configured to track the information related to the gaze of the user and provide the tracked gaze information to display device810, positional tracking device830, and/or calibration platform840. Positional tracking device830includes one or more devices capable of receiving, generating, processing, and/or providing information related to a position (e.g., three-dimensional coordinates) of one or more real-world objects. For example, positional tracking device830may be a head-anchored tracking device that can perform “inside-out” positional tracking (e.g., a front-facing camera embedded in display device810, a camera and/or other sensors rigidly mounted on display device810, and/or the like). Additionally, or alternatively, positional tracking device830may be a world-anchored tracking device that can perform “outside-in” positional tracking (e.g., a reflective markers tracking device, an electromagnetic tracking device, a projective light-based tracking device, and/or the like). In some implementations, positional tracking device830may be configured to track three-dimensional coordinates of real-world objects (e.g., display device810, a calibration object, and/or the like) and provide the tracked three-dimensional coordinates to display device810and/or calibration platform840. Calibration platform840includes one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with augmented reality imaging, mixed reality imaging, a position (e.g., a pose) of one or more real-world objects (e.g., a calibration object), and/or the like. For example, calibration platform840may include an image processing system of display device810, an external image processing computing device connected to display device810(e.g., via a peripheral cable, via network850, and/or the like), an image processing platform implemented in a cloud computing environment, and/or the like. In some implementations, calibration platform840may provide output to display device810for display. In some implementations, calibration platform840may compute a transformation function that provides a mapping between three-dimensional points in a real-world coordinate system and points used for generating a three-dimensional virtual scene based on measurements gathered using positional tracking device830. In some implementations, the measurements may be gathered using positional tracking device830based on information tracked by eye tracking device820indicating that a user performed a voluntary blink. Network850includes one or more wired and/or wireless networks. 
For example, network850may include a cellular network (e.g., a long-term evolution (LTE) network, a code division multiple access (CDMA) network, a 3G network, a 4G network, a 5G network, another type of next generation network, etc.), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a personal area network (PAN), a body area network (BAN), a metropolitan area network (MAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, a cloud computing network, and/or the like, and/or a combination of these and/or other types of networks. The number and arrangement of devices and networks shown inFIG.8are provided as one or more examples. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown inFIG.8. Furthermore, two or more devices shown inFIG.8may be implemented within a single device, or a single device shown inFIG.8may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of environment800may perform one or more functions described as being performed by another set of devices of environment800. FIG.9is a diagram of example components of a device900. Device900may correspond to display device810, eye tracking device820, positional tracking device830, and/or calibration platform840. In some implementations, display device810, eye tracking device820, positional tracking device830, and/or calibration platform840may include one or more devices900and/or one or more components of device900. As shown inFIG.9, device900may include a bus910, a processor920, a memory930, a storage component940, an input component950, an output component960, and a communication interface970. Bus910includes a component that permits communication among multiple components of device900. Processor920is implemented in hardware, firmware, and/or a combination of hardware and software. Processor920is a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or another type of processing component. In some implementations, processor920includes one or more processors capable of being programmed to perform a function. Memory930includes a random-access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, and/or an optical memory) that stores information and/or instructions for use by processor920. Storage component940stores information and/or software related to the operation and use of device900. For example, storage component940may include a hard disk (e.g., a magnetic disk, an optical disk, and/or a magneto-optic disk), a solid-state drive (SSD), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of non-transitory computer-readable medium, along with a corresponding drive. Input component950includes a component that permits device900to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, and/or a microphone). 
Additionally, or alternatively, input component950may include a component for determining location (e.g., a global positioning system (GPS) component) and/or a sensor (e.g., an accelerometer, a gyroscope, an actuator, another type of positional or environmental sensor, and/or the like). Output component960includes a component that provides output information from device900(via, e.g., a display, a speaker, a haptic feedback component, an audio or visual indicator, and/or the like). Communication interface970includes a transceiver-like component (e.g., a transceiver, a separate receiver, a separate transmitter, and/or the like) that enables device900to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. Communication interface970may permit device900to receive information from another device and/or provide information to another device. For example, communication interface970may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, and/or the like. Device900may perform one or more processes described herein. Device900may perform these processes based on processor920executing software instructions stored by a non-transitory computer-readable medium, such as memory930and/or storage component940. As used herein, the term “computer-readable medium” refers to a non-transitory memory device. A memory device includes memory space within a single physical storage device or memory space spread across multiple physical storage devices. Software instructions may be read into memory930and/or storage component940from another computer-readable medium or from another device via communication interface970. When executed, software instructions stored in memory930and/or storage component940may cause processor920to perform one or more processes described herein. Additionally, or alternatively, hardware circuitry may be used in place of or in combination with software instructions to perform one or more processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. The number and arrangement of components shown inFIG.9are provided as an example. In practice, device900may include additional components, fewer components, different components, or differently arranged components than those shown inFIG.9. Additionally, or alternatively, a set of components (e.g., one or more components) of device900may perform one or more functions described as being performed by another set of components of device900. FIG.10is a flow chart of an example process1000for blink-based calibration of an optical see-through head-mounted display device. In some implementations, one or more process blocks ofFIG.10may be performed by a calibration platform (e.g., calibration platform840). In some implementations, one or more process blocks ofFIG.10may be performed by another device or a group of devices separate from or including the calibration platform, such as a display device (e.g., display device810), an eye tracking device (e.g., eye tracking device820), a positional tracking device (e.g., positional tracking device830), and/or the like. 
As shown inFIG.10, process1000may include receiving information from a positional tracking device that relates to a position of at least one point on a three-dimensional real-world object (block1010). For example, the calibration platform (e.g., using a processor920, a memory930, a storage component940, an input component950, an output component960, a communication interface970, and/or the like) may receive information from a positional tracking device that relates to a position of at least one point on a three-dimensional real-world object, as described above. As further shown inFIG.10, process1000may include causing an optical see-through head-mounted display device to display a virtual image having at least one feature in a display space of the optical see-through head-mounted display device (block1020). For example, the calibration platform (e.g., using a processor920, a memory930, a storage component940, an input component950, an output component960, a communication interface970, and/or the like) may cause an optical see-through head-mounted display device to display a virtual image having at least one feature in a display space of the optical see-through head-mounted display device, as described above. As further shown inFIG.10, process1000may include receiving information from an eye tracking device indicating that a user wearing the optical see-through head-mounted display device performed a voluntary eye blink, wherein the voluntary eye blink is a calibration input to indicate that the at least one feature of the virtual image appears to the user to be aligned with the at least one point on the three-dimensional real-world object in the display space of the optical see-through head-mounted display device (block1030). For example, the calibration platform (e.g., using a processor920, a memory930, a storage component940, an input component950, an output component960, a communication interface970, and/or the like) may receive information from an eye tracking device indicating that a user wearing the optical see-through head-mounted display device performed a voluntary eye blink, as described above. In some implementations, the voluntary eye blink may be a calibration input to indicate that the at least one feature of the virtual image appears to the user to be aligned with the at least one point on the three-dimensional real-world object in the display space of the optical see-through head-mounted display device. As further shown inFIG.10, process1000may include recording an alignment measurement based on the position of the at least one point on the three-dimensional real-world object in a real-world coordinate system based on a time when the user performed the voluntary eye blink (block1040). For example, the calibration platform (e.g., using a processor920, a memory930, a storage component940, an input component950, an output component960, a communication interface970, and/or the like) may record an alignment measurement based on the position of the at least one point on the three-dimensional real-world object in a real-world coordinate system based on a time when the user performed the voluntary eye blink, as described above. As further shown inFIG.10, process1000may include generating a function providing a mapping between three-dimensional points in the real-world coordinate system and corresponding points in the display space of the optical see-through head-mounted display device based on the alignment measurement (block1050). 
For example, the calibration platform (e.g., using a processor920, a memory930, a storage component940, an input component950, an output component960, a communication interface970, and/or the like) may generate a function providing a mapping between three-dimensional points in the real-world coordinate system and corresponding points in the display space of the optical see-through head-mounted display device based on the alignment measurement, as described above. Process1000may include additional implementations, such as any single implementation or any combination of implementations described below and/or in connection with one or more other processes described elsewhere herein. In some implementations, the voluntary eye blink may be detected based on the information received from the eye tracking device indicating that the user wearing the optical see-through head-mounted display device closed one or more eyes for a duration that satisfies a threshold value and/or that the user wearing the optical see-through head-mounted display device closed one or more eyes according to a pattern. For example, in some implementations, the pattern may comprise multiple eye blinks that occur within a threshold time of one another, a sequence in which the user closes a first eye while a second eye is open and subsequently closes the second eye while the first eye is open, and/or the like. Additionally, or alternatively, the voluntary eye blink may be detected using one or more machine learning models that are trained to determine whether gaze information indicates that an eye blink is voluntary, spontaneous, or reflex. In some implementations, the function may define a relationship between the three-dimensional points in the real-world coordinate system and corresponding three-dimensional points in the display space of the optical see-through head-mounted display device. Additionally, or alternatively, the function may define a relationship between the three-dimensional points in the real-world coordinate system and corresponding two-dimensional points on a screen of the optical see-through head-mounted display device. In some implementations, the function providing the mapping between the three-dimensional points in the real-world coordinate system and the corresponding points in the display space of the optical see-through head-mounted display device may be generated based on a Single-Point Active Alignment Method (SPAAM) technique, a Multi-Point Active Alignment Method (MPAAM) technique, and/or the like. AlthoughFIG.10shows example blocks of process1000, in some implementations, process1000may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.10. Additionally, or alternatively, two or more of the blocks of process1000may be performed in parallel. FIG.11is a flow chart of an example process1100for blink-based calibration of an optical see-through head-mounted display device. In some implementations, one or more process blocks ofFIG.11may be performed by a calibration platform (e.g., calibration platform840). In some implementations, one or more process blocks ofFIG.11may be performed by another device or a group of devices separate from or including the calibration platform, such as a display device (e.g., display device810), an eye tracking device (e.g., eye tracking device820), a positional tracking device (e.g., positional tracking device830), and/or the like. 
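As one non-limiting illustration of how the mapping-generation operation of block 1050 (and of blocks 1150 and 1260 described below) might be carried out from the recorded alignment measurements, the following sketch assumes that the mapping is represented as a 3×4 projection matrix estimated with a SPAAM-style direct linear transformation over world-point/screen-point correspondences collected at the times of voluntary blinks. The function names, the NumPy dependency, and the choice of two-dimensional screen coordinates as the display-space target are illustrative assumptions and are not part of this disclosure.

    import numpy as np

    def estimate_projection(world_points, screen_points):
        # Estimate a 3x4 matrix G mapping homogeneous 3D real-world points to
        # 2D display-space points from alignment measurements recorded at the
        # times of voluntary blinks (SPAAM-style direct linear transformation).
        rows = []
        for (X, Y, Z), (u, v) in zip(world_points, screen_points):
            rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
            rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
        a = np.asarray(rows, dtype=float)
        # The projection parameters are the right singular vector associated
        # with the smallest singular value, i.e., the least-squares solution
        # of a @ p = 0 subject to ||p|| = 1.
        _, _, vt = np.linalg.svd(a)
        return vt[-1].reshape(3, 4)

    def to_display_space(projection, point_3d):
        # Map a 3D real-world point into display-space coordinates.
        u, v, w = projection @ np.append(np.asarray(point_3d, dtype=float), 1.0)
        return u / w, v / w

In this sketch, at least six non-degenerate correspondences are assumed; a multi-point (MPAAM-style) variant would simply contribute several world-point/screen-point pairs per voluntary blink.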
As shown inFIG.11, process1100may include receiving, from a positional tracking device, information that relates to three-dimensional real-world coordinates for a plurality of points on a three-dimensional real-world object (block1110). For example, the calibration platform (e.g., using a processor920, a memory930, a storage component940, an input component950, an output component960, a communication interface970, and/or the like) may receive, from a positional tracking device, information that relates to three-dimensional real-world coordinates for a plurality of points on a three-dimensional real-world object, as described above. As further shown inFIG.11, process1100may include causing an optical see-through head-mounted display device to display a virtual object having a plurality of features to be simultaneously aligned with the plurality of points on the three-dimensional real-world object from a perspective of a user wearing the optical see-through head-mounted display device (block1120). For example, the calibration platform (e.g., using a processor920, a memory930, a storage component940, an input component950, an output component960, a communication interface970, and/or the like) may cause an optical see-through head-mounted display device to display a virtual object having a plurality of features to be simultaneously aligned with the plurality of points on the three-dimensional real-world object from a perspective of a user wearing the optical see-through head-mounted display device, as described above. As further shown inFIG.11, process1100may include receiving, from an eye tracking device, information indicating that the user wearing the optical see-through head-mounted display device performed a voluntary blink, wherein the voluntary blink is a calibration input to indicate that the plurality of features of the virtual object appear to the user to be simultaneously aligned with the plurality of points on the three-dimensional real-world object from the perspective of the user (block1130). For example, the calibration platform (e.g., using a processor920, a memory930, a storage component940, an input component950, an output component960, a communication interface970, and/or the like) may receive, from an eye tracking device, information indicating that the user wearing the optical see-through head-mounted display device performed a voluntary blink, as described above. In some implementations, the voluntary blink may be a calibration input to indicate that the plurality of features of the virtual object appear to the user to be simultaneously aligned with the plurality of points on the three-dimensional real-world object from the perspective of the user. As further shown inFIG.11, process1100may include recording a plurality of alignment measurements based on the three-dimensional real-world coordinates for the plurality of points on the three-dimensional real-world object based on a time when the user performed the voluntary blink (block1140). For example, the calibration platform (e.g., using a processor920, a memory930, a storage component940, an input component950, an output component960, a communication interface970, and/or the like) may record a plurality of alignment measurements based on the three-dimensional real-world coordinates for the plurality of points on the three-dimensional real-world object based on a time when the user performed the voluntary blink, as described above. 
As further shown inFIG.11, process1100may include generating a function providing a mapping between the three-dimensional real-world coordinates and corresponding three-dimensional points in a display space of the optical see-through head-mounted display device based on the plurality of alignment measurements (block1150). For example, the calibration platform (e.g., using a processor920, a memory930, a storage component940, an input component950, an output component960, a communication interface970, and/or the like) may generate a function providing a mapping between the three-dimensional real-world coordinates and corresponding three-dimensional points in a display space of the optical see-through head-mounted display device based on the plurality of alignment measurements, as described above. Process1100may include additional implementations, such as any single implementation or any combination of implementations described below and/or in connection with one or more other processes described elsewhere herein. In some implementations, the voluntary blink may be detected based on the information received from the eye tracking device indicating that the user wearing the optical see-through head-mounted display device closed one or more eyes for a duration that satisfies a threshold value, that the user wearing the optical see-through head-mounted display device performed multiple blinks at a frequency that satisfies a threshold value, and/or that the user wearing the optical see-through head-mounted display device closed a first eye while a second eye was open and subsequently closed the second eye while the first eye was open. Additionally, or alternatively, the voluntary blink may be detected using one or more machine learning models that are trained to determine whether gaze information indicates that an eye blink is voluntary, spontaneous, or reflex. AlthoughFIG.11shows example blocks of process1100, in some implementations, process1100may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.11. Additionally, or alternatively, two or more of the blocks of process1100may be performed in parallel. FIG.12is a flow chart of an example process1200for blink-based calibration of an optical see-through head-mounted display device. In some implementations, one or more process blocks ofFIG.12may be performed by a calibration platform (e.g., calibration platform840). In some implementations, one or more process blocks ofFIG.12may be performed by another device or a group of devices separate from or including the calibration platform, such as a display device (e.g., display device810), an eye tracking device (e.g., eye tracking device820), a positional tracking device (e.g., positional tracking device830), and/or the like. As shown inFIG.12, process1200may include receiving, from a positional tracking device, information that relates to a position of at least one point on a three-dimensional real-world object (block1210). For example, the calibration platform (e.g., using a processor920, a memory930, a storage component940, an input component950, an output component960, a communication interface970, and/or the like) may receive, from a positional tracking device, information that relates to a position of at least one point on a three-dimensional real-world object, as described above. 
As further shown inFIG.12, process1200may include causing an optical see-through head-mounted display device to display a virtual image having at least one feature in a display space of the optical see-through head-mounted display device (block1220). For example, the calibration platform (e.g., using a processor920, a memory930, a storage component940, an input component950, an output component960, a communication interface970, and/or the like) may cause an optical see-through head-mounted display device to display a virtual image having at least one feature in a display space of the optical see-through head-mounted display device, as described above. As further shown inFIG.12, process1200may include receiving, from an eye tracking device, information relating to a gaze of a user wearing the optical see-through head-mounted display device (block1230). For example, the calibration platform (e.g., using a processor920, a memory930, a storage component940, an input component950, an output component960, a communication interface970, and/or the like) may receive, from an eye tracking device, information relating to a gaze of a user wearing the optical see-through head-mounted display device, as described above. As further shown inFIG.12, process1200may include determining, based on the information relating to the gaze of the user, that the user performed a voluntary eye blink to indicate that the at least one feature of the virtual image appears to the user to be aligned with the at least one point on the three-dimensional real-world object in the display space of the optical see-through head-mounted display device (block1240). For example, the calibration platform (e.g., using a processor920, a memory930, a storage component940, an input component950, an output component960, a communication interface970, and/or the like) may determine, based on the information relating to the gaze of the user, that the user performed a voluntary eye blink to indicate that the at least one feature of the virtual image appears to the user to be aligned with the at least one point on the three-dimensional real-world object in the display space of the optical see-through head-mounted display device, as described above. As further shown inFIG.12, process1200may include recording an alignment measurement based on the position of the at least one point on the three-dimensional real-world object in a real-world coordinate system based on a time when the user performed the voluntary eye blink (block1250). For example, the calibration platform (e.g., using a processor920, a memory930, a storage component940, an input component950, an output component960, a communication interface970, and/or the like) may record an alignment measurement based on the position of the at least one point on the three-dimensional real-world object in a real-world coordinate system based on a time when the user performed the voluntary eye blink, as described above. As further shown inFIG.12, process1200may include generating a function providing a mapping between three-dimensional points in the real-world coordinate system and corresponding points in the display space of the optical see-through head-mounted display device based on the alignment measurement (block1260). 
For example, the calibration platform (e.g., using a processor920, a memory930, a storage component940, an input component950, an output component960, a communication interface970, and/or the like) may generate a function providing a mapping between three-dimensional points in the real-world coordinate system and corresponding points in the display space of the optical see-through head-mounted display device based on the alignment measurement, as described above. Process1200may include additional implementations, such as any single implementation or any combination of implementations described below and/or in connection with one or more other processes described elsewhere herein. In some implementations, the voluntary eye blink may be detected based on the information received from the eye tracking device indicating that the user wearing the optical see-through head-mounted display device closed one or more eyes for a duration that satisfies a threshold value, that the user blinked multiple times at a frequency that satisfies a threshold value, and/or that the user closed one or more eyes according to a particular pattern. Additionally, or alternatively, the voluntary eye blink may be detected using one or more machine learning models. AlthoughFIG.12shows example blocks of process1200, in some implementations, process1200may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.12. Additionally, or alternatively, two or more of the blocks of process1200may be performed in parallel. The foregoing disclosure provides illustration and description but is not intended to be exhaustive or to limit the implementations to the precise forms disclosed. Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the implementations. As used herein, the term “component” is intended to be broadly construed as hardware, firmware, and/or a combination of hardware and software. Some implementations are described herein in connection with thresholds. As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, more than the threshold, higher than the threshold, greater than or equal to the threshold, less than the threshold, fewer than the threshold, lower than the threshold, less than or equal to the threshold, equal to the threshold, or the like. Certain user interfaces have been described herein and/or shown in the figures. A user interface may include a graphical user interface, a non-graphical user interface, a text-based user interface, and/or the like. A user interface may provide information for display. In some implementations, a user may interact with the information, such as by providing input via an input component of a device that provides the user interface for display. In some implementations, a user interface may be configurable by a device and/or a user (e.g., a user may change the size of the user interface, information provided via the user interface, a position of information provided via the user interface, etc.). Additionally, or alternatively, a user interface may be pre-configured to a standard configuration, a specific configuration based on a type of device on which the user interface is displayed, and/or a set of configurations based on capabilities and/or specifications associated with a device on which the user interface is displayed. 
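As a non-limiting illustration of the voluntary-blink criteria described above in connection with processes 1000, 1100, and 1200, the following sketch classifies eye-closure events using a closure-duration threshold, a multiple-blink pattern window, and an alternating wink sequence. The data format, threshold values, and function names are illustrative assumptions and are not part of this disclosure.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ClosureEvent:
        start: float        # seconds at which the closure began
        end: float          # seconds at which the eye(s) reopened
        left_closed: bool
        right_closed: bool

    def is_voluntary_blink(events: List[ClosureEvent],
                           min_duration: float = 0.5,
                           pattern_window: float = 1.5) -> bool:
        # Classify the most recent eye-closure activity as a voluntary blink.
        if not events:
            return False
        last = events[-1]
        # Criterion 1: a single closure whose duration satisfies a threshold value.
        if last.end - last.start >= min_duration:
            return True
        # Criterion 2: multiple blinks occurring within a threshold time of one another.
        recent = [e for e in events if last.end - e.end <= pattern_window]
        if len(recent) >= 2:
            return True
        # Criterion 3: a wink sequence in which a first eye is closed while a second
        # eye is open and the second eye is subsequently closed while the first is open.
        if len(events) >= 2:
            prev = events[-2]
            if (prev.left_closed != prev.right_closed
                    and last.left_closed != last.right_closed
                    and prev.left_closed != last.left_closed):
                return True
        return False

As noted above, a trained machine learning model operating on the tracked gaze information could replace or supplement these heuristics to distinguish voluntary blinks from spontaneous or reflex blinks.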
It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code—it being understood that software and hardware can be designed to implement the systems and/or methods based on the description herein. Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set. No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, etc.), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. | 87,085 |
11861063 | DETAILED DESCRIPTION Example embodiments are described in greater detail below with reference to the accompanying drawings. In the following description, like drawing reference numerals are used for like elements, even in different drawings. The matters defined in the description, such as detailed construction and elements, are provided to assist in a comprehensive understanding of the example embodiments. However, it is apparent that the example embodiments can be practiced without those specifically defined matters. Also, well-known functions or constructions are not described in detail since they would obscure the description with unnecessary detail. An eye-tracking device and a display apparatus including the same will now be described more fully with reference to the accompanying drawings, in which example embodiments are shown. The same reference numerals in the drawings denote the same elements, and sizes of elements in the drawings may be exaggerated for clarity and convenience of explanation. Also, example embodiments are described, and various modifications may be made from the example embodiments. Also, when a first element is “on” or “over” a second element in a layer structure, it may include a case where a first element contacts a second element and is directly disposed on the top, bottom, left, or right of the second element, and a case where the first element does not contact the second element and is disposed on the top, bottom, left, or right of the second element with a third element therebetween. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. For example, the expression, “at least one of a, b, and c,” should be understood as including only a, only b, only c, both a and b, both a and c, both b and c, all of a, b, and c, or any variations of the aforementioned examples. FIG.1is a cross-sectional view illustrating a structure of an eye-tracking device according to an example embodiment. Referring toFIG.1, an eye-tracking device100according to an example embodiment may include a light source10that emits illumination light, a photodetector array14that detects light, a light guide plate8that transmits the illumination light emitted from the light source10to an observer's eye E and transmits illumination light reflected from the retina of the observer's eye E in a direction opposite to a propagation direction of the illumination light emitted from the light source10, and a signal processor15that determines an angle of rotation of the observer's eye based on an output of the photodetector array14. The light source10may be an infrared light source that emits infrared light. For example, the light source10may be a laser diode (LD) or a light-emitting diode (LED) that emits near-infrared light having a wavelength ranging from about 750 nm to about 3 μm. Also, a low-power light source satisfying the safety standards for human eyes may be selected as the light source10. The photodetector array14may include a plurality of infrared detectors for detecting infrared light. For example, the photodetector array14may include an array of infrared detectors that are arranged in a two-dimensional (2D) manner and may detect light in a near-infrared band. In particular, the plurality of infrared detectors of the photodetector array14may be photodiodes having a high sensitivity to a wavelength band of the illumination light emitted from the light source10. 
The light guide plate8may be formed of a material transparent to infrared light to function as an optical waveguide for transmitting the illumination light. For example, the light guide plate8may be formed of a material such as polymethyl methacrylate (PMMA) or polydimethylsiloxane (PDMS). Also, the light guide plate8may have a thin flat panel construction. The light guide plate8may include a first surface8aand a second surface8bopposite to the first surface8a. From a perspective of the observer, the first surface8aand the second surface8bmay be referred to as a front surface and a rear surface, respectively. The light source10and the photodetector array14may be disposed closer to the first surface8aof the light guide plate8than to the second surface8bof the light guide plate8, and first and second input/output couplers12and13configured to obliquely guide incident light into the light guide plate8and output light obliquely traveling inside the light guide plate8to the outside of the light guide plate8may be disposed on the second surface8bof the light guide plate8. For example, the first input/output coupler12may be disposed on an edge portion of the second surface8bof the light guide plate8, and the second input/output coupler13may be disposed on another edge portion of the second surface8bof the light guide plate8. The first and second input/output couplers12and13are configured to obliquely guide light, which is incident in a direction substantially perpendicular to the first and second input/output couplers12and13from the outside of the light guide plate8, into the light guide plate8. For example, the first and second input/output couplers12and13may be configured to guide light, which is incident on the first and second input/output couplers12and13within a predetermined angle of incidence in a direction perpendicular to surfaces of the first and second input/output couplers12and13, into the light guide plate8. The light guided into the light guide plate8is repeatedly totally reflected by the first surface8aand the second surface8band travels inside the light guide plate8. Also, the first and second input/output couplers12and13are configured to output light, which is obliquely incident on the first and second input/output couplers12and13from the inside of the light guide plate8, in the substantially perpendicular direction to the outside of the light guide plate8. The first and second input/output couplers12and13may be configured to be applied only to light obliquely incident on the surfaces of the first and second input/output couplers12and13within the predetermined angle of incidence range and not to be applied to light perpendicularly incident on the surfaces of the first and second input/output couplers12and13. In other words, the first and second input/output couplers12and13may simply function as transparent flat panels for light perpendicularly incident on the surfaces of the first and second input/output couplers12and13. Each of the first and second input/output couplers12and13may include, for example, a diffractive optical element (DOE) or a holographic optical element (HOE). The DOE includes a plurality of periodic fine grating patterns. The plurality of periodic grating patterns of the DOE function as diffraction gratings and diffract incident light. 
In particular, the grating patterns may change a propagation direction of light by causing destructive interference and constructive interference by diffracting light incident in a specific angle range, according to a size, a height, a period, etc. of the grating patterns. Also, the HOE includes periodic fine patterns of materials having different refractive indices, instead of grating patterns. Although the configuration of the HOE is different from that of the DOE, the operating principle of the HOE may be substantially the same as that of the DOE. The DOE or the HOE included in each of the first and second input/output couplers12and13may be configured to be dependent on a wavelength. In other words, the first and second input/output couplers12and13may be configured to function as input/output couplers only for a wavelength band of the illumination light emitted from the light source10and to be transparent to light in other wavelength bands. For example, only light in an infrared band may be coupled by the first and second input/output couplers12and13, and light in other wavelength bands such as visible light may be transmitted through the first and second input/output couplers12and13. In this configuration of the light guide plate8, light incident on the first input/output coupler12travels inside the light guide plate8through total reflection and then is output to the outside of the light guide plate8through the second input/output coupler13, and light incident on the second input/output coupler13travels inside the light guide plate8through total reflection and then is output to the outside of the light guide plate8through the first input/output coupler12. The light source10may be aligned with the first input/output coupler12so that the light source10and the first input/output coupler12are located at a same position in a longitudinal direction (a horizontal direction) of the light guide plate8. The second input/output coupler13may be disposed at a predetermined position in the longitudinal direction of the light guide plate8, where the observer's eye E is assumed to be placed. As shown inFIG.1, when the light source10faces the first input/output coupler12and the observer's eye E faces the second input/output coupler13, the illumination light emitted from the light source10first passes through the first surface8aand is incident on the first input/output coupler12. Next, the illumination light travels inside the light guide plate8in a first direction (i.e., rightward inFIG.1). The illumination light is diffracted by the second input/output coupler13, passes through the first surface8aof the light guide plate8, and reaches the observer's eye E. The illumination light reflected by the observer's eye E passes through the first surface8aof the light guide plate8, is incident on the second input/output coupler13, and then travels inside the light guide plate8in a second direction (i.e., leftward inFIG.1) that is opposite to the first direction. The illumination light reflected by the observer's eye E is diffracted by the first input/output coupler12, passes through the first surface8aof the light guide plate8, and reaches the photodetector array14. In order to separate the illumination light emitted from the light source10from the illumination light reflected by the observer's eye E, the eye-tracking device100may further include a beam splitter6. 
The beam splitter6may be disposed closer to the first surface8aof the light guide plate8than to the second surface8bof the light guide plate8, and may face the first input/output coupler12. The beam splitter6may include a first surface6aand a second surface6bthat are adjacent to each other and share a vertex of the beam splitter6. The light source10may face the first surface6aand the photodetector array14may face the second surface6b. Also, the eye-tracking device100may further include a collimating lens5disposed between the light source10and the beam splitter6and configured to make a beam emitted from the light source10parallel. The light source10, the collimating lens5, the beam splitter6, and the first input/output coupler12may be aligned with one another, whereas the photodetector array14may be disposed in an optical path bent about 90° by the beam splitter6. In this configuration, the beam splitter6may be configured to transmit the illumination light emitted from the light source10and reflect the illumination light reflected by the observer's eye E. The illumination light emitted from the light source10may be incident on the first surface6aof the beam splitter6, may pass through the beam splitter6, and may reach the first input/output coupler12. The illumination light reflected by the observer's eye E may be output-coupled by the first input/output coupler12, may be reflected by the beam splitter6, and may be incident on the photodetector array14through the second surface6bof the beam splitter6. The beam splitter6may be, for example, a half mirror that simply reflects half of incident light and transmits the other half. Instead, the beam splitter6may be a polarizing beam splitter having polarization selectivity. For example, the beam splitter6may be configured to reflect light having a first linear polarization component and transmit light having a second linear polarization component perpendicular to the first linear polarization component. In particular, light having the second linear polarization component from among the illumination light emitted from the light source10passes through the beam splitter6and is incident on the first input/output coupler12. In order to improve light use efficiency, the light source10may be a polarized light source such as a polarized laser that emits only light having the second linear polarization component. Accordingly, the illumination light emitted from the light source10may pass through the beam splitter6and may be incident on the first input/output coupler12with little loss. When the beam splitter6is a polarizing beam splitter, the eye-tracking device100may further include a quarter-wave plate16disposed between the first surface8aof the light guide plate8and the beam splitter6. The light source10, the collimating lens5, the beam splitter6, the quarter-wave plate16, and the first input/output coupler12may be aligned with one another. The quarter-wave plate16delays incident light by a quarter wavelength of the incident light. Accordingly, the illumination light having the second linear polarization component passing through the beam splitter6has a second circular polarization component while passing through the quarter-wave plate16. Next, the illumination light is reflected by the observer's eye E in a direction opposite to an incident direction and has a first circular polarization component opposite in a rotational direction to the second circular polarization component. 
The illumination light having the first circular polarization component passes through the quarter-wave plate16again to have the first linear polarization component and is reflected by the beam splitter6. The illumination light reflected by the beam splitter6is incident on the photodetector array14. Also, the beam splitter6may be configured to have wavelength selectivity. In other words, the beam splitter6may be configured to function as a beam splitter only for a wavelength band of the illumination light emitted from the light source10and to be transparent to light in other wavelength bands. For example, the beam splitter6may serve as a half mirror or a polarizing beam splitter only for light in an infrared band and may transmit light in other wavelength bands such as visible light. In a structure of the eye-tracking device100according to the present example embodiment, a 2D intensity distribution of the illumination light emitted from the light source10, incident on the observer's eye E, and reflected from the observer's eye E to the photodetector array14may vary according to an angle of rotation of the observer's eye E. For example, as shown by et1ofFIG.1, when the observer's eye E looks directly at the second input/output coupler13, that is, when an optical axis of the pupil of the observer's eye E is perpendicular to the second input/output coupler13, the illumination light output-coupled in a direction perpendicular to the second input/output coupler13from among the illumination light emitted from the light source10mainly reaches the retina of the observer's eye E. Next, the illumination light reflected by the retina is perpendicularly incident on the second input/output coupler13, travels in the opposite direction along the same optical path as a previous optical path, and is mainly incident on a central portion of the photodetector array14, as marked by a solid line. Only a small part of the illumination light output-coupled in a direction oblique to the second input/output coupler13from among the illumination light emitted from the light source10may be reflected by the retina of the observer's eye E and may reach the photodetector array14. As shown by et2ofFIG.1, when the observer's eye E obliquely looks at the second input/output coupler13, that is, when the optical axis of the pupil of the observer's eye E is inclined with respect to the second input/output coupler13, the illumination light output-coupled in a direction oblique to the second input/output coupler13from among the illumination light emitted from the light source10mainly reaches the retina of the observer's eye E. In particular, the illumination light output-coupled by the second input/output coupler13at the same angle as an angle formed between the optical axis of the pupil of the observer's eye E and the second input/output coupler13mainly reaches the retina of the observer's eye E. Next, the illumination light reflected by the retina is obliquely incident on the second input/output coupler13, travels in the opposite direction along the same optical path as a previous optical path, and is mainly incident on a peripheral portion of the photodetector array14, as marked by a dashed line. 
Only a small part of the illumination light output-coupled by the second input/output coupler13at an angle different from an angle formed between the optical axis of the pupil of the observer's eye E and the second input/output coupler13from among the illumination light emitted from the light source10may be reflected by the retina of the observer's eye E and may reach the photodetector array14. Accordingly, the illumination light incident on the observer's eye E along the optical axis of the pupil of the observer's eye E may be reflected by the retina of the observer's eye E and may return to the photodetector array14. In order to cause the illumination light reflected by the retina of the observer's eye E to travel in the opposite direction along the same optical path as the previous optical path, the second input/output coupler13may be configured to have no optical refractive power or to intentionally have an optical refractive power. For example,FIG.2Aillustrates a path of a beam in the observer's eye E when the second input/output coupler13of the eye-tracking device100has no optical refractive power, andFIG.2Billustrates a path of a beam in the observer's eye E when the second input/output coupler13of the eye-tracking device100has an optical refractive power. Referring toFIG.2A, the second input/output coupler13may be configured to output-couple light, which is incident on the second input/output coupler13at the same angle from the inside of the light guide plate8, at the same angle. Accordingly, the light incident on the second input/output coupler13at the same angle from the inside of the light guide plate8is output-coupled by the second input/output coupler13to form parallel beams. Light incident on the pupil of the observer's eye E in a direction parallel to the optical axis of the pupil of the observer's eye E from among the output-coupled parallel beams is focused on the retina of the observer's eye E by the pupil. Next, light reflected by the retina of the observer's eye E becomes a parallel beam again by the pupil of the observer's eye E and is incident on the second input/output coupler13. Accordingly, illumination light may travel in the opposite direction along the same optical path as a previous optical path and may reach the photodetector array14. Also, referring toFIG.2B, the second input/output coupler13may be configured to output-couple light incident on the second input/output coupler13at the same angle from the inside of the light guide plate8and focus the output-coupled light on one point. In other words, the second input/output coupler13may be configured to perform not only a function of an input/output coupler but also a function of a lens having a positive (+) refractive power. To this end, the second input/output coupler13may include the HOE designed to have a positive (+) refractive power. In more detail, the second input/output coupler13may have a positive (+) refractive power to additionally focus light, which is incident on the pupil of the observer's eye E, in front of the retina, particularly, on the center of rotation of the eye E, due to the pupil of the observer's eye E. Accordingly, light incident on the observer's eye E along the optical axis of the pupil of the observer's eye E from among illumination light output-coupled by the second input/output coupler13may be focused on the center of rotation of the observer's eye E and then may be perpendicularly incident on the retina of the observer's eye E. 
In this case, because the illumination light is perpendicularly incident on the retina of the observer's eye E, reflected illumination light travels in the opposite direction along the same optical path as a previous optical path from the retina of the observer's eye E. Also, light passing through the pupil in a direction perpendicular to the pupil of the observer's eye E, that is, along the optical axis of the pupil of the observer's eye E, always exists at the center of rotation of the observer's eye E, regardless of a position of the eye E. Accordingly, a range of a region where the position of the observer's eye E may be tracked is increased. Also, because illumination light having a large beam diameter may not be required to ensure that the illumination light is incident on the pupil of the observer's eye E, light use efficiency may be improved and power consumption of the eye-tracking device100may be reduced. As described above, only a part of the illumination light emitted from the light source10is reflected by the retina of the observer's eye E and reaches the photodetector array14. The illumination light reaching the photodetector array14varies according to an angle formed between the optical axis of the pupil of the observer's eye E and the second input/output coupler13. In particular, a 2D intensity distribution of the illumination light incident on the photodetector array14may vary according to an angle formed between the optical axis of the pupil of the observer's eye E and the second input/output coupler13. Such a 2D intensity distribution of incident light may be detected by a plurality of infrared detectors of the photodetector array14. For example,FIG.3illustrates a change in an output of the photodetector array14according to the rotation of the observer's eye E when the photodetector array14includes a 2×2 array of photodiodes. InFIG.3, X denotes a rotational displacement in a left-and-right direction of the observer's eye E, and Y denotes a rotational displacement in an up-and-down direction of the observer's eye E. Referring toFIG.3, when the observer's eye E looks straight ahead, that is, in the case of X0° Y0°, light having substantially the same intensity is incident on four photodiodes. In particular, the intensity of the incident light is slightly greater than a middle intensity between a minimum intensity and a maximum intensity of light incident on each photodiode in an entire angle range of the rotation of the observer's eye E. As an optical axis of the pupil of the observer's eye E is inclined with respect to a light incident surface of the second input/output coupler13, an intensity of light incident on some photodiodes increases and an intensity of light incident on other photodiodes decreases. For example, when the observer's eye E laterally rotates (e.g., X5° Y0°, X10° Y0°, and X15° Y0°), an intensity of light incident on photodiodes arranged on the left side increases/decreases or an intensity of light incident on photodiodes arranged on the right side decreases/increases. However, when an angle of rotation of the observer's eye E in the left-and-right direction exceeds a detection limit of the eye-tracking device100(e.g., X20° Y0°, X25° Y0°, and X30° Y0°), an intensity of light incident on all photodiodes is minimized. 
Likewise, as the observer's eye E vertically rotates (e.g., X0° Y5°, X0° Y10°, and X0° Y15°), an intensity of light incident on photodiodes arranged on the upper side increases/decreases or an intensity of light incident on photodiodes arranged on the lower side decreases/increases. When an angle of rotation of the observer's eye E in the up-and-down direction exceeds a detection limit of the eye-tracking device100, an intensity of light incident on all photodiodes is minimized. The detection limit of the eye-tracking device100in each of the left-and-right direction and the up-and-down direction inFIG.3is about ±15°. Also,FIG.4illustrates a change in an output of the photodetector array14according to the rotation of the observer's eye E when the photodetector array14includes a 2×10 array of photodiodes. Referring toFIG.4, when the observer's eye E looks straight ahead, light having substantially the same intensity is incident on four photodiodes at the center, and an intensity of light incident on photodiodes gradually decreases away from the center. As the observer's eye E laterally rotates (e.g., X5° Y0°, X10° Y0°, X15° Y0°, X20° Y0°, X25° Y0°, and X30° Y0°), an intensity of light incident on photodiodes on the left side increases/decreases or an intensity of light incident on photodiodes on the right side decreases/increases, and a region where light is mainly incident on the photodetector array14moves leftward or rightward. InFIG.4, a detection limit of the eye-tracking device100in the left-and-right direction is about ±35° and a detection limit of the eye-tracking device100in the up-and-down direction is about ±15°. The detection limit of the eye-tracking device100described with reference toFIGS.3and4is an example merely provided for better understanding, and the detection limit of the eye-tracking device100may be determined according to the number of photodiodes two-dimensionally arranged in the photodetector array14or optical properties of the light guide plate8and the first and second input/output couplers12and13. As described above, an angle of rotation of the observer's eye E may be accurately tracked based on a change in an output of the photodetector array14ofFIGS.3and4. For example, the signal processor15may determine an angle of rotation of the observer's eye E based on a 2D intensity distribution of light detected by the photodetector array14. To this end, the signal processor15may include previously measured information about a relationship between a 2D intensity distribution of light detected by the photodetector array14and an angle of rotation of the observer's eye E. The signal processor15may include or communicate with a computer-readable recording medium storing computer-readable code. The computer-readable recording medium is any data storage device that can store data that can be thereafter read by a computer system. Examples of the computer-readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion. Also, an example embodiment may be written as a computer program transmitted over a computer-readable transmission medium, such as a carrier wave, and received and implemented in general-use or special-purpose digital computers that execute the programs. 
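As a non-limiting illustration of how the signal processor15might use the previously measured relationship between the 2D intensity distribution and the angle of rotation, the following sketch normalizes a photodiode reading, compares it against a calibration table by normalized correlation, and returns the best-matching rotation angle. The calibration-table format, the NumPy dependency, and the function names are illustrative assumptions and are not part of this disclosure.

    import numpy as np

    def estimate_rotation_angle(intensities, calibration):
        # `intensities` is a 2D array of photodiode readings (e.g., 2x2 or 2x10).
        # `calibration` is assumed to be a list of (angle_xy, reference_distribution)
        # pairs obtained from previously measured intensity distributions at known
        # rotation angles of the observer's eye.
        meas = np.asarray(intensities, dtype=float).ravel()
        meas = meas / (np.linalg.norm(meas) + 1e-12)   # normalize overall brightness
        best_angle, best_score = None, -np.inf
        for angle_xy, reference in calibration:
            ref = np.asarray(reference, dtype=float).ravel()
            ref = ref / (np.linalg.norm(ref) + 1e-12)
            score = float(np.dot(meas, ref))           # normalized correlation
            if score > best_score:
                best_angle, best_score = angle_xy, score
        return best_angle

Interpolating between the best-matching calibration entries, or replacing the table lookup with a small regression model, could provide finer angular resolution while still avoiding full image processing of a camera frame.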
In this way, the eye-tracking device100according to the present example embodiment may accurately detect an angle of rotation of the observer's eye E over a wide angle range. Also, the eye-tracking device100may relatively rapidly track the observer's eye because the eye-tracking device100does not require a complicated calculation, unlike an existing eye-tracking device that analyzes an eye image obtained by using a charge-coupled device (CCD) image sensor or a complementary metal-oxide-semiconductor (CMOS) image sensor by using software through an image processing algorithm. Also, because the eye-tracking device100uses only photodiodes, the eye-tracking device100may be manufactured relatively inexpensively. Also, because the eye-tracking device100uses the light guide plate8that is relatively thin, the eye-tracking device100may be made compact to have a small thickness and a small weight. Also, because the eye-tracking device100does not need to use a high power light source, power consumption is relatively small and the risk of damage to the observer's eye is relatively small. FIG.5is a cross-sectional view illustrating a structure of an eye-tracking device100′ according to another example embodiment. Compared to the eye-tracking device100ofFIG.1, in the eye-tracking device100′ ofFIG.5, positions of the light source10and the photodetector array14are changed. For example, the photodetector array14may face the first surface6aof the beam splitter6, and the light source10may face the second surface6bof the beam splitter6. When the beam splitter6is configured to reflect light having a first linear polarization component and to transmit light having a second linear polarization component perpendicular to the first linear polarization component, the light source10may include a polarized laser that emits only light having the first linear polarization component. Instead, the light source10may include a polarized laser that emits only light having the second linear polarization component, and the beam splitter6may be configured to reflect light having the second linear polarization component and to transmit light having the first linear polarization component. FIG.6is a cross-sectional view illustrating a structure of an eye-tracking device110according to another example embodiment. Compared to the eye-tracking device100ofFIG.1, in the eye-tracking device110ofFIG.6, the first and second input/output couplers12and13are disposed on the first surface8aof the light guide plate8. For example, the first input/output coupler12may be disposed on an edge portion of the first surface8aof the light guide plate8, and the second input/output coupler13may be disposed on another edge portion of the first surface8aof the light guide plate8. In particular, the first input/output coupler12is disposed on the first surface8aof the light guide plate8to face the beam splitter6. FIG.7is a cross-sectional view illustrating a structure of an eye-tracking device120according to another example embodiment. Referring toFIG.7, the eye-tracking device120may further include a varifocal lens17disposed between the light guide plate8and the beam splitter6. Although the varifocal lens17is disposed between the quarter-wave plate16and the light guide plate8inFIG.7, the present disclosure is not limited thereto. For example, the varifocal lens17may be disposed in any optical path between the light guide plate8and the beam splitter6without limitation. 
Alternatively, the varifocal lens17may be disposed in any optical path between the light guide plate8and the observer's eye E without limitation. As shown inFIG.2B, when the second input/output coupler13has a positive (+) refractive power and when a distance between the observer's eye E and the second input/output coupler13is within a predetermined range, illumination light output-coupled from the second input/output coupler13is accurately focused on the center of rotation of the observer's eye E. When the observer's eye E is too close to the second input/output coupler13(e.g., a distance between the observer's eye E and the second input/output coupler13is less than a lower distance limit) or too far from the second input/output coupler13(e.g., the distance between the observer's eye E and the second input/output coupler13is greater than an upper distance limit), the illumination light is not focused on the center of rotation of the observer's eye E. Also, because a focal length of the pupil of the eye E may vary according to the observer, the illumination light may not be focused on the center of rotation of the observer's eye E according to the focal length of the pupil of the observer's eye E. When the illumination light is not accurately focused on the center of rotation of the observer's eye E, illumination light reflected by the retina may not travel along the same optical path as a previous optical path, thereby reducing the accuracy of measurement. Also, as shown inFIG.2A, when the second input/output coupler13has no optical refractive power, the illumination light may not be accurately focused on the retina of the observer's eye E according to the focal length of the pupil of the observer's eye E. Even in this case, the illumination light reflected by the retina may not travel along the same optical path as the previous optical path, thereby reducing the accuracy of measurement. The varifocal lens17may change a focal length to accurately focus the illumination light output-coupled from the second input/output coupler13on the center of rotation of the observer's eye E or on the retina of the observer's eye E. For example, the varifocal lens17may be configured to change a focal length according to a distance between the second input/output coupler13and the observer's eye E. Accordingly, the accuracy of measurement may be improved by using the varifocal lens17. For example, the varifocal lens17may include a liquid crystal lens or an electrowetting lens. FIG.8is a cross-sectional view illustrating a structure of an eye-tracking device130according to another example embodiment. Compared to the eye-tracking device100ofFIG.1, the eye-tracking device130ofFIG.8includes the light guide plate8having a curved shape. For example, when the eye-tracking device130is applied to a display apparatus worn on a person's head such as a head-mounted display (HMD), it may be useful to use the light guide plate8having a curved shape. The eye-tracking device100,100′,110,120, or130may be applied to a display apparatus for providing an image in accordance with an observer's viewpoint. In particular, the eye-tracking device100,100′,110,120, or130may be easily integrated into a display apparatus for providing an image by using a light guide plate. For example,FIG.9is a cross-sectional view of a display apparatus including an eye-tracking device according to an example embodiment. Referring toFIG.9, a display apparatus200is integrated with the eye-tracking device100ofFIG.1. 
For example, the display apparatus 200 may include an image forming device 20 for forming an image, an eye-tracking device for tracking an observer's eye, and an image shifter 27 for moving the image according to the observer's eye position provided from the eye-tracking device. Although the eye-tracking device of FIG. 9 has substantially the same structure as that of the eye-tracking device 100 of FIG. 1, the display apparatus 200 may include any of the eye-tracking devices 100′, 110, 120, and 130 of FIGS. 5 through 8. The eye-tracking device may include the light source 10 that emits infrared illumination light, the photodetector array 14 that detects infrared light, the light guide plate 8 that transmits illumination light, the beam splitter 6 that separates illumination light emitted from the light source 10 from illumination light reflected from the observer's eye E, and the signal processor 15 that determines the angle of rotation of the observer's eye based on an output of the photodetector array 14. Also, the eye-tracking device may include a wavelength selective mirror 11 that reflects infrared illumination light emitted from the light source 10 to the beam splitter 6. For example, the wavelength selective mirror 11 may be configured to reflect light in an infrared band and transmit light in a visible band. The collimating lens 5 may be further disposed between the wavelength selective mirror 11 and the beam splitter 6. Also, the image forming device 20 may include a light source 21 that emits visible light, a spatial light modulator 24 that modulates the visible light emitted from the light source 21 and generates an image, and a beam splitter 23 that transmits the visible light emitted from the light source 21 to the spatial light modulator 24 and transmits the image formed by the spatial light modulator 24 to the light guide plate 8. Also, the image forming device 20 may further include a collimating lens 22 that is disposed between the light source 21 and the beam splitter 23, a focusing lens 25 that focuses light transmitted through the beam splitter 23, and an aperture 26 that transmits only light including the image. The aperture 26 may be disposed on a focus position of the focusing lens 25. According to the present example embodiment, the beam splitter 6, the collimating lens 5, and the wavelength selective mirror 11 of the eye-tracking device and the aperture 26, the focusing lens 25, the beam splitter 23, and the spatial light modulator 24 of the image forming device 20 may be aligned with one another. The beam splitter 23 may be a half mirror that simply reflects half of incident light and transmits the other half. Instead, the beam splitter 23 may be a polarizing beam splitter having polarization selectivity. For example, the beam splitter 23 may be configured to reflect light having a first linear polarization component and transmit light having a second linear polarization component perpendicular to the first linear polarization component. In this case, light having the first linear polarization component from among visible light emitted from the light source 21 is reflected by the beam splitter 23 and is incident on the spatial light modulator 24, and light having the second linear polarization component is transmitted through the beam splitter 23 and is discarded. Also, the light source 21 may be a polarized laser that emits only light having the first linear polarization component. Accordingly, light emitted from the light source 21 may all be reflected by the beam splitter 23 and may be incident on the spatial light modulator 24.
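To make the polarizing-beam-splitter option above concrete, the following sketch applies Malus's law to an idealized, lossless polarizing splitter: the component of the incoming linear polarization along the first-polarization axis is reflected toward the spatial light modulator 24, while the orthogonal component is transmitted and discarded on the illumination pass. The function name and the numbers are illustrative assumptions and are not taken from the present disclosure.

import math

def pbs_split(power_w, pol_angle_deg):
    # Idealized polarizing beam splitter (sketch): the polarization component
    # along the "first" axis is reflected toward the spatial light modulator,
    # the orthogonal component is transmitted (and discarded when illuminating).
    # pol_angle_deg is the angle between the incoming linear polarization and
    # the first-polarization axis; losses are ignored.
    theta = math.radians(pol_angle_deg)
    reflected = power_w * math.cos(theta) ** 2    # toward the modulator
    transmitted = power_w * math.sin(theta) ** 2  # discarded
    return reflected, transmitted

# A polarized laser aligned with the first axis sends essentially all of its
# light to the modulator, as described above:
print(pbs_split(1.0, 0.0))   # (1.0, 0.0)
print(pbs_split(1.0, 45.0))  # roughly (0.5, 0.5), i.e. half the light is lost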
In the example embodiment ofFIG.9, the spatial light modulator24may be a reflective spatial light modulator that reflects and modulates incident light. For example, a liquid crystal on silicon (LCoS), a digital micromirror device (DMD), or a semiconductor modulator may be used as the spatial light modulator24. Light reflected by the beam splitter23is modulated by the spatial light modulator24to include image information. Light having the first linear polarization component is reflected by the spatial light modulator24to have the second linear polarization component. Accordingly, light modulated by the spatial light modulator24is transmitted through the beam splitter23. Visible light transmitted through the beam splitter23passes through the focusing lens25and the aperture26. The visible light passing through the aperture26becomes divergent light having a larger beam diameter. Next, the visible light passes through the wavelength selective mirror11and becomes parallel light due to the collimating lens5. Next, the visible light including the image information is transmitted through the beam splitter6of the eye-tracking device and is incident on the light guide plate8. As described above, the beam splitter6may be configured to function as a beam splitter for illumination light emitted from the light source10and to be transparent to light in other wavelength bands. For example, the beam splitter6may be configured to function as a beam splitter only for infrared light and to transmit visible light. Accordingly, visible light emitted from the light source21may pass through the beam splitter6and may be incident on the light guide plate8. The light guide plate8is configured to transmit both infrared light and visible light. To this end, an input coupler7that obliquely guides visible light incident from the outside into the light guide plate8and an output coupler9that outputs visible light obliquely traveling inside the light guide plate8to the outside of the light guide plate may be further disposed on the first surface8aof the light guide plate8. For example, the input coupler7may be disposed on an edge portion of the first surface8aof the light guide plate8, and the output coupler9may be disposed on another edge portion of the first surface8aof the light guide plate8. The input coupler7and the output coupler9may include a DOE or an HOE, like the first and second input/output couplers12and13. In particular, the output coupler9may be configured to have an optical refractive power. The output coupler9may output-couple light obliquely incident on the output coupler9and may focus the light on one point. Also, the first and second input/output couplers12and13that obliquely guide infrared light, which is incident from the outside, into the light guide plate8and output infrared light, which obliquely travels inside the light guide plate8, to the outside of the light guide plate8may be disposed on the second surface8bof the light guide plate8. For example, the first input/output coupler12may be disposed on an edge portion of the second surface8bof the light guide plate8, and the second input/output coupler13may be disposed on another edge portion of the second surface8bof the light guide plate8. Although the first and second input/output couplers12and13are disposed on the second surface8bof the light guide plate8and the input coupler7and the output coupler9are disposed on the first surface8aof the light guide plate8, the present disclosure is not limited thereto. 
For example, the first and second input/output couplers12and13may be disposed on the first surface8aof the light guide plate8, and the input coupler7and the output coupler9may be disposed on the second surface8bof the light guide plate8. In any case, the input coupler7and the first input/output coupler12face each other, and the output coupler9and the second input/output coupler13face each other. Also, the input coupler7may face the beam splitter6. Accordingly, the first input/output coupler12, the input coupler7, the beam splitter6, the collimating lens5, the wavelength selective mirror11, the aperture26, the focusing lens25, the beam splitter23, and the spatial light modulator24may be sequentially aligned with one another, or the input coupler7, the first input/output coupler12, the beam splitter6, the collimating lens5, the wavelength selective mirror11, the aperture26, the focusing lens25, the beam splitter23, and the spatial light modulator24may be sequentially aligned with one another. The input coupler7and the output coupler9may be configured to serve as couplers only for visible light, and the first and second input/output couplers12and13may be configured to serve as couplers only for infrared light. Accordingly, infrared light transmitted through the beam splitter6is transmitted through the input coupler7and is input-coupled by the first input/output coupler12. Also, infrared light output-coupled by the second input/output coupler13is transmitted through the output coupler9and is incident on the observer's eye E, and infrared light reflected by the observer's eye E is transmitted through the output coupler9and is input-coupled by the second input/output coupler13. The infrared light output-coupled by the first input/output coupler12may be transmitted through the input coupler7and may be reflected by the beam splitter6. Accordingly, visible light including image information may be provided through the input coupler7and the output coupler9to the observer's eye E. Also, infrared light for eye tracking may be emitted through the first and second input/output couplers12and13to the observer's eye E, may be reflected by the observer's eye E, and may be incident on the photodetector array14. The signal processor15of the eye-tracking device may determine the angle of rotation of the observer's eye based on an output of the photodetector array14and may control the image shifter27based on determined eye information. For example, the signal processor15may accurately provide an image to the observer's eye E by controlling the image shifter27to move a position of the image in a direction perpendicular to an optical axis according to the observer's eye position. Accordingly, regardless of a change in a position of the observer's eye E, the image may always be provided to the observer's eye E. The image shifter27may be disposed adjacent to, for example, a light exit surface of the aperture26. The image shifter27may move a path of the image in the direction perpendicular to the optical axis under the control of the signal processor15, without changing a propagation angle of the image passing through the aperture26. Instead, the image shifter27may be an actuator that moves the image forming device20in the direction perpendicular to the optical axis. As described above, the display apparatus200having a structure ofFIG.9may be easily integrated to an eye-tracking device. 
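As an illustration of how the signal processor 15 could turn the photodetector array 14 output into an eye-rotation angle and a corresponding image shift for the image shifter 27, the sketch below estimates the angle from the centroid of the photodiode readings and converts it to a lateral shift through calibration constants. The centroid approach, the function names and the numeric constants are assumptions made for illustration only; the present disclosure does not specify this particular computation.

def eye_rotation_angle(intensities, degrees_per_pixel=0.5, center_index=None):
    # Estimate the rotation angle from the centroid of the photodiode readings,
    # mapped linearly to degrees via an assumed calibration factor.
    if center_index is None:
        center_index = (len(intensities) - 1) / 2.0
    total = sum(intensities)
    if total == 0:
        return 0.0
    centroid = sum(i * v for i, v in enumerate(intensities)) / total
    return (centroid - center_index) * degrees_per_pixel

def image_shift_mm(angle_deg, mm_per_degree=0.1):
    # Lateral shift to apply perpendicular to the optical axis (assumed gain).
    return angle_deg * mm_per_degree

frame = [0, 1, 3, 9, 4, 1, 0, 0]       # one frame of photodiode readings
angle = eye_rotation_angle(frame)
print(angle, image_shift_mm(angle))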
For example, an optical path of infrared light for eye tracking and an optical path of visible light for providing an image may be formed by using one light guide plate 8, and the infrared light for eye tracking and the visible light for providing the image may travel along one optical path without interfering with each other. Accordingly, the display apparatus 200 employing the eye-tracking device may be made compact. Also, because visible light including image information is focused by the output coupler 9 having a large size and is incident on the observer's eye E, the display apparatus 200 may provide a relatively large field of view (FoV). The second input/output coupler 13 is configured to serve as a coupler only for infrared light, and the output coupler 9 is configured to serve as a coupler only for visible light that is obliquely incident. Visible light incident from the outside toward the second surface 8b of the light guide plate 8 may pass through the second input/output coupler 13 and the output coupler 9 and may be incident on the observer's eye E. Accordingly, the display apparatus 200 according to the present example embodiment may be applied to realize AR or MR. In particular, the display apparatus 200 according to the present example embodiment, which is a holographic display apparatus, may be a near-eye AR display apparatus. For example, the observer's eye E may see external light containing an external foreground scene IMG2 perpendicularly transmitted through the output coupler 9 and an image IMG1 reproduced by the spatial light modulator 24. The external light may contain the external foreground scene IMG2 which actually exists in front of the observer, instead of an artificial image modulated and generated by a separate spatial light modulator or displayed by a separate display panel. Accordingly, the observer may simultaneously recognize both the image IMG1 that is an artificially generated virtual image and the external foreground scene IMG2 that actually exists. Although the spatial light modulator 24 is a reflective spatial light modulator in FIG. 9, a transmissive spatial light modulator for modulating transmitted light may be used. For example, FIG. 10 is a cross-sectional view illustrating a display apparatus including an eye-tracking device according to another example embodiment. Referring to FIG. 10, an image forming device 20′ of a display apparatus 200′ may include the light source 21, the collimating lens 22, a spatial light modulator 24′, the focusing lens 25, and the aperture 26 that are sequentially disposed in a propagation direction of light. In particular, the first input/output coupler 12, the input coupler 7, the beam splitter 6, the collimating lens 5, the wavelength selective mirror 11, the aperture 26, the focusing lens 25, the spatial light modulator 24′, the collimating lens 22, and the light source 21 may be aligned with one another. The spatial light modulator 24′ is a transmissive spatial light modulator that modulates transmitted light. For example, the spatial light modulator 24′ may use a liquid crystal device (LCD). When the transmissive spatial light modulator 24′ is used, the configuration of the optical system may be further simplified because the beam splitter 23 may be omitted. Other elements of the display apparatus 200′ of FIG. 10 are substantially the same as those of the display apparatus 200 of FIG. 9. As described above, the display apparatus 200 or 200′ may be applied to realize AR or MR.
For example, FIGS. 11 through 13 illustrate various electronic devices to which a display apparatus may be applied. As shown in FIGS. 11 through 13, at least some display apparatuses according to various example embodiments may constitute wearable apparatuses. In other words, a display apparatus may be applied to a wearable apparatus. For example, the display apparatus may be applied to an HMD. Also, the display apparatus may be applied to a glasses-type display, a goggle-type display, etc. Wearable electronic devices of FIGS. 11 through 13 may interoperate with smartphones. In addition, display apparatuses according to various example embodiments may be provided in smartphones, and the smartphones may be used as multi-image display apparatuses. In other words, the display apparatus may be applied to small electronic devices (mobile electronic devices), instead of the wearable apparatuses of FIGS. 11 through 13. Fields to which display apparatuses according to various example embodiments are applied may be changed in various ways. For example, display apparatuses according to various example embodiments may be applied to realize AR or MR and may also be applied to other fields. In other words, the various example embodiments may be applied to displays that may simultaneously provide a plurality of images, instead of AR or MR. The foregoing exemplary embodiments are merely exemplary and are not to be construed as limiting. The present teaching can be readily applied to other types of apparatuses. Also, the description of the exemplary embodiments is intended to be illustrative, and not to limit the scope of the claims, and many alternatives, modifications, and variations will be apparent to those skilled in the art. While an eye-tracking device and a display apparatus including the same have been particularly shown and described with reference to example embodiments thereof, it will be understood by one of ordinary skill in the art that various changes in form and details may be made therein. | 50,241
11861064 | DETAILED DESCRIPTION OF THE INVENTION FIGS.1and2schematically depict a wearable data input device ID according to an embodiment of the invention when worn on a human hand HH. A human hand is known to comprise a thumb and four fingers, which in this description will respectively be denoted index finger, middle finger, ring finger and little finger when starting at the thumb side of the hand. InFIG.1, the thumb TH and the index finger IF are clearly visible. The other fingers are hidden behind the index finger IF. The thumb TH and fingers comprise bones, called phalanges or phalanx bones. The phalanx bone closest to the hand is referred to as the proximal phalanges. The phalanx bone at the fingertips is referred to as the distal phalanges. The thumb only comprises a corresponding proximal phalanges and distal phalanges. The fingers further comprise an intermediate phalanges in between the proximal phalanges and the distal phalanges. The data input device ID comprises a base B with a proximal end PE and a distal end DE opposite the proximal end PE. The proximal end PE side of the base B is configured to engage with the hand HH, here the palm of the hand HH and is therefore not visible inFIGS.1and2, but indicated using dashed lines. The distal end DE side of the base B is provided with a set of sensors S1, S2, S3to interact with a fingertip FT of the index finger IF of the hand HH to allow user input. FIG.3schematically depicts an electric diagram of the data input device ID ofFIGS.1and2. Shown inFIG.3are the sensors S1-S3. The sensors S1-S3are connected to an output unit OU configured to send data corresponding to the user input to an external device (not shown). Sending the data to an external device is preferably done wirelessly, e.g. using Bluetooth, WiFi, infrared, ZigBee, or any other wireless data transfer method. However, it is not excluded that the data transfer between the input device and the external device is carried out using a wire connection, e.g. when a fast and stable connection is required, for instance when using the input device for gaming. The user input transferred from the sensors S1-S3to the output unit is indicated by the sensor signals SS1, SS2and SS3, and the data transfer from the output unit to the external device is indicated by the output signal OS. Referring again toFIGS.1and2, the data input device ID comprises a finger support FS to receive a portion of a finger IF of the hand HH corresponding to the proximal phalanges. The finger support FS is here embodied in the form of a ring, but can be any supporting structure suitable to engage with the finger IF such that when said finger portion is received in the finger support FS, the input device ID is carried by the hand via the finger support. In this embodiment, engagement between the finger portion and the finger support is such that an orientation of the finger support substantially follows the orientation of said finger portion. In an embodiment, the finger support FS is a rigid ring having a diameter that enables a finger to be easily received in the finger support while at the same time allows the finger to engage with and support the finger support. In another embodiment, the finger support has a ring-like shape, but comprises elastic material to allow the finger support to be used with a wide variety of fingers. In an embodiment, the finger support FS is releasably mounted to the base B, so that a variety of rings, e.g. 
having different diameters, can be provided and a user can choose and mount a ring to the base B that matches best with the dimension of the finger of the user. The finger support FS is attached to a connecting member, in this case a beam BE1, which in turn is hingedly connected to a connecting member, in this case a beam BE2 of the base B, so that beam BE1 is able to rotate relative to beam BE2 about a rotation axis RA. The rotation axis RA extends substantially out of the plane of the drawing and is positioned to be aligned with the metacarpophalangeal joint of the corresponding finger IF when said finger portion is received in the finger support. The metacarpophalangeal joint can be found between the corresponding proximal phalanges and the corresponding metacarpal bone. Due to this location of the rotation axis RA, the finger IF can easily be moved up and down relative to the base B while at the same time continuing to support the data input device ID. This is illustrated by comparing the position of the finger IF in FIGS. 1 and 2. In FIG. 1, the finger is able to interact with sensor S3 and in FIG. 2, the finger has moved upwards allowing the fingertip FT to interact with the sensor S1. Although not shown, an intermediate position of the finger allows the finger to interact with sensor S2. Although the sensors S1-S3 have been depicted as being provided on a more or less flat base B, it is also possible to provide the sensors at different positions allowing to limit the required movement of the fingers to reach the sensors. FIG. 1 also depicts another embodiment in which the sensors are arranged differently using dashed lines to indicate the location of upper surfaces of the sensors S1-S3. It will be apparent to the skilled person that such an arrangement may require a different shape of the base B in side view, e.g. a step-like shape or a concave shape. From the orientation of the dashed lines it can also be seen that the upper surfaces of the sensors may be tilted relative to each other so that a normal to these upper surfaces may be substantially aligned with a frequently occurring direction of approach of the fingertip. In an embodiment, the location of the upper surfaces of the sensors may be chosen such that when a fingertip engages with a respective upper surface, an angle between the proximal phalange and the metacarpal phalange is in the range of 150 to 170 degrees, e.g. 160 degrees. This means a deviation of 10-30 degrees, e.g. 20 degrees, compared to the normal angle with stretched fingers. It is explicitly noted here that an upper surface of a sensor arranged to engage with a fingertip does not necessarily have to be directly above or near the sensor part where the engagement between fingertip and sensor causes a signal representing data input. Although a set of sensors S1-S3 for the index finger is shown, it will be apparent to the skilled person that any number of sensors may be provided, e.g. 1, 2, 3, 4 and 5 sensors, and that the set of sensors may include additional similar subsets for other fingers, like the middle finger, the ring finger, and the little finger. Although the embodiment has been described and depicted only for the index finger IF, the base and set of sensors may also be extended in a direction parallel to the plane of the drawing so that a similar arrangement is provided for other fingers of the hand HH, which will be described in more detail below.
Although not shown, the base may also provide a rest position in which the fingertip FT of the finger is able to engage with the base B, so that the fingers can rest against the base B without interacting with any sensor. In this embodiment, the input device is depicted in combination with a left hand HH. A similar device can be provided for the right hand of a user. FIG.4depicts a schematic top view of a wearable data input device ID to be worn on a human hand (not shown here but see for referenceFIGS.1and2) according to another embodiment of the invention. The data input device ID comprises a base B comprising a proximal end PE and a distal end DE opposite to the proximal end PE. In this embodiment, the base B comprise all electronics, including a battery to power the data input device ID and an output unit configured to send data corresponding to user input entered via the data input device ID to an external device, e.g. a computer, phone, game controller, television, smart TV, virtual reality glasses, tablet or any other device. The base B further comprises a set of sensors S, in this embodiment, in the form of a 3×4 array of sensors S at the distal end side of the base, wherein the three rows extend in an X-direction and wherein the four columns extend in a Y-direction. The sensors S allow to interact with fingertips (FT) of the hand to allow user input. The sensors S are thus in connection with the output unit similarly as inFIG.2. The data input device ID further comprises a first finger support FS1and a second finger support FS2, which in this embodiment are configured to receive an index finger and little finger of a left hand, respectively. The first and second finger supports FS1, FS2are configured to adjust their orientation to the orientation of the respective finger portions, but at the same time allow the data input device ID to be supported by the human hand via the finger supports and corresponding finger portions. Both the first and second finger support may therefore comprise one or more rigid portions and one or more flexible, preferably elastic, portions. The one or more flexible, preferably elastic portions allow a range of finger diameters to fit in the first and second finger supports. The first and second finger supports FS1, FS2are rotatably connected to the base B via a respective hinge structure HS1and HS2. The hinge structures HS1, HS2each provide two rotation axes to allow the respective finger supports to move relative to the base B. Hinge structure HS1defines a first rotation axis RA1substantially extending in X-direction and a second rotation axis RA2substantially extending in a Z-direction that is perpendicular to both the X- and Y-direction. Hinge structure HS2defines a third rotation axis RA3substantially extending in X-direction and a fourth rotation axis RA4substantially extending in Z-direction. The first and third rotation axis RA1, RA3are arranged to be substantially aligned with the metacarpophalangeal joint of the corresponding finger when a finger portion is received in the respective first or second finger support FS1, FS2and allow the fingers to be moved up and down in the Z-direction away and towards the base B to interact with the array of sensors S. As the metacarpophalangeal joint of, in this embodiment, the little finger and the index finger do not have to be at the same location seen in Y-direction, the first and third rotation axis RA1, RA3do not necessarily have to be aligned with respect to each other. 
The second and fourth rotation axis RA2, RA4, although not necessary per se, add an additional degree of freedom for the fingers, preferably as depicted here for the index finger and the little finger, to move in X-direction, e.g. to reach additional sensors arranged next to the 3×4 array, e.g. one or two sensors next to the 3×4 array to add a specific functionality, for instance a sensor to switch between input modes (keyboard mode, mouse mode and/or game mode) or a special key/data input that is semi-permanently provided (e.g. SHIFT key, ESC key, etc.). Alternatively, or additionally, a sensor on the second and/or fourth rotation axis RA2, RA4may change the meaning of a sensor in the array. Alternatively, or additionally, a sensor in the array may e.g. be able to detect position information, where e.g. touching on the left side has a different meaning than touching on the right side. InFIG.4, an embodiment is shown in which additional columns of sensors S′ as indicated in dashed lines can be arranged next to the 3×4 array, which then effectively becomes a 3×5 array or a 3×6 array. In an embodiment, the position of the hinge structures HS1, HS2relative to the base, i.e. a distance between the first rotation axis RA1and the base B, and between the third rotation axis RA3and the base B is adjustable. This allows to adjust the distance between the respective finger support and the base and to optimize this distance depending on the size of the hand and/or fingers. Although in the embodiment ofFIG.4, when traveling from the base B to the hinge structures HS1, HS2, the rotation axes RA2, RA4are respectively encountered first and subsequently the rotation axes RA1, RA3, it is also possible that the order of rotation axes is reversed. Further, the rotation axis may be formed by a longitudinal axis of a hinge part, but can also be provided by an equivalent kinematic joint in which the rotation axis is located in free space. Alternatively, the location of the rotation axes RA1, RA3may also be combined, e.g. when a ball and socket joint is used which is able to rotate in two orthogonal directions, or when there is sufficient play to allow rotation in two orthogonal directions. The data input device ID further comprises a thumb portion TB. InFIG.4, the thumb portion is shown in plan view, but for clarity reasons, the thumb portion is also shown in rear view inFIG.5. The thumb portion TB is attached to the base B via a connecting member CM. It will be apparent to the skilled person by providing the thumb portion TB on the right side of the base B, the data input device ID is more suitable for a left hand and that making the data input device ID more suitable for a right hand, the thumb portion TB needs to be attached to the left side of the base B. The thumb portion TB of this embodiment comprises a tubular cross section with four sidewalls W1-W4enclosing a space SP to receive a thumb of a human hand. In this embodiment, each sidewall W1-W4is provided with a corresponding sensor TS1-TS4to allow additional user input using the thumb as will be explained below in more detail. Each sensor S and/or sensor S′ and/or sensor TS1-TS4may comprise one or more detectors to detect interaction with the fingertips or thumb. Such a detector may be in the form of a switch, but a sensor may alternatively or additionally comprise an analog sensor, such as a force sensor, optical sensor or proximity sensor, to detect the amount of force or resulting movement when the fingertip or thumb engages with the sensor. 
Other examples of sensors or detectors that can be used are a pushbutton, capacitive sensor, optical sensor or any other sensor allowing fingertips or thumbs to interact with in order to allow user input. In an embodiment, the sensors S, and possibly the sensors S′, may be provided with a sensor display allowing to indicate the kind, type or value of user input when interacting with the sensor. However, a separate display indicating this may also be provided at another location, e.g. above the hand where the display is easily visible for a user. It is also possible that an external screen, e.g. a computer screen, TV screen or any other external screen is used to provide such information to the user. In an embodiment, the proximal end side of the base B is configured to engage with the hand, e.g. the palm of the hand, or corresponding arm at a wrist side of the metacarpophalangeal joint in order to delimit the freedom to move the base B, but it is explicitly noted here that this is not essential per se. More ways to delimit the moveability can be envisaged, for instance by using at least two finger supports where one finger support can delimit rotation, e.g. rotation about the RA1 or RA3 rotation axis, of the other finger support. Although the base B is depicted as a rigid structure in the above schematic drawings, it is specifically noted here that the base B may comprise a plurality of interconnected parts that together form the base B. In an embodiment, the base B may comprise a main part to carry the one or more finger supports and to accommodate the majority of the electronics, e.g. the battery, the control unit, etc. The base B may further comprise one or more finger base parts carrying at least the set of sensors. The one or more finger base parts may be connected to the main part such that their position and/or orientation relative to the main part can be adjusted. The main part may for instance have a concave upper surface for engagement with the palm of the hand, wherein the finger base part(s) are connectable to the main part at different locations on the concave upper surface such that, by connecting a finger base part at a specific location, both the position and orientation (following the contour of the upper surface) may be set. FIGS. 6 to 10 depict a standard QWERTY keyboard layout. When providing a data input device ID as depicted in FIG. 4 comprising a 3×4 array of sensors S, the data input device ID may be configured and used as follows to mimic the use of a real QWERTY keyboard. In this embodiment, by default, the array of sensors S is assigned to a 3×4 array of keys as indicated in FIG. 6 by encircling the symbols on the keys as an example to provide a predetermined input function for each sensor. Hence, in a default configuration, the sensors S may allow to enter the letters ‘Q’, ‘W’, ‘E’, ‘R’, ‘A’, ‘S’, ‘D’, ‘F’, ‘Z’, ‘X’, ‘C’, and ‘V’ by interaction between fingertips and corresponding sensors. As described above, a display may be provided, e.g. as a separate display or by displaying the letters on the sensors S themselves or alternatively using an external display, so that there is a visual indication for a user enabling him to determine whether the correct letter is entered as user input. Alternatively, or additionally, visual information may be provided on a display of the input device or on a display of the external device the input device is communicating with by showing the predetermined input function without actually providing user input.
This can for instance be done using a proximity sensor which detects the presence of a fingertip. When the proximity sensor indicates the presence of a fingertip nearby, this may trigger the display of the assigned predetermined input function. When the user actually wants to enter this input function as user input, the fingertip is operated further to engage with the corresponding detector/sensor. In an embodiment, the proximity sensor indicates the presence of a fingertip nearby and thus indicates the distance between the sensor and the fingertip, which may be used to derive 3D information about the positions of the joints of the hand. This information can then be used for gaming or gesture control. Another example is to use the combination of two sensors. The input device may for instance be provided with a display sensor, e.g. a pushbutton. By interacting with the display sensor first and subsequently or simultaneously interacting with another sensor, the assigned predetermined input function of the other sensor may be displayed without entering the input function as user input. When the user actually wants to enter this input function as user input, the other sensor may be interacted with again without interaction with the display sensor. The mentioned display sensor is an example of a dedicated sensor providing a predetermined functionality. Another example of such a sensor is a mode sensor allowing to change mode or a caps lock sensor allowing to select or deselect caps lock. It will be apparent that even when two similar data input devices ID are used, one suitable for the left hand as in FIG. 3 and one suitable for the right hand, not all keys on a standard QWERTY keyboard are addressable in a default configuration. However, as will be explained below, the sensors TS1-TS4 of the thumb portion can advantageously be used to reach other keys as well. In order to reach the keys ‘1’, ‘2’, ‘3’, and ‘4’, the thumb may interact with sensor TS3 by moving downwards, which corresponds to a similar relative motion of the thumb relative to the other fingers when these fingers reach for the keys ‘1’, ‘2’, ‘3’, and ‘4’ on a normal keyboard and thus feels natural. The entire array of sensors S thus shifts to be assigned to the keys encircled in FIG. 7. However, as an alternative, only the row of sensors assigned to the letters ‘Q’, ‘W’, ‘E’ and ‘R’ shifts to the keys ‘1’, ‘2’, ‘3’, and ‘4’. An opposite movement may be made over the virtual keyboard using sensor TS1 to reach for instance the ‘Ctrl’, ‘Win Key’, ‘Alt’ and ‘spacebar’ keys. Again, the entire array may shift or only the lower row of sensors, wherein lower means the row of sensors closest to the palm of the hand when using the input device. In order to reach the keys ‘T’, ‘G’ and ‘B’, the thumb may interact with sensor TS4 by moving to the left, which corresponds to a similar relative motion of the thumb relative to the other fingers when these fingers reach for the keys ‘T’, ‘G’ and ‘B’ on a normal keyboard and again feels natural. The entire array of sensors S thus may shift to be assigned to the keys encircled in FIG. 8. Again, as an alternative, only the right column of sensors may shift. An opposite movement may be made over the virtual keyboard using sensor TS2 to reach for instance the ‘Tab’ or ‘Caps Lock’ keys. Again, the entire array may shift or only the left column shifts.
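The window-shifting behaviour described above can be sketched as a small key-map helper: a rectangular grid stands in for the virtual QWERTY layout (real keyboard rows are staggered and of unequal length, which is ignored here), a 3×4 window of that grid is assigned to the sensor array S, and thumb interactions with the sensors TS1-TS4 move the window one step at a time while keeping it inside the layout. The class, the simplified grid and the direction signs are illustrative assumptions, not the claimed implementation.

# Simplified stand-in for the virtual keyboard; only a few rows are modelled.
VIRTUAL_KEYS = [
    ["1", "2", "3", "4", "5", "6", "7", "8", "9", "0"],
    ["Q", "W", "E", "R", "T", "Y", "U", "I", "O", "P"],
    ["A", "S", "D", "F", "G", "H", "J", "K", "L", ";"],
    ["Z", "X", "C", "V", "B", "N", "M", ",", ".", "/"],
]
ROWS, COLS = 3, 4                        # size of the sensor array S

class KeyWindow:
    def __init__(self, top=1, left=0):   # default left-hand window: Q..V
        self.top, self.left = top, left

    def shift(self, d_row, d_col):
        # Thumb interactions (TS1/TS3 for rows, TS2/TS4 for columns) move the
        # window; the window is clamped so it stays on the layout.
        self.top = max(0, min(len(VIRTUAL_KEYS) - ROWS, self.top + d_row))
        self.left = max(0, min(len(VIRTUAL_KEYS[0]) - COLS, self.left + d_col))

    def key_for_sensor(self, row, col):
        # Key currently assigned to the sensor at (row, col) in the 3x4 array.
        return VIRTUAL_KEYS[self.top + row][self.left + col]

win = KeyWindow()
print(win.key_for_sensor(0, 0))   # 'Q' in the default assignment
win.shift(-1, 0)                  # e.g. thumb on TS3: shift toward the number keys
print(win.key_for_sensor(0, 0))   # '1'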
Alternatively, it is possible that the ‘Tab’ and/or ‘Caps Lock’ keys are skipped when shifting over the virtual keyboard when one or more of the keys have been assigned to dedicated sensors allowing them to be accessible at least most of the time. FIG. 9 depicts the 3×4 array of keys assigned by default to the array of sensors S of a data input device ID to be worn by a right hand by encircling the symbols on the keys. The keys surrounding this array can be reached by corresponding interaction of the thumb with sensors TS1-TS4 in a similar way as described above for the left hand. The above described interaction of the thumb with sensors TS1-TS4 to reach other keys only works to reach directly neighbouring keys. Hence, still some keys cannot be used as input, e.g. the ‘Enter’ key on the right side of the keyboard. To reach the ‘Enter’ key, the thumb may interact twice with sensor TS2, e.g. shortly after each other, to shift the entire array two keys to the right as indicated by FIG. 10. Similar movements may be made in other directions thereby enabling to reach any key of a standard QWERTY keyboard. In the above described operation of the input device, changing input function may be effected by interaction between the thumb and one of the sensors TS1-TS4. In an embodiment, engagement between the thumb and one of the sensors TS1-TS4 changes the input function and subsequent disengagement between the thumb and one of the sensors TS1-TS4 automatically changes the input back to the default setting, possibly after a time-out period has lapsed. However, it is also possible that disengagement does not change the input function, thereby allowing to engage again with the sensor, possibly within a predetermined time period, to result in an additional change of input function in the same direction or to engage with another sensor, e.g. the opposite sensor, to result in a change of input function in another direction, e.g. back to the default setting. In another embodiment, a distinction can be made between shifting one key in a direction or two or more keys in said direction by detecting the force with which the thumb engages with one of the sensors TS1-TS4. When the applied force is for instance below a predetermined value, the assignment of input function is shifted only one key in the corresponding direction, and when the applied force is above the predetermined value, the assignment of input function is shifted two keys in said corresponding direction. It is also possible that both the left thumb and the right thumb work together to allow a distinction between fine shifts, i.e. shifts of one key in a particular direction, and coarse shifts, i.e. shifts of two or more keys in the same direction. It is noted here that although the above described embodiments relate to a QWERTY keyboard, the same principle can be applied to any keyboard layout. Further, the above described embodiments use the keys Q, W, E, R, A, S, D, F, Z, X, C and V as starting point for the left hand, and the keys U, I, O, P, J, K, L, “;”, M, “,”, “.”, and “/” as starting point for the right hand, but the same principle can be applied to any starting point, also starting points that differ in size, as an appropriate starting point may for instance be dependent on the number of available sensors. FIG. 11 depicts an external device ED to be controlled by a data input device, in this example the data input device according to FIG. 4. The external device comprises a display DI.
The display in this drawing depicts text TX that may be entered using the data input device, for instance using the method and configurations described in relation to FIGS. 6-10. To enter the text TX, the data input device is provided in keyboard mode. Also shown in FIG. 11 is a cursor CU indicating the location where text will be added when corresponding user input is provided using a keyboard or the data input device according to the invention. In the course of typing the text, the location of the cursor CU may need to be changed to amend or add text at another location. In an embodiment, this is done using an arrow mode of the data input device. The data input device can be provided in arrow mode using a dedicated sensor that allows to switch between modes, but it is also possible that switching mode is carried out using sensors that have been assigned other input functions by interacting simultaneously with a predetermined combination of sensors, which combination in any mode is preferably not or not frequently used. In arrow mode, the assigned input function of the sensors is changed, preferably such that it is possible to indicate the following directions for the cursor CU:
a direction U corresponding to moving the cursor up;
a direction D corresponding to moving the cursor down;
a direction R corresponding to moving the cursor to the right; and
a direction L corresponding to moving the cursor to the left.
The directions may be assigned to distinct sensors, but alternatively a single sensor, e.g. with a plurality of detectors, such as a joystick, may be used. As an example, the sensors TS1, TS2, TS3 and TS4 as shown in FIG. 5 may be assigned the directions U, R, D and L, respectively. It is additionally or alternatively possible to provide the data input device in mouse mode allowing to control movement of an arrow (mouse pointer) MO that is normally controlled by a standard mouse or laptop pad. Again, the data input device can be provided in mouse mode using a dedicated sensor that allows to switch between modes, but it is also possible that switching mode is carried out using sensors that have been assigned other input functions by interacting simultaneously with a predetermined combination of sensors, which combination in any mode is preferably not or not frequently used. In mouse mode, the assigned input function of the sensors is changed, preferably such that it is possible to indicate the following directions for the arrow MO:
a direction U corresponding to moving the arrow up;
a direction D corresponding to moving the arrow down;
a direction R corresponding to moving the arrow to the right; and
a direction L corresponding to moving the arrow to the left.
Additionally, the left click (and possibly the right click) function of a mouse is/are assigned to one or more of the sensors. In an embodiment, four sensors are used for the U, R, D and L movements. In an embodiment, the sensors TS1-TS4 are used for the U, R, D and L movements and one of the other sensors S or S′ is assigned the left click or right click functionality. In an embodiment, a joystick is used for the U, R, D and L movements. It might be convenient to add possible moving directions to mimic the functionality of a mouse more closely. Hence, by interacting for instance simultaneously with the U sensor and the L sensor, e.g.
the sensors TS1 and TS4, the control unit may for instance be configured to output a signal corresponding to a direction OD having angle α=45 degrees with respect to a reference direction parallel to the L and R direction. Hence, in addition to the U, D, R and L directions it may be possible to use other directions OD as well in accordance with the following Table 3.

TABLE 3
Overview of sensor combination and angle α of the other direction OD

Sensor combination            Angle α
U + R, e.g. TS1 + TS2         135 degrees
R + D, e.g. TS2 + TS3         −135 degrees
D + L, e.g. TS3 + TS4         −45 degrees
L + U, e.g. TS4 + TS1         45 degrees

When the U, R, D and L sensors, e.g. the sensors TS1-TS4, are or comprise for instance analog sensors, such as force sensors, it is also possible to move in directions OD having other angles α. The ratio between the force or pressure applied to one sensor and the other sensor then determines the value of angle α. Additionally, or alternatively, the sum or vector sum of the forces applied to the sensor or combination of sensors can be used to determine a setpoint, including snap (also known as jounce), jerk, acceleration, speed or distance of travel of the arrow MO on the display DI. For instance, the jerk can be determined using the yank, i.e. the rate of change of force. In an embodiment, additional sensors are provided to determine the setpoint, e.g. a gyroscope and/or accelerometer. In an embodiment, the gyroscope is used to determine an additional angle on the U, R, D and L sensors. Preferably, a relatively large rotational movement, rotational speed or rotational acceleration of the hand translates to a relatively small change of direction of the mouse cursor. In an embodiment, an accelerometer can be used in combination with the set of U, R, D and L sensors. Preferably the accelerometer is configured for fine movement of the mouse cursor and the U, R, D and L sensors are configured for coarse movement of the mouse cursor. However, the opposite situation is also envisaged. Preferably, a relatively large change in velocity results in a relatively small change of velocity of the mouse cursor. In an embodiment, artificial intelligence is used to determine a 2D interpretation of the 3D data presented by the gyroscope and/or accelerometer, e.g. using a correction made later in the interaction as feedback for learning. In an embodiment, a sensor can be assigned to boost the setpoint of the mouse cursor. This can be a digital sensor for a fixed boost factor or an analog sensor for a variable boost factor. This may for instance be useful to quickly move the mouse over a larger surface. Alternatively, or additionally, a sensor can be assigned to soften the setpoint of the mouse cursor. This can be a digital sensor for a fixed softening factor or an analog sensor for a variable softening factor. This may for instance be useful to accurately move the mouse over a smaller surface. In an embodiment, the left hand and right hand can work together in determining a setpoint, wherein adding is the simplest form, and wherein one hand may for example have a larger weight than the other hand. In an embodiment, it is possible that the devices are configured such that when using one hand only, independent of which hand, the setpoint determination is fine and when using both hands, the setpoint determination is coarse.
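For the analog case described above, one possible way to turn the four thumb-sensor forces into a continuous direction and a magnitude is sketched below. The sign convention (pure L at 0 degrees, pure U at 90 degrees) is an assumption chosen so that equal forces on two neighbouring sensors reproduce the Table 3 angles; the function name and the use of the vector magnitude as a speed setpoint are likewise only illustrative.

import math

def thumb_direction(f_u, f_r, f_d, f_l):
    # Direction angle (degrees) and magnitude from four analog thumb-sensor
    # forces, using the example assignment TS1=U, TS2=R, TS3=D, TS4=L.
    x = f_l - f_r
    y = f_u - f_d
    if x == 0 and y == 0:
        return None, 0.0                 # no direction requested
    angle = math.degrees(math.atan2(y, x))
    magnitude = math.hypot(x, y)         # could drive speed or acceleration
    return angle, magnitude

print(thumb_direction(1, 0, 0, 1))   # U + L pressed equally -> (45.0, ...)
print(thumb_direction(1, 1, 0, 0))   # U + R pressed equally -> (135.0, ...)
print(thumb_direction(2, 0, 0, 1))   # unequal forces give intermediate angles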
In another embodiment, it is possible that the devices are configured such that when using one hand only, the setpoint determination is fine, when using the other hand, the setpoint determination is coarse, and when using both hands, the setpoint determination is very coarse. Using both hands can also be used to define more angles, e.g. pressing an L sensor with one hand and the U and L sensors with the other hand may allow an angle of 22.5 degrees. Using other combinations of sensors then allows to choose any angle n*22.5 degrees with n being an integer. In an embodiment, a contribution of a sensor in relation to the setpoint determination can be configured individually for each sensor. In an embodiment, the input data device comprises visual indication devices, e.g. using lights or a display, to indicate in which mode the data input device is. Although the operating method and configurations of the data input device have been demonstrated using a data input device comprising an array of 3×4 sensors, it will be apparent to the skilled person that any data input device according to the invention can be used in a similar manner. The number of sensors S, S′ and TS1-TS4 that are provided will determine the exact way the data input device needs to be operated to provide the desired user input, but the basic principles of how this is done are the same. FIG. 12 schematically depicts a finger support FS of a wearable data input device according to a further embodiment of the invention. The finger support FS in this embodiment comprises four ring segments 1, 2, 3 and 4 shown in FIG. 12 in engagement with an index finger IF of a human hand HH. The ring segments 1-4 may alternatively be referred to as engaging portions of the finger support FS. Although not shown, the four ring segments 1, 2, 3, 4 are rigidly connected to each other using associated connecting elements, alternatively called interconnecting portions, which connecting elements do not necessarily have to engage with the index finger IF. The ring segments 1-4 are all configured to engage with a portion of the index finger IF corresponding to the proximal phalanges of the index finger IF. The regions on the index finger IF where the ring segments 1-4 engage with the index finger IF are referred to as first region, second region, third region and fourth region, respectively. In the description below, use will be made of the following symbols to describe relative movement between the finger support and the finger:
X+: indicating a translation in positive X-direction;
X−: indicating a translation in negative X-direction;
rX+: indicating a rotation about the X-axis according to the right-hand-rule;
rX−: indicating a rotation about the X-axis according to the left-hand-rule;
Y+: indicating a translation in positive Y-direction;
Y−: indicating a translation in negative Y-direction;
rY+: indicating a rotation about the Y-axis according to the right-hand-rule;
rY−: indicating a rotation about the Y-axis according to the left-hand-rule;
Z+: indicating a translation in positive Z-direction;
Z−: indicating a translation in negative Z-direction;
rZ+: indicating a rotation about the Z-axis according to the right-hand-rule; and
rZ−: indicating a rotation about the Z-axis according to the left-hand-rule.
The right-hand-rule is a well-known rule in which the fingers of the right hand indicate the rotational direction when the thumb of the right hand is pointing in the direction of an arrow, vector or positive direction.
In the corresponding left-hand-rule the fingers of the left hand indicate the rotational direction when the thumb of the left hand is pointing in the direction of an arrow, vector or positive direction. The ring segments 1-4 are preferably curved and configured to engage the finger, such that the ring segments 1-4 cannot move relative to the index finger IF in the X+ or X− direction. Ring segments 1 and 3 then prevent movement in the Z+ direction while ring segments 2 and 4 prevent movement in the Z− direction. The ring segments 1 and 2 are in this embodiment configured to engage with the metacarpophalangeal joint or tissue nearby, thereby allowing to prevent movement of the finger support in the Y− direction. When the index finger is in its rest position, alternatively referred to as neutral position or position of function, the intermediate phalanges usually makes an angle with the proximal phalanges so that the ring segment 4 is prevented from moving in the Y+ direction, keeping the finger support in place. Additionally, tissue in between the ring segments 2 and 3 may provide resistance to movement in the Y+ direction. When the intermediate phalanges is aligned with the proximal phalanges, this allows to remove the finger support in the Y+ direction. The ring segments 1-4 or the corresponding interconnecting portions may also prevent any rotation in the rX+, rX−, rZ+ and rZ− direction. Rotation in the rY− direction may be prevented when the ring segment 1 is arranged sufficiently close to the metacarpophalangeal joint such that it engages with the neighbouring joint of the middle finger. Ring segment 2 may similarly be arranged close to the metacarpophalangeal joint such that it engages with the neighbouring joint of the middle finger to prevent movement in the rY+ direction. Alternatively, or additionally, using a plurality of similar finger supports for other fingers as well allows to prevent movement in the rY+ and rY− direction. Further, movement in the rY+ and rY− direction may be prevented due to the engagement between the proximal end side of the base and the hand or corresponding arm at a wrist side of the metacarpophalangeal joint. One or more of the ring segments may be at least partially, possibly entirely, elastic to allow the finger support to adapt to the finger of the user and to allow an easy putting on and off of the finger support. FIGS. 13A and 13B depict a base B of a wearable data input device to be worn on a human hand HH. FIG. 13A depicts a portion of the base B and the orientation thereof with respect to the hand HH when in use, and FIG. 13B depicts the base B of FIG. 13A and additional components in exploded view. The focus in FIGS. 13A and 13B is the base B of the data input device. Other parts of the input device will not be described in detail but may be similar to the already shown embodiments in FIGS. 1-12. Hence, not all features, e.g. the number of sensors and their spatial configuration, have to be similar. The base B comprises a first member B1 forming a proximal end PE of the base B, wherein a proximal end side of the base, in this case formed by the first member B1, is configured to engage with the hand HH at a wrist side of the metacarpophalangeal joint as shown in FIG. 13A. The first member B1 is provided with respective interfaces IN2, IN3, IN4 and IN5. FIG. 13B also depicts the first member B1, but now not in relation to the hand HH.
Also shown are second member B2, third member B3, fourth member B4 and fifth member B5, in this embodiment each provided with three sensors S at a distal end side of the base B opposite to the proximal end PE of the base B. Each member B2-B5 is connected to the interfaces IN2, IN3, IN4, IN5 of the first member B1 via a corresponding intermediate member 12, 13, 14 and 15. In this embodiment, each intermediate member 12-15 is moveably connected to the respective interface IN2, IN3, IN4, IN5 of the first member B1 and moveably connected to the respective member B2-B5, in this embodiment by being slidably received in the respective interface IN2, IN3, IN4, IN5 of the first member B1 and by being slidably received in the respective member B2-B5. It is also possible that one or more intermediate members 12-15 are moveably connected at only one side. In an embodiment, the position and/or orientation of an intermediate member 12-15 or a member B2-B5 can be temporarily fixed to prevent any further movement once an optimal position and/or orientation has been found. In an embodiment, the four interfaces IN2, IN3, IN4, IN5 provide a position and direction that optimizes the positions of the sensors relative to the fingertips, e.g. an angle over rZ may optimize the position of the sensors to the movement direction of the fingertip, and an angle over rY may optimize the individual positions of the sensors to the amount of rotation required from the interphalangeal joints of a finger. In an embodiment, the four intermediate members 12-15 provide a position and direction that optimizes the positions of the sensors relative to the fingertips, e.g. an angle over rZ may optimize the position of the sensor to the movement direction of the fingertip, and an angle over rY may optimize the individual positions of the sensors to the amount of rotation required from the interphalangeal joints of a finger. In an embodiment, the portions of the intermediate members that are moveably connected to the interfaces IN2, IN3, IN4, IN5 of the first member B1 are arranged at an angle with the respective portions of the intermediate members that are moveably connected to the members B2-B5. This has the advantage that the arrangement of the intermediate member relative to the members B2-B5 sets a distance between the respective member B2-B5 and the first member B1, and the arrangement of the intermediate member relative to the interface IN2, IN3, IN4, IN5 of the first member B1 implicitly sets a distance between the members B2-B5 in the X-direction, thereby allowing the base B to be adjusted to the length and width of the hand HH and the corresponding fingers, so that the sensors S are properly positioned for the fingertips of the respective fingers. This way, other configurations of the interfaces IN2-IN5, like the rotations over rX and rZ, if applicable, keep their benefits. In an embodiment, although not shown, the first member B1 comprises one or more height adjusters to adjust the position of the members B2-B5 in a Z-direction (perpendicular to the X- and Y-directions). In case of one height adjuster, the position of all members B2-B5 may be adjusted simultaneously, while in another embodiment, the position of a member B2-B5 may be adjusted individually. A similar mechanism as described may be used to configure the positions of the thumb sensors.
However, as an alternative, the thumb sensors may be connected to an intermediate member12or15, depending on the left- or right-hand applicability, or member B2or B5, so that the thumb sensors are adjusted together with the sensors for the adjacent fingers. As shown with respect to the above-described embodiment, the distal end of the base B supporting the sensors may make an angle of about 90-120 degrees, e.g. 110 degrees, relative to the palm of the hand, which in this embodiment is in contact with the proximal end of the base. However, the proximal end of the base does not necessarily have to be in contact with the palm of the hand, and the angle between the palm of the hand and the distal end of the base can also be much smaller. Further, the proximal end is not necessarily an elongation of the distal end. Although the invention describes, in general terms, wearable data input devices that send data to an external device, the invention, whether being the first, second, third, fourth, or any combination thereof, is especially suitable for the situation in which the main function of the wearable data input device is to translate user input into data and send the data to an external device. Data input devices that have such a main function include a game console, keyboard and mouse. The use of such data input devices without the external device may be very limited. Such data input devices may also be referred to as peripheral devices used to input information to an external device, e.g. a computer. | 42,770 |
11861065 | DETAILED DESCRIPTION Certain aspects and embodiments of this disclosure are provided below. Some of these aspects and embodiments may be applied independently and some of them may be applied in combination as would be apparent to those of skill in the art. In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of embodiments of the application. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive. The ensuing description provides example embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the application as set forth in the appended claims. As previously mentioned, extended reality (e.g., augmented reality, virtual reality, etc.) devices, such as smart glasses and head-mounted displays (HMDs), generally implement cameras and various sensors to track the position of the extended reality (XR) device and other objects within the physical environment. The XR devices can use such tracking information to provide a user of the XR device with a realistic XR experience. For example, an XR device can allow a user to experience or interact with immersive virtual environments or content. To provide realistic XR experiences, XR technologies can integrate virtual content with the physical world, which can involve matching the relative pose and movement of objects and devices. The XR technologies can use tracking information to calculate the relative pose of devices, objects, and/or maps of the real-world environment in order to match the relative position and movement of the devices, objects, and/or the real-world environment, and anchor content to the real-world environment in a convincing/realistic manner. The relative pose information can be used to match virtual content with the user's perceived motion and the spatio-temporal state of the devices, objects, and real-world environment. In some cases, XR devices can be paired with controllers that users can use to select and interact with content rendered by the XR devices during an XR experience. To enable realistic interactions with rendered content using controllers, the XR devices can use the cameras and other sensors on the XR devices to track a pose and movement of the controllers, and use the pose and motion of the controller to match the state of the controller with the user's perceived motion and the spatio-temporal state of rendered content and other objects in the environment. However, controller-based XR systems often require a significant amount of power and compute resources to implement, which can negatively impact the performance and battery life of the XR devices used with the controllers. Moreover, controllers may not be intuitive for the user and, in many cases, can be difficult to use. For example, controllers can be difficult to use when the user of the controller is in certain positions, such as lying down or reclined. Controllers can also be difficult to use in space-constrained environments such as airplanes, crowded areas, etc. 
In many cases, controllers used with XR devices can also create privacy issues. For example, a person or computer with visibility to the user of the controller can analyze the user's movements and interactions with the controller to recognize the user's interactions with the content rendered by the XR device during the XR experience as well as associated information, potentially putting the privacy of the user's information at risk. In some examples, an artificial intelligence (AI) interpreter or system can be used to process a recording of the user's interactions and identify the information provided by the user through the rendered XR interface. The AI interpreter or system could potentially recognize the information provided by the user through the XR interface. Accordingly, when a user is engaged in an XR experience using a controller associated with an XR device, the user could potentially expose inputs and associated information to other users, such as user selections, Personal Identification Numbers (PINs), gestures, etc. The user may want to protect the privacy of interactions with XR interfaces rendered by the XR device even if other users are not also engaged in the same XR experience as the user or able to see the XR interface. In some aspects, systems, apparatuses, processes (also referred to as methods), and computer-readable media (collectively referred to herein as “systems and techniques”) are described herein using a wearable device/accessory that interfaces/interacts with an electronic device (e.g., an XR device, a mobile device, a television, a smart wearable device, an electronic device with a user interface, or any other electronic device) to provide enhanced user interface, input, and/or XR experiences and functionalities. In some examples, a wearable accessory can be used with XR devices to securely and intuitively provide XR inputs and interact with XR content. The wearable accessory can include one or more sensors to assist with tracking, gesture detection, and/or content interaction functionalities, among others. In some cases, the wearable accessory can be a ring or ring structure that can be worn on a user's finger or hand. A user can use the ring or ring structure during an XR experience to provide one or more types of inputs to an XR device providing the XR experience. The XR device can detect different modes of input, which can be converted (e.g., interpreted as, mapped to, etc.) into specific inputs and/or functionalities. For example, a user can wear a ring on a particular finger and rotate the ring about a longitudinal axis of the user's finger to scroll through content, manipulate rendered content, manipulate the XR environment, select content, generate measurements in the physical world, etc. As described herein, a longitudinal axis is generally parallel to a receiving space (e.g., a lumen) of the wearable accessory that provides a longitudinal access opening for a finger and at least a portion of the finger wearing the wearable accessory. A lateral axis is normal to the longitudinal axis and a transverse axis extends normal to both the longitudinal and lateral axes. The longitudinal direction is a direction substantially parallel to the longitudinal axis, the lateral direction is a direction substantially parallel to the lateral axis, and the transverse direction is a direction substantially parallel to the transverse axis. 
Other example ways the user can provide an input to the XR device using the ring can include tapping the ring with a different finger than the finger wearing the ring, squeezing the ring with one or more fingers that are adjacent to the finger wearing the ring, rotating or swiping the ring with a finger such as a thumb, rotating or swiping the ring with one or more fingers that are adjacent to the finger wearing the ring, and/or otherwise physically interacting with the ring. In some cases, the user can use motion of the finger wearing the ring and/or the hand to provide one or more types of inputs to the XR device based on the tracked motion of the finger and/or the hand. In some examples, the ring can include one or more sensors to detect such motion and/or interactions with the ring. The ring can include a wireless interface to send measurements corresponding to detected inputs to the XR device. In some cases, different modes of input using the ring can correspond to, and/or can be converted into (e.g., interpreted as), different types of XR inputs. For example, the user can rotate the ring about a longitudinal axis of the finger wearing the ring and/or rotate a rotatable portion of the ring about a longitudinal axis of the ring and relative to another portion of the ring to scroll through content, forward or rewind a video or any other sequence of content, move rendered content (e.g., rotate content, etc.), navigate content, etc. As another example, the user can tap or swipe the ring to perform a selection; move the finger wearing the ring (and/or the hand with the finger wearing the ring) to provide gestures, manipulate XR content and/or environments, define a plane, create XR spaces and/or content, etc.; among other things. In general, because of the configuration of the ring (e.g., the size, shape, etc.) and how it is used by the user (e.g., worn on a finger, wrist, etc.), user interactions with the ring can be more discreet, inconspicuous, and/or otherwise harder to detect/notice than user interactions with a different controller device. Thus, the privacy and associated data of XR inputs provided via the ring on the user's finger can be better protected from people and other devices in the environment. Moreover, the user can easily and conveniently provide inputs using the ring even when the user is in space-constrained areas, lying down, and/or otherwise positioned in a way that would make it difficult to move a controller to generate an input. In some cases, the ring can reduce power consumption and resource usage at the XR device. For example, the ring can offload certain operations such as hand tracking and/or other tracking operations from the XR device, allowing the XR device to reduce power consumption and resource usage such as sensor, camera, and/or compute resource usage. In some examples, when tracking operations are offloaded from the XR device to the ring, the XR device can turn off, or reduce a power mode of, one or more tracking resources such as cameras and/or other sensors that the XR device would otherwise use to track the user's hands and/or other objects. The ring can include one or more sensors to track and detect activity such as, for example, motion, inputs, etc. For example, the ring can include a rotary encoder to track rotation and/or swiping of the ring (and/or portions thereof) for one or more types of inputs. 
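As a rough illustration of the encoder-based tracking described above, the sketch below converts a change in encoder counts into a signed rotation magnitude that the ring could report to the XR device. The resolution value and the function name are assumptions for illustration only; the disclosure does not prescribe a specific encoder or scale.

    COUNTS_PER_REV = 96  # assumed rotary-encoder resolution (counts per revolution)

    def counts_to_rotation_degrees(count_delta):
        """Convert a change in encoder counts into a signed rotation magnitude."""
        return 360.0 * count_delta / COUNTS_PER_REV

    # A quarter turn of the ring in one direction, then in the other direction.
    print(counts_to_rotation_degrees(24))   # 90.0
    print(counts_to_rotation_degrees(-24))  # -90.0

The sign of the count delta preserves the rotation direction, so a single reported number can carry both the amount and the direction of the interaction.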
An inertial measurement unit (IMU) in the ring can integrate multi-axis accelerometers, gyroscopes, and/or other sensors (e.g., magnetometers, etc.) to provide the XR device with an estimate of the hand's pose in physical space. One or more sensors in the ring, such as ultrasonic transmitters/transducers and/or microphones, can be used for ranging of the hands. In some examples, one or more ultrasonic transmitters/transducers and/or microphones can help determine if the user's hands are closer together or farther apart, if any of the user's hands are close to one or more other objects, etc. In some examples, a barometric air pressure sensor in the ring can determine relative elevation changes and can be used to interpret selection events. The ring can send measurements from one or more sensors to the XR device, which can convert the sensor measurements into user inputs. The ring can provide new user experience (UX) functionalities that enable easier, more intuitive actions by the user, and enable various types of actions based on sensor inputs. In some examples, the ring can enable scrolling and other actions via one or more interactions with the ring. For example, in some cases, the ring can include an outer ring and an inner ring. The outer ring can spin around and/or relative to the inner ring. The ring can include a rotary encoder to detect a rotation magnitude. The ring can send the rotation magnitude to the XR device, which can convert the rotation magnitude into an input such as a scroll magnitude. In some cases, the entire ring can spin around the user's finger (or a portion of the user's finger), and an IMU in the ring can detect the spin motion to determine an input such as a scroll magnitude. In some cases, the ring can include a touch sensor to provide touch sensitivity at one or more areas of the ring and/or across a surface of the ring to detect touch inputs such as a selection, a scroll magnitude, etc. In some examples, a touch sensor can be positioned on an outside of the ring, making the ring non-symmetric. The touch area can be akin to a small touch pad and can be used by a different finger to provide inputs. For example, the touch area can be used by a thumb when the ring is on the index finger. The ring can be equipped with various power saving features. For example, in some cases, the ring can save power by shutting down after the XR application on the XR device has stopped and/or been terminated. As another example, the ring can remain off or in a lower power mode, and turn on or switch to a higher power mode based on one or more user interactions/inputs. For example, the ring can remain off or in a lower power mode, and turn on or switch to a higher power mode when the ring is rotated by a certain amount. The ring can provide privacy benefits, as previously explained, as well as other benefits. For example, with the ring, the user does not have to (but can) wave any hands or fingers in the air to generate an input. As another example, the ring can conserve power of the XR device by providing tracking functionalities and allowing the XR device to turn off or power down resources on the XR device such as, for example, cameras, tracking sensors, etc. In some cases, the ring can include a processor or chip that provides various functionalities, and can interact with a processor and/or chip on the XR device. The present technologies will be described in the following disclosure as follows. 
The discussion begins with a description of example systems and techniques for providing enhanced XR functionalities/experiences using a ring device, as illustrated inFIGS.1through5. A description of an example process for using a ring device for XR functionalities, as illustrated inFIG.6, will then follow. The discussion concludes with a description of an example computing device architecture including example hardware components suitable for performing XR and associated operations, as illustrated inFIG.7. The disclosure now turns toFIG.1 FIG.1is a diagram illustrating an example of an XR system100and a ring device150for XR experiences and functionalities, in accordance with some examples of the present disclosure. The XR system100and the ring device150can be communicatively coupled to provide various XR functionalities. The XR system100and the ring device150can include separate devices used as described herein for XR experiences and functionalities. In some examples, the XR system100can implement one or more XR applications such as, for example and without limitation, a video game application, a robotic application, an autonomous driving or navigation application, a productivity application, and/or any other XR application. In some examples, the XR system100can include an electronic device configured to use information about the relative pose of the XR system100and/or the ring device150to provide one or more functionalities, such as XR functionalities, gaming functionalities, autonomous driving or navigation functionalities, computer vision functionalities, robotic functions, etc. For example, in some cases, the XR system100can be an XR device (e.g., a head-mounted display, a heads-up display device, smart glasses, a smart television system, etc.) and the ring device150can generate inputs used to interact with the XR system100and/or content provided by the XR system100. In the illustrative example shown inFIG.1, the XR system100can include one or more image sensors, such as image sensor102and image sensor104, other sensors106, and one or more compute components110. The other sensors106can include, for example and without limitation, an inertial measurement unit (IMU), a gyroscope that is separate from a gyroscope in the IMU, an accelerometer that is separate from an accelerometer of the IMU, a magnetometer that is separate from a magnetometer of the IMU, a radar, a light detection and ranging (LIDAR) sensor, an audio sensor, a position sensor, a pressure sensor, and/or any other sensor. In some examples, the XR system100can include additional sensors and/or components such as, for example, a light-emitting diode (LED) device, a storage device, a cache, a communications interface, a display, a memory device, etc. An example architecture and example hardware components that can be implemented by the XR system100are further described below with respect toFIG.7. Moreover, in the illustrative example shown inFIG.1, the ring device150includes an IMU152, a position sensor154(e.g., a position/rotation encoder and/or any other type of position/rotation sensor), a pressure sensor156(e.g., a barometric air pressure sensor and/or any other pressure sensor), and a touch sensor158(or tactile sensor). The sensor devices shown inFIG.1are non-limiting examples provided for explanation purposes. In other examples, the ring device150can include more or less sensors than shown inFIG.1. 
Moreover, in some cases, the ring device150can include other types of sensors such as, for example, an audio sensor, a light sensor, an image sensor, etc. It should be noted that the components shown inFIG.1with respect to the XR system100and the ring device150are merely illustrative examples provided for explanation purposes and, in other examples, the XR system100and/or the ring device150can include more or less components than those shown inFIG.1. The XR system100can be part of, or implemented by, a single computing device or multiple computing devices. In some examples, the XR system100can be part of an electronic device (or devices) such as a camera system (e.g., a digital camera, an IP camera, a video camera, a security camera, etc.), a telephone system (e.g., a smartphone, a cellular telephone, a conferencing system, etc.), a laptop or notebook computer, a tablet computer, a set-top box, a smart television, a display device, a gaming console, an XR device such as an HMD, a drone, a computer in a vehicle, an IoT (Internet-of-Things) device, a smart wearable device, or any other suitable electronic device(s). In some implementations, the image sensor102, the image sensor104, the one or more other sensors106, and/or the one or more compute components110can be part of the same computing device. For example, in some cases, the image sensor102, the image sensor104, the one or more other sensors106, and/or the one or more compute components110can be integrated into a camera system, a smartphone, a laptop, a tablet computer, a smart wearable device, an XR device such as an HMD, an IoT device, a gaming system, and/or any other computing device. However, in other implementations, the image sensor102, the image sensor104, the one or more other sensors106, and/or the one or more compute components110can be part of, or implemented by, two or more separate computing devices. The one or more compute components110of the XR system100can include, for example and without limitation, a central processing unit (CPU)112, a graphics processing unit (GPU)114, a digital signal processor (DSP)116, and/or an image signal processor (ISP)118. In some examples, the XR system100can include other types of processors such as, for example a computer vision (CV) processor, a neural network processor (NNP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), etc. The XR system100can use the one or more compute components110to perform various computing operations such as, for example, extended reality operations (e.g., tracking, localization, pose estimation, mapping, content anchoring, content rendering, etc.), image/video processing, graphics rendering, machine learning, data processing, modeling, calculations, and/or any other operations. In some cases, the one or more compute components110can include other electronic circuits or hardware, computer software, firmware, or any combination thereof, to perform any of the various operations described herein. In some examples, the one or more compute components110can include more or less compute components than those shown inFIG.1. Moreover, the CPU112, the GPU114, the DSP116, and the ISP118are merely illustrative examples of compute components provided for explanation purposes. 
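For the pose estimation operation mentioned above, one common way to express the relationship between two tracked devices is to compose their rigid-body transforms. The sketch below uses a 2D simplification with assumed poses purely for illustration; an actual implementation would typically work with full 3D rotations.

    import math

    def pose_matrix(x, y, yaw_deg):
        """2D homogeneous transform (world <- device) built from position and yaw."""
        c, s = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
        return [[c, -s, x],
                [s,  c, y],
                [0,  0, 1]]

    def invert(T):
        """Invert a 2D rigid transform: transpose the rotation, rotate the negated translation."""
        (r00, r01, tx), (r10, r11, ty), _ = T
        return [[r00, r10, -(r00 * tx + r10 * ty)],
                [r01, r11, -(r01 * tx + r11 * ty)],
                [0.0, 0.0, 1.0]]

    def compose(A, B):
        """Matrix product of two 3x3 transforms."""
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    # Relative pose of the ring as seen from the headset:
    # T_rel = inv(T_world<-headset) @ T_world<-ring
    T_headset = pose_matrix(0.0, 0.0, 90.0)   # assumed headset pose
    T_ring = pose_matrix(0.0, 0.5, 90.0)      # assumed ring pose, 0.5 m away
    T_rel = compose(invert(T_headset), T_ring)
    print([round(v, 3) for v in (T_rel[0][2], T_rel[1][2])])  # [0.5, 0.0] along the headset's local x-axis

The same composition, applied with fresh sensor data each frame, is what allows rendered content to stay matched to the relative position and movement of the devices.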
The image sensor102and the image sensor104can include any image and/or video sensor or capturing device, such as a digital camera sensor, a video camera sensor, a smartphone camera sensor, an image/video capture device on an electronic apparatus such as a television or computer, a camera, etc. In some cases, the image sensor102and/or the image sensor104can be part of a camera or computing device such as a digital camera, a video camera, an IP camera, a smartphone, a smart television, a game system, etc. Moreover, in some cases, the image sensor102and/or the image sensor104can include multiple image sensors, such as rear and front sensor devices, and can be part of a dual-camera or other multi-camera assembly (e.g., including two cameras, three cameras, four cameras, or another number of cameras). In some examples, the image sensor102and/or the image sensor104can capture image data and generate frames based on the image data and/or provide the image data or frames to the one or more compute components110for processing. A frame can include a video frame of a video sequence or a still image. A frame can include a pixel array representing a scene. For example, a frame can be a red-green-blue (RGB) frame having red, green, and blue color components per pixel; a luma, chroma-red, chroma-blue (YCbCr) frame having a luma component and two chroma (color) components (chroma-red and chroma-blue) per pixel; or any other suitable type of color or monochrome picture. In some examples, the one or more compute components110can perform XR processing operations based on data from the image sensor102, the image sensor104, the one or more other sensors106, and/or the ring device150. For example, in some cases, the one or more compute components110can perform tracking, localization, pose estimation, mapping, content anchoring, content rendering, image processing, modeling, content generation, and/or other operations based on data from the image sensor102, the image sensor104, the one or more other sensors106, and/or the ring device150. In some examples, the one or more compute components110can implement one or more algorithms for tracking and estimating a relative pose of the ring device150and the XR system100. In some cases, the one or more compute components110can receive image data captured by the image sensor102and/or the image sensor104and perform pose estimation based on the received image data to calculate a relative pose of the ring device150and the XR system100. In some cases, the one or more compute components110can implement one or more computer vision models to calculate the relative pose of the ring device150and the XR system100. In some cases, the one or more other sensors106can detect acceleration by the XR system100and generate acceleration measurements based on the detected acceleration. In some cases, the one or more other sensors106can additionally or alternatively detect and measure the orientation and angular velocity of the XR system100. For example, the one or more other sensors106can measure the pitch, roll, and yaw of the XR system100. In some examples, the XR system100can use measurements obtained by the one or more other sensors106to calculate the relative pose of the XR system100. The ring device150can use the IMU152, the position sensor154, the pressure sensor156, and/or the touch sensor158to detect inputs for the XR system100, as further described herein. 
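One way to picture the data such a sensor suite could produce is a small per-sample record that combines the IMU, position, pressure, and touch readings before they are sent to the XR system100. The field names and units below are assumptions for illustration; the disclosure does not define a particular data format.

    from dataclasses import dataclass

    @dataclass
    class RingSample:
        """Hypothetical sensor sample reported by the ring device."""
        timestamp_ms: int      # sample time in milliseconds
        pitch: float           # IMU orientation, degrees
        roll: float
        yaw: float
        angular_rate: float    # rotation rate about the finger's longitudinal axis, deg/s
        encoder_angle: float   # position-sensor reading, degrees
        pressure_pa: float     # barometric pressure, pascals
        touch_force: float     # touch-sensor reading, normalized 0..1

    sample = RingSample(1000, 1.5, -0.3, 12.0, 85.0, 42.0, 101325.0, 0.2)
    print(sample)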
The ring device150can detect one or more modes of input such as for example and without limitation, applying a force (e.g., tapping, squeezing, pressing, rubbing, swiping, touching, etc.) on one or more portions of the ring device150, rotating and/or swiping one or more portions of the ring device150, etc. The ring device150can provide one or more detected inputs to the XR system100to modify a content, operation, and/or behavior of the XR system100. In some cases, the ring device150can calculate a magnitude of an input and provide the magnitude of the input to the XR system100as part of a provided input. For example, the ring device150can calculate a magnitude of a force and/or rotation applied on the ring device150and provide the magnitude of force and/or rotation to the XR system100as an input. The XR system100can use the magnitude information to determine a type of input (e.g., a single click, double click, selection, scroll, gesture, object resizing, control input, settings input, etc.) and/or an input magnitude (e.g., an amount of scrolling, object resizing, environment/object manipulation, etc.). In some examples, the ring device150and/or the XR system100can use measurements obtained by the IMU152, the position sensor154, and/or the pressure sensor156, to calculate (and/or to assist in calculating) the location and/or relative pose of the ring device150. In some cases, the IMU152can detect an orientation, velocity (e.g., rotational, linear, etc.), and/or acceleration (e.g., angular rate/acceleration, linear acceleration, etc.) by the ring device150and generate orientation, velocity and/or acceleration measurements based on the detected orientation, velocity, and/or acceleration. For example, in some cases, a gyroscope of the IMU152can detect and measure a rotational rate/acceleration by the ring device150(and/or a portion of the ring device150). In some examples, the IMU152can additionally or alternatively detect and measure linear velocity and/or acceleration by the ring device150. In some examples, the IMU152can additionally or alternatively detect and measure an orientation of the ring device150. In some cases, the IMU152can additionally or alternatively detect and measure the orientation and angular velocity of the ring device150. For example, the IMU152can measure the pitch, roll, and yaw of the ring device150. In some examples, the position sensor154can calculate a position of the ring device150in terms of rotational angle, linear motion, and three-dimensional (3D) space. For example, the position sensor154can detect a rotation and/or spin of the ring device150. The pressure sensor156can detect pressure such as air pressure, and can determine relative pressure changes. In some examples, measurements from the pressure sensor156can be used as inputs to interpret content selection events. The touch sensor158can measure physical forces or interactions with the ring device150, which can be interpreted as inputs to the XR system100, as further described herein. The ring device150can include one or more wireless communication interfaces (not shown) for communicating with the XR system100. The one or more wireless communication interfaces can implement any wireless protocol and/or technology to communicate with the XR system100, such as short-range wireless technologies (e.g., Bluetooth, etc.) for example. The ring device150can use the one or more wireless communication interfaces to transmit sensor measurements and/or other XR inputs to the XR system100, as further described herein. 
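A minimal sketch of how the XR system100might interpret a transmitted magnitude is given below. The categories, thresholds, scaling factor, and returned labels are assumptions for illustration; the disclosure leaves the exact mapping to the implementation.

    def interpret_input(kind, magnitude):
        """Map a reported ring interaction to a hypothetical XR input.

        kind      : 'force' or 'rotation' (assumed categories)
        magnitude : normalized force (0..1) or signed rotation in degrees
        """
        if kind == 'force':
            if magnitude < 0.2:
                return ('ignore', 0)
            if magnitude < 0.7:
                return ('single_click', 1)
            return ('double_click', 1)
        if kind == 'rotation':
            # Scale the rotation into a scroll amount; the sign gives the direction.
            return ('scroll', round(magnitude * 0.5))
        return ('unknown', 0)

    print(interpret_input('force', 0.85))    # ('double_click', 1)
    print(interpret_input('rotation', -90))  # ('scroll', -45)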
While the XR system100and the ring device150are shown to include certain components, one of ordinary skill will appreciate that the XR system100and the ring device150can include more or fewer components than those shown inFIG.1. For example, the XR system100and/or the ring device150can also include, in some instances, one or more other memory devices (e.g., RAM, ROM, cache, and/or the like), one or more networking interfaces (e.g., wired and/or wireless communications interfaces and the like), one or more display devices, caches, storage devices, and/or other hardware or processing devices that are not shown inFIG.1. An illustrative example of a computing device and hardware components that can be implemented with the XR system100and/or the ring device150is described below with respect toFIG.7. FIG.2Aillustrates an example of the ring device150. The user can use the ring device150to interact with the XR system100and provide various types of XR inputs as further described herein. In some examples, the ring device150can collect sensor measurements to track a location and/or pose of the ring device150in 3D space. In some examples, the location and/or pose can be tracked relative to a location and/or pose of the XR system100in 3D space. In this example, the ring device150includes a structure200(or body) that has a receiving space210that provides a longitudinal access opening disposed on the underside of the structure200to allow at least ingress of a finger of a user, a first surface212(or internal surface) that can provide an engagement surface for a finger inserted through the receiving space210to wear the ring device150, and a second surface214(or external surface). The structure200can also include one or more sensors and/or electronic components as described herein. In this example, the structure200includes a touchpad204for receiving touch inputs, a display206for displaying information from the ring device150and/or the XR system100, and sensors208. In some examples, the sensors208can include and/or can be the same as the IMU152, the position sensor154, the pressure sensor156, and/or the touch sensor158shown inFIG.1. In other examples, the sensors208can include one or more sensors and/or devices that are not shown inFIG.1, such as one or more cameras, light sensors, audio sensors, lights, etc. In some cases, the ring device150can include a touch or pressure sensitive surface and/or surface portion for measuring touch inputs. The receiving space210can be configured to receive a finger of a user. For example, as noted above, the receiving space210can include a longitudinal access opening disposed on the underside of the structure200to allow at least ingress of a finger of a user. The first surface212can provide an engagement or retention surface for the finger of the user. The first surface212can be contoured and/or shaped so as to support/retain the finger of the user within the receiving space210and inhibit or prevent movement of the finger in the longitudinal, lateral, and/or transverse axes/directions relative to the receiving space210. A longitudinal axis is generally parallel to the receiving space210and at least a portion of a finger wearing the ring device150(e.g., a finger retained by the first surface212of the structure200). A lateral axis is normal to the longitudinal axis and a transverse axis extends normal to both the longitudinal and lateral axes. 
The longitudinal direction is a direction substantially parallel to the longitudinal axis, the lateral direction is a direction substantially parallel to the lateral axis, and the transverse direction is a direction substantially parallel to the transverse axis. The second surface214can include an external surface of the structure200. The external surface can include a top or upper surface of the structure200. In some examples, a user wearing the ring device150on a finger can interact with the second surface214(e.g., using a different finger than the finger wearing the ring device150and/or using any other object) to provide inputs measured by the sensors/components (e.g., touchpad204, display206, sensors208) on the structure200. For example, a user can apply a force to a portion of the second surface214to provide an input measured by the sensors/components on the structure200. In some examples, the user can touch, tap, squeeze, and/or apply pressure to the second surface214to provide an input (e.g., a touch input, a tap input, a squeeze/pressure input, etc.) that can be detected and measured by the touchpad204, the display206, and/or the sensors208. In some examples, the user can provide a spinning or swiping force on the second surface214to generate an input (e.g., a spinning input, a swiping input, etc.) that can be detected and measured by the touchpad204, the display206, and/or the sensors208. In some cases, the second surface214can include a touch or pressure sensitive surface and/or surface portion for measuring touch inputs. In some cases, the receiving space210and/or the first surface212can be contoured and/or shaped to inhibit or prevent the structure200from rotating or spinning about a longitudinal axis of the user's finger and the receiving space210when the user applies a spinning or swiping force on the second surface214. The touchpad204, display206, and/or sensors208can detect and measure the spinning or swiping force (e.g., a direction, magnitude, etc.) even if the structure200does not move or rotate (or movement or rotation of the structure200is substantially inhibited) in response to the spinning or swiping force. In other cases, the receiving space210and/or the first surface212can be contoured and/or shaped to allow the structure200to at least partly rotate/spin about a longitudinal axis of the user's finger and the receiving space210when the user applies a spinning or swiping force on the second surface214. The touchpad204, display206, and/or sensors208can detect and measure the angular change, angular velocity, and/or angular acceleration of the structure200resulting from the spinning or swiping force. In some examples, the ring device150can generate sensor measurements (e.g., via the touchpad204and/or sensors208) and provide the sensor measurements to an electronic device (e.g., XR system100, a mobile device, a television, a set-top box, any device with a user interface and/or any other electronic device) as inputs to an application on the electronic device. The sensor measurements can include measured interactions with the structure200and/or the second surface214(e.g., applied force/pressure, etc.), pose information about the structure200, measured motion (e.g., rotation, velocity, acceleration, etc.), etc. The electronic device can interpret the sensor measurements into inputs to an application on the electronic device. In some examples, the ring device150can generate the sensor measurements and convert (e.g., process, interpret, map, etc.) 
the sensor measurements into inputs for an application running at the electronic device (e.g., XR system100). The ring device150can use one or more processing devices of the ring device150, such as an application-specific integrated circuit embedded in the structure200, to convert the sensor measurements into inputs for a particular application. The ring device150can provide the inputs to the electronic device for processing by the particular application on the electronic device. In some cases, the touchpad204and/or the sensors208can be used to generate a virtual spinning and/or rotating input by using one or more fingers to provide a force such as a spinning or swiping force. In some examples, the touchpad204and/or the sensors208can be used to generate a virtual spinning and/or rotating input by using one or more fingers to move the ring device150about a longitudinal axis of the receiving space210. In some cases, the ring device150and/or a portion of the ring device150can rotate relative to the finger wearing the ring device150, in response to a rotating force or gesture. In some examples, the sensors208can be used to provide a spinning and/or rotating input based on measured movement of the ring device150about a longitudinal axis of the receiving space210and relative to the finger wearing the ring device150. In some examples, the ring device150can emit light using the display206and/or any light-emitting device (not shown) for detection by an electronic device with a camera, such as XR system100. For example, the ring device150can emit light for detection by the electronic device. The electronic device can detect the light using one or more cameras and can use the light to determine motion of the ring device150, such as rotation. In some examples, the ring device150can emit the light in response to a movement (e.g., rotation, etc.) above a threshold and/or a preconfigured interaction with the ring device150(e.g., with the second surface214). FIG.2Billustrates another example of the ring device150. In this example, the ring device150includes a structure220(or body) that has the receiving space210that provides a longitudinal access opening disposed on the underside of the structure220to allow at least ingress of a finger of a user, an engagement surface222that can engage a finger inserted through the receiving space210to wear the ring device150, an upper surface224, and a contact surface226. The structure220can also include one or more sensors and/or electronic components as described herein. In some examples, the sensors208can include and/or can be the same as the IMU152, the position sensor154, the pressure sensor156, and/or the touch sensor158shown inFIG.1. In other examples, the sensors208can include one or more sensors and/or devices that are not shown inFIG.1, such as one or more cameras, light sensors, audio sensors, lights, etc. The receiving space210can be configured to receive a finger of a user. For example, as previously explained, the receiving space210can include a longitudinal access opening disposed on the underside of the structure220to allow at least ingress of a finger of a user. The engagement surface222can provide a surface for engagement or retention of the finger of the user. The engagement surface222can be contoured and/or shaped so as to support/retain the finger of the user within the receiving space210and inhibit or prevent movement of the finger in the longitudinal, lateral, and/or transverse axes/directions relative to the receiving space210. 
The upper surface224can include a top or partially external surface portion of the structure220. The contact surface226can include another top or external surface portion of the structure220. In some examples, the contact surface226(and/or a portion thereof) can be at least partially on top of and/or adjacent to the upper surface224. In some cases, the contact surface226can be rotatably coupled to a portion of the upper surface224. In some examples, the contact surface226can rotate about a longitudinal axis of the receiving space210and the upper surface224. For example, the contact surface226can rotate relative to the upper surface224in a lateral direction from a longitudinal axis of the receiving space210. The sensors208can measure the rotation (e.g., angular change, angular velocity, angular acceleration, etc.) and provide the measured rotation as input to an electronic device or convert the measured rotation into an input to an electronic device. In some cases, the contact surface226can include a touch or pressure sensitive surface and/or surface portion for measuring touch inputs. In some examples, a user wearing the ring device150on a finger can interact with the contact surface226(e.g., using a different finger than the finger wearing the ring device150and/or using any other object) to provide inputs measured by the sensors208on the structure220. For example, a user can apply a force to a portion of the contact surface226to provide an input measured by the sensors208on the structure220. In some examples, the user can touch, tap, squeeze, and/or apply pressure to the contact surface226to provide an input (e.g., a touch input, a tap input, a squeeze/pressure input, etc.) that can be detected and measured by the sensors208. In some examples, the user can provide a spinning or swiping force on the contact surface226to generate an input (e.g., a spinning input, a swiping input, etc.) that can be detected and measured by the sensors208. In some examples, the user can rotate the contact surface226relative to the upper surface224(and about a longitudinal axis of the receiving space210) to generate an input based on the rotation. The sensors208on the structure220can measure one or more properties of the rotation (e.g., angular change, angular velocity, angular rotation, etc.), which can be used as inputs and/or to generate inputs. In some cases, the receiving space210and/or the engagement surface222can be contoured and/or shaped to inhibit or prevent the structure220from rotating or spinning about a longitudinal axis of the user's finger and the receiving space210when the user applies a spinning or swiping force on the upper surface224. The sensors208can detect and measure the spinning or swiping force (e.g., a direction, magnitude, etc.) even if the structure220does not move or rotate (or movement or rotation of the structure220is substantially inhibited) in response to the spinning or swiping force. In other cases, the receiving space210and/or the engagement surface222can be contoured and/or shaped to allow the structure220and/or the contact surface226to at least partly rotate/spin about a longitudinal axis of the user's finger and the receiving space210when the user applies a spinning or swiping force on the contact surface226. The sensors208can detect and measure the angular change, angular velocity, and/or angular acceleration of the structure220and/or the contact surface226resulting from the spinning or swiping force. 
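The rotation properties listed above (angular change, angular velocity, angular acceleration) can be estimated from successive position-sensor readings with simple finite differences, as sketched below; the sampling period and the function name are assumed for illustration.

    def rotation_properties(angles_deg, dt=0.01):
        """Estimate angular change, velocity, and acceleration from encoder angles.

        angles_deg : sequence of at least three angle samples (degrees)
        dt         : assumed time between samples (seconds)
        """
        change = angles_deg[-1] - angles_deg[0]
        velocity = (angles_deg[-1] - angles_deg[-2]) / dt
        prev_velocity = (angles_deg[-2] - angles_deg[-3]) / dt
        acceleration = (velocity - prev_velocity) / dt
        return change, velocity, acceleration

    # Contact surface rotated 0 -> 1 -> 3 degrees over two 10 ms steps.
    print(rotation_properties([0.0, 1.0, 3.0]))  # (3.0, 200.0, 10000.0)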
In some examples, the ring device150can generate sensor measurements (e.g., via the sensors208) and provide the sensor measurements to an electronic device (e.g., XR system100, a mobile device, a television, a set-top box, any device with a user interface and/or any other electronic device) as inputs to an application on the electronic device. The sensor measurements can include measured interactions with the structure220and/or the contact surface226(e.g., applied force/pressure, etc.), pose information about the structure220, measured motion (e.g., rotation, velocity, acceleration, etc.), etc. The electronic device can interpret the sensor measurements into inputs to an application on the electronic device. In some examples, the ring device150can generate the sensor measurements and convert (e.g., process, interpret, map, etc.) the sensor measurements into inputs for an application running at the electronic device (e.g., XR system100). The ring device150can use one or more processing devices of the ring device150, such as an application-specific integrated circuit embedded in the structure220, to convert the sensor measurements into inputs for a particular application. The ring device150can provide the inputs to the electronic device for processing by the particular application on the electronic device. In some cases, the sensors208can be used to generate a virtual spinning and/or rotating input by using one or more fingers to provide a force such as a spinning or swiping force. In some examples, the sensors208can be used to generate a virtual spinning and/or rotating input by using one or more fingers to move the ring device150or the contact surface226about a longitudinal axis of the receiving space210. In some cases, the ring device150and/or the contact surface226can rotate relative to the finger wearing the ring device150, in response to a rotating force or gesture. In some examples, the sensors208can be used to provide a spinning and/or rotating input based on measured movement of the ring device150or the contact surface226about a longitudinal axis of the receiving space210and relative to the finger wearing the ring device150. In some examples, the ring device150can emit light using a light-emitting device (not shown) for detection by an electronic device with a camera, such as XR system100. For example, the ring device150can emit light for detection by the electronic device. The electronic device can detect the light using one or more cameras and can use the light to determine motion of the ring device150, such as rotation. In some examples, the ring device150can emit the light in response to a movement (e.g., rotation, etc.) above a threshold and/or a preconfigured interaction with the ring device150(e.g., with the contact surface226). FIG.2Cillustrates an example of the ring device150worn on a finger240of a user interacting with the XR system100. In this example, the ring device150is used to interact with (e.g., provide inputs, etc.) the XR system100. However, the XR system100is shown as a non-limiting example for explanation purposes. In other examples, the ring device150can be used to interact with other electronic devices (e.g., mobile devices, televisions, smart wearable devices, any electronic device with a user interface, etc.). The user can use the ring device150to interact with the XR system100and provide various types of XR inputs as further described herein. 
In some examples, the ring device150can collect sensor measurements to track a location and/or pose of the ring device150in 3D space. In some examples, the location and/or pose can be tracked relative to a location and/or pose of the XR system100in 3D space. In the example shown inFIG.2, the ring device150includes a touchpad204for receiving touch inputs, a display206for displaying information from the ring device150and/or the XR system100, and sensors208. In some examples, the sensors208can include and/or can be the same as the IMU152, the position sensor154, the pressure sensor156, and/or the touch sensor158shown inFIG.1. In other examples, the sensors208can include one or more sensors and/or devices that are not shown inFIG.1, such as one or more cameras, light sensors, gyroscopes that are separate from a gyroscope of the IMU, accelerometers that are separate from an accelerometer of the IMU, magnetometers that are separate from a magnetometer of the IMU, audio sensors, lights or light emitting devices, transmitters, ultrasonic transmitters/transducers, etc. In some cases, the ring device150can include a touch or pressure sensitive surface and/or surface portion for measuring touch inputs. In some cases, the touchpad204and/or the sensors208can be used to generate one or more measurements based on detected motion of the ring device150, interactions with the ring device150(e.g., force/pressure/touch/rotation/etc.), a detected pose of the ring device150, etc. In some cases, the ring device150can send such measurements to the XR system100as inputs to the XR system100(e.g., input to an XR application on the XR system100). In some cases, the ring device150can convert/interpret (e.g., via an ASIC or any other processing device) the one or more measurements to one or more inputs on a user interface and/or XR application at the XR system100, and send the one or more inputs to the XR system100. In some cases, the touchpad204and/or the sensors208can be used to generate a virtual spinning and/or rotating input by using one or more fingers to provide a force such as a spinning or swiping force. In some examples, the touchpad204and/or the sensors208can be used to generate a virtual spinning and/or rotating input by using one or more fingers to move the ring device150about a longitudinal axis of the ring lumen (e.g., receiving space210) while the ring device150remains substantially stationary relative to the finger240. In some cases, the ring device150and/or a portion of the ring device150can rotate relative to the finger240in response to a rotating force or gesture. In some examples, the sensors208can be used to provide a spinning and/or rotating input by moving the ring device150about a longitudinal axis of the ring lumen and relative to the finger240. The XR system100can render content, interfaces, and/or controls to a user wearing the XR system100. The user can use the ring device150to wirelessly interact with the content, interfaces, and/or controls and provide various types of inputs such as selections, object/environment manipulations, navigation inputs (e.g., scrolling, moving, etc.), gestures, etc. 
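For the virtual spinning input mentioned above, a swipe along the ring surface can be related to an equivalent rotation angle through the arc-length relation s = r * theta, as sketched below; the ring radius is an assumed value.

    import math

    RING_OUTER_RADIUS_M = 0.011   # assumed outer radius of the ring, about 11 mm

    def swipe_to_virtual_rotation(swipe_distance_m):
        """Convert a thumb swipe along the ring surface into a virtual rotation angle."""
        return math.degrees(swipe_distance_m / RING_OUTER_RADIUS_M)

    print(round(swipe_to_virtual_rotation(0.005), 1))  # a 5 mm swipe, roughly 26.0 degrees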
Non-limiting examples of interactions with content, interfaces, and/or controls rendered by the XR system100using the ring device150can include item scrolling in any kind of list and/or inventory of options, text scrolling in an object and/or rendered content item (e.g., a browser, a document, an interface, etc.), navigating to a different location and/or page, a data entry (e.g., a text and/or numerical entry, etc.), object and/or environment manipulation (e.g., object and/or environment rotation, translation, placement, and/or scaling), selection events, virtual space creation and/or manipulation, content scrolling, multimedia controls (e.g., start, stop, pause, etc.), physical world measurements, tracking and/or localization inputs and/or calibrations, etc. The ring device150can provide privacy with respect to inputs, interactions, and/or associated data. For example, inputs can be provided via the ring device150discreetly and/or hidden from a field-of-view of a nearby device and/or person to avoid detection and/or recognition. Moreover, inputs can be provided via the ring device150while in space-constrained environments (e.g., tight spaces), while lying down and/or while a user is otherwise unable to provide (and/or has difficulty providing) inputs requiring additional and/or larger ranges of body (e.g., hand, arm, etc.) motion. For example, a user can provide inputs via the ring device150without waving a hand(s) and/or finger(s) in the air and/or away from the user's body. Moreover, in many cases, inputs provided via the ring device150can be easier and/or more intuitive. For example, the input gesture can mimic the type of input provided to the XR system100(e.g., rotating the ring device150to scroll, tapping the ring device150to select, etc.). In some cases, the ring device150can conserve power on the XR system100by powering down or off (or providing information used to power down or off) tracking sensors on the XR system100, such as image sensors. FIG.3Aillustrates an example configuration300of a ring device150. In this example, the ring device150includes an inner ring portion310(e.g., upper surface224shown inFIG.2B) and an outer ring portion312(e.g., contact surface226shown inFIG.2B). The inner ring portion310can receive/engage a finger302of a user as shown inFIG.3A. In some examples, the outer ring portion312(and/or a portion thereof) can be at least partially on top of and/or adjacent to the inner ring portion310. In some cases, the outer ring portion312(and/or a portion thereof) can encompass (and/or rotate) a greater distance along a lateral axis from a longitudinal axis of the lumen of the ring device150and at least a portion of the finger302. In some examples, the inner ring portion310and the outer ring portion312can be asymmetric. In some cases, the outer ring portion312can be rotatably coupled to a portion of the upper surface224. The outer ring portion312can spin/rotate relative to the inner ring portion310and around a longitudinal axis of the lumen of the ring device150. The outer ring portion312can spin/rotate relative to the inner ring portion310in response to a force applied to the outer ring portion312, such as a swiping force. In some examples, the outer ring portion312can do a full rotation (e.g., 360 degrees) relative to the inner ring portion310and around a longitudinal axis of the lumen of the ring device150. 
In other examples, the outer ring portion312can do a partial rotation (e.g., less than 360 degrees) relative to the inner ring portion310and around a longitudinal axis of the lumen of the ring device150. In some cases, the amount of rotation can depend on the amount of rotational force/pressure (e.g., the magnitude and/or continuity) applied to the outer ring portion312. For example, a user can apply a higher magnitude of force/pressure to the outer ring portion312to increase the amount of rotation performed by the outer ring portion312. As another example, a user can apply a lower magnitude but continuous force/pressure to the outer ring portion312to increase the amount of rotation. Similarly, the user can decrease the amount of rotation by decreasing the amount (e.g., magnitude and/or continuity) of force/pressure applied to the outer ring portion312. When the outer ring portion312is rotated, a position sensor (e.g., position sensor154) on the ring device150can determine the relative motion between the inner ring portion310and the outer ring portion312. The position sensor can determine the magnitude of the rotation, the direction of the rotation, and/or the velocity of the rotation. The ring device150can provide rotation information to the XR system100, which the XR system100can convert (e.g., interpret) into a particular XR input. In some cases, the XR input can correspond to the detection of relative motion. In some cases, the XR input can depend on more granular motion information such as the magnitude of rotation, the direction of rotation, and/or the velocity of rotation, as previously explained. For example, different directions of rotation can be converted (e.g., interpreted) into different XR inputs or types of XR input. To illustrate, a rotation in one direction can be converted into (e.g., interpreted as) scrolling in a particular direction, and a rotation in a different direction can be converted into scrolling in a different direction. In some cases, rotation in one direction can be converted (e.g., interpreted as) into a type of XR input such as scrolling, while rotation in another direction can be converted into a different type of XR input such as a selection event, a different navigation event, etc. As another example, different magnitudes of rotation (e.g., degrees) and/or velocities can convert into different XR inputs and/or different types of XR inputs. To illustrate, a rotation having a threshold magnitude and/or velocity can be converted into an autoscrolling or smooth scrolling event, and a rotation having a magnitude and/or velocity below a threshold can be converted into (e.g., interpreted as, mapped to, etc.) a certain magnitude of scrolling. In some cases, a rotation having a threshold magnitude and/or velocity can be converted into a particular type of XR input, such as scrolling, while a rotation having a magnitude and/or velocity below a threshold can be converted into a different type of XR input, such as a selection event. In some implementations, the XR system100can maintain definitions of rotation events and/or parameters, which the XR system100can use to convert a rotation event into an XR input. For example, the XR system100can map one or more magnitudes, velocities, and/or directions of rotation to one or more XR inputs and/or types of XR inputs. The XR system100can use such mapping to convert rotation information received from the ring device150into a particular XR input. 
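A rotation event as described above might be converted along the lines of the following sketch; the velocity threshold, the scaling factor, and the returned labels are assumptions rather than values given in the disclosure.

    AUTOSCROLL_VELOCITY_DPS = 360.0   # assumed threshold, degrees per second
    SCROLL_UNITS_PER_DEGREE = 0.5     # assumed scaling

    def convert_rotation(magnitude_deg, velocity_dps, direction):
        """Convert rotation information from the ring into a hypothetical scroll input.

        direction : +1 for one rotation direction, -1 for the other
        """
        if velocity_dps >= AUTOSCROLL_VELOCITY_DPS:
            # A fast spin triggers smooth/auto scrolling in the rotated direction.
            return {"type": "autoscroll", "direction": direction}
        # A slower spin scrolls by an amount proportional to the rotation magnitude.
        return {"type": "scroll",
                "amount": direction * magnitude_deg * SCROLL_UNITS_PER_DEGREE}

    print(convert_rotation(90.0, 120.0, +1))   # {'type': 'scroll', 'amount': 45.0}
    print(convert_rotation(30.0, 500.0, -1))   # {'type': 'autoscroll', 'direction': -1}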
In some cases, the XR system100can map specific magnitudes and/or velocities or specific ranges of magnitudes and/or velocities to specific XR inputs and/or types of XR inputs. In some examples, the user can use a finger to rotate the outer ring portion312relative to the inner ring portion310. For example, with reference toFIG.3B, the user can use a different finger304to rotate the outer ring portion312while the ring device150is worn on the finger302. In the example shown inFIG.3B, the different finger304is a thumb on the same hand as the finger302on which the ring device150is worn. However, the user can rotate the outer ring portion312with any finger or combination of fingers on the same or different hand as the finger302wearing the ring device150. In some cases, the user can rotate the outer ring portion312without use of another finger. For example, in some cases, the user can rotate the outer ring portion312by swiping the outer ring portion312with a surface (e.g., a leg, a couch, a seat/chair, a table, a floor, etc.) or pressing the ring device150onto a surface while moving the finger302along the surface. To illustrate, the user can press the ring device150against the user's leg and move the finger302a certain amount along the leg to cause the outer ring portion312to rotate a certain amount. In some cases, the entire ring device150can be rotated relative to the finger wearing the ring device150and about a longitudinal axis of the ring device's lumen. For example, in some cases, the ring device150may rotate relative to the finger and a position sensor can detect the rotation (and/or the magnitude, velocity, and/or direction of the rotation) of the ring device150. The ring device150can also be configured to detect inputs based on one or more other types of motion, force, and/or interactions.FIGS.4A through4Dillustrate example use cases for providing inputs using the ring device150. FIG.4Aillustrates an example use case400for providing an input to the XR system100via the ring device150. In this example, the user can tap a surface of the ring device150to provide an input to the XR system100. For example, the user can wear the ring device150on finger402and use a different finger404to tap a surface of the ring device150. The different finger404in this example is a thumb on the same hand as the finger402on which the ring device150is worn. However, the user can tap a surface of the ring device150with any finger or combination of fingers on the same or different hand as the finger402wearing the ring device150. In some cases, the user can also tap a surface of the ring device150using a different object or surface, such as a leg (e.g., by tapping the ring device150against the leg), a couch, a seat/chair, a table, a floor, etc. A position sensor (e.g., position sensor154) on the ring device150can detect the tapping and/or one or more characteristics of the tapping such as a magnitude or length of time of the tapping. The ring device150can provide information about the tapping to the XR system100, which can convert the tapping information into an XR input. In some cases, the tapping (and/or a characteristic of tapping such as a magnitude or length of time) can be mapped to one or more XR inputs. For example, the tapping, the magnitude of tapping, and/or a length of time of the tap can be mapped to an XR input event, an input function in a virtual user interface, etc. 
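The tap-to-input mapping discussed above could look like the following sketch, with hypothetical thresholds for tap strength and duration and hypothetical input labels.

    STRONG_TAP_FORCE = 0.7    # assumed normalized force threshold
    LONG_TAP_SECONDS = 0.5    # assumed duration threshold

    def classify_tap(force, duration_s):
        """Map a detected tap to a hypothetical XR input."""
        if duration_s >= LONG_TAP_SECONDS:
            return "long_press"
        if force >= STRONG_TAP_FORCE:
            return "double_click"
        return "single_click"

    print(classify_tap(force=0.9, duration_s=0.1))  # double_click
    print(classify_tap(force=0.3, duration_s=0.1))  # single_click
    print(classify_tap(force=0.3, duration_s=0.8))  # long_press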
In some cases, different magnitudes of tapping and/or different lengths of time of tapping can be mapped to different XR inputs and/or XR input types. For example, a tap of a threshold magnitude can be mapped to an XR input, such as a double click, and a tap below the threshold magnitude can be mapped to a different XR input, such as a single click. As another example, a tap where the length of time of the force applied on a surface of the ring device150is below a threshold can be mapped to an XR input while a longer tap where the length of time of the force applied on the surface of the ring device150is above a threshold can be mapped to a different XR input. In some cases, one or more patterns of taps can be converted into one or more XR inputs. For example, a certain sequence of taps can be mapped to one or more XR inputs and a different sequence of taps can be mapped to one or more different XR inputs.FIG.4Billustrates another example use case420for providing an input to the XR system100via the ring device150. In this example, the user is wearing the ring device150on finger402, providing an input by squeezing the ring device150with adjacent finger406and adjacent finger408. A touch sensor (e.g., touch sensor158) on the ring device150can detect the squeezing and/or determine the magnitude and/or the length of time of the squeezing. The ring device150can provide the squeezing information to the XR system100, which can convert the squeezing information into an XR input. In some examples, the squeezing, the magnitude of squeezing, and/or the length of time of the squeezing can be mapped to one or more XR inputs. In some cases, different magnitudes and/or lengths of time of squeezing can be mapped to different XR inputs and/or XR input types. For example, a prolonged squeeze (e.g., above a threshold amount of time) can be mapped to a particular XR input, such as a double click, and a shorter squeeze (e.g., below a threshold amount of time) can be mapped to a different XR input, such as a single click. As another example, a harder squeeze (e.g., above a threshold amount of force/pressure) can be mapped to a particular XR input, and a softer squeeze (e.g., below a threshold amount of force/pressure) can be mapped to a different XR input. FIG.4Cillustrates an example use case440of a ring device150being rotated relative to a finger402wearing the ring device150and about a longitudinal axis of a lumen (e.g., receiving space210) of the ring device150. In this example, the ring device150does not include an outer ring portion and an inner ring portion as shown inFIGS.3A and3B. The user can use a different finger410to rotate the entire ring device150around at least part of the finger402wearing the ring device150and about a longitudinal axis of the lumen of the ring device150. In some examples, the user can rotate the ring device150about the longitudinal axis of the lumen in a lateral direction from the longitudinal axis of the lumen. The different finger410in this example is a thumb on the same hand as the finger402on which the ring device150is worn. However, the user can rotate the ring device150with any finger or combination of fingers on the same or different hand as the finger402wearing the ring device150. When the different finger410rotates the ring device150, a position sensor (e.g., position sensor154) on the ring device150can determine the magnitude, velocity, and/or direction of rotation of the ring device150about the longitudinal axis of the lumen of the ring device150. 
The ring device150can provide such rotation information to the XR system100, and the XR system100can convert the rotation information into a particular XR input, as previously explained. FIG.4Dillustrates an example use case460for rotating the ring device150using adjacent fingers. In this example, the user can use adjacent finger406and/or adjacent finger408to rotate the ring device150relative to the finger402wearing the ring device150and about a longitudinal axis of the lumen of the ring device150. For example, the user can rotate the ring device150about a longitudinal axis of the lumen in a lateral direction from the longitudinal axis. The user can use the adjacent finger406and/or adjacent finger408to rotate or swipe the ring device150in a particular direction. A position sensor on the ring device150can detect the rotation and provide rotation information to the XR system100. The XR system100can convert the rotation information into one or more XR inputs. The XR system100can use one or more definitions mapping XR inputs to rotation events, as previously explained. In some examples, the ring device150shown inFIGS.4A-Dcan be used to provide other inputs and/or data in addition to and/or instead of the XR inputs corresponding to the use cases400,420,440, and460described above. For example, the ring device150can emit a light or blink to provide certain information to the XR system100. The XR system100can detect the light/blinking using one or more image sensors, and can use the light/blinking as an input and/or to supplement other inputs. In some examples, the XR system100can use the detected light/blinking to track/estimate a location and/or motion (e.g., rotation) of the ring device150. In other examples, the XR system100can interpret and/or convert the detected light/blinking to an instruction to perform a certain action, such as adjust a state of one or more components of the XR system100(e.g., a power mode of one or more components (turn on, turn off, increase a power mode, decrease a power mode, etc.), a processing performed by the XR system100, etc.), trigger an action by the XR system100(e.g., render an object and/or interface, start or stop an operation, press or activate a button on the XR system100, etc.), process an input to a user interface at the XR system100, and/or supplement an input based on an interaction with the ring device150(e.g., force/pressure, etc.). In other examples, the ring device150can detect audio (e.g., via one or more audio sensors), such as a speech or voice input, and provide, to the XR system100, the audio and/or an input instruction generated from the audio. The XR system100can use audio from the ring device150to perform a certain action at the XR system100, as previously described. In some cases, the ring device150can be used to generate any other type of input to the XR system100. For example, in some cases, the ring device150can be used to generate a hand gesture (e.g., a fist, a flat palm, pointing a finger/hand, a hand motion, a hand signal, etc.). The hand gesture can be determined by one or more sensors on the XR system100and used to perform a certain action at the XR system100, as previously described. In some examples, the determination of the hand gesture can be aided by data from one or more sensors on the ring device150, such as an IMU, a pressure sensor, etc. FIG.5illustrates an example of a user502providing XR inputs by moving and/or positioning a hand506and/or finger504wearing the ring device150.
The user502can move the hand506and/or finger504in any direction in 3D space to generate one or more XR inputs via the ring device150. The movement and/or position of the hand506and/or finger504can be converted into one or more XR inputs. In some cases, the movement of the hand506and/or finger504can be converted into one or more XR inputs based on a direction of movement, a magnitude of movement, a velocity of the movement, a pattern and/or sequence of the movement, a gesture associated with the movement, and/or any other characteristics of the movement. As previously explained, the ring device150can implement sensors (e.g., IMU152, position sensor154, pressure sensor156) that can measure characteristics of the movement. For example, the sensors on the ring device150can estimate the orientation of the finger504and/or hand506in 3D space, a movement of the ring device150, a gesture associated with the finger504and/or hand506, a position of the finger504and/or hand506, etc. This information can be converted into one or more XR inputs. In some cases, this information can be converted into a manipulation of a virtual environment, interface, and/or object(s) presented by the XR system100. For example, in some cases, the sensor information from the ring device150can be used to track the hand506of the user502. The hand tracking can be used to detect a hand gesture and trigger an object manipulation event. The ring device150can provide the sensor information to the XR system100, which can convert the sensor information into an object manipulation, such as moving an object, rotating an object, resizing an object, setting a plane associated with an object and/or environment, etc. As another example, the sensors in the ring device150can detect motion information (e.g., velocity, changes in acceleration, etc.) and provide the motion information to the XR system100. The motion (e.g., velocity, changes in acceleration, etc.) reflected in the motion information can trigger certain events. The XR system100can convert the motion information and implement the triggered events. To illustrate, if the user502moves the hand506at a velocity and/or an acceleration above a threshold, the movement of the hand506can be converted into an event such as, for example, placing an object on a plane in 3D space and/or the virtual environment. In another example, the sensors in the ring device150can detect an orientation of the ring device150and provide the orientation information to the XR system100with or without other information such as rotation information (e.g., rotational velocity, rotational acceleration, rotation angle, etc.). The orientation reflected in the orientation information can trigger certain events (e.g., with or without the other information such as the rotation information). The XR system100can convert the orientation information (with or without other information such as the rotation information) and implement the triggered events. In some examples, the ring device150can use one or more sensors, such as ultrasonic transmitters/transducers and/or microphones, for ranging of the hand506. The ranging of the hand506can be used to determine one or more XR inputs. For example, in some cases, ranging information can be used to resize objects with certain pinch gestures (e.g., one or more pinch gestures that mimic grabbing one or more edges of an object).
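As a purely illustrative Python sketch, and assuming (as elaborated in the following paragraphs) a ring device worn on each hand with a ranging estimate of the distance between them, a resizing event could scale an object by the ratio of the current inter-hand distance to the distance at the start of the gesture. The function name and example values below are assumptions and not part of the disclosure.

def resize_scale_from_ring_distance(initial_distance_m: float, current_distance_m: float) -> float:
    """Return a scale factor for a two-handed resizing gesture.

    The distance between the ring devices (e.g., estimated via ultrasonic ranging)
    when the gesture starts is taken as the reference; moving the hands apart
    scales the object up, and moving them together scales it down.
    """
    if initial_distance_m <= 0:
        raise ValueError("initial distance must be positive")
    return current_distance_m / initial_distance_m

# Example: hands start 0.30 m apart and end 0.45 m apart, scaling the object by 1.5x.
print(resize_scale_from_ring_distance(0.30, 0.45))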
In some cases, the ranging information (and/or other hand tracking information) associated with one or more ring devices can be used to implement resizing events based on certain gestures. For example, instead of finding and pinching the corners of an object, the user502can make a symbolic gesture with the user's hands for “resizing”. In some examples, the symbolic gesture can include a movement or gesture of one or more hands that mimics a motion used to resize an object, mimics a motion to define one or more boundaries/dimensions of an object, matches a preconfigured motion or gesture for resizing objects, etc. In some examples, a measurement of the distance between a ring device on each hand can then be used to affect the resizing of that object. The ring device150can also be used to measure distances in the environment even when the ring device150is out of a field-of-view (FOV) of the XR system100or when the lighting levels in the environment are too low for the XR system100to sufficiently detect the ring device150. For example, the user502can put down the hand506with the ring device150to trigger the ring device150to measure one or more distances in the physical world in an XR application, such as an XR measuring tape application for example. In some examples, the one or more distances to measure can be defined by a movement of the hand506with the ring device150. For example, the user502can move the hand506with the ring device150from a first position to a second position to define a distance to be measured and/or initiate the start and end of the distance measurement. In some cases, the ring device150can use sensors, such as one or more ultrasonic transmitters/transducers and/or microphones, to determine if the user's hands are closer together or farther apart, if any of the user's hands are close to one or more objects, etc. In some examples, the ring device can use a pressure sensor, such as a barometric air pressure sensor, to determine relative changes in the position of the hand506. The XR system100can interpret such changes and/or position into XR inputs, such as selection events. In some cases, the ring device150can use one or more sensors to obtain hand tracking information, which the XR system100can use to track the hands and/or estimate a location of the hands even if the hands are out of the FOV of the XR system100and/or the lighting is too low for the image sensor at the XR system100to detect the hands. For example, if the user's hands move from up to down while outside of the FOV of the XR system100, the XR system100can still obtain an estimate of such motion. In some cases, the XR system100can implement a synthetic animation representing such motion even though such motion occurred outside of the FOV of the XR system100. In some examples, the XR system100can determine XR inputs based on a combination of motion/position information from the ring device150and interactions with the ring device150. For example, the ring device150can send to the XR system100sensor measurements of a hand position and a rotation (e.g., angle, rotational velocity, and/or rotational acceleration) of the ring device150. The XR system100can use the combination of the hand position and the rotation (e.g., the angle, rotational velocity, and/or rotational acceleration) of the ring device150to enable vertical and horizontal adjustments of virtual objects and/or environments. To illustrate, the user can create a plane in space using the hand with the ring device150. 
The user can then adjust the height and depth of that plane by a spin or scroll action of the ring device150. The hand orientation reported by the ring device150(e.g., vertical and/or horizontal) can determine which component (e.g., height or depth) the user is modifying. As another example, the user can set up a virtual space, such as a virtual office, by "setting" anchor planes based on the hand position measured based on sensor data from the ring device150. The user can scroll through different content elements to position one or more content elements (e.g., a TV screen, weather widget, TV monitor, game, etc.) on one or more anchor planes. The user can also use the ring device150to provide content scrolling and multimedia controls (e.g., start, stop, pause, etc.). For example, if the user has a media application, such as a music application or a video application, the user can start, stop, pause, rewind, forward, and/or otherwise control a content playback on the media application without moving the user's hands. The user can instead control the media application by interacting with the ring device150, such as applying pressure to the ring device150, rotating the ring device150, etc. FIG.6is a flowchart illustrating an example process600for using a ring device (e.g., ring device150) to enhance user interface, input, and/or XR functionalities. At block602, the process600can include detecting, by a wearable device (e.g., ring device150), movement of the wearable device and/or a force applied to a surface(s) of the wearable device. In some examples, the wearable device can include a structure defining a receiving space or lumen (e.g., receiving space210) configured to receive a finger associated with a user. In some examples, the structure can include a first surface configured to contact the finger received via the receiving space. In some examples, the receiving space can include a longitudinal access opening for receiving the finger, and the first surface can be contoured or shaped to inhibit or prevent a movement of the finger along a longitudinal, lateral, and/or transverse direction. At block604, the process600can include determining, by the wearable device from one or more sensors on the wearable device, one or more measurements of the movement of the wearable device and/or the force applied to the surface(s) of the wearable device. In some examples, the one or more sensors are integrated into the structure associated with the wearable device. In some examples, the one or more measurements can include a rotation of at least a portion of the wearable device about a longitudinal axis of the receiving space associated with the wearable device. In some cases, the one or more sensors can be configured to detect the rotation of at least a portion of the wearable device about the longitudinal axis of the receiving space. In some cases, the one or more measurements can include a first rotational measurement (e.g., angular change, angular/rotational velocity, angular/rotational acceleration, etc.) associated with a first rotation of the structure about the longitudinal axis of the receiving space of the wearable device and/or a second rotational measurement (e.g., angular change, angular/rotational velocity, angular/rotational acceleration, etc.) associated with a second rotation of a portion of the structure about the longitudinal axis of the receiving space. The first rotation and the second rotation can be relative to the finger contacting the first surface of the wearable device.
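By way of a non-limiting illustration only, the following Python sketch shows one hypothetical form the measurements determined at blocks602and604could take before being sent to the electronic device, as described next at block606. The class, field, and function names below are assumptions for purposes of the sketch rather than a required data format.

from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class RingMeasurement:
    """Hypothetical record of measurements determined by the wearable device's sensors."""
    rotation_deg: float                  # angular change about the longitudinal axis of the receiving space
    rotational_velocity_dps: float       # rotational velocity, in degrees per second
    rotational_acceleration_dps2: float  # rotational acceleration, in degrees per second squared
    force_n: Optional[float] = None      # force applied to a surface of the wearable device, if detected
    orientation: Optional[str] = None    # e.g., "vertical" or "horizontal"

def encode_measurement(measurement: RingMeasurement) -> bytes:
    """Serialize a measurement record so it can be sent over a wireless transmitter."""
    return json.dumps(asdict(measurement)).encode("utf-8")

# Example: a 30-degree rotation with a light squeeze, ready to be transmitted.
payload = encode_measurement(RingMeasurement(30.0, 120.0, 15.0, force_n=1.2, orientation="vertical"))
print(payload)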
At block606, the process600can include sending, by the wearable device via a wireless transmitter, to an electronic device (e.g., XR system100), data associated with the one or more measurements. In some examples, the wearable device can send the one or more measurements of the movement to the electronic device. The one or more measurements can represent and/or correspond to an XR input at the electronic device. In some aspects, the process600can include sending, by the wearable device to the electronic device, an XR input associated with an XR application at the electronic device. In some examples, the XR input can be based on one or more measurements from the one or more sensors. For example, the XR input can include the first rotational acceleration and/or the second rotational acceleration. In some examples, the one or more measurements of the movement include one or more rotational measurements, and the one or more rotational measurements include at least one of a rotational angle, a rotational velocity, and/or a rotational acceleration. In some examples, the one or more sensors are configured to detect at least one of a touch signal corresponding to one or more fingers contacting a second surface of the structure, an orientation of the structure, and/or a position of the structure relative to one or more objects. In some cases, the data can include at least one of a magnitude of the touch signal, the orientation of the structure, the position of the structure relative to the one or more objects, and/or a distance between the structure and at least one of the electronic device directly or indirectly coupled to the wearable device and/or a different hand than a respective hand of the finger. In some aspects, the one or more sensors can be configured to detect a touch signal corresponding to one or more fingers contacting a different surface of the structure, an orientation of the structure, and/or a position of the structure relative to one or more objects. In some cases, the one or more measurements can include a magnitude of the touch signal, the orientation of the structure, the position of the structure relative to the one or more objects, and/or a distance between the structure and the electronic device and/or a different hand than a respective hand of the finger. In some examples, the one or more measurements correspond to an additional orientation of the respective hand of the finger, and the XR input is based on the additional orientation of the respective hand and at least one of the rotation and/or the orientation of the structure. In some examples, detecting the movement can include detecting a rotation of at least a portion of the structure about a longitudinal axis of the receiving space and measuring at least one of a first rotation of a first portion of the structure (e.g., upper surface224) about the longitudinal axis of the receiving space and a second rotation of a second portion of the structure (e.g., contact surface226) about the longitudinal axis of the receiving space. In some examples, the second rotation is in a direction opposite to the first rotation. In some aspects, the process600can include sending, by the wearable device to the electronic device, one or more additional measurements from the one or more sensors. In some cases, the one or more additional measurements correspond to an additional orientation of the respective hand of the finger. 
In some examples, the XR input is based on the additional orientation of the respective hand and at least one of the first rotation, the second rotation, and/or the orientation of the structure. In some cases, an XR input can include scrolling virtual content rendered by the electronic device, scaling an object rendered by the electronic device, rotating the object rendered by the electronic device, moving the object rendered by the electronic device, defining a virtual plane in an environment rendered by the electronic device, and/or placing a virtual object rendered by the electronic device in one or more virtual planes in the environment rendered by the electronic device. In some cases, the XR input can be based on a touch signal corresponding to one or more fingers contacting a second surface of the structure (e.g., second surface214, contact surface226), an orientation of the structure, rotation of the wearable device, a movement of a hand associated with the finger, and/or a position of the structure relative to one or more objects. In some examples, the XR input can be based on one or more properties associated with the one or more measurements. In some cases, the one or more properties can include a magnitude of rotation of the wearable device, a direction of the rotation, a velocity of the rotation, and/or a length of time of a pressure applied to one or more portions of the structure. The one or more properties can be identified by the one or more measurements. In some cases, the XR input can be based on one or more properties associated with the touch signal. The one or more properties can include a magnitude of pressure from the one or more fingers contacting the second surface of the structure, a motion associated with the one or more fingers when contacting the second surface of the structure, a direction of the motion, a length of time of contact between the one or more fingers and the second surface, and/or a pattern of contact of the second surface of the structure by the one or more fingers. The one or more properties can be identified by the one or more measurements. In some examples, the pattern of contact can include a sequence of contacts by the one or more fingers on the second surface. In some cases, the XR input can include modifying a virtual element along multiple dimensions in space. In some examples, the virtual element can include a virtual object rendered by the electronic device, a virtual plane in the environment rendered by the electronic device, and/or the environment rendered by the electronic device. In some examples, an adjustment of a first dimension of the multiple dimensions is defined by at least one of an angular change, a rotational velocity, and/or a rotational acceleration associated with a rotation of the wearable device. In some examples, an adjustment of a second dimension of the multiple dimensions is defined by one or more different measurements in the one or more measurements. In some cases, the one or more different measurements in the one or more measurements can include a touch signal corresponding to one or more fingers contacting a second surface of the structure, an orientation of the structure, and/or a position of the structure relative to one or more objects. In some cases, the one or more measurements can include motion measurements corresponding to a movement of a hand associated with the finger. In some examples, the XR input can correspond to a request to measure a distance in physical space. 
The distance can be defined by the movement of the hand. For example, the distance can be defined by a first position of the hand prior to or during the movement of the hand, and a second position of the hand after or during the movement of the hand. In some cases, the wearable device can reduce power consumption and resource usage at the electronic device. For example, the wearable device can offload certain operations such as hand tracking and/or other tracking operations from the electronic device, allowing the electronic device to reduce power consumption and resource usage such as sensor, camera, and/or compute resource usage. In some examples, when tracking operations are offloaded from the electronic device to the wearable device, the electronic device can turn off, or reduce a power mode of, one or more tracking resources such as cameras and/or other sensors that the electronic device would otherwise use to track the user's hands and/or other objects. In some examples, the wearable device can be equipped with various power saving features. For example, in some cases, the wearable device can save power by shutting down after an XR application on the electronic device has stopped and/or been terminated. As another example, the wearable device can remain off or in a lower power mode, and turn on or switch to higher power mode based on one or more user interactions/inputs. For example, the wearable device can remain off or in a lower power mode, and turn on or switch to higher power mode when the wearable device is rotated by a certain amount. In some cases, the wearable device can include a wearable ring. In some cases, the one or more sensors can include a position sensor, an accelerometer, a gyroscope, a magnetometer, a pressure sensor, an audio sensor, a touch sensor, and/or an inertial measurement unit. In some examples, the process600may be performed by one or more computing devices or apparatuses. In one illustrative example, the process600can be performed by the XR system100and/or the ring device150shown inFIG.1and/or one or more computing devices with the computing device architecture700shown inFIG.7. In some cases, such a computing device or apparatus may include a processor, microprocessor, microcomputer, or other component of a device that is configured to carry out the steps of the process600. In some examples, such computing device or apparatus may include one or more sensors configured to capture image data and/or other sensor measurements. For example, the computing device can include a smartphone, a head-mounted display, a mobile device, or other suitable device. In some examples, such computing device or apparatus may include a camera configured to capture one or more images or videos. In some cases, such computing device may include a display for displaying images. In some examples, the one or more sensors and/or camera are separate from the computing device, in which case the computing device receives the sensed data. Such computing device may further include a network interface configured to communicate data. The components of the computing device can be implemented in circuitry. 
For example, the components can include and/or can be implemented using electronic circuits or other electronic hardware, which can include one or more programmable electronic circuits (e.g., microprocessors, graphics processing units (GPUs), digital signal processors (DSPs), central processing units (CPUs), and/or other suitable electronic circuits), and/or can include and/or be implemented using computer software, firmware, or any combination thereof, to perform the various operations described herein. The computing device may further include a display (as an example of the output device or in addition to the output device), a network interface configured to communicate and/or receive the data, any combination thereof, and/or other component(s). The network interface may be configured to communicate and/or receive Internet Protocol (IP) based data or other type of data. The process600is illustrated as a logical flow diagram, the operations of which represent a sequence of operations that can be implemented in hardware, computer instructions, or a combination thereof. In the context of computer instructions, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes. Additionally, the process600may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof. As noted above, the code may be stored on a computer-readable or machine-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. The computer-readable or machine-readable storage medium may be non-transitory. FIG.7illustrates an example computing device architecture700of an example computing device which can implement various techniques described herein. For example, the computing device architecture700can implement at least some portions of the XR system100shown inFIG.1. The components of the computing device architecture700are shown in electrical communication with each other using a connection705, such as a bus. The example computing device architecture700includes a processing unit (CPU or processor)710and a computing device connection705that couples various computing device components including the computing device memory715, such as read only memory (ROM)720and random access memory (RAM)725, to the processor710. The computing device architecture700can include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of the processor710. The computing device architecture700can copy data from the memory715and/or the storage device730to the cache712for quick access by the processor710. In this way, the cache can provide a performance boost that avoids processor710delays while waiting for data. 
These and other modules can control or be configured to control the processor710to perform various actions. Other computing device memory715may be available for use as well. The memory715can include multiple different types of memory with different performance characteristics. The processor710can include any general purpose processor and a hardware or software service stored in storage device730and configured to control the processor710as well as a special-purpose processor where software instructions are incorporated into the processor design. The processor710may be a self-contained system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric. To enable user interaction with the computing device architecture700, an input device745can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device775can also be one or more of a number of output mechanisms known to those of skill in the art, such as a display, projector, television, speaker device. In some instances, multimodal computing devices can enable a user to provide multiple types of input to communicate with the computing device architecture700. The communication interface740can generally govern and manage the user input and computing device output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed. Storage device730is a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs)725, read only memory (ROM)720, and hybrids thereof. The storage device730can include software, code, firmware, etc., for controlling the processor710. Other hardware or software modules are contemplated. The storage device730can be connected to the computing device connection705. In one aspect, a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor710, connection705, output device775, and so forth, to carry out the function. The term “computer-readable medium” includes, but is not limited to, portable or non-portable storage devices, optical storage devices, and various other mediums capable of storing, containing, or carrying instruction(s) and/or data. A computer-readable medium may include a non-transitory medium in which data can be stored and that does not include carrier waves and/or transitory electronic signals propagating wirelessly or over wired connections. Examples of a non-transitory medium may include, but are not limited to, a magnetic disk or tape, optical storage media such as compact disk (CD) or digital versatile disk (DVD), flash memory, memory or memory devices. A computer-readable medium may have stored thereon code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. 
A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, or the like. In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se. Specific details are provided in the description above to provide a thorough understanding of the embodiments and examples provided herein. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. Additional components may be used other than those shown in the figures and/or described herein. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments. Individual embodiments may be described above as a process or method which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed, but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function. Processes and methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer-readable media. Such instructions can include, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or a processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on. 
Devices implementing processes and methods according to these disclosures can include hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof, and can take any of a variety of form factors. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks (e.g., a computer-program product) may be stored in a computer-readable or machine-readable medium. A processor(s) may perform the necessary tasks. Typical examples of form factors include laptops, smart phones, mobile phones, tablet devices or other small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example. The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are example means for providing the functions described in the disclosure. In the foregoing description, aspects of the application are described with reference to specific embodiments thereof, but those skilled in the art will recognize that the application is not limited thereto. Thus, while illustrative embodiments of the application have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art. Various features and aspects of the above-described application may be used individually or jointly. Further, embodiments can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive. For the purposes of illustration, methods were described in a particular order. It should be appreciated that in alternate embodiments, the methods may be performed in a different order than that described. One of ordinary skill will appreciate that the less than (“<”) and greater than (“>”) symbols or terminology used herein can be replaced with less than or equal to (“≤”) and greater than or equal to (“≥”) symbols, respectively, without departing from the scope of this description. Where components are described as being “configured to” perform certain operations, such configuration can be accomplished, for example, by designing electronic circuits or other hardware to perform the operation, by programming programmable electronic circuits (e.g., microprocessors, or other suitable electronic circuits) to perform the operation, or any combination thereof. The phrase “coupled to” refers to any component that is physically connected to another component either directly or indirectly, and/or any component that is in communication with another component (e.g., connected to the other component over a wired or wireless connection, and/or other suitable communication interface) either directly or indirectly. 
Claim language or other language in the disclosure reciting “at least one of” a set and/or “one or more” of a set indicates that one member of the set or multiple members of the set (in any combination) satisfy the claim. For example, claim language reciting “at least one of A and B” or “at least one of A or B” means A, B, or A and B. In another example, claim language reciting “at least one of A, B, and C” or “at least one of A, B, or C” means A, B, C, or A and B, or A and C, or B and C, or A and B and C. The language “at least one of” a set and/or “one or more” of a set does not limit the set to the items listed in the set. For example, claim language reciting “at least one of A and B” or “at least one of A or B” can mean A, B, or A and B, and can additionally include items not listed in the set of A and B. The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the examples disclosed herein may be implemented as electronic hardware, computer software, firmware, or combinations thereof. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. The techniques described herein may also be implemented in electronic hardware, computer software, firmware, or any combination thereof. Such techniques may be implemented in any of a variety of devices such as general purpose computers, wireless communication device handsets, or integrated circuit devices having multiple uses including application in wireless communication device handsets and other devices. Any features described as modules or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a computer-readable data storage medium comprising program code including instructions that, when executed, perform one or more of the methods, algorithms, and/or operations described above. The computer-readable data storage medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may comprise memory or data storage media, such as random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates program code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer, such as propagated signals or waves.
The program code may be executed by a processor, which may include one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Such a processor may be configured to perform any of the techniques described in this disclosure. A general purpose processor may be a microprocessor; but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Accordingly, the term “processor,” as used herein, may refer to any of the foregoing structure, any combination of the foregoing structure, or any other structure or apparatus suitable for implementation of the techniques described herein. Illustrative Examples of the Disclosure Include: Aspect 1. A wearable device comprising: a structure defining a receiving space configured to receive a finger associated with a user, the structure comprising a first surface configured to contact the finger received via the receiving space; one or more sensors integrated into the structure, the one or more sensors being configured to detect a rotation of at least a portion of the structure about a longitudinal axis of the receiving space; and a wireless transmitter configured to send, to an electronic device, data based on the detected rotation. Aspect 2. The wearable device of Aspect 1, wherein the data comprises an extended reality (XR) input associated with an XR application at the electronic device, and wherein to send the data, the wearable device is configured to send, via the wireless transmitter and to the electronic device, the XR input. Aspect 3. The wearable device of any of Aspects 1 to 2, wherein the data comprises one or more rotational measurements, and wherein the one or more rotational measurements comprise at least one of a rotational angle, a rotational velocity, and a rotational acceleration. Aspect 4. The wearable device of Aspect 2, wherein the one or more sensors are configured to detect at least one of a touch signal corresponding to one or more fingers contacting a second surface of the structure, an orientation of the structure, and a position of the structure relative to one or more objects, and wherein the data comprises at least one of a magnitude of the touch signal, the orientation of the structure, the position of the structure relative to the one or more objects, and a distance between the structure and at least one of the electronic device directly or indirectly coupled to the wearable device and a different hand than a respective hand of the finger. Aspect 5. The wearable device of any of Aspects 2 or 4, wherein the wearable device is configured to: send, to the electronic device via the wireless transmitter, one or more measurements from the one or more sensors, the one or more measurements corresponding to an additional orientation of the respective hand of the finger, wherein the XR input is based on the additional orientation of the respective hand and at least one of the rotation and the orientation of the structure. Aspect 6.
The wearable device of any of Aspects 1 to 5, wherein, to detect the rotation of at least a portion of the structure about a longitudinal axis of the receiving space, the one or more sensors are configured to measure at least one of a first rotation of a first portion of the structure about the longitudinal axis of the receiving space and a second rotation of a second portion of the structure about the longitudinal axis of the receiving space. Aspect 7. The wearable device of Aspect 6, wherein the second rotation is in a direction opposite to the first rotation. Aspect 8. The wearable device of any of Aspects 1 to 7, wherein the data corresponds to an XR input to an XR application at the electronic device, and wherein the XR input comprises at least one of scrolling virtual content rendered by the electronic device, scaling an object rendered by the electronic device, rotating the object rendered by the electronic device, moving the object rendered by the electronic device, defining a virtual plane in an environment rendered by the electronic device, and placing a virtual object rendered by the electronic device in one or more virtual planes in the environment rendered by the electronic device. Aspect 9. The wearable device of any of Aspects 1 to 8, wherein the data corresponds to an XR input to an XR application at the electronic device, and wherein the data comprises one or more measurements from the one or more sensors, the one or more measurements comprising at least one of a touch signal corresponding to one or more fingers contacting a second surface of the structure, an orientation of the structure, the rotation, a movement of a hand associated with the finger, and a position of the structure relative to one or more objects. Aspect 10. The wearable device of any of Aspects 8 or 9, wherein the XR input is based on one or more properties associated with the one or more measurements in the data, the one or more properties comprising at least one of a magnitude of the rotation, a direction of the rotation, a velocity of the rotation, and a length of time of a pressure applied to one or more portions of the structure, the one or more properties being identified by the one or more measurements. Aspect 11. The wearable device of any of Aspects 8 to 10, wherein the XR input is based on one or more properties associated with the touch signal, the one or more properties comprising at least one of a magnitude of pressure from the one or more fingers contacting the second surface of the structure, a motion associated with the one or more fingers when contacting the second surface of the structure, a direction of the motion, a length of time of contact between the one or more fingers and the second surface, and a pattern of contact of the second surface of the structure by the one or more fingers, the one or more properties being identified by the one or more measurements. Aspect 12. The wearable device of any of Aspects 1 to 11, wherein the XR input comprises modifying a virtual element along multiple dimensions in space, the virtual element comprising at least one of a virtual object rendered by the electronic device, a virtual plane in an environment rendered by the electronic device, and the environment rendered by the electronic device. Aspect 13. 
The wearable device of Aspect 12, wherein an adjustment of a first dimension of the multiple dimensions is defined by at least one of an angular change, a rotational velocity, and a rotational acceleration associated with the rotation, wherein an adjustment of a second dimension of the multiple dimensions is defined by the one or more measurements, and wherein the one or more measurements comprise at least one of a touch signal corresponding to one or more fingers contacting a second surface of the structure, an orientation of the structure, and a position of the structure relative to one or more objects. Aspect 14. The wearable device of any of Aspects 8 to 13, wherein the one or more measurements comprise motion measurements corresponding to the movement of the hand associated with the finger, and wherein the XR input corresponds to a request to measure a distance in physical space, the distance being defined by the movement of the hand. Aspect 15. The wearable device of any of Aspects 1 to 14, wherein the wearable device comprises a wearable ring. Aspect 16. The wearable device of any of Aspects 1 to 15, wherein the wearable device comprises a wearable ring including an outer ring and an inner ring, the inner ring defines the receiving space, and the one or more sensors being configured to detect at least one of an angular change, a rotational velocity, and a rotational acceleration of the outer ring about the longitudinal axis of the receiving space. Aspect 17. The wearable device of any of Aspects 1 to 16, wherein the wearable device is configured to be turned on from an off state or switched to higher power mode from a lower power mode when the at least a portion of the structure is rotated by a certain amount. Aspect 18. The wearable device of any of Aspects 1 to 17, wherein the electronic device comprises a mobile device. Aspect 19. The wearable device of Aspect 18, wherein the mobile device comprises one of a head-mounted display, a mobile phone, a portable computer, or a smart watch. Aspect 20. The wearable device of any of Aspects 1 to 19, wherein the one or more sensors comprise at least one of a position sensor, an accelerometer, a gyroscope, a pressure sensor, an audio sensor, a touch sensor, and a magnetometer. Aspect 21. A method comprising: detecting, via one or more sensors on a wearable device, a rotation of at least a portion of the wearable device about a longitudinal axis of a receiving space associated with the wearable device, the wearable device comprising a structure defining the receiving space, wherein the receiving space is configured to receive a finger associated with a user, and wherein the structure comprises a first surface configured to contact the finger received via the receiving space; and sending, to an electronic device via a wireless transmitter of the wearable device, data based on the detected rotation. Aspect 22. The method of Aspect 21, wherein the data comprises an extended reality (XR) input associated with an XR application at the electronic device, and wherein to send the data, the wearable device is configured to send, via the wireless transmitter and to the electronic device, the XR input. Aspect 23. The method of any of Aspects 21 to 22, wherein the data comprises one or more rotational measurements, and wherein the one or more rotational measurements comprise at least one of a rotational angle, a rotational velocity, and a rotational acceleration. Aspect 24.
The method of any of Aspects 21 to 23, wherein the one or more sensors are configured to detect at least one of a touch signal corresponding to one or more fingers contacting a second surface of the structure, an orientation of the structure, and a position of the structure relative to one or more objects, and wherein the data comprises at least one of a magnitude of the touch signal, the orientation of the structure, the position of the structure relative to the one or more objects, and a distance between the structure and at least one of the electronic device directly or indirectly coupled to the wearable device and a different hand than a respective hand of the finger. Aspect 25. The method of any of Aspects 22 to 24, further comprising: sending, to the electronic device via the wireless transmitter, one or more measurements from the one or more sensors, the one or more measurements corresponding to an additional orientation of the respective hand of the finger, wherein the XR input is based on the additional orientation of the respective hand and at least one of the rotation and the orientation of the structure. Aspect 26. The method of any of Aspects 21 to 25, wherein detecting the rotation of at least a portion of the structure about a longitudinal axis of the receiving space further comprises measuring at least one of a first rotation of a first portion of the structure about the longitudinal axis of the receiving space and a second rotation of a second portion of the structure about the longitudinal axis of the receiving space. Aspect 27. The method of Aspect 26, wherein the second rotation is in a direction opposite to the first rotation. Aspect 28. The method of any of Aspects 21 to 27, wherein the data corresponds to an XR input to an XR application at the electronic device, and wherein the XR input comprises at least one of scrolling virtual content rendered by the electronic device, scaling an object rendered by the electronic device, rotating the object rendered by the electronic device, moving the object rendered by the electronic device, defining a virtual plane in an environment rendered by the electronic device, and placing a virtual object rendered by the electronic device in one or more virtual planes in the environment rendered by the electronic device. Aspect 29. The method of any of Aspects 21 to 28, wherein the data corresponds to an XR input to an XR application at the electronic device, and wherein the data comprises one or more measurements from the one or more sensors, the one or more measurements comprising at least one of a touch signal corresponding to one or more fingers contacting a second surface of the structure, an orientation of the structure, the rotation, a movement of a hand associated with the finger, and a position of the structure relative to one or more objects. Aspect 30. The method of any of Aspects 28 or 29, wherein the XR input is based on one or more properties associated with the one or more measurements in the data, the one or more properties comprising at least one of a magnitude of the rotation, a direction of the rotation, a velocity of the rotation, and a length of time of a pressure applied to one or more portions of the structure, the one or more properties being identified by the one or more measurements. Aspect 31. 
The method of any of Aspects 28 to 30, wherein the XR input is based on one or more properties associated with the touch signal, the one or more properties comprising at least one of a magnitude of pressure from the one or more fingers contacting the second surface of the structure, a motion associated with the one or more fingers when contacting the second surface of the structure, a direction of the motion, a length of time of contact between the one or more fingers and the second surface, and a pattern of contact of the second surface of the structure by the one or more fingers, the one or more properties being identified by the one or more measurements. Aspect 32. The method of any of Aspects 28 to 31, wherein the XR input comprises modifying a virtual element along multiple dimensions in space, the virtual element comprising at least one of a virtual object rendered by the electronic device, a virtual plane in an environment rendered by the electronic device, and the environment rendered by the electronic device. Aspect 33. The method of Aspect 32, wherein an adjustment of a first dimension of the multiple dimensions is defined by at least one of an angular change, a rotational velocity, and a rotational acceleration associated with the rotation, wherein an adjustment of a second dimension of the multiple dimensions is defined by the one or more measurements, and wherein the one or more measurements comprise at least one of a touch signal corresponding to one or more fingers contacting a second surface of the structure, an orientation of the structure, and a position of the structure relative to one or more objects. Aspect 34. The method of any of Aspects 28 to 33, wherein the one or more measurements comprise motion measurements corresponding to the movement of the hand associated with the finger, and wherein the XR input corresponds to a request to measure a distance in physical space, the distance being defined by the movement of the hand. Aspect 35. The method of any of Aspects 21 to 34, wherein the wearable device comprises a wearable ring. Aspect 36. The method of any of Aspects 21 to 35, wherein the wearable device comprises a wearable ring including an outer ring and an inner ring, the inner ring defines the receiving space, and the one or more sensors being configured to detect at least one of an angular change, a rotational velocity, and a rotational acceleration of the outer ring about the longitudinal axis of the receiving space. Aspect 37. The method of any of Aspects 21 to 36, further comprising: adjusting a state of the wearable device when the at least a portion of the structure is rotated by a certain amount, wherein adjusting the state comprises turning on one or more components of the electronic device from an off state or switching the one or more components to higher power mode from a lower power mode. Aspect 38. The method of any of Aspects 21 to 37, wherein the electronic device comprises a mobile device. Aspect 39. The method of Aspect 38, wherein the mobile device comprises one of a head-mounted display, a mobile phone, a portable computer, or a smart watch. Aspect 40. The method of any of Aspects 21 to 39, wherein the one or more sensors comprise at least one of a position sensor, an accelerometer, a gyroscope, a pressure sensor, an audio sensor, a touch sensor, and a magnetometer. Aspect 41. 
A non-transitory computer-readable medium having stored thereon instructions which, when executed by one or more processing devices, cause the one or more processing devices to perform a method according to any of Aspects 21 to 40. Aspect 42. A wearable device comprising means for performing a method according to any of Aspects 21 to 40. Aspect 43. An apparatus comprising: memory; and one or more processors coupled to the memory, the one or more processors being configured to: receive, from a wearable device, data corresponding to a rotation of at least a portion of the wearable device about a longitudinal axis of a receiving space associated with the wearable device, the wearable device comprising a structure defining the receiving space; determine an input based on the data, the input comprising at least one of a user interface input associated with a user interface at the apparatus and an extended reality (XR) input associated with an XR application at the apparatus; and based on the input, control at least one of the user interface and an operation of the XR application. Aspect 44. The apparatus of Aspect 43, wherein the receiving space is configured to receive a finger associated with a user, and wherein the structure comprises a surface configured to contact the finger received via the receiving space. Aspect 45. The apparatus of Aspect 43 or 44, wherein the wearable device comprises a ring. Aspect 46. The apparatus of any of Aspects 43 to 45, wherein the data comprises one or more rotational measurements, and wherein the one or more rotational measurements comprise at least one of a rotational angle, a rotational velocity, and a rotational acceleration. Aspect 47. The apparatus of any of Aspects 43 to 46, wherein the data corresponds to a touch signal associated with one or more fingers contacting a surface of the wearable device, an orientation of the wearable device, and a position of the wearable device relative to one or more objects, and wherein the data comprises at least one of a magnitude of the touch signal, the orientation of the wearable device, the position of the wearable device relative to the one or more objects, and a distance between the wearable device and at least one of the apparatus and a different hand than a respective hand of the finger. Aspect 48. The apparatus of Aspect 47, wherein the data comprises one or more measurements from one or more sensors on the wearable device, the one or more measurements corresponding to an additional orientation of the respective hand of the finger, wherein the XR input is based on the additional orientation of the respective hand and at least one of the rotation and the orientation of the structure. Aspect 49. The apparatus of any of Aspects 43 to 48, wherein the rotation of at least a portion of the wearable device comprises at least one of a first rotation of a first portion of the wearable device about the longitudinal axis of the receiving space and a second rotation of a second portion of the wearable device about the longitudinal axis of the receiving space. Aspect 50. The apparatus of Aspect 49, wherein the second rotation is in a direction opposite to the first rotation. Aspect 51. 
The apparatus of any of Aspects 43 to 50, wherein the XR input comprises at least one of scrolling virtual content rendered by the apparatus, scaling an object rendered by the apparatus, rotating the object rendered by the apparatus, moving the object rendered by the apparatus, defining a virtual plane in an environment rendered by the apparatus, and placing a virtual object rendered by the apparatus in one or more virtual planes in the environment rendered by the apparatus. Aspect 52. The apparatus of any of Aspects 43 to 51, wherein, to control at least one of the user interface and the operation of the XR application, the one or more processors are configured to: scroll virtual content rendered by the apparatus, scale an object rendered by the apparatus, rotate the object rendered by the apparatus, move the object rendered by the apparatus, define a virtual plane in an environment rendered by the apparatus, and/or place a virtual object rendered by the apparatus in one or more virtual planes in the environment rendered by the apparatus. Aspect 53. The apparatus of any of Aspects 43 to 52, wherein the data comprises one or more measurements from the one or more sensors, the one or more measurements comprising at least one of a touch signal corresponding to one or more fingers contacting a surface of the wearable device, an orientation of the wearable device, the rotation, a movement of a hand associated with the finger, and a position of the wearable device relative to one or more objects. Aspect 54. The apparatus of any of Aspects 43 to 53, wherein the XR input is based on one or more properties associated with the one or more measurements in the data, the one or more properties comprising at least one of a magnitude of the rotation, a direction of the rotation, a velocity of the rotation, and a length of time of a pressure applied to one or more portions of the wearable device, the one or more properties being identified by the one or more measurements. Aspect 55. The apparatus of any of Aspects 43 to 54, wherein the XR input is based on one or more properties associated with a touch signal, the one or more properties comprising at least one of a magnitude of pressure from the one or more fingers contacting a surface of the wearable device, a motion associated with the one or more fingers when contacting the surface of the wearable device, a direction of the motion, a length of time of contact between the one or more fingers and the surface, and a pattern of contact of the surface of the wearable device by the one or more fingers, the one or more properties being identified by the one or more measurements. Aspect 56. The apparatus of any of Aspects 43 to 55, wherein the one or more processors are configured to: modify, based on the XR input, a virtual element along multiple dimensions in space, the virtual element comprising at least one of a virtual object rendered by the apparatus, a virtual plane in an environment rendered by the apparatus, and the environment rendered by the apparatus. Aspect 57.
The apparatus of Aspect 56, wherein an adjustment of a first dimension of the multiple dimensions is defined by at least one of an angular change, a rotational velocity, and a rotational acceleration associated with the rotation, wherein an adjustment of a second dimension of the multiple dimensions is defined by the one or more measurements, and wherein the one or more measurements comprise at least one of a touch signal corresponding to one or more fingers contacting a second surface of the wearable device, an orientation of the wearable device, and a position of the wearable device relative to one or more objects. Aspect 58. The apparatus of any of Aspects 43 to 57, wherein the data comprises motion measurements corresponding to movement of a hand associated with the finger, and wherein the XR input corresponds to a request to measure a distance in physical space, the distance being defined by the movement of the hand. Aspect 59. The apparatus of Aspect 58, wherein the one or more processors are configured to: measure the distance in physical space based on the XR input. Aspect 60. The apparatus of any of Aspects 43 to 59, wherein the wearable device comprises a wearable ring including one or more sensors, an outer ring and an inner ring, the inner ring defines the receiving space, and the one or more sensors being configured to detect at least one of an angular change, a rotational velocity, and a rotational acceleration of the outer ring about the longitudinal axis of the receiving space. Aspect 61. The apparatus of any of Aspects 43 to 60, wherein the one or more processors are configured to: based on the data, turn on one or more components of the apparatus from an off state or switch the one or more components to higher power mode from a lower power mode. Aspect 62. The apparatus of any of Aspects 43 to 61, wherein the apparatus comprises a mobile device. Aspect 63. The apparatus of Aspect 62, wherein the mobile device comprises one of a head-mounted display, a mobile phone, a portable computer, or a smart watch. Aspect 64. A method comprising: receiving, by an electronic device and from a wearable device, data corresponding to a rotation of at least a portion of the wearable device about a longitudinal axis of a receiving space associated with the wearable device, the wearable device comprising a structure defining the receiving space; determining an input based on the data, the input comprising at least one of a user interface input associated with a user interface at the electronic device and an extended reality (XR) input associated with an XR application at the electronic device; and based on the input, controlling at least one of the user interface and an operation of the XR application. Aspect 65. The method of Aspect 64, wherein the receiving space is configured to receive a finger associated with a user, and wherein the structure comprises a surface configured to contact the finger received via the receiving space. Aspect 66. The method of Aspect 64 or 65, wherein the wearable device comprises a ring. Aspect 67. The method of any of Aspects 64 to 66, wherein the data comprises one or more rotational measurements, and wherein the one or more rotational measurements comprise at least one of a rotational angle, a rotational velocity, and a rotational acceleration. Aspect 68.
The method of any of Aspects 64 to 67, wherein the data corresponds to a touch signal associated with one or more fingers contacting a surface of the wearable device, an orientation of the wearable device, and a position of the wearable device relative to one or more objects, and wherein the data comprises at least one of a magnitude of the touch signal, the orientation of the wearable device, the position of the wearable device relative to the one or more objects, and a distance between the wearable device and at least one of the electronic device and a different hand than a respective hand of the finger. Aspect 69. The method of Aspect 68, wherein the data comprises one or more measurements from one or more sensors on the wearable device, the one or more measurements corresponding to an additional orientation of the respective hand of the finger, wherein the XR input is based on the additional orientation of the respective hand and at least one of the rotation and the orientation of the structure. Aspect 70. The method of any of Aspects 64 to 69, wherein the rotation of at least a portion of the wearable device comprises at least one of a first rotation of a first portion of the wearable device about the longitudinal axis of the receiving space and a second rotation of a second portion of the wearable device about the longitudinal axis of the receiving space. Aspect 71. The method of Aspect 70, wherein the second rotation is in a direction opposite to the first rotation. Aspect 72. The method of any of Aspects 64 to 71, wherein the XR input comprises at least one of scrolling virtual content rendered by the electronic device, scaling an object rendered by the electronic device, rotating the object rendered by the electronic device, moving the object rendered by the electronic device, defining a virtual plane in an environment rendered by the electronic device, and placing a virtual object rendered by the electronic device in one or more virtual planes in the environment rendered by the electronic device. Aspect 73. The method of any of Aspects 64 to 72, wherein controlling at least one of the user interface and the operation of the XR application comprises scrolling virtual content rendered by the electronic device, scaling an object rendered by the electronic device, rotating the object rendered by the electronic device, moving the object rendered by the electronic device, defining a virtual plane in an environment rendered by the electronic device, and/or placing a virtual object rendered by the electronic device in one or more virtual planes in the environment rendered by the electronic device. Aspect 74. The method of any of Aspects 64 to 73, wherein the data comprises one or more measurements from the one or more sensors, the one or more measurements comprising at least one of a touch signal corresponding to one or more fingers contacting a surface of the wearable device, an orientation of the wearable device, the rotation, a movement of a hand associated with the finger, and a position of the wearable device relative to one or more objects. Aspect 75. 
The method of any of Aspects 64 to 74, wherein the XR input is based on one or more properties associated with the one or more measurements in the data, the one or more properties comprising at least one of a magnitude of the rotation, a direction of the rotation, a velocity of the rotation, and a length of time of a pressure applied to one or more portions of the wearable device, the one or more properties being identified by the one or more measurements. Aspect 76. The method of any of Aspects 64 to 75, wherein the XR input is based on one or more properties associated with a touch signal, the one or more properties comprising at least one of a magnitude of pressure from the one or more fingers contacting a surface of the wearable device, a motion associated with the one or more fingers when contacting the surface of the wearable device, a direction of the motion, a length of time of contact between the one or more fingers and the surface, and a pattern of contact of the surface of the wearable device by the one or more fingers, the one or more properties being identified by the one or more measurements. Aspect 77. The method of any of Aspects 64 to 76, further comprising modifying, based on the XR input, a virtual element along multiple dimensions in space, the virtual element comprising at least one of a virtual object rendered by the electronic device, a virtual plane in an environment rendered by the electronic device, and the environment rendered by the electronic device. Aspect 78. The method of Aspect 77, wherein an adjustment of a first dimension of the multiple dimensions is defined by at least one of an angular change, a rotational velocity, and a rotational acceleration associated with the rotation, wherein an adjustment of a second dimension of the multiple dimensions is defined by the one or more measurements, and wherein the one or more measurements comprise at least one of a touch signal corresponding to one or more fingers contacting a second surface of the wearable device, an orientation of the wearable device, and a position of the wearable device relative to one or more objects. Aspect 79. The method of any of Aspects 64 to 78, wherein the data comprises motion measurements corresponding to movement of a hand associated with the finger, and wherein the XR input corresponds to a request to measure a distance in physical space, the distance being defined by the movement of the hand. Aspect 80. The method of Aspect 79, further comprising measuring the distance in physical space based on the XR input. Aspect 81. The method of any of Aspects 64 to 80, wherein the wearable device comprises a wearable ring including one or more sensors, an outer ring and an inner ring, the inner ring defines the receiving space, and the one or more sensors being configured to detect at least one of an angular change, a rotational velocity, and a rotational acceleration of the outer ring about the longitudinal axis of the receiving space. Aspect 82. The method of any of Aspects 64 to 81, further comprising, based on the data, turning on one or more components of the electronic device from an off state or switching the one or more components to higher power mode from a lower power mode. Aspect 83. The method of any of Aspects 64 to 82, wherein the electronic device comprises a mobile device. Aspect 84. The method of Aspect 83, wherein the mobile device comprises one of a head-mounted display, a mobile phone, a portable computer, or a smart watch. Aspect 85. 
An apparatus comprising means for performing a method according to any of Aspects 64 to 84. Aspect 86. A non-transitory computer-readable medium having stored thereon instructions which, when executed by one or more processors, cause the one or more processors to perform a method according to any of Aspects 64 to 84. | 136,826 |
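Purely as an illustrative sketch of the kind of rotation-to-XR-input mapping recited in the aspects above (for example, Aspects 28 to 33), the following Python fragment shows how rotation measurements reported by such a ring might be translated into a scroll or scale input. RingSample, the 0.5 pressure threshold, and the scaling factors are assumptions for illustration only and are not part of the aspects.

# Hypothetical sketch only: maps one rotation sample from a ring-style wearable to an
# XR scroll or scale input. All names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RingSample:
    angle_deg: float       # angular change about the longitudinal axis of the receiving space
    velocity_dps: float    # rotational velocity in degrees per second
    touch_pressure: float  # pressure of a finger contacting the outer surface, 0.0 to 1.0

def map_rotation_to_xr_input(sample: RingSample) -> dict:
    """Return a simple XR input event derived from one rotation sample."""
    if sample.touch_pressure > 0.5:
        # Rotation while the outer surface is touched: interpret as scaling an object.
        return {"type": "scale", "factor": 1.0 + sample.angle_deg / 360.0}
    # Otherwise interpret the rotation as scrolling, weighted by rotational velocity.
    return {"type": "scroll", "amount": sample.angle_deg * (1.0 + abs(sample.velocity_dps) / 180.0)}

print(map_rotation_to_xr_input(RingSample(angle_deg=15.0, velocity_dps=90.0, touch_pressure=0.0)))
print(map_rotation_to_xr_input(RingSample(angle_deg=15.0, velocity_dps=90.0, touch_pressure=0.9)))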
11861066 | DETAILED DESCRIPTION Hereinafter, some example embodiments will be described in detail with reference to the accompanying drawings. The following detailed structural or functional description of example embodiments is provided as an example only and various alterations and modifications may be made to the example embodiments. Accordingly, the example embodiments are not construed as being limited to the disclosure and should be understood to include all changes, equivalents, and replacements within the technical scope of the disclosure. The terminology used herein is for describing various example embodiments only, and is not to be used to limit the disclosure. The singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises/comprising” and/or “includes/including” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. Terms, such as first, second, and the like, may be used herein to describe components. Each of these terminologies is not used to define an essence, order or sequence of a corresponding component but used merely to distinguish the corresponding component from other component(s). For example, a first component may be referred to as a second component, and similarly the second component may also be referred to as the first component, without departing from the scope of the disclosure. Unless otherwise defined, all terms, including technical and scientific terms, used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. Terms, such as those defined in commonly used dictionaries, are to be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein. Regarding the reference numerals assigned to the elements in the drawings, it should be noted that the same elements will be designated by the same reference numerals, wherever possible, even though they are shown in different drawings. Also, in the description of embodiments, detailed description of well-known related structures or functions will be omitted when it is deemed that such description will cause ambiguous interpretation of the present disclosure. Hereinafter, the example embodiments are described with reference to the accompanying drawings. FIG.1is a diagram illustrating an example of a computer system100according to example embodiments. Referring toFIG.1, the computer system100according to example embodiments may provide a tactile interface for a real-time two-dimensional (2D) tactile input/output interaction of a user10, for example, a visually impaired person. To this end, a first body surface12and a second body surface13may be defined in the user10. The first body surface12and the second body surface13may be present on the same body portion or may be present on different body portions, respectively. The first body surface12and the second body surface13may match and may be separate from each other without being matched. 
In some example embodiments, the first body surface12and the second body surface13may be separate from each other on a skin surface that encompasses one body portion and thereby defined, and may be provided to face each other based on a plane present in the middle therebetween. In one example embodiment, the first body surface12may be a ventral surface of one body portion and the second body surface13may be a dorsal surface corresponding to the ventral surface on the corresponding body portion. For example, the first body surface12may be a palmar surface of a hand and the second body surface13may be a dorsal surface of the hand. As another example, the first body surface12may be a sole surface of a foot and the second body surface13may be a top surface of the foot. As another example, the first body surface12may be a dorsal surface of one body portion and the second body surface13may be a ventral surface corresponding to the dorsal surface in a corresponding body portion. For example, the first body surface12may be a back surface and the second body surface13may be an abdominal surface. Here, the computer system100may generate a tactile output for the first body surface12of the user10and may sense a tactile input for the second body surface13of the user10. According to example embodiments, the computer system100may include an electronic device110, a tactile output module120, and a tactile input module130. In some example embodiments, at least two of components of the computer system100may be implemented as a single integrated circuitry. That is, components of the computer system100may be implemented as a single apparatus or may be implemented in a plurality of apparatuses in a distributed manner. In some example embodiments, at least one another component may be added to the computer system100. The electronic device110may manage 2D visual information. The electronic device110may control the 2D visual information. Here, the electronic device110may generate a control signal for the 2D visual information based on an input from the user10. For example, the electronic device110may include at least one of a smartphone, a mobile phone, a navigation device, a computer, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a tablet PC, a game console, a wearable device, an Internet of things (IoT) device, a home appliance, a medical device, and a robot. The tactile output module120may generate a tactile output for the first body surface12of the user10. For example, the tactile output may be a tactile stimulation and the tactile stimulation may include at least one of an electrical stimulation, a vibration stimulation, and a pressure stimulation such as a stabbing stimulation. Here, the tactile output module120may generate the tactile output for the first body surface12based on the 2D visual information. To this end, the tactile output module120may be in contact with the first body surface12. According to an example embodiment, the tactile output module120may be implemented in a form of a board or a sheet. According to another example embodiment, the tactile output module120may be implemented in a form of a fiber that surrounds a body portion having the first body surface12. The tactile input module130may sense a tactile input for the second body surface13of the user10. Here, the tactile input module130may sense the tactile input based on at least one of a touch input and a hover input from the user10for the second body surface13.
The tactile input module130may provide tactile information about the tactile input to the electronic device110. Therefore, the electronic device110may generate a control signal for the 2D visual information based on the tactile input. To this end, the tactile input module130may be in contact with at least a portion of the second body surface13or may be separate from the second body surface13. For example, when the first body surface12is a palmar surface of a hand and the second body surface13is a dorsal surface of the hand, the tactile output module120and the tactile input module130may be implemented in a form of a glove. As another example, when the first body surface12is a sole surface of a foot and the second body surface13is a top surface of the foot, the tactile output module120and the tactile input module130may be implemented in a form of a sock. As another example, when the first body surface12is a back surface and the second body surface13is an abdominal surface, the tactile output module120and the tactile input module130may be implemented in a form of a top or an abdominal binder. FIG.2illustrates an example of the electronic device110of the computer system100according to example embodiments. Referring toFIG.2, the electronic device110according to example embodiments may include at least one of an input module210, an output module220, a connecting terminal230, a communication module240, a memory250, and a processor260. In some example embodiments, at least one of components of the electronic device110may be omitted and at least one another component may be added to the electronic device110. In some example embodiments, at least two of components of the computer system100may be implemented as a single integrated circuitry. The input module210may input a signal to be used for at least one component of the electronic device110. The input module210may include at least one of an input device configured to allow the user10to directly input a signal to the electronic device110, and a sensor device configured to sense an ambient change and to generate a signal. For example, the input device may include at least one of a microphone, a mouse, and a keyboard. In an example embodiment, the input device may include at least one of a touch circuitry configured to sense a touch and a sensor circuitry configured to measure strength of a force generated by the touch. The output module220may output information to an outside of the electronic device110. The output module220may include at least one of a display device configured to output 2D visual information and an audio output device configured to output auditory information as an audio signal. For example, the display device may include at least one of a display, a hologram device, and a projector. For example, the display device may be implemented as a touchscreen through assembly to at least one of the touch circuitry and the sensor circuitry of the input module210. For example, the audio output device may include at least one of a speaker and a receiver. The connecting terminal230may be provided for a physical connection between the electronic device110and an external device. According to an example embodiment, the connecting terminal230may include a connector. For example, the connecting terminal230may include a high-definition multimedia interface (HDMI) connector, a universal serial bus (USB) connector, a secure digital (SD) card connector, and an audio connector. Here, the external device may include another electronic device.
In some example embodiments, the other electronic device may include at least one of the tactile output module120and the tactile input module130. The communication module240may communicate with the external device in the electronic device110. The communication module240may establish a communication channel between the electronic device110and the external device and may communicate with the external device through the communication channel. Here, the external device may include at least one of a satellite, a base station, a server, and another electronic device. In some example embodiments, the other electronic device may include at least one of the tactile output module120and the tactile input module130. The communication module240may include at least one of a wired communication module and a wireless communication module. The wired communication module may be connected to the external device in a wired manner through the connecting terminal230and may communicate with the external device in the wired manner. The wireless communication module may include at least one of a near field communication module and a far field communication module. The near field communication module may communicate with the external device through a near field communication scheme. For example, the near field communication scheme may include at least one of Bluetooth, wireless fidelity (WiFi) direct, and infrared data association (IrDA). Here, the far field communication module may communicate with the external device over a network. For example, the network may include at least one of a cellular network, the Internet, and a computer network such as a local area network (LAN) and a wide area network (WAN). The memory250may store a variety of data used by at least one component of the electronic device110. For example, the memory250may include at least one of a volatile memory and a nonvolatile memory. The data may include at least one program and input data or output data related thereto. A program may be stored as software including at least one instruction in the memory250and may include at least one of an operating system (OS), middleware, and an application. The processor260may control at least one component of the electronic device110by executing the program of the memory250. Through this, the processor260may perform data processing or an operation. Here, the processor260may execute the instruction stored in the memory250. According to example embodiments, the processor260may manage 2D visual information. Here, the 2D visual information may include at least one object provided on a predetermined visual plane. For example, the visual plane may represent a background screen and the object may include at least one of an icon, a widget, an item, a character, a button, a slider, and a pop-up window as a screen element provided on the background screen. The processor260may provide the 2D visual information to the tactile output module120. To this end, the processor260may generate the 2D visual information at a preset resolution. For example, the processor260may provide the 2D visual information to the tactile output module120while displaying the 2D visual information on the display device of the output module220. According to example embodiments, the processor260may control 2D visual information. That is, the processor260may control the 2D visual information based on an input from the user10.
Here, the processor260may generate a control signal for the 2D visual information based on a tactile input sensed by the tactile input module130. For example, the processor260may receive tactile information about the tactile input from the tactile input module130and may generate a control signal using the tactile information. The processor260may control the 2D visual information according to the control signal. Through this, the processor260may modify the 2D visual information. For example, the processor260may modify the 2D visual information displayed through the display device of the output module220. FIG.3Aillustrates an example of the tactile output module120of the computer system100according to example embodiments, andFIG.3Billustrates an example of the tactile output module120of the computer system100according to example embodiments. Referring toFIG.3A, the tactile output module120may include at least one of a connecting terminal310, a communication module320, a matrix module330, a memory340, and a processor350. In some example embodiments, at least one of components of the tactile output module120may be omitted and at least one another component may be added to the tactile output module120. In some example embodiments, at least two of the components of the tactile output module120may be implemented as a single integrated circuitry. According to an example embodiment, the memory340and the processor350of the tactile output module120may be integrated into the memory250and the processor260of the electronic device110, respectively. In this case, the communication module320may be directly connected to the matrix module330. According to another example embodiment, the tactile output module120may be implemented as a single device with the electronic device110. In this case, the connecting terminal310and the communication module320may be omitted from the tactile output module120, and the memory340and the processor350of the tactile output module120may be integrated into the memory250and the processor260of the electronic device110, respectively. The connecting terminal310may be provided for a physical connection between the tactile output module120and an external device. According to an example embodiment, the connecting terminal310may include a connector. For example, the connecting terminal310may include an HDMI connector or a USB connector. Here, the external device may include at least one of the electronic device110and the tactile input module130. The communication module320may communicate with the external device in the tactile output module120. The communication module320may establish a communication channel between the tactile output module120and the external device and may communicate with the external device through the communication channel. Here, the external device may include at least one of the electronic device110and the tactile input module130. The communication module320may include at least one of a wired communication module and a wireless communication module. The wired communication module may be connected to the external device in a wired manner through the connecting terminal310and may communicate with the external device in the wired manner. The wireless communication module may include at least one of a near field communication module and a far field communication module. The near field communication module may communicate with the external device through a near field communication scheme. 
For example, the near field communication scheme may include at least one of Bluetooth, WiFi direct, and IrDA. The far field communication module may communicate with the external device through a far field communication scheme. Here, the far field communication module may communicate with the external device over a network. For example, the network may include at least one of a cellular network, the Internet, and a computer network such as a LAN and a WAN. The matrix module330may generate a tactile output for the first body surface12of the user10. For example, the tactile output may be a tactile stimulation, and the tactile stimulation may include at least one of an electrical stimulation, a vibration stimulation, and a pressure stimulation such as a stabbing stimulation. To this end, the matrix module330may be in contact with the first body surface12. For example, when the tactile output module120is implemented in a form of a glove, the matrix module330may be implemented to be in contact with a palm surface of a hand of the user10within the glove. As another example, when the tactile output module120is implemented in a form of a sock, the matrix module330may be implemented to be in contact with a sole surface of a foot of the user10within the sock. As another example, when the tactile output module120is implemented in a form of a top or an abdominal binder, the matrix module330may be implemented to be in contact with a back surface of the user10within the top or the abdominal binder. According to example embodiments, the matrix module330may include a substrate and a plurality of stimulation elements arranged in a 2D matrix structure on the substrate. The substrate may support the stimulation elements and the stimulation elements may make substantial contact with the first body surface12. For example, each of the stimulation elements may include at least one of an electrode, a vibration motor, and a linearly moving pin module. The electrode may generate the electrical stimulation on the first body surface12using voltage or current applied to the electrode. The vibration motor may vibrate according to the applied voltage and may generate the vibration stimulation on the first body surface12. For example, the vibration motor may be an eccentric rotating mass (ERM) or a linear resonant actuator (LRA). The pin module may linearly move relative to the first body surface12and may generate the pressure stimulation on the first body surface12. For example, the pin module may be implemented to run using a linear servo motor. The memory340may store a variety of data used by at least one component of the tactile output module120. For example, the memory340may include at least one of a volatile memory and a nonvolatile memory. Data may include at least one program and input data or output data related thereto. The program may be stored as software that includes at least one instruction in the memory340and may include at least one of an OS, middleware, and an application. The processor350may control at least one component of the tactile output module120by executing the program of the memory340. Through this, the processor350may perform data processing or an operation. Here, the processor350may execute the instruction stored in the memory340. According to example embodiments, the processor350may generate a tactile output for the first body surface12of the user10through the matrix module330based on 2D visual information.
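As a rough, non-limiting illustration of this mapping, the following Python sketch reduces a grayscale 2D visual buffer to the resolution of a matrix of stimulation elements and selects the elements to drive. The function names, the average-pooling step, and the threshold are assumptions for illustration only; the first tactile plane and the 2D stimulation information that this approximates are described in more detail below.

# Hypothetical sketch: reduce 2D visual information to the resolution of the matrix of
# stimulation elements and select the elements to drive. Pooling and threshold are
# illustrative assumptions, not part of the disclosure.
def downsample(visual: list[list[int]], rows: int, cols: int) -> list[list[float]]:
    """Average-pool a grayscale visual buffer down to the tactile matrix resolution."""
    vh, vw = len(visual), len(visual[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            r0, r1 = r * vh // rows, (r + 1) * vh // rows
            c0, c1 = c * vw // cols, (c + 1) * vw // cols
            cells = [visual[i][j] for i in range(r0, r1) for j in range(c0, c1)]
            out[r][c] = sum(cells) / max(len(cells), 1)
    return out

def select_stimulation_elements(visual: list[list[int]], rows: int, cols: int, threshold: float = 128.0):
    """Return the (row, col) indices of the stimulation elements to drive."""
    stimulation = downsample(visual, rows, cols)
    return [(r, c) for r in range(rows) for c in range(cols) if stimulation[r][c] >= threshold]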
Here, a first tactile plane may be defined on the first body surface12by the tactile output module120, in more detail, by the matrix module330, and the first tactile plane may correspond to a visual plane of the 2D visual information. A size of the first tactile plane may be the same as or different from a size of the visual plane. Here, each of the stimulation elements in the first tactile plane may serve the role of a pixel or a dot and a resolution of the first tactile plane may be defined accordingly. Through this, the processor350may generate a tactile stimulation corresponding to at least one object provided on the visual plane in the 2D visual information, on the first tactile plane. That is, the processor350may generate the tactile stimulation on the first tactile plane to match at least one of a size, a position, a form, and a feature of each object on the visual plane. To this end, the processor350may drive at least one of the stimulation elements of the matrix module330. That is, the processor350may select at least one stimulation element from among the stimulation elements within the first tactile plane based on at least one of a size, a position, a form, and a feature of an object on the visual plane and may generate the tactile stimulation by driving the selected stimulation element. For example, referring toFIG.3B, the processor350may generate 2D stimulation information based on 2D visual information. Here, the processor350may generate the 2D stimulation information by changing a resolution of the 2D visual information received from the electronic device110to a predetermined resolution. Here, the resolution of the 2D visual information may be determined based on a size of the visual plane, a resolution of the 2D stimulation information may be determined based on a size of the first tactile plane, and each of the stimulation elements within the first tactile plane may serve the role of a pixel or a dot. That is, a resolution of the first tactile plane may represent a resolution for the 2D stimulation information. Through this, the processor350may select at least one stimulation element from among the stimulation elements within the first tactile plane using the 2D stimulation information and may generate the tactile stimulation by driving the selected stimulation element. FIG.4illustrates an example of the tactile input module130of the computer system100according to example embodiments. Referring toFIG.4, the tactile input module130may include at least one of a connecting terminal410, a communication module420, a tactile recognition module430, a memory440, and a processor450. In some example embodiments, at least one of the components of the tactile input module130may be omitted and at least one another component may be added to the tactile input module130. In some example embodiments, at least two of the components of the tactile input module130may be implemented as a single integrated circuitry. According to an example embodiment, the memory440and the processor450of the tactile input module130may be integrated into the memory250and the processor260of the electronic device110, respectively. In this case, the communication module420may be directly connected to the tactile recognition module430. According to another example embodiment, the tactile input module130may be implemented into a single device with the electronic device110.
In this case, the connecting terminal410and the communication module420may be omitted from the tactile input module130, and the memory440and the processor450of the tactile input module130may be integrated into the memory250and the processor260of the electronic device110, respectively. According to another example embodiment, the tactile input module130may be implemented into a single device with the tactile output module120. In this case, the connecting terminal410and the communication module420may be omitted from the tactile input module130, and the memory440and the processor450of the tactile input module130may be integrated into the memory340and the processor350of the tactile output module120, respectively. The connecting terminal410may be provided for a physical connection between the tactile input module130and an external device. According to an example embodiment, the connecting terminal410may include a connector. For example, the connecting terminal410may include an HDMI connector or a USB connector. Here, the external device may include at least one of the electronic device110and the tactile output module120. The communication module420may communicate with the external device in the tactile input module130. The communication module420may establish a communication channel between the tactile input module130and the external device and may communicate with the external device through the communication channel. Here, the external device may include at least one of the electronic device110and the tactile output module120. The communication module420may include at least one of a wired communication module and a wireless communication module. The wired communication module may be connected to the external device in a wired manner through the connecting terminal410and may communicate with the external device in the wired manner. The wireless communication module may include at least one of a near field communication module and a far field communication module. The near field communication module may communicate with the external device through a near field communication scheme. For example, the near field communication scheme may include at least one of Bluetooth, WiFi direct, and IrDA. The far field communication module may communicate with the external device through a far field communication scheme. Here, the far field communication module may communicate with the external device over a network. For example, the network may include at least one of a cellular network, the Internet, and a computer network such as a local area network (LAN) and a wide area network (WAN). The tactile recognition module430may sense a tactile input for the second body surface13of the user10. Here, the tactile recognition module430may sense the tactile input based on at least one of a touch input and a hover input from the user10for the second body surface13. To this end, the tactile recognition module430may be in contact with at least a portion of the second body surface13or may be separate from the second body surface13. According to an example embodiment, the tactile recognition module430may include a camera, for example, a planar camera and a depth camera, configured to capture an image for the second body surface13.
According to another example embodiment, the tactile recognition module430may include an optical tracking module having optical markers configured to attach to the second body surface13and a finger of the user10or a tool, for example, a stylus, for generating a tactile input for the second body surface13. According to another example embodiment, the tactile recognition module430may include a touch sensing module configured to attach to the second body surface13. For example, when the tactile input module130is implemented in a form of a glove, the tactile recognition module430may be provided to be adjacent to the dorsal surface of the hand of the user10within the glove. According to another example embodiment, the tactile recognition module430may include a sensor module that includes a position sensor and a pressure sensor. For example, the position sensor may include a transmitter and at least three receivers. As another example, the pressure sensor may be implemented using a force sensitive resistor (FSR). The memory440may store a variety of data used by at least one component of the tactile input module130. For example, the memory440may include at least one of a volatile memory and a non-volatile memory. Data may include at least one program and input data or output data related thereto. The program may be stored as software that includes at least one instruction in the memory440and may include at least one of an OS, middleware, and an application. The processor450may control at least one component of the tactile input module130by executing the program of the memory440. Through this, the processor450may perform data processing or an operation. Here, the processor450may execute the instruction stored in the memory440. According to example embodiments, the processor450may sense a tactile input for the second body surface13of the user10through the tactile recognition module430. For example, the tactile input may include at least one of a single touch, a single hover, a multi-touch, and a multi-hover. Here, a second tactile plane may be defined on the second body surface13by the tactile input module130, in more detail, by the tactile recognition module430, and the second tactile plane may correspond to the first tactile plane. A size of the second tactile plane may be the same as a size of the first tactile plane, but may not be limited thereto. Here, a resolution of the second tactile plane may be defined as the same as a resolution of the first tactile plane. The second tactile plane and the first tactile plane may be coplanar or may be individually present to be separate from each other. Through this, the processor450may sense a tactile input on the second tactile plane. The processor450may generate tactile information about the tactile input and may provide the tactile information to the electronic device110. Here, the tactile information may include at least one of a touch status on the second body surface13, that is, identification information about a touch or a hover, at least one touch position on the second tactile plane, and at least one hover position on the second tactile plane. According to an example embodiment, when the tactile recognition module430includes a camera, the processor450may sense a tactile input through a boundary analysis in an image captured by the camera.
For example, when the tactile recognition module430includes a planar camera, the processor450may detect a touch position or a hover position and a touch status from a change in a position of a finger of the user10or a tool relative to the second body surface13. Additionally, the processor450may more accurately detect a touch position or a hover position and a touch status from a change in a fingertip or a nail of the user10. As another example, when the tactile recognition module430includes a depth camera, the processor450may detect a touch position or a hover position and a touch status from a position and a depth of a finger of the user10or a tool relative to the second body surface13. According to another example embodiment, when the tactile recognition module430includes an optical tracking module, optical markers of the optical tracking module may be attached to an end portion of a finger of the user10or a tool and the second body surface13or the matrix module330of the tactile output module120. Through this, the processor450may detect a touch position or a hover position and a touch status from a change in a relative position of the end portion of the finger of the user10or the tool relative to the second body surface13or the matrix module330. According to another example embodiment, when the tactile recognition module430includes a touch sensing module, the processor450may detect a touch position or a hover position and a touch status from an electrical change in the touch sensing module. According to another example embodiment, when the tactile recognition module430includes a sensor module that includes a position sensor and a pressure sensor, a transmitter of the position sensor and the pressure sensor may be attached to an end portion of the finger of the user10or a tool, and receivers of the position sensor may be distributed and arranged on the second body surface13or the matrix module330. Through this, the processor450may detect a touch position or a hover position from a position of the transmitter by detecting a touch status through the pressure sensor and by detecting a position of the transmitter through a triangulation scheme using positions of the receivers.
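As one possible, simplified realization of the triangulation scheme mentioned here, the following Python sketch recovers a 2D transmitter position from three receivers at known positions by linearizing the circle equations. It assumes the receivers and the transmitter are roughly coplanar; the function name and the example coordinates are illustrative only.

# Hypothetical 2D trilateration sketch: three receivers at known positions report
# distances to the transmitter on the fingertip, and the transmitter position is
# recovered by solving the linearized circle equations (coplanar assumption).
def trilaterate(p1, p2, p3, d1, d2, d3):
    """Return (x, y) of the transmitter given receiver positions p1..p3 and distances d1..d3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise gives two linear equations in x and y.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x2), 2 * (y3 - y2)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-9:
        raise ValueError("receivers are collinear; position is not uniquely determined")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Example: receivers at three corners of a 10 x 10 area, transmitter near (3, 4).
print(trilaterate((0, 0), (10, 0), (0, 10), 5.0, 8.0623, 6.7082))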
That is, the example embodiment may apply even in a case in which the first body surface12is a sole surface of a foot and the second body surface13is a top surface of the foot or a case in which the first body surface12is a back surface and the second body surface13is an abdominal surface. In detail, the tactile output module120may generate a tactile output521for the first body surface12of the user10based on 2D visual information500from the electronic device110. That is, the tactile output module120may generate the tactile output521on a first tactile plane520to match each object511on a visual plane510. Through this, in response to the tactile output521, the user10may generate a tactile input531for the tactile input module130and the tactile input module130may sense the tactile input531. Therefore, the electronic device110may control the 2D visual information500based on the tactile input531. For example, referring toFIG.5A,5B,5C,5D, or5E, the tactile output module120may generate the tactile output521corresponding to the object511associated with an item in a game based on the 2D visual information500related to the game. Through this, the tactile input module130may sense the tactile input531for controlling the object511from the user10and the electronic device110may control the 2D visual information500based on the tactile input531. Referring toFIG.5A, for a real-time airplane shooting game, the tactile output module120may generate the tactile output521corresponding to a form and a direction of each of airplanes and the electronic device110may control at least one of the airplanes based on the tactile input531sensed through the tactile input module130. Referring toFIG.5B, for a mole multi-touch game, the tactile output module120may generate the tactile output521corresponding to moles and the electronic device110may select moles based on the tactile input531sensed through the tactile input module130. Referring toFIG.5C, for a rhythm game of selecting bars descending according to music at an appropriate timing, the tactile output module120may generate the tactile output521corresponding to bars and the electronic device110may select at least one of the bars based on the tactile input531sensed through the tactile input module130. Referring toFIG.5D, for a multi-player game of bouncing a ball to a goal of an opponent, the tactile output module120may generate the tactile output521corresponding to balls and the electronic device110may bounce at least one of the balls based on the tactile input531sensed through the tactile input module130. Referring toFIG.5E, for a snake game of acquiring a target by manipulating a snake whose shape changes, the tactile output module120may generate the tactile output521corresponding to the snake and the target and the electronic device110may move the snake while changing the shape of the snake based on the tactile input531sensed through the tactile input module130. As another example, referring toFIGS.5F,5G, and5H, the tactile output module120may generate the tactile output521corresponding to the object511in a display screen based on the 2D visual information500related to the display screen of the electronic device110. Through this, the tactile input module130may sense the tactile input531for controlling the object511from the user10and the electronic device110may control the 2D visual information500based on the tactile input531.
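As an illustrative sketch (not taken from the disclosure) of how a touch position sensed on the second tactile plane might be resolved against the objects on the visual plane so that a control signal such as a selection can be generated, consider the following Python fragment; the object layout, the scaling, and all names are assumptions only.

# Hypothetical sketch: map a touch on the second tactile plane to the visual plane and
# return the identifier of the touched object, if any. Layout and names are illustrative.
def select_object(touch_xy, tactile_size, visual_size, objects):
    """touch_xy is (x, y) on the tactile plane; objects carry an id and a bounding box
    (x, y, w, h) on the visual plane; returns the id of the hit object or None."""
    sx = visual_size[0] / tactile_size[0]
    sy = visual_size[1] / tactile_size[1]
    vx, vy = touch_xy[0] * sx, touch_xy[1] * sy
    for obj in objects:
        x, y, w, h = obj["bbox"]
        if x <= vx <= x + w and y <= vy <= y + h:
            return obj["id"]
    return None

# Example: a 16x16 tactile plane mapped onto a 320x320 visual plane with one icon.
print(select_object((4, 4), (16, 16), (320, 320), [{"id": "icon_1", "bbox": (60, 60, 40, 40)}]))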
Referring toFIG.5F, when a plurality of icons and widgets are displayed on the display screen, the tactile output module120may generate the tactile output521corresponding to the icons and the widgets. Subsequently, the electronic device110may select one of the icons and the widgets based on a tactile input531sensed through the tactile input module130and may execute a function assigned thereto. Referring toFIG.5G, when at least one of items, texts, buttons, and sliders are displayed on the display screen, the tactile output module120may generate the tactile output521corresponding to at least one of the items, the texts, the buttons, and the sliders. The electronic device110may select one of the items, the buttons, and the sliders based on a tactile input531sensed through the tactile input module130, may execute a function assigned thereto or may select the texts and may convert the texts to audio signals and play back the converted audio signals. Referring toFIG.5H, when a pop-up window is displayed on the display screen, the tactile output module120may generate the tactile output521corresponding to the pop-up window. The electronic device110may convert texts in the pop-up window to audio signals based on a tactile input531sensed through the tactile input module130and play back the converted audio signals or may remove the pop-up window. FIG.6is a flowchart illustrating an example of an operating method of the computer system100according to example embodiments. Referring toFIG.6, in operation610, the computer system100may detect 2D visual information. In detail, the electronic device110may manage the 2D visual information. Here, the 2D visual information may include at least one object provided on a predetermined visual plane. For example, the visual plane may represent a background screen and the object may include at least one of an icon, a widget, an item, a character, a button, a slider, and a pop-up window as a screen element provided on the background screen. The electronic device110, that is, the processor260may provide the 2D visual information to the tactile output module120. To this end, the processor260may generate the 2D visual information at a preset resolution. For example, the processor260may provide the 2D visual information to the tactile output module120while displaying the 2D visual information. In operation620, the computer system100may generate a tactile output for the first body surface12of the user10. In detail, the tactile output module120may generate the tactile output for the first body surface12of the user10based on the 2D visual information. Here, a first tactile plane may be defined on the first body surface12by the tactile output module120and the first tactile plane may correspond to the visual plane of the 2D visual information. A size of the first tactile plane may be the same as or different from a size of the visual plane. Here, each of stimulation elements in the first tactile plane may serve as a role of each pixel or each dot and a resolution of the first tactile plane may be defined accordingly. Through this, the tactile output module120may generate a tactile stimulation corresponding to at least one object provided on a visual plane in the 2D visual information, on the first tactile plane. That is, the tactile output module120may generate a tactile stimulation on the first tactile plane to match at least one of a size, a position, a form, and a feature of each object on the visual plane. 
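The matching of size, position, and form described for operation 620 can be illustrated as a down-sampling of the visual plane onto the lower-resolution grid of stimulation elements. The sketch below is only an illustration: the 40 × 40 visual resolution, the 10 × 10 matrix of stimulation elements, and the rectangular object shapes are assumptions, not values from the description.

```python
# Illustrative sketch of operation 620: scale each object's bounding box from the
# visual plane to the first tactile plane and mark which stimulation elements of
# the matrix module to drive. Resolutions and the dataclass are assumed.
from dataclasses import dataclass

@dataclass
class VisualObject:
    """Axis-aligned bounding box of one object 511 on the visual plane 510."""
    x: int
    y: int
    width: int
    height: int

def rasterize(objects, vis_w, vis_h, mat_w, mat_h):
    """Return a mat_h x mat_w grid of 0/1 flags; 1 = drive that stimulation element."""
    grid = [[0] * mat_w for _ in range(mat_h)]
    sx, sy = mat_w / vis_w, mat_h / vis_h       # visual plane -> tactile plane scale
    for obj in objects:
        c0 = max(0, int(obj.x * sx)); c1 = min(mat_w, int((obj.x + obj.width) * sx) + 1)
        r0 = max(0, int(obj.y * sy)); r1 = min(mat_h, int((obj.y + obj.height) * sy) + 1)
        for r in range(r0, r1):
            for c in range(c0, c1):
                grid[r][c] = 1
    return grid

# Two objects on a 40 x 40 visual plane, driven on a 10 x 10 stimulation matrix.
drive = rasterize([VisualObject(4, 4, 8, 8), VisualObject(28, 30, 6, 4)], 40, 40, 10, 10)
for row in drive:
    print("".join("#" if v else "." for v in row))
```

Each set flag corresponds to driving one stimulation element (an electrode, a vibration motor, or a pin module), so the user 10 can feel the position and approximate size of each object at once.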
To this end, in the tactile output module120, the processor350may drive at least one of the stimulation elements of the matrix module330. That is, the processor350may select at least one stimulation element from among the stimulation elements within the first tactile plane based on at least one of a size, a position, a form, and a feature of an object on the visual plane, and may generate the tactile stimulation by driving the selected stimulation element. In operation630, the computer system100may sense a tactile input for the second body surface13of the user10. In detail, the tactile input module130may sense the tactile input for the second body surface13of the user10. For example, the tactile input may include at least one of a single touch, a single hover, a multi-touch, and a multi-hover. Here, a second tactile plane may be defined on the second body surface13by the tactile input module130and the second tactile plane may correspond to the first tactile plane. A size of the second tactile plane may be the same as a size of the first tactile plane, but may not be limited thereto. Here, a resolution of the second tactile plane may be defined as the same as a resolution of the first tactile plane. The second tactile plane and the first tactile plane may be coplanar or may be individually present to be separate from each other. Through this, the tactile input module130may sense a tactile input on the second tactile plane. The tactile input module130may generate tactile information about the tactile input and may provide the tactile information to the electronic device110. Here, the tactile information may include at least one of a touch status on the second body surface13, that is, identification information about a touch or a hover, at least one touch position on the second tactile plane, and at least one hover position on the second tactile plane. In operation640, the computer system100may generate a control signal for the 2D visual information. In detail, the electronic device110may generate the control signal for the 2D visual information based on the tactile input. For example, the electronic device110may receive tactile information about the tactile input from the tactile input module130and may generate the control signal using the tactile information. The electronic device110may control the 2D visual information according to the control signal. Through this, the electronic device110may modify the 2D visual information. Once the 2D visual information is modified, the computer system100may restart the method and may return to operation610. That is, when the 2D visual information is modified according to the control signal generated based on the tactile input in operation640, the computer system100may restart the method and may repeatedly perform operations610to640. Meanwhile, when the 2D visual information is modified even without detecting the tactile input in operation630, the computer system100may restart the method and may repeatedly perform operations610to640. Therefore, for the 2D visual information that varies in real time, the computer system100may provide a tactile interface for a real-time 2D tactile input/output interaction. According to example embodiments, the computer system100may provide a tactile interface for a real-time 2D tactile input/output interaction of the user10, for example, a visually impaired person. 
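Operations 610 to 640 thus form a loop that repeats whenever the 2D visual information changes or a tactile input arrives. The following is a minimal sketch of that loop; the three classes stand in for the electronic device 110, the tactile output module 120, and the tactile input module 130, and their method names are placeholders introduced for illustration rather than interfaces defined in the description.

```python
# Minimal sketch of the operation 610-640 loop. The three classes below are
# stand-ins for the electronic device 110, the tactile output module 120, and
# the tactile input module 130; all method names and data are assumed.
import time

class ElectronicDevice:                        # stands in for electronic device 110
    def __init__(self):
        self.objects = [{"id": "icon-1", "x": 12, "y": 20}]
    def visual_info(self):                     # operation 610: detect 2D visual information
        return list(self.objects)
    def apply_control(self, signal):           # operation 640: control the 2D visual information
        if signal:
            print("control signal:", signal)

class TactileOutputModule:                     # stands in for tactile output module 120
    def render(self, objects):                 # operation 620: drive the stimulation elements
        print("driving stimulation elements for", [o["id"] for o in objects])

class TactileInputModule:                      # stands in for tactile input module 130
    def poll(self):                            # operation 630: sense the tactile input
        return {"status": "touch", "position": (12, 20)}   # stubbed reading

def run(device, out_mod, in_mod, cycles=3):
    for _ in range(cycles):
        objects = device.visual_info()
        out_mod.render(objects)
        tactile = in_mod.poll()
        signal = None
        if tactile and tactile["status"] == "touch":
            # Hit-test the touch position against the objects to build a control signal.
            hits = [o for o in objects if (o["x"], o["y"]) == tactile["position"]]
            signal = {"select": hits[0]["id"]} if hits else None
        device.apply_control(signal)
        time.sleep(0.01)                       # fixed loop period for the sketch

run(ElectronicDevice(), TactileOutputModule(), TactileInputModule())
```

In a real device the loop would block on hardware events rather than polling on a fixed period.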
The computer system 100 may convey the 2D visual information by replacing it with a tactile output, that is, a tactile stimulation on a 2D plane, such that the user 10 may recognize the 2D visual information as a whole at once. Here, the computer system 100 may change the tactile output in real time in response to a real-time change in the 2D visual information. Through this, the user 10 may immediately recognize an object that varies in real time, for example, a moving object. In addition, the computer system 100 may sense a tactile input corresponding to the tactile output and, accordingly, allow the user 10 to perform a tactile input/output interaction in real time. Here, the user 10 may cognitively perform a tactile input/output interaction through body surfaces that spatially correspond to each other, for example, a palmar surface and a dorsal surface of a hand, a sole surface and a top surface of a foot, and a back surface and an abdominal surface. Example embodiments provide an operating method of the computer system 100 that provides a tactile interface. According to example embodiments, the operating method of the computer system 100 may include generating a tactile output corresponding to 2D visual information for the first body surface 12 of the user 10 through the tactile output module 120 that is in contact with the first body surface 12; sensing a tactile input for the second body surface 13 of the user 10 through the tactile input module 130; and generating a control signal for the 2D visual information based on the tactile input. According to example embodiments, one of the first body surface 12 and the second body surface 13 may be a ventral surface of one body portion of the user 10, and the other one of the first body surface 12 and the second body surface 13 may be a dorsal surface corresponding to the ventral surface in the one body portion. According to example embodiments, the 2D visual information may include at least one object provided on a predetermined visual plane. According to example embodiments, the generating of the tactile output may include generating a tactile stimulation corresponding to the object on a first tactile plane that is defined on the first body surface 12 by the tactile output module 120 and corresponds to the visual plane. According to example embodiments, the tactile output module 120 may include a plurality of stimulation elements arranged in a 2D matrix structure on the first tactile plane. According to example embodiments, the generating of the tactile stimulation may include generating the tactile stimulation by driving at least one of the stimulation elements. According to example embodiments, each of the stimulation elements may include at least one of an electrode, a vibration motor, and a linearly moving pin module. According to example embodiments, the generating of the tactile stimulation may include selecting at least one stimulation element from among the stimulation elements based on at least one of a size, a position, a form, and a feature of the object on the visual plane; and generating the tactile stimulation by driving the selected stimulation element. According to example embodiments, the control signal may include a signal for controlling the object on the visual plane. According to example embodiments, the sensing of the tactile input may include sensing the tactile input on a second tactile plane that is defined on the second body surface 13 by the tactile input module 130 and corresponds to the first tactile plane.
According to example embodiments, the tactile input may include at least one of a touch status on the second body surface13, at least one touch position on the second tactile plane, and at least one hover position on the second tactile plane. According to example embodiments, the tactile input module130may include at least one of a planar camera, a depth camera, an optical tracking module having optical markers, a touch sensing module, a position sensor, and a pressure sensor. Example embodiments provide the computer system100that provides a tactile interface, the computer system100including the tactile output module120configured to be in contact with the first body surface12of the user10and to generate a tactile output corresponding to 2D visual information for the first body surface12; the tactile input module130configured to sense a tactile input for the second body surface13of the user10; and the processor260configured to generate a control signal for the 2D visual information based on the tactile input. According to example embodiments, one of the first body surface12and the second body surface13may be a ventral surface of one body portion of the user10, and the other one of the first surface12and the second body surface13may be a dorsal surface corresponding to the ventral surface in the one body portion. According to example embodiments, the 2D visual information may include at least one object disposed on a predetermined visual plane. According to example embodiments, the tactile output module120may be configured to generate a tactile stimulation corresponding to the object on a first tactile plane that is defined on the first body surface12by the tactile output module120and corresponds to the visual plane12. According to example embodiments, the tactile output module120may include a plurality of stimulation elements arranged in a 2D matrix structure on the first tactile plane, and may be configured to generate the tactile stimulation by driving at least one of the stimulation elements. According to example embodiments, each of the stimulation elements may include at least one of an electrode, a vibration motor, and a linearly moving pin module. According to example embodiments, the tactile output module120may be configured to select at least one stimulation element from among the stimulation elements based on at least one of a size, a position, a form, and a feature of the object on the visual plane, and to generate the tactile stimulation by driving the selected stimulation element. According to example embodiments, the control signal may include a signal for controlling the object on the visual plane. According to example embodiments, the tactile input module130may be configured to sense the tactile input on a second tactile plane that is defined on the second body surface13by the tactile input module130and corresponds to the first tactile plane. According to example embodiments, the tactile input may include at least one of a touch status on the second body surface13, at least one touch position on the second tactile plane, and at least one hover position on the second tactile plane. According to example embodiments, the tactile input module130may include at least one of a planar camera, a depth camera, an optical tracking module having optical markers, a touch sensing module, a position sensor, and a pressure sensor. The systems and/or apparatuses described herein may be implemented using hardware components, software components, and/or a combination thereof. 
For example, a processing device and components described herein may be implemented using one or more general-purpose or special-purpose computers, such as, for example, a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of responding to and executing instructions in a defined manner. The processing device may run an operating system (OS) and one or more software applications that run on the OS. The processing device also may access, store, manipulate, process, and create data in response to execution of the software. For purposes of simplicity, a processing device is described in the singular; however, one skilled in the art will appreciate that a processing device may include multiple processing elements and/or multiple types of processing elements. For example, a processing device may include multiple processors or a processor and a controller. In addition, different processing configurations are possible, such as parallel processors. The software may include a computer program, a piece of code, an instruction, or some combination thereof, for independently or collectively instructing or configuring the processing device to operate as desired. Software and/or data may be embodied permanently or temporarily in any type of machine, component, physical equipment, virtual equipment, computer storage medium or device, or in a propagated signal wave capable of providing instructions or data to or being interpreted by the processing device. The software also may be distributed over network-coupled computer systems so that the software is stored and executed in a distributed fashion. In particular, the software and data may be stored in one or more computer-readable storage media. The methods according to the example embodiments may be recorded in non-transitory computer-readable media including program instructions to implement various operations embodied by a computer. Here, the media may be configured to continuously store a computer-executable program or to temporarily store the same for execution or download. Also, the media may include, alone or in combination with the program instructions, data files, data structures, and the like. The media and program instructions may be those specially designed and constructed for the purposes, or they may be of the kind well known and available to those having skill in the computer software arts. Examples of non-transitory computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks and DVDs; magneto-optical media such as floptical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like. Examples of other media may include recording media and storage media managed by an app store that distributes applications, or by a site, a server, and the like that supplies and distributes various other types of software. Examples of program instructions include both machine code, such as that produced by a compiler, and files containing higher-level code that may be executed by the computer using an interpreter.
The terms used herein are used to explain specific embodiments and are not construed to limit the disclosure and should be understood to include various modifications, equivalents, and/or substitutions of the example embodiments. In the drawings, like reference numerals refer to like components throughout the present specification. The singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Herein, the expressions, “A or B,” “at least one of A and/or B,” “A, B, or C,” “at least one of A, B, and/or C,” and the like may include any possible combinations of listed items. Terms “first,” “second,” etc., are used to describe various components and the components should not be limited by the terms. The terms are simply used to distinguish one component from another component. When a component, for example, a first component, is described to be “(functionally or communicatively) connected to” or “accessed to” another component, for example, a second component, the component may be directly connected to the other component or may be connected through still another component, for example, a third component. The term “module” used herein may include a unit configured as hardware, software, or firmware, and may be interchangeably used with the terms “logic,” “logic block,” “part,” “circuit,” etc. The module may be an integrally configured part, a minimum unit that performs at least function, or a portion thereof. For example, the module may be configured as an application-specific integrated circuit (ASIC). According to the example embodiments, each of the components (e.g., module or program) may include a singular object or a plurality of objects. According to the example embodiments, at least one of the components or operations may be omitted. Alternatively, at least one another component or operation may be added. Alternatively or additionally, a plurality of components (e.g., module or program) may be integrated into a single component. In this case, the integrated component may perform one or more functions of each of the components in the same or similar manner as it is performed by a corresponding component before integration. According to the example embodiments, operations performed by a module, a program, or another component may be performed in a sequential, parallel, iterative, or heuristic manner. Alternatively, at least one of the operations may be performed in different sequence or omitted. Alternatively, at least one another operation may be added. While this disclosure includes specific example embodiments, it will be apparent to one of ordinary skill in the art that various alterations and modifications in form and details may be made in these example embodiments without departing from the spirit and scope of the claims and their equivalents. For example, suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents. Therefore, the scope of the disclosure is defined not by the detailed description, but by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure. | 58,399 |
11861067 | DESCRIPTION OF THE PREFERRED EMBODIMENTS Now, an embodiment that employs the tactile-sensation providing device of the present invention will be described below. According to the present invention, it is possible to provide a tactile-sensation providing device that reduces the transmission of vibration to the base part. EMBODIMENT FIG.1is a perspective view that illustrates a tactile-sensation providing device100according to the embodiment.FIG.2is a cross-sectional view taken along line A-A inFIG.1.FIG.3is an exploded view of the tactile-sensation providing device100. An XYZ coordinate system will be defined in the following description. Also, for ease of explanation, in the following description, “plan view” refers to “XY plane view,” and, while the negative Z-axis direction will refer to the lower side or below and the positive Z-axis direction will refer to the upper side or above, these do not represent the relationship universally regarded as “up” and “down.” The tactile-sensation providing device100includes a base110, gap sensors120, an actuator130, a movable part140, an electrostatic sensor150, an operating panel160, a bezel170, and screws175. Also, the following description will be given with referenceFIG.4toFIG.6, in addition toFIG.1toFIG.3.FIG.4is a diagram that illustrates the base110and the gap sensors120.FIG.5is a diagram that illustrates the actuator130and the movable part140.FIG.6is an exploded view that illustrates the actuator130. The tactile-sensation providing device100further includes rubber members180S,180L, and180U, which are elastic bodies (seeFIG.4andFIG.5). Here, the base110and the bezel170are examples of the base part to be attached to an external object so as to lay the foundation for the tactile-sensation providing device100. The magnet134and the holder135of the actuator130are examples of the vibrating body (seeFIG.6). Parts of the actuator130other than the magnet134and the holder135(a top yoke131, a bottom yoke132, drive coils133, springs136, screws137, and washers137A (seeFIG.6)), a movable part140, the electrostatic sensor150, and the operating panel160constitute examples of the vibration-target object. The vibrating arrangement to include the base part and the vibration-target object will be hereinafter referred to as a “first vibrating system.” This vibrating system, composed of examples of the base part (the base110and the bezel170), examples of the vibration-target object (the top yoke131, the bottom yoke132, the drive coils133, the springs136, the screws137, and the washers137A of the actuator130), the movable part140, the electrostatic sensor150, and the operating panel160), and examples of elastic bodies (the rubber members180S,180L, and180U) that connect between the base parts and the vibration-target objects elastically, is an example of the first vibrating system. Also, the vibrating arrangement to include the vibrating body and the vibration-target object will be hereinafter referred to as a “second vibrating system.” The vibrating system, composed of examples of the vibrating body (the magnet134and the holder135of the actuator130) and examples of the vibration-target object (the top yoke131, the bottom yoke132, the drive coils133, the springs136, the screws137and the washers137A of the actuator130, the movable part140, the electrostatic sensor150, and the operating panel160) is an example of the second vibrating system. 
In other words, the actuator130, the movable part140, the electrostatic sensor150, and the operating panel160constitute an example of the second vibrating system. The base110is made of resin, for example. The base110is a rectangular member in plan view, and a storage part110A recessed from the upper side towards the lower side is formed. Also, the base110has a bottom plate111, side walls112, guides113, step parts114, and projecting parts115. The storage part110A is a space surrounded by the bottom plate111and the side walls112of the base110, and is shaped substantially like a rectangular parallelepiped. The storage part110A houses the gap sensors120, the actuator130, and a lower part of the movable part140. Of these, the gap sensors120are provided on the upper surface of the bottom plate111. The bottom plate111is a rectangular plate-like portion in plan view, and has an opening part111A provided in the center, opening parts111B provided at both end parts in the X direction, and opening parts111C provided at both end parts in the Y direction. A lower end part of the actuator130is inserted in the opening part111A. The actuator130and the bottom plate111are not in contact with each other and have a gap formed therebetween. The lower ends of the movable part140's guides145are inserted in the opening parts111B. In the opening parts111B, the guides145are not in contact with the bottom plate111and have a gap formed therebetween. The side walls112are wall parts that are rectangular and annular in plan view and rise upward from the four sides of the bottom plate111. The guides113are provided on the inner sides of the side walls112extending in the Y direction on both ±X-direction sides. Also, in the boundary portions between the inner sides of the side walls112and the bottom plate111, the step parts114are provided, having upper surfaces located above the bottom plate111and below the upper surfaces112A of the side walls112. Also, in the upper surfaces112A of the side walls112, the projecting parts115, protruding upward from the upper surfaces112A, are provided. With the tactile-sensation providing device100assembled, the guides113are inserted in the grooves145A of the guides145of the movable part140, and the lower ends of the guides145are inserted in the opening parts111B. The guides113are provided for alignment of the base110and the movable part140. With the tactile-sensation providing device100assembled, the guides113of the base110and the guides145of the movable part140do not butt up against each other and have a gap formed therebetween. The step parts114are provided in a rectangular and annular shape in plan view, in the boundary portions between the bottom plate111and the side walls112in the storage part110A. The rubber members180L are provided on the upper surfaces of the step parts114(seeFIG.4andFIG.5). The rubber members180L are small rectangular-parallelepiped members made of rubber, and are each a chunk of elastic rubber. Note that the rubber members180L are by no means limited to being chunks of rubber, and may be formed by including, for example, springs. However, since it is not preferable for the rubber members180L to produce sound themselves, it is then more preferable to form the rubber members180L with rubber or the like, than form them by including metallic springs and the like. FIG.4andFIG.5show eight rubber members180L, for example. Two rubber members180L are provided in each section corresponding to each side of the step parts114which are rectangular and annular. 
Note that, inFIG.3, the rubber members180L are omitted. With the tactile-sensation providing device100assembled, the rubber members180L are elastically deformed between the upper surfaces of the step parts114and the lower surface of the movable part140, and support the movable part140against the base110elastically. Also, in plan view, the rubber members180U are provided above the same positions as where the rubber members180L are provided (seeFIG.4andFIG.5). The rubber members180U are small rectangular-parallelepiped members made of rubber, and are each a chunk of elastic rubber. Note that the rubber members180U are by no means limited to being chunks of rubber, and may be formed by including, for example, springs. However, since it is not preferable for the rubber members180U to produce sound themselves, it is then more preferable to form the rubber members180U with rubber or the like, than form them by including metallic springs or the like.FIG.4shows the positions of eight rubber members180U in a state in which the tactile-sensation providing device100is assembled. With the tactile-sensation providing device100assembled, the rubber members180U are provided being elastically deformed between the upper surface of the movable part140and an offset surface172of the bezel170, and support between the bezel170and the movable part140elastically. InFIG.4, the bezel170and the movable part140are omitted, and thus the rubber members180U appear to be floating in mid-air. The projecting parts115are wall-like portions that protrude upward from the upper surfaces112A of the side walls112, and are provided in a rectangular and annular shape on the upper surfaces112A in plan view. The projecting parts115are thinner in width than the side walls112in plan view, and are provided on the inner sides of the upper surfaces112A (on the side of the upper surfaces112A facing the storage part110A). The rubber members180S are provided on the inner surfaces of the projecting parts115extending in the Y direction on both ±X-direction sides (seeFIG.2,FIG.4, andFIG.5). The rubber members180S are small rectangular-parallelepiped members made of rubber, and are each a chunk of elastic rubber. Note that the rubber members180S are by no means limited to being chunks of rubber, and may be formed by including, for example, springs. However, since it is not preferable for the rubber members180S to produce sound themselves, it is then more preferable to form the rubber members180S with rubber or the like, than form them by including metallic springs or the like. InFIG.4andFIG.5, for example, four rubber members180S are provided on the inner surface of each projecting part115extending in the Y direction on both ±X-direction sides. Note thatFIG.3omits the rubber members180S. With the tactile-sensation providing device100assembled, the rubber members180S are provided being elastically deformed between the inner surfaces of the projecting parts115extending in the Y direction on both ±X-direction sides, and the side surfaces of the movable part140extending in the Y direction on both ±X-direction sides, and support the movable part140against the base110elastically. The rubber members180S are provided in the gap between the base110and the movable part140in the X direction, thereby supporting the movable part140against the base110such that the movable part140is allowed to vibrate in the ±X directions. The gap sensors120are examples of detection parts that detect pressing on the operating panel160in the −Z direction. 
The gap sensors120detect the gap with the lower surface of the movable part140in the Z direction. Each gap sensor120is, for example, an optical-type sensor having a built-in light source and a light receiving element, receives the reflected light of light radiated onto the lower surface of the movable part140, and detects the change of position of the movable part140in the −Z direction, based on the change of position the reflected light's point of focus in the light receiving element. When the movable part140changes its position in the −Z direction, the electrostatic sensor150and the operating panel160also change their positions in the −Z direction, so that pressing on the electrostatic sensor150and the operating panel160in the −Z direction can be detected by detecting the change of position of the lower surface of the movable part140in the −Z direction. When the operating panel160is pressed in the −Z direction, the position of the movable part140changes by several tens of μm in the −Z direction. The detection parts that detect pressing on the operating panel160in the −Z direction are by no means limited to the gap sensors120. The detection parts may be non-contact position detection sensors such as electrostatic sensors. The detection parts may be pressure-sensitive sensors that detect the pressures applied to the upper surface of the operating panel160. The actuator130is fixed to the lower surface of the movable part140by screws137. The lower surface of the movable part140is provided with a recess part that is recessed upward, and the actuator130is attached to the recess part. Note that the movable part140need not have a recess part, and the actuator130may be attached to the lower surface. As shown inFIG.6, the actuator130has the top yoke131, the bottom yoke132, the drive coils133, the magnet134, the holder135, the springs136, the screws137, and the washers137A. InFIG.5, the top yoke131is hidden inside the recess part of the lower surface of the movable part140. The top yoke131is a magnetic body, and is a plate-like yoke to be attached to the recess part in the lower surface of the movable part140. In the top yoke131, through holes131A, through which the screws137are inserted in the Z direction, are formed at both ends in the X direction. The bottom yoke132is a magnetic body, and is a U-like yoke in XZ-plane view. The bottom yoke132is preferably the same magnetic body as the top yoke131. Two drive coils133are fixed side by side in the X direction, in the portion of the bottom plate132A of the bottom yoke132. In the inner surface of each side wall132B of the bottom yoke132, a step part132B1is provided so that the thickness of the side wall132becomes thinner upwards in the X direction. The upper ends of the side walls132B of the bottom yoke132are fixed to both end sides of the top yoke131. By this means, the top yoke131and the bottom yoke132constitute a magnetic path that is like a closed loop in XZ-plane view. Also, with the tactile-sensation providing device100assembled, the bottom plate132A of the bottom yoke132, which is located at the bottom of the components of the actuator130, is inserted inside the opening part111A of the bottom plate111of the base110. In this state, the bottom yoke132is not in contact with the base110. Consequently, the actuator130is spaced apart from the base110. In other words, the base110is spaced apart from the actuator130. This is to achieve a structure in which little vibration is transmitted to the base110when the actuator130vibrates. 
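Since pressing the operating panel 160 moves the movable part 140 by only several tens of μm, turning gap-sensor readings into a press event is essentially a thresholding problem. The sketch below illustrates one way to do this; the baseline gap, the threshold, and the hysteresis values are assumed numbers, not values from the description.

```python
# Small sketch of turning gap-sensor readings into a press/release event.
# The baseline gap, the press threshold, and the hysteresis value are assumed;
# the description only states that the movable part moves by several tens of
# micrometres in the -Z direction when the operating panel is pressed.
def detect_press(gap_um, baseline_um, press_threshold_um=30.0,
                 release_threshold_um=15.0, pressed=False):
    """Return the new pressed/released state for one gap-sensor sample."""
    displacement = baseline_um - gap_um        # positive when pressed in the -Z direction
    if not pressed and displacement >= press_threshold_um:
        return True                            # press detected
    if pressed and displacement <= release_threshold_um:
        return False                           # release detected (hysteresis)
    return pressed

state = False
for sample in (500.0, 495.0, 462.0, 468.0, 490.0):   # gap readings in micrometres
    state = detect_press(sample, baseline_um=500.0, pressed=state)
    print(sample, "->", "pressed" if state else "released")
```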
The drive coils133are wound in the XY plane and fixed to the upper surface of the bottom plate132A of the bottom yoke132by bonding, screwing, and so forth. When a clockwise current is applied to the drive coils133in plan view, a magnetic flux to penetrate through the center of the drive coils133downwards is produced. Also, when a counterclockwise current is applied to the drive coils133in plan view, a magnetic flux to penetrate through the center of the drive coils133upwards is produced. The magnet134is a multi-pole magnetizing-type permanent magnet and has four poles (an N pole134A, an S pole134B, an N pole134C, and an S pole134D) arranged from the −X-direction side to the +X-direction side. The boundary between the N pole134A and the S pole134B is offset towards the +X direction, with respect to the center of the −X-side drive coil133in the X direction. Also, the boundary between the N pole134C and the S pole134D is offset towards the −X direction, with respect to the center of the +X-side drive coil133in the X direction. The holder135is a member that is formed with a non-magnetic body and holds the magnet134. Holding the magnet134, the holder135is fixed to the step parts132B1of the side walls132B of the bottom yoke132, via the springs136, screwing, and so forth. The springs136hold the holder135against the bottom yoke132elastically, and can expand and contract in the X direction. The screws137are provided in order to fix the top yoke131to the recess part in the lower surface of the movable part140and to the recess part of the lower surface of the movable part140via the washers137A. Given such an actuator130, when a clockwise current is applied to the drive coils133in plan view, a magnetic flux to penetrate through the center of the drive coils133downwards is produced. Consequently, the upper end sides of the drive coils133become S poles and exert a magnetic attraction force on the N pole134A and the N pole134C, and a force in the +X direction acts on the magnet134. Also, when a counterclockwise current is applied to the drive coils133in plan view, a magnetic flux to penetrate through the center of the drive coils133upwards is produced. Consequently, the upper end sides of the drive coils133become N poles and exert a magnetic attraction force on the S pole134B and the S pole134D, and a force in the −X direction acts on the magnet134. By alternately applying a clockwise current and a counterclockwise current to the drive coils133in plan view, it is possible to make a force in the +X direction and a force in the −X direction act on the magnet134alternately. Also, the magnet134is attached to the bottom yoke132, via the springs136, together with the holder135, and the springs136can expand and contract in the X direction. It then follows that, by alternately applying a clockwise current and a counterclockwise current to the drive coils133in plan view, it is possible to make the magnet134and the holder135vibrate in the X direction with respect to the top yoke131and the bottom yoke132. The top yoke131of the actuator130is fixed to the lower surface of the movable part140, and vibrates the movable part140. The electrostatic sensor150and the operating panel160are mounted on the upper side of the movable part140, and therefore the actuator130vibrates the vibration-target object, which is constituted by the movable part140, the electrostatic sensor150, and the operating panel160. 
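The alternating clockwise and counterclockwise coil currents described above can be viewed as a bipolar drive waveform at the desired vibration frequency. The short sketch below only generates such a waveform; the frequency, amplitude, and sample rate are assumed values, and driving the actual coil hardware is outside its scope.

```python
# Illustrative generation of a bipolar drive waveform for the drive coils 133.
# Positive samples stand for a clockwise current (force in +X on the magnet 134),
# negative samples for a counterclockwise current (force in -X). The frequency,
# amplitude, and sample rate are assumed values, not taken from the description.
import math

def drive_waveform(freq_hz=150.0, amplitude_a=0.2, sample_rate_hz=8000, cycles=2):
    n = int(sample_rate_hz * cycles / freq_hz)
    return [amplitude_a * math.sin(2.0 * math.pi * freq_hz * i / sample_rate_hz)
            for i in range(n)]

samples = drive_waveform()
print(len(samples), "samples; first few:", [round(s, 3) for s in samples[:5]])
```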
The movable part140is made of resin, for example, and is a member having a rectangular and thin, plate-like shape in plan view. In the movable part140, the actuator130is attached to the lower surface, and the electrostatic sensor150and the operating panel160are provided on the upper surface, in this order. Also, the rubber members180S are provided, in a compressed state, between both side surfaces of the movable part140in the X direction and the inner surfaces of the projecting parts115of the base110on both ±X-direction sides. As for the side surfaces that run along the four sides of the movable part140in plan view, only the side surfaces on both ±X-direction sides are in contact with the base110via the rubber members180S alone. Also, the rubber members180L are provided between the end parts along the four sides of the lower surface of the movable part140and the step parts114of the base110. Consequently, the lower surface of the movable part140is in contact with the base110only through the rubber member180L. Also, the rubber members180U are provided between the end parts along the four sides of the upper surface of the movable part140and the offset surface172of the bezel170. Consequently, the upper surface of the movable part140is in contact with the bezel170only through the rubber member180U. Note that the guides113are inserted in the grooves145A of the guides145of the movable part140for alignment with the base110, and the lower ends of the guides145are inserted in the opening parts110B of the base110, but the guides145and the guides113are not in contact with each other. Consequently, the movable part140is in contact with the base110and the bezel170only via the rubber members180S,180L, and180U. That is, the base110is elastically connected with the movable part140. The electrostatic sensor150is fixed on the upper surface of the movable part140. The upper surface and the side surfaces of the electrostatic sensor150are covered by the operating panel160, and the operating panel160is fixed to the movable part140by screwing or the like, thereby fixing the electrostatic sensor150on the upper surface of the movable part140. A touch pad is an example of the electrostatic sensor150, which detects whether or not the operating panel160is operated by using an operating medium, and detects the position where the operation is performed, based on the change in capacitance. The operating medium is, for example, a living body's finger or hand, or a tool such as a stylus pen. As for the operation on the operating panel160, there are cases where the operating panel160is directly operated by using an operating medium, and cases where a cover or the like is additionally provided on top of the operating panel160and the operating panel160is operated indirectly via the cover or the like. The operating panel160is a resin panel having a rectangular shape in plan view, and is provided so as to cover the upper surface and the side surfaces of the electrostatic sensor150. The electrostatic sensor150detects the operations via the operating panel160. For doing this, the operating panel160is non-metallic, and, for example, made of resin. The bezel170is a frame-like member that is rectangular and annular in plan view, and, as shown inFIG.2, has a cross-section shaped like the letter L. The bezel170has a lower surface171and an offset surface172. The offset surface172is offset inward and upward with respect to the lower surface171. 
The lower surface171and the offset surface172are both rectangular annular surfaces when viewed from the −Z-direction side. The bezel170is attached to the upper surfaces112A of the side walls112of the base110in a state in which the offset surface172is abutted against the upper surfaces112A via rubber members (not shown), keeping a distance from the operating panel160and surrounding the operating panel160. The bezel170is spaced apart from the operating panel160, and thus is not in contact with the operating panel160. Also, the bezel170is not in contact with the movable part140either. The bezel170is fixed to the base110by the screws175in a state in which the offset surface172is abutted against the upper surfaces112A of the side walls112of the base110. InFIG.3, for example, the four corners of the base110and the bezel170are fixed by using four screws175, but they may be fixed with more screws175or with fewer screws175. This tactile-sensation providing device100is designed so that, as a condition (1), the resonance frequency of the first vibrating system is ⅔ or less of the resonance frequency of the second vibrating system. This is to reduce the vibration that is transmitted to the base110when the actuator130vibrates, while allowing the vibration-target object to be vibrated sufficiently. Also, in order to increase this effect, a condition (2) may be set forth that the mass of the vibrating body be set to be less than or equal to the mass of the vibration-target object. Also, a condition (3) may be set forth that the resonance frequency of the first vibrating system be set to 50 Hz or above. Considering, for example, that the tactile-sensation providing device100may be mounted on a vehicle, it is well known that a travelling vehicle produces road noise (vibration) mainly around the frequency of 50 Hz or below. So, when, for example, the tactile-sensation providing device100is mounted on a vehicle by attaching the base110to the center console or somewhere in the vehicle's interior, the resonance frequency of the first vibrating system is set to 50 Hz or above, to prevent the road noise from being transmitted and added to the vibration of the vibration-target object of the first vibrating system. This is, in other words, to prevent the vibration of the first vibrating system from being affected by the vehicle's vibration due to the road noise. Also, a condition (4) may be set forth that the resonance frequency of the second vibrating system be set in a range of 80 Hz or above to 320 Hz or below. This is because human sensory organs are able to perceive vibration best in the frequency band of 80 Hz to 320 Hz. Also, a condition (5) may be set forth that the vibration-target object be vibrated in the X direction, and the first vibrating system and the second vibrating system be each made a vibrating system that vibrates along the X direction. The X direction is an example of a predetermined direction. Also, a condition (6) may be set forth that the resonance frequency of the first vibrating system be set to ⅓ or less of the resonance frequency of the second vibrating system. This is a condition for setting the resonance frequency of the first vibrating system in a more preferable range than condition (1). This is, furthermore, a condition set forth in order to more effectively reduce the vibration that is transmitted to the base110when the actuator130vibrates, while allowing the vibration-target object to be vibrated more effectively. 
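Worked numerically, the ratio conditions are straightforward. The block below restates conditions (1), (4), and (6) for the resonance frequency Fc2 = 150 Hz used in the simulations that follow, and adds a standard single-degree-of-freedom estimate (not written out in the description) relating Fc1 to the effective spring constant K of the elastic bodies and the mass M of the vibration-target object.

```latex
% Worked example for F_{c2} = 150 Hz. The last line is a standard
% single-degree-of-freedom estimate added for orientation only.
\begin{align*}
\text{condition (1):}\quad & F_{c1} \le \tfrac{2}{3}\,F_{c2} = 100\ \mathrm{Hz},\\
\text{condition (6):}\quad & F_{c1} \le \tfrac{1}{3}\,F_{c2} = 50\ \mathrm{Hz},\\
\text{condition (4):}\quad & 80\ \mathrm{Hz} \le F_{c2} \le 320\ \mathrm{Hz},\\
\text{estimate:}\quad & F_{c1} \approx \frac{1}{2\pi}\sqrt{\frac{K}{M}}
  \;\;\Rightarrow\;\; K \approx (2\pi F_{c1})^{2}\,M .
\end{align*}
```

Combined with condition (3), choosing Fc2 = 150 Hz under the stricter condition (6) pins Fc1 at 50 Hz; with the 0.4 kg vibration-target mass used for FIGS. 8A and 8B, the estimate above suggests an effective X-direction stiffness on the order of (2π·50)² × 0.4 ≈ 3.9 × 10⁴ N/m, which, as noted below, is dominated by the rubber members 180S.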
Also, a condition (7) may be set forth that the quality factor (Q factor) represented by the following equation (1), using spring constant K and viscosity loss C of the first vibrating system and mass M of the vibration-target object, be set to 1 or greater and 10 or less:

Q = (MK)^(1/2)/C   (Equation 1)

FIG. 7 is a diagram that illustrates, schematically, the structure of the tactile-sensation providing device 100. The first vibrating system 10 here includes the base part 11, the vibration-target object 12, and the elastic body 13. The vibration-target object 12 is connected with the base part 11 via the elastic body 13. The base part 11 is composed of, for example, the base 110 and the bezel 170. Here, the base part 11 is shown as one plate-like member. The vibration-target object 12 is composed of, for example, the top yoke 131, the bottom yoke 132, the drive coils 133, the springs 136, the screws 137, the washers 137A, the movable part 140, the electrostatic sensor 150, and the operating panel 160. Here, in the vibration-target object 12, the top yoke 131, the bottom yoke 132, the drive coils 133, the screws 137, and the washers 137A are shown collectively as a frame-like member, having the opening part 12A in the center, in XZ-plane view. The springs 136 of the vibration-target object 12 are each shown as a pair of a coil and a damper. Also, the movable part 140, the electrostatic sensor 150, and the operating panel 160 of the vibration-target object 12 are shown as a plate-like member. A frame-like member including the top yoke 131, the bottom yoke 132, the drive coils 133, the screws 137, and the washers 137A is fixed under the plate-like member including the movable part 140, the electrostatic sensor 150, and the operating panel 160. The elastic body 13 connects between the base part 11 and the vibration-target object 12 elastically, and includes, for example, the rubber members 180S, 180L, and 180U. Here, the rubber members 180S, 180L, and 180U are each shown as a pair of a coil and a damper. Also, the second vibrating system 20 includes a vibrating body 21 and the vibration-target object 12. The vibrating body 21 here is composed of, for example, the magnet 134 and the holder 135. Here, the vibrating body 21 is shown as one member, and the vibrating body 21 is shown to be held by the springs 136 in the opening part 12A of the vibration-target object 12. The vibration-target object 12 is composed of, for example, the top yoke 131, the bottom yoke 132, the drive coils 133, the springs 136, the screws 137, the washers 137A, the movable part 140, the electrostatic sensor 150, and the operating panel 160, and the second vibrating system 20 is therefore composed of the actuator 130, the movable part 140, the electrostatic sensor 150, and the operating panel 160 (see, for example, FIG. 6). When a clockwise current is applied to the drive coils 133 in plan view, as described earlier, a force in the +X direction acts on the magnet 134. When a counterclockwise current is applied to the drive coils 133 in plan view, as described earlier, a force in the −X direction acts on the magnet 134. By alternately applying a clockwise current and a counterclockwise current to the drive coils 133, forces in the +X and −X directions act alternately on the magnet 134, so that it is possible to make the vibration-target object 12 vibrate in the ±X directions.
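Taken together, conditions (1) through (7) amount to a small set of design checks. The helper below is a minimal sketch of such a checker; only the inequalities and Equation (1) come from the description, while the function, its argument names, and the example numbers are illustrative. Condition (5), vibrating along one predetermined direction, is structural and is not checked numerically.

```python
# Minimal design-rule checker for conditions (1)-(4), (6), and (7). Only the
# inequalities and Q = sqrt(M*K)/C (Equation 1) come from the description; the
# function itself, its names, and the example values are assumed.
import math

def check_conditions(fc1_hz, fc2_hz, m_vibrating_body_kg, m_target_kg,
                     k_n_per_m, c_ns_per_m):
    q = math.sqrt(m_target_kg * k_n_per_m) / c_ns_per_m        # Equation (1)
    return {
        "(1) Fc1 <= (2/3) Fc2": fc1_hz <= (2.0 / 3.0) * fc2_hz,
        "(2) vibrating-body mass <= target mass": m_vibrating_body_kg <= m_target_kg,
        "(3) Fc1 >= 50 Hz (road-noise margin)": fc1_hz >= 50.0,
        "(4) 80 Hz <= Fc2 <= 320 Hz": 80.0 <= fc2_hz <= 320.0,
        "(6) Fc1 <= (1/3) Fc2 (stricter option)": fc1_hz <= fc2_hz / 3.0,
        "(7) 1 <= Q <= 10": 1.0 <= q <= 10.0,
        "Q factor": round(q, 2),
    }

# Example with the masses used for FIGS. 8A/8B and an assumed stiffness/damping.
for rule, result in check_conditions(fc1_hz=50.0, fc2_hz=150.0,
                                     m_vibrating_body_kg=0.06, m_target_kg=0.4,
                                     k_n_per_m=3.9e4, c_ns_per_m=25.0).items():
    print(rule, "->", result)
```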
FIGS.8A and8Bare diagrams that each illustrate the respective relationships between the vibration frequency and the acceleration in the vibration-target object and the base part.FIG.8Ashows the acceleration of the vibration-target object with respect to vibration frequency Fa of the vibration-target object.FIG.8Bshows the acceleration of the base part with respect to vibration frequency Fb of the base part. Vibration frequency Fa of the vibration-target object is the frequency at which the vibration-target object vibrates in the X direction. Also, vibration frequency Fb of the base part is the frequency at which the base part vibrates in the X direction. The characteristics shown inFIGS.8A and8Bare obtained, in a simulation, by setting resonance frequency Fc1of the first vibrating system to four resonance frequencies, namely 50 Hz, 75 Hz, 100 Hz, and 150 Hz, and by setting resonance frequency Fc2of the second vibrating system to 150 Hz. That is, when resonance frequency Fc1of the first vibrating system is 50 Hz, 75 Hz, 100 Hz, and 150 Hz, these are ⅓, ½, ⅔, and whole of resonance frequency Fc2of the second vibrating system, respectively.FIGS.8A and8Bshow simulation results in which the mass of the vibrating body is 0.06 kg, the mass of the vibration-target object is 0.4 kg, and the mass of the base part is 1 kg. Setting resonance frequency Fc1of the first vibrating system to predetermined frequencies such as, for example, four frequencies of 50 Hz, 75 Hz, 100 Hz, and 150 Hz, is made possible mainly by setting the spring constant of the elastic bodies (for example, the rubber members180S,180L, and180U) that elastically connect between the vibration-target object and the base part included in the first vibrating system. Also, given that the vibration direction of the first vibrating system is the X-axis direction, resonance frequency Fc1of the first vibrating system is mainly determined by the spring constant of the rubber members180S, which easily deform elastically in the X direction among the rubber members180S,180L, and180U. This is because the rubber members180L and180U are deformed in shear directions with respect to the X direction, and their spring constants in the X direction are about 1/10 of the spring constant of the rubber member180S in the X direction. Also, strictly speaking, the vibration-target object and the base part (for example, the movable part140, the electrostatic sensor150, and the operating panel160) included in the first vibrating system also have levels of elasticity, but these are negligible compared to the elasticity of the rubber members180S,180L, and180U. Also, setting resonance frequency Fc2of the second vibrating system to a predetermined frequency such as 150 Hz is mainly made possible by setting the vibration characteristics of the actuator130, and setting the size, Young's modulus, and so forth of the movable part140, the electrostatic sensor150, and the operating panel160. In the characteristics illustrated inFIG.8A, the acceleration of the vibration-target object assumes a maximal value when vibration frequency Fa is 40 Hz or above and 65 Hz or below, assumes a minimal value at 50 Hz or above and 160 Hz or below, assumes a second maximal value (peak) of approximately 30 dB at 200 Hz or above and 500 Hz or below, and decreases gradually at still higher frequencies. 
According to this vibration-target object's acceleration, when vibration frequency Fa has higher frequencies than the peak, the decrease compared to the peak is kept to approximately 6 dB to 8 dB, and the changes of the vibration-target object's acceleration in response to the changes of vibration frequency Fa are relatively insignificant, so that it is possible to achieve a design whereby the vibration-target object can be vibrated sufficiently, and at a desired intensity. That is, the range from around the peak of vibration frequency Fa to higher frequencies is a frequency band suitable for vibrating the vibration-target object. Also, as resonance frequency Fc1is increased from 50 Hz to 75 Hz, to 100 Hz, and then to 150 Hz, the frequencies at which vibration frequency Fa has the maximal value, the minimal value, and then the peak value tend to shift to higher frequencies. Human sensory organs perceive vibration well in the frequency band of 80 Hz to 500 Hz, and perceive vibration even better in the frequency band of 80 Hz to 320 Hz. A vibration pattern in which resonance frequency Fc1is 150 Hz is not preferable because the minimal value occurs at about 150 Hz, and it is difficult to vibrate the vibration-target object in the frequency band which human sensory organs perceive well. On the other hand, a vibration pattern in which, as resonance frequency Fc1decreases from 100 Hz, to 75 Hz, and then to 50 Hz in order, the minimal value of vibration frequency Fa also decreases towards 80 Hz or below, is more preferable because it is easy to vibrate the vibration-target object in the frequency band which human sensory organs perceive well. Also, as resonance frequency Fc1becomes smaller, the peak of vibration frequency Fa decreases in the range of 500 Hz or below, so that it is possible to make use of a wider frequency band in the range from around the peak, which is suitable to vibrate the vibration target object, to higher frequencies. In other words, from the perspective of allowing the vibration-target object to vibrate effectively, resonance frequency Fc1is preferably 100 Hz or below, more preferably 75 Hz or below, and even more preferably 50 Hz or below. Also, the acceleration of the base part shows the characteristics shown inFIG.8B, in which the acceleration of the base part assumes a maximal value when vibration frequency Fb of the base part is 40 Hz or above and 65 Hz or below, assumes a minimal value when vibration frequency Fb at 90 Hz or above and 200 Hz or below, assumes a second maximal value of about 10 dB when vibration frequency Fb at 200 Hz or above and 500 Hz or below, and decreases gradually at still higher frequencies. The acceleration of the base part when resonance frequency Fc1is 50 Hz, 75 Hz, and 100 Hz is 10 dB or less, if vibration frequency Fb is 80 Hz or above and 500 Hz or below. It is therefore made clear that the vibration intensity of the base part attenuates by about 20 dB in comparison to the vibration intensity of the vibration-target object, in the frequency band of 80 Hz to 500 Hz, which human sensory organs perceive well. That is, it is made clear that the vibration of the base part attenuates when resonance frequency Fc2of the second vibrating system is 150 Hz and resonance frequency Fc1of the first vibrating system is 50 Hz, 75 Hz and 100 Hz. 
Also, as shown inFIG.8B, it is found out that, in the frequency band of 80 Hz to 500 Hz, which human sensory organs perceive well, as resonance frequency Fc1of the first vibrating system becomes smaller, the vibration intensity of the base part decreases. In other words, from the perspective of reducing the transmission of vibration to the base part, resonance frequency Fc1is preferably 100 Hz or below, more preferably 75 Hz or below, and even more preferably 50 Hz or below. From the above, in accordance with above as condition (1), it is made clear that, by setting resonance frequency Fc1of the first vibrating system to be ⅔ or less of resonance frequency Fc2of the second vibrating system, that is, by setting resonance frequency Fc1of the first vibrating system to 100 Hz or below, when the actuator130is driven to vibrate the vibrating body, it is possible to reduce the vibration that is transmitted to the base part, while allowing the vibration-target object to be vibrated sufficiently. Also, it is found out that, by setting resonance frequency Fc1of the first vibrating system to be ½ or less of resonance frequency Fc2of the second vibrating system, that is, by setting resonance frequency Fc1of the first vibrating system to 75 Hz or below, it is possible to reduce the vibration that is transmitted to the base part more effectively, while allowing the vibration-target object to be vibrated more effectively. Furthermore, in accordance with above condition (6), it is made clear that, by setting resonance frequency Fc1of the first vibrating system to be ⅓ or less of resonance frequency Fc2of the second vibrating system, that is, by setting resonance frequency Fc1of the first vibrating system to 50 Hz or below, it is possible to reduce the vibration that is transmitted to the base part even more effectively, while allowing the vibration-target object to be vibrated more effectively. Furthermore, human sensory organs can perceive vibration in the frequency band of 80 Hz to 500 Hz, and perceive vibration best in the frequency band of 80 Hz to 320 Hz. It then follows that resonance frequency Fc2of the second vibrating system has only to be set in the range of 80 Hz or above and 500 Hz or below, but, in accordance with above condition (4), it is more preferable to set resonance frequency Fc2of the second vibrating system in the range of 80 Hz or above and 320 Hz or below. Also, the tactile-sensation providing device100has a vibrating system in which the vibration-target object is vibrated in the X direction, and in which the first vibrating system and the second vibrating system vibrate along the X direction. This is as mentioned earlier as condition (5). Since the actuator130has the drive coils133and the magnet134arranged as shown inFIG.6, it is possible to easily achieve a structure in which the vibration-target object is vibrated in the X direction, and in which the first vibrating system and the second vibrating system vibrate along the X direction. FIG.9is a diagram that illustrates the relationship between vibration frequency Fa and the acceleration of the vibration-target object when the mass of the vibrating body is changed.FIG.10is a diagram that illustrates the relationship between vibration frequency Fb and the acceleration of the base part when the mass of the vibrating body is changed. 
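The behavior summarized above for FIGS. 8A and 8B can be reproduced qualitatively with a lumped three-mass model: the vibrating body driven against the vibration-target object through the second system's spring and damper, and the vibration-target object coupled to the base part through the first system's spring and damper. The sketch below uses simplifying assumptions that are not from the description: the base part is treated as free rather than mounted, the stiffnesses are back-calculated from Fc1 and Fc2 with simple one- and two-mass formulas, and the damping is set from assumed Q values, so its numbers will not match the patent's simulation exactly.

```python
# Qualitative lumped-parameter sketch of the two-stage vibrating system.
# Assumptions not taken from the description: free (unmounted) base, stiffness
# back-calculated from Fc1/Fc2, and damping set from assumed Q values.
import numpy as np

def response_db(freqs_hz, fc1=50.0, fc2=150.0, m=0.06, M=0.4, Mb=1.0, q1=5.0, q2=10.0):
    k1 = M * (2 * np.pi * fc1) ** 2                   # first system: target on the rubber members
    mu = m * M / (m + M)                              # reduced mass of the body/target pair
    k2 = mu * (2 * np.pi * fc2) ** 2                  # second system: magnet on the springs 136
    c1 = np.sqrt(M * k1) / q1                         # Equation (1) rearranged for C
    c2 = np.sqrt(mu * k2) / q2
    Mmat = np.diag([m, M, Mb])
    Kmat = np.array([[k2, -k2, 0.0], [-k2, k1 + k2, -k1], [0.0, -k1, k1]])
    Cmat = np.array([[c2, -c2, 0.0], [-c2, c1 + c2, -c1], [0.0, -c1, c1]])
    force = np.array([1.0, -1.0, 0.0])                # actuator reaction pair (unit amplitude)
    target_db, base_db = [], []
    for f in freqs_hz:
        w = 2 * np.pi * f
        Z = -w**2 * Mmat + 1j * w * Cmat + Kmat       # dynamic stiffness matrix
        x = np.linalg.solve(Z, force)                 # steady-state displacement amplitudes
        a = -w**2 * x                                 # steady-state acceleration amplitudes
        target_db.append(20 * np.log10(abs(a[1])))
        base_db.append(20 * np.log10(abs(a[2])))
    return np.array(target_db), np.array(base_db)

freqs = np.linspace(20.0, 500.0, 25)
target_db, base_db = response_db(freqs)
for f, t, b in zip(freqs[::6], target_db[::6], base_db[::6]):
    print(f"{f:6.1f} Hz  target {t:7.1f} dB  base {b:7.1f} dB")
```

Sweeping the fc1 and m arguments gives a quick way to compare against the tendencies the description reports for FIGS. 8A, 8B, 9, and 10.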
The characteristics shown inFIG.9andFIG.10are obtained, in simulations, by setting resonance frequency Fc1of the first vibrating system to four resonance frequencies, namely 50 Hz, 75 Hz, 100 Hz, and 150 Hz, and by setting resonance frequency Fc2of the second vibrating system to 150 Hz. FIG.9andFIG.10show five respective simulation results, obtained by increasing the mass of the vibrating body stepwise from 0.01 kg to 0.05 kg, to 0.2 kg, to 0.8 kg, and then to 4 kg. Note that the mass of the base part is set to 10 kg assuming that the tactile-sensation providing device100is mounted in a vehicle, and the mass of the vibration-target object is set to 0.2 kg. As shown inFIG.9, when the mass of the vibrating body is increased stepwise from 0.01 kg to 0.05 kg, to 0.2 kg, to 0.8 kg, and then to 4 kg, the vibration-target object's acceleration changes following changes of resonance frequency Fc1of the first vibrating system only when the mass of the vibrating body is 0.2 kg, and changes little when the mass of the vibrating body is 0.8 kg and 4 kg. This indicates that, when the mass of the vibrating body exceeds 0.2 kg, that is, when the mass of the vibrating body exceeds the mass of the vibration-target object, influence of the spring constant of the elastic bodies (the rubber members180S,180L, and180U) connecting between the base part and the vibration-target object, which is the main factor in determining resonance frequency Fc1of the first vibrating system, is unlikely. Also, as the mass of the vibrating body increases, the acceleration of the vibration-target object decreases on the whole, regardless of vibration frequency Fa. This is likely to be because the larger the mass of the vibrating body is with respect to the mass of the vibration-target object, the greater the energy that is accumulated in the vibrating body is with respect to the energy that is accumulated in the vibration-target object. Therefore, from the perspective of allowing the vibration-target object to vibrate effectively, it is preferable if the mass of the vibrating body is small compared to the mass of the vibration-target object, and it is particularly preferable if the mass of the vibrating body is less than or equal to the mass of the vibration-target object. As shown inFIG.10, when the mass of the vibrating body is increased stepwise from 0.01 kg to 0.05 kg, to 0.2 kg, to 0.8 kg, and then to 4 kg, it is clear that, in all cases of these masses, the acceleration of the base part changes following changes of resonance frequency Fc1of the first vibrating system. Also, as for the band of 80 Hz to 500 Hz, it is made clear that the acceleration of the base part shown inFIG.10is sufficiently reduced with respect to the acceleration of the vibration-target object shown inFIG.9. Consequently, in terms of the relationship to the base110's vibration frequency Fb and acceleration, it is made clear that the mass of the vibrating body can be set to any of 0.01 kg, 0.05 kg, 0.2 kg, 0.8 kg and 4 kg. Therefore, asFIG.9andFIG.10make clear, in accordance with condition (2), it is preferable if the mass of the vibrating body is less than or equal to the mass of the vibration-target object. FIGS.11A and11Bare diagrams that illustrate the respective vibration frequency-vs-acceleration characteristics of the vibration-target object and the base part.FIGS.11A and11Billustrate multiple characteristics obtained by changing the Q factor of the first vibrating system stepwise from 15 to 10, to 5, to 2, and then to 1. 
Also, the characteristics shown inFIGS.11A and11Bare obtained in simulations by setting resonance frequency Fc1of the first vibrating system to 50 Hz and setting resonance frequency Fc2of the second vibrating system to 150 Hz. Using spring constant K, viscosity loss C, and mass M of the vibration-target object in the first vibrating system, the Q factor of the first vibrating system can be represented by the following equation (1):

Q = (M×K)^(1/2)/C  (Equation 1)

FIG.11Ashows vibration frequency Fa-vs-acceleration characteristics of the vibration-target object.FIG.11Amakes clear the tendency that, when the Q factor is changed stepwise from 15 to 10, to 5, to 2, and then to 1, larger Q factors achieve greater acceleration. Also, it is clear that there is almost no difference in acceleration between Q factors of 15 and 10. These tendencies are particularly obvious in the range of 80 Hz to 500 Hz, which human sensory organs perceive well. Therefore, it is made clear that larger Q factors are preferable from the perspective of the vibration-target object's vibration frequency Fa-vs-acceleration characteristics, that Q factors larger than 10 cease to provide greater effect, and that the vibration-target object can be vibrated sufficiently even when the Q factor is 1. FIG.11Bshows the vibration frequency Fb-vs-acceleration characteristics of the base part.FIG.11Bmakes clear the tendency that, when the Q factor is changed stepwise from 15 to 10, to 5, to 2, and then to 1, the greater the Q factor, the more the base part's acceleration is reduced. This tendency is particularly obvious in the range of 80 Hz to 500 Hz, which human sensory organs perceive well. However, it is found out that, in the band of 50 Hz or below, which is more prone to be affected by road noise, a maximal value providing high acceleration occurs when the Q factor is 15. Also, it is found out that, in the range of 80 Hz to 500 Hz, the acceleration of the base part shown inFIG.11Bis sufficiently reduced with respect to the acceleration of the vibration-target object shown inFIG.11A. Therefore, while it is preferable to make the Q factor large within the range of 10 or less from the perspective of the base part's vibration frequency Fb-vs-acceleration characteristics, it is nevertheless found out that the vibration of the base part can be reduced even when the Q factor is 1. As described above, as mentioned earlier as condition (7), the results ofFIGS.11A and11Bmake it clear that the Q factor of the first vibrating system is preferably 1 or greater, and 10 or less. As described above, by setting resonance frequency Fc1of the first vibrating system to be ⅔ or less of resonance frequency Fc2of the second vibrating system, it is possible to achieve a structure in which little vibration is transmitted to the base part, while allowing the vibration-target object to be vibrated sufficiently. Therefore, it is possible to provide a tactile-sensation providing device100that reduces the transmission of vibration to the base part, while allowing the vibration-target object to be vibrated sufficiently. Note that, although an example of using an electrostatic sensor150has been described above, it is equally possible to use a touch panel through which light can transmit, instead of the electrostatic sensor150, and, furthermore, provide a display panel on top of the touch panel, and press and operate the GUIs (Graphical User Interfaces) displayed on the display panel.
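As a supplementary numerical illustration of equation (1) above (this is not part of the simulated characteristics; it assumes the usual single-degree-of-freedom relation F_{c1} = \frac{1}{2\pi}\sqrt{K/M} and takes M as the 0.2 kg vibration-target object mass used in the simulations above), a resonance frequency Fc1of 50 Hz corresponds approximately to:

K = M(2\pi F_{c1})^2 \approx 0.2 \times (2\pi \times 50)^2 \approx 2.0 \times 10^4\ \mathrm{N/m}

\sqrt{MK} = M \cdot 2\pi F_{c1} \approx 62.8\ \mathrm{N\cdot s/m}

C = \frac{\sqrt{MK}}{Q} \approx 6.3\ \mathrm{N\cdot s/m}\ (Q = 10) \quad\text{to}\quad \approx 62.8\ \mathrm{N\cdot s/m}\ (Q = 1)

Under these assumptions, the preferred Q factor range of 1 to 10 corresponds to a viscosity loss C on the order of roughly 6 N·s/m to 63 N·s/m.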
Also, although the relationship between vibration frequency and acceleration for the vibration-target object and/or the base part has been described above with reference toFIG.8AtoFIG.11B for the case in which resonance frequency Fc2of the second vibrating system is set to 150 Hz, the same is true for cases in which resonance frequency Fc2is not 150 Hz. Also, although cases have been described above in which the vibration direction of the actuator130is the X direction, the vibration direction of the actuator130is by no means limited to the X direction. For example, the vibration direction of the actuator130may be the Z direction or any other direction. Although the tactile-sensation providing device according to an example embodiment of the present invention has been described above, the present invention is by no means limited to the embodiment specifically disclosed herein, and a variety of alterations and changes are possible without departing from the scope of the following claims. | 45,790 |
11861068 | DETAILED DESCRIPTION The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail. Many computing devices, such as mobile devices and wearable devices, employ a point-based user input scheme as a primary means of receiving user input. For example, smart phones often include a touch-sensitive display that can receive point-based touch input from a user. These touch-sensitive displays or touchscreens detect user input data resulting from physical contact with the display such as contact from a user's finger, a stylus, or a pen. The user input data can include positional data, force data (e.g., force of a touch on the touchscreen), temporal data (e.g., timing of touchscreen touches or events), and other data. Other input schemes include the use of motion capture to track positional information, such as a motion of a user's hand, fingers, or a stylus in two or three dimensions. Conventional user interface designs employ visual user interface elements positioned on the display that the user visually identifies and manipulates. Over time, users become adept at interacting with these user interfaces through repeated practice. One drawback to this type of approach is that a change in layout or positioning of an interactive user interface element reduces user efficiency as a user relearns how to interact with the new design. Another drawback to this type of approach is that providing more functionality uses more display spaces (e.g., more buttons to perform more functions). Designers of these conventional user interfaces often face a trade-off between a spacious, easy-to-use design and providing robust functionality with many user interface elements. In various example embodiments, a radial slide gesture provides an intuitive, space-efficient technique for receiving and interpreting user input. In an example embodiment, a gesture navigation system receives user input data that indicates a continuous physical user interaction (herein, also referred to as a gesture) associated with a display screen of a user device. For instance, the user device may be a smart phone that includes a touchscreen display to track, monitor, or otherwise capture touch data from user input. The gesture navigation system detects an initial point from the user input data such as when the user begins a particular gesture. Subsequently, the gesture navigation system detects a current point from the user input data. The gesture navigation system then determines a radius distance based on the initial point and the current point. For instance, the radius distance is determined from a circle that includes the current point and is centered about the initial point. The gesture navigation system selects an action based on the radius distance. For example, if the radius distance falls within a particular range, the gesture navigation system selects an action corresponding to the particular range. 
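By way of a non-limiting sketch of the radius-distance computation and range-based selection just described (the Kotlin names Point, ActionRange, radiusDistance, and selectAction, as well as the pixel values, are illustrative assumptions rather than part of any particular embodiment), the selection could be expressed as follows:

import kotlin.math.hypot

// A point on the touch-sensitive display, in pixels.
data class Point(val x: Float, val y: Float)

// An action associated with a contiguous band of radius distances
// (the region between two concentric circle boundaries).
data class ActionRange(val minRadius: Float, val maxRadius: Float, val action: String)

// Radius of the circle centered at the initial point that passes through
// the current point; the angle of the slide is deliberately ignored.
fun radiusDistance(initial: Point, current: Point): Float =
    hypot(current.x - initial.x, current.y - initial.y)

// Returns the action whose range contains the radius distance, or null when
// the gesture ends inside the innermost band or past all defined ranges.
fun selectAction(initial: Point, current: Point, ranges: List<ActionRange>): String? {
    val r = radiusDistance(initial, current)
    return ranges.firstOrNull { r >= it.minRadius && r < it.maxRadius }?.action
}

fun main() {
    // Example: a no-action band up to 40 px, then three successive ranges.
    val ranges = listOf(
        ActionRange(40f, 140f, "favorite"),
        ActionRange(140f, 240f, "text chat"),
        ActionRange(240f, 340f, "video chat")
    )
    val initial = Point(200f, 600f)
    val terminal = Point(200f, 420f)                  // slide of 180 px; direction is irrelevant
    println(selectAction(initial, terminal, ranges))  // prints "text chat"
}

Because only the distance from the initial point matters in this sketch, the same action is selected regardless of the direction in which the slide is performed.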
Once the gesture navigation system selects the action, the gesture navigation system performs or invokes the selected action upon termination of the gesture (e.g., the user lifting their finger from the touchscreen display to indicate completion or end of the gesture). In some embodiments, the gesture navigation system presents a visual indication of an available action as the current point of the gesture transgresses a boundary of a particular range for the available action. For example, the visual indication can include a description of the action that is available to perform (e.g., a textual or graphical description). In further example embodiments, the gesture navigation system detects an initiation of the radial slide gesture based on a press and hold gesture. For example, the user may touch the touchscreen of the user device and substantially refrain from moving the current point of the touch for a threshold period of time to initiate the radial slide gesture. After the gesture navigation system detects the press and hold gesture, the gesture navigation system can detect the user performing the radial slide gesture to select a particular action among multiple actions. In this way, the radial slide gesture is first affirmatively initiated by the user to prevent an undesired or accidental selection of an action via the radial slide gesture. In certain embodiments, a portion of the user interface is deactivated after the radial slide gesture is initiated to prevent user interface interactions other than the radial slide gesture. In still further embodiments, the gesture navigation system receives an indication of a designated user interface element and the gesture navigation system performs the selected action in association with the designated user interface element. For instance, the designated user interface element may be an indication of a particular individual on a friends list. In this instance, the selected action may be to send a message or initiate a chat session with the particular individual. In some embodiments, the gesture navigation system determines the designated user interface element based on the initial point. In this way, the radial slide gesture can be used to provide multiple actions for a user interface element located anywhere on the display. Accordingly, techniques described herein allow for a multiple-action user input that is intuitive and provides interface design freedom since any location on the display can be designated for one or more actions. These techniques are motor skill driven and demand less reliance on the user visually interpreting a user interface. As a result, efficiencies developed by the user can more easily be retained since the radial slide gesture operates similarly across different display layouts. Thus, the radial slide gesture improves user experience by providing a multitude of actions while being intuitive and predictable, even for unfamiliar user interface layouts. FIG.1is a network diagram depicting a network system100having a client-server architecture configured for exchanging data over a network, according to one embodiment. For example, the network system100may be a messaging system where clients communicate and exchange data within the network system100. The data may pertain to various functions (e.g., sending and receiving text and media communication, determining geolocation, etc.) and aspects associated with the network system100and its users. 
Although illustrated herein as client-server architecture, other embodiments may include other network architectures, such as peer-to-peer or distributed network environments. As shown inFIG.1, the network system100includes a social messaging system130. The social messaging system130is generally based on a three-tiered architecture, comprising an interface layer124, an application logic layer126, and a data layer128. As is understood by skilled artisans in the relevant computer and Internet-related arts, each module, system, or engine shown inFIG.1represents a set of executable software instructions and the corresponding hardware (e.g., memory and processor) for executing the instructions. To avoid obscuring the inventive subject matter with unnecessary detail, various functional modules and engines that are not germane to conveying an understanding of the inventive subject matter have been omitted fromFIG.1. Of course, additional functional modules and engines may be used with a social messaging system, such as that illustrated inFIG.1, to facilitate additional functionality that is not specifically described herein. Furthermore, the various functional modules and engines depicted inFIG.1may reside on a single server computer, or may be distributed across several server computers in various arrangements. Moreover, although the social messaging system130is depicted inFIG.1as a three-tiered architecture, the inventive subject matter is by no means limited to such an architecture. As shown inFIG.1, the interface layer124comprises a interface module (e.g., a web server)140, which receives requests from various client-computing devices and servers, such as client devices110each executing a client application112, and third party servers120each executing a third party application122. In response to received requests, the interface module140communicates appropriate responses to requesting devices via a network104. For example, the interface module140can receive requests such as Hypertext Transfer Protocol (HTTP) requests, or other web-based Application Programming Interface (API) requests. The client devices110can execute conventional web browser applications or applications (also referred to as “apps”) that have been developed for a specific platform to include any of a wide variety of mobile computing devices and mobile-specific operating systems (e.g., IOS™, ANDROID™, WINDOWS® PHONE). In an example, the client devices110are executing the client application112. The client application112can provide functionality to present information to a user106and communicate via the network104to exchange information with the social messaging system130. Each client device110can comprise a computing device that includes at least a display and communication capabilities with the network104to access the social messaging system130. The client devices110comprise, but are not limited to, remote devices, work stations, computers, general purpose computers, Internet appliances, hand-held devices, wireless devices, portable devices, wearable computers, cellular or mobile phones, personal digital assistants (PDAs), smart phones, tablets, ultrabooks, netbooks, laptops, desktops, multi-processor systems, microprocessor-based or programmable consumer electronics, game consoles, set-top boxes, network PCs, mini-computers, and the like. The user106can be a person, a machine, or other means of interacting with the client devices110. 
In some embodiments, the user106interacts with the social messaging system130via the client devices110. As shown inFIG.1, the data layer128has a database server132that facilitates access to an information storage repository or database134. The database134is a storage device that stores data such as member profile data, social graph data (e.g., relationships between members of the social messaging system130), and other user data. An individual can register with the social messaging system130to become a member of the social messaging system130. Once registered, a member can form social network relationships (e.g., friends, followers, or contacts) on the social messaging system130and interact with a broad range of applications provided by the social messaging system130. The application logic layer126includes one or more application logic module150, which, in conjunction with the interface module140, generates various user interfaces using data retrieved from various data sources or data services in the data layer128. The application logic module150can be used to implement the functionality associated with various applications, services, and features of the social messaging system130. For instance, a social messaging application can be implemented with the application logic module150. The social messaging application provides a messaging mechanism for users of the client devices110to send and receive messages that include text and media content such as pictures and video. The client devices110may access and view the messages from the social messaging application for a specified period of time (e.g., limited or unlimited). In an example, a particular message is accessible to a message recipient for a predefined duration (e.g., specified by a message sender) that begins when the particular message is first accessed. After the predefined duration elapses, the message is deleted and is no longer accessible to the message recipient. Of course, other applications and services may be separately embodied in their own application server module. As illustrated inFIG.1, the social messaging system130includes a gesture navigation system160. In various embodiments, the gesture navigation system160can be implemented as a standalone system and is not necessarily included in the social messaging system130. In some embodiments, the client devices110include a portion of the gesture navigation system160(e.g., a portion of the gesture navigation system160included independently or in the client application112). In embodiments where the client devices110includes a portion of the gesture navigation system160, the client devices110can work alone or in conjunction with the portion of the gesture navigation system160included in a particular application server or included in the social messaging system130. FIG.2is a block diagram200of the gesture navigation system160. The gesture navigation system160is shown to include a communication module210, a user interface module220, a gesture module230, an action module240, an invocation module250, and a sensor module260. All or some of the modules210-260communicate with each other, for example, via a network coupling, shared memory, and the like. Each module of the modules210-260can be implemented as a single module, combined into other modules, or further subdivided into multiple modules. Other modules not pertinent to example embodiments can also be included, but are not shown. The communication module210provides various communications functionality. 
For example, the communication module210can facilitate performing a particular action by communicating with the social messaging system130or the third party server120. The communication module210exchanges network communications with the database server132, the client devices110, and the third party server120. The information retrieved by the communication module210includes data associated with the user106(e.g., member profile data from an online account or social network service data) or other data to facilitate the functionality described herein. The user interface module220provides various presentation and user interface functionality operable to interactively present information to and receive information from the user (e.g., user106). For instance, the user interface module220is utilizable to present a visual indication of an action to be performed or alter a user interface to emphasize a particular aspect of the user interface. In various embodiments, the user interface module220presents or causes presentation of information (e.g., visually displaying information on a screen, acoustic output, haptic feedback, etc.). The process of interactively presenting information is intended to include the exchange of information between a particular device and the user. The user may provide input to interact with the user interface in many possible manners, such as alphanumeric, point based (e.g., cursor or tactile), or other input (e.g., touch screen, light sensor, infrared sensor, biometric sensor, microphone, gyroscope, accelerometer, or other sensors). The user interface module220provides many other user interfaces to facilitate functionality described herein. The term “presenting” as used herein is intended to include communicating information or instructions to a particular device that is operable to perform presentation based on the communicated information or instructions. The gesture module230provides functionality to detect aspects of a particular gesture from the user input data. For example, the gesture module230detects an initial point, a current point, a terminal point, satisfaction of a time threshold, satisfaction of a distance threshold, a discontinuity in a gesture, and other aspects of a particular gesture. In a specific example, the gesture module230detects a completion or termination of a gesture based on the user releasing their finger from a touchscreen display. The action module240provides functionality to select and determine actions associated with a particular gesture. For example, the action module240identifies a particular action to perform based on a characteristic of a particular gesture such as a distance of a slide or drag gesture. In some examples, the action module240determines the action to perform based on the gesture in conjunction with a designated user interface element. For instance, the action module240determines a specific action for a particular designated user interface element and a different action for a different user interface element. The invocation module250provides functionality to invoke a particular action. For example, an action may be performed locally (e.g., local file management) at the user device or invoked by communicating with a server or system (e.g., sending a message to a member of the social messaging service) such as the social messaging system130. In a specific example, the invocation module250establishes a chat session between the user of the user device and another member of the social messaging service. 
The sensor module260provides various sensor functionality such as touchscreen display input monitoring. In a specific example, the sensor module260monitors touchscreen display input such as positional information (e.g., x and y coordinates), force information, and timing information (e.g., a timeline associated with the positional or force information). The sensor module260can monitor sensor data via event driven update (e.g., receive touch data as it occurs), polling at a particular sampling rate, or continuously monitor output from a particular sensor. FIG.3is a user interface diagram300depicting an example gesture being performed on an example device (e.g., a smart phone) displaying example user interface310. In the user interface310, a user320is performing a radial slide gesture350to cause invocation of an action associated with user interface element330. In the diagram300, the sensor module260receives user input data that indicates the user320is physically touching a touchscreen display of the user device as shown by touch340. The gesture module230detects an initial point and a current point of the radial slide gesture350and determines a radius distance for the radial slide gesture350. The action module240determines an action based on the radius distance of the radial slide gesture350. For example, the action module240selects a first, second, or third action,370,380, or390respectively, when the radius distance is within a particular range corresponding to one of those actions. For example, the actions370,380, and390may perform a function associated with the designated user interface element330. In a specific example, the first action370may ‘like’ or designate some item as a ‘favorite’ of the user, the second action380may initiate a text-based chat, and the third action390may initiate a video chat with a particular user associated with the designated user interface element330. Put another way, concentric circles centered about the initial point can define boundaries for different ranges. The action module240selects an action for a particular range when the current point is within the boundaries for a particular range (e.g., greater than a first concentric circle boundary and less than a second concentric circle boundary that is consecutive to the first concentric circle boundary). After the action module240selects the action, the invocation module250performs, or facilitates performing, the selected action in response to detecting a completion of the radial slide gesture350. For example, the gesture module230detects termination or completion of a radial slide gesture350when the user320releases or lifts their finger from the touchscreen. In some embodiments, the user cancels the radial slide gesture350(shown in the diagram300as cancel gesture360), i.e., gives an instruction to perform no action, by terminating the radial slide gesture350within a distance of the initial point or beyond a particular distance or off screen. In some example embodiments, the gesture module230initiates the radial slide gesture350or radial slide mode only after detecting a ‘hold’ gesture. For instance, the user320touches the touchscreen at the location of the user interface element330and substantially refrains from moving the current point of the touch340for a threshold period of time (e.g., 0.75 seconds). 
That is to say, the gesture module230detects the hold gesture when a hold time measure, beginning when the gesture module230detects the user first touching the touchscreen or when the current point is substantially not moving, exceeds the threshold period of time. After the threshold period of time expires, the radial slide gesture350is initiated. In some example embodiments, the gesture module230resets a hold time measure if the user moves the current point beyond a hold threshold distance (e.g., 0.2 inches or outside a boundary of a particular user interface element such as a virtual button) such as when the user does not substantially refrain from moving the current point. In other embodiments, the gesture module230does not reset the hold time measure and the hold gesture cannot be completed once the user moves the current point beyond a hold threshold distance. In these embodiments, the user performs the hold gesture by first lifting their finger from the touchscreen to reset the hold time measure and subsequently touching the touchscreen and performing another hold gesture. In some example embodiments, the user interface module220causes presentation of a visual indication that the radial slide mode is active after the hold gesture has been completed (e.g., a change in background color of a user interface on the touchscreen or a graphical or textual description indicating a current mode or successful completion of the hold gesture). In some instances, the user320performs the radial slide gesture350as a continuation, or in succession, of the hold gesture (e.g., the user320remains in continuous physical contact with the touchscreen throughout the hold gesture and the radial slide gesture350). Thus, the gesture module230can detect various combinations of successive gestures that are performed in series or, in some instances, gestures that are performed in parallel. The purpose of first initiating the radial slide gesture350is to prevent undesired user interface interaction and allow the user320to normally interact with the underlying user interface without triggering an action invocation via a particular radial slide gesture. Once the radial slide mode is initiated, the gesture module230can detect the user320performing a slide radially outward from the user interface element330. The hold gesture may also function to designate a particular user interface element located at the initial point. The action module240then identifies an action based on the designated user interface element. In a specific example, the user interface310includes a list of contacts. The user320performs the radial slide gesture350to invoke an action associated with one of the contacts in the list of contacts. For instance, the user320initiates a chat session, sends a message, or removes a contact from the list by performing the radial slide gesture350with an initial point being located at a particular contact in the list of contacts. In this way, the user320can quickly perform one of multiple actions associated with a user interface element anywhere on the display without the need for a button or another conventional input technique on the display. FIG.4is a flow diagram illustrating an example method400for receiving and interpreting a user input such as a gesture. The operations of method400are performed by components of the gesture navigation system160, and are so described below for the purposes of illustration.
At operation410, the sensor module260receives user input data that indicates a continuous physical user interaction or gesture associated with a display screen of the user device. For example, the sensor module260continuously or periodically receives or retrieves user input data from a touchscreen. In various embodiments, the user input data indicates positional, force, and temporal data resulting from the user physically touching the touchscreen. To illustrate the concepts of operation410,FIG.5is a diagram500which depicts user input data detected by the sensor module260and the gesture module230. In the diagram500, a user504performs a physical touch506on a touchscreen502. In an example embodiment, the sensor module260receives the user input data that comprises a series of points in time corresponding to physical user input (e.g., the touch506). For instance, points508and510are example points of a particular user gesture that the user504is currently performing at the touchscreen502. In the diagram500, the points508are points that have already occurred and points510are points that may occur when the user504completes the gesture. In an embodiment, each of the points508and510correspond to user input data such as touch data512. As shown in the diagram500, the touch data512comprises coordinate data for points (e.g., x and y coordinates), force data (e.g., pressure the user504may be applying to the touchscreen502at a particular point), temporal data (e.g., a timestamp for each point or indication of an order in which the points occurred in time). Although the diagram500shows the user input data or touch data512including coordinate points, it will be appreciated that the sensor module260can receive other types of positional data or touch data and the gesture module230can derive coordinate data from the other types of data. Turning back toFIG.4, at operation420, the gesture module230detects an initial point or initial position from the user input data. In an embodiment, the initial point is a spatial point corresponding to a beginning of the continuous physical user interaction or gesture. That is to say, the initial point is a starting point on the touchscreen of a particular user gesture. For example, the initial point is the location on the touchscreen display where the user starts the radial slide gesture. At operation430, the gesture module230detects a current point from the user input data. In an embodiment, the current point is a spatial point corresponding to a current state of the continuous physical user interaction or gesture. In some embodiments, the sensor module260receives the user input data in real time or substantially real time. The current point corresponds to the current state of the user gesture or user interaction with the user device at a current moment in time. For instance, the current point is the location on the touchscreen display that the user is currently touching. To illustrate the concepts of operations420and430,FIG.6is a diagram600depicting aspects of various gestures608. Similar toFIG.5discussed above, in the diagram600, a user604performs a physical touch606on a touchscreen602. In various example embodiments, each gesture of the various gestures608has an initial point, current point, and terminal point. As shown in the diagram600, an example gesture of the various gestures608has initial point610, current point612, and terminal point614. 
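By way of illustration only (the names TouchSample and Gesture are hypothetical and are not the touch data512or gestures608themselves), user input data of this kind can be represented as a time-ordered series of samples from which the initial, current, and terminal points are read:

// One sample of touch data: position, applied force, and a timestamp,
// mirroring the coordinate, force, and temporal data described above.
data class TouchSample(val x: Float, val y: Float, val force: Float, val timestampMs: Long)

// A gesture is the time-ordered series of samples recorded while the
// user stays in continuous physical contact with the touchscreen.
data class Gesture(val samples: List<TouchSample>) {
    val initial: TouchSample get() = samples.first()   // where the gesture began
    val current: TouchSample get() = samples.last()    // most recent sample
    // The terminal point is simply the current point at the moment the
    // user lifts their finger and the gesture is marked complete.
}

fun main() {
    val gesture = Gesture(
        listOf(
            TouchSample(120f, 340f, 0.4f, 0L),
            TouchSample(121f, 338f, 0.5f, 16L),
            TouchSample(160f, 300f, 0.5f, 32L)
        )
    )
    println(gesture.initial) // first sample of the series
    println(gesture.current) // latest sample of the series
}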
In an embodiment, the initial point610is a location on the touchscreen602where the user604first made physical contact to initiate a particular gesture. Although, in other embodiments, the initial point610is not necessarily the location at which the user604first made physical contact, but may be a location at which the user604initiated a particular gesture (e.g., via a press and hold gesture). For instance, the user first makes initial contact with the touchscreen602at a particular point and executes a hold gesture at another point on the touchscreen602while remaining in continuous physical contact with the touchscreen602. In this instance, the initial point is the point or location at which the user executed the hold gesture to initiate the radial slide gesture and is not necessarily the point where the user first made contact with the touchscreen602(although in some cases the point of first contact by the user is the same point as where the hold gesture is performed by the user). The current point612is the location on the touchscreen602where the user604is currently making physical contact with the touchscreen602. As shown in the diagram600, the current point612can be co-located with the touch606. The terminal point614is the location on the touchscreen602where the user604terminates or completes a particular gesture. For example, the user604terminates a particular gesture by releasing or lifting their finger or stylus from the touchscreen602. The terminal point will coincide with the current point at a moment in time when the gesture is terminated. Turning again toFIG.4, at operation440, the gesture module230determines a radius distance based on the initial point and the current point. In an example embodiment, the radius distance is a radius of a circle that includes the current point and is centered about the initial point. The radius distance is independent of angular information. That is to say, the angle formed between a reference line and a line extending through the initial point and the current point has no bearing on the radius distance. Put another way, the user can perform the radial slide gesture at a variety of angles with respect to a reference line of the display to select the same action since the distance of the radial slide gesture is determinative of the action and not necessarily the angle at which the radial slide gesture is performed. At operation450, the action module240selects an action from among multiple actions based on the radius distance, or radial distance, being within a particular range among successive ranges along a straight line that starts at the initial point and extends through the circle. For example, the action module240accesses a lookup table (stored locally at the user device or remotely at a server) that includes actions corresponding to particular radius distances and identifies the action for the radius distance by performing a lookup for a particular action using the determined radius distance. In some embodiments, the action module240performs the lookup of the action using a piece of data extracted from the user input data (e.g., the designated user interface element that indicates a member of the social messaging service) in conjunction with the radius distance to determine the action. In further embodiments, the action module240receives or detects an indication of a designated user interface element. 
For instance, the gesture module230may detect the indication of the designated user interface element based on the initial point (e.g., the user interface element corresponding to the initial point). In other embodiments, the user specifies the designated user interface element, or multiple designated user interface elements, prior to performing the radial slide gesture. For instance, the user may select one or multiple items from a particular list and subsequently perform the radial slide gesture to invoke an action associated with the selected items. In a specific example, the user may select multiple individuals on a friends list and subsequently perform the radial slide gesture to initiate a group chat or identify those individuals as favorites among those on the friends list. In various embodiments, the action module240identifies candidate actions available for a particular designated user interface element and dynamically determines the successive ranges for each of the candidate actions. For example, if the designated user interface element is an individual on a friends list of the user, the action module240may identify available communication modalities with the individual (e.g., the action module240may indicate that text-based chatting and text messaging are available but video or voice communications are not currently available for the individual). In this example, the action module240identifies the available communication modalities as the candidate actions. In other embodiments, the multiple actions are predefined for a type of designated user interface element or for a particular user interface. For instance, the multiple actions can comprise text messaging, voice streaming, or video streaming, for a particular designated user interface element that is associated with a member of the social messaging service. In an example embodiment, each range among the successive ranges corresponds to a particular action among the multiple actions. In an example embodiment, the successive ranges are a series of consecutive ranges along a straight line. Since the successive ranges are independent of angular information, the successive ranges can also be conceptualized as regions formed by concentric circle boundaries. That is to say, the concentric circles are the boundaries for the successive ranges. In some instances, the length of each of the successive ranges is uniform. In other embodiments, the length of each of the successive ranges is not necessarily uniform. For example, each range of the successive ranges may become larger, or smaller, (e.g., exponentially with respect to distance from the initial point) as the range becomes further away from the initial point (e.g., the widest range being furthest from the initial point). In some embodiments, the lengths of the successive ranges are predefined. In other embodiments, the action module240determines the length of each of the successive ranges dynamically. For example, the action module240determines the length of a particular range based on the initial point. In a specific example, if an initial point is near the edge of a display, the action module240may utilize ranges of longer length as there is more space (in the direction away from the edge) for the user to perform the radial slide gesture. In another example, the action module240determines the lengths of respective ranges of the successive ranges based on the designated user interface element. 
For instance, the designated user interface element may be associated with a certain set of actions, and the action module240determines the lengths of ranges based on a count of actions in the set of actions. In further embodiments, the action module240determines the radius distance is within a no-action range. For example, the no-action range extends from the initial point to a specified distance. While the radius distance is within the no-action range, the action module does not select an action to perform. The purpose of the no-action range is to provide a way for the user to cancel or stop the radial slide gesture from performing an action. Thus, the invocation module250performs no action in response to detecting the termination or completion of the continuous physical user interaction while the radius distance is within the no-action range. In other embodiments, the no-action range could be beyond the edge of the display or an actual radial distance (e.g., 0.25-0.3 inches). To illustrate the concepts of operations440and450,FIG.7is a diagram700depicting various gestures on an example touchscreen702. It will be appreciated that the illustrative elements of the diagram700are shown for the purposes of understanding the concepts described herein and are not shown to the user. That is to say, touch data710,712, and714are used by the gesture navigation system160internally and the regions, arrows, and graphs of the diagram700are for illustrative purposes and are not part of a particular user interface. As shown in the diagram700, example gestures704,706, and708are respectively associated with the touch data710,712, and714. For example, the touch data710for the gesture704shows a graph of distance versus time, the distance in the graph for the touch data710being the distance between the initial point and the current point. As indicated by the touch data710, at first, the distance changes very little with time, and then the distance changes at a fairly steady rate with time. Such touch data (e.g., touch data710) may be characteristic of the hold gesture immediately followed by the radial slide gesture. The gesture module230detects various characteristics or attributes of the gesture from touch data such as the touch data710. In various embodiments, the gesture module230determines the radius distance of the radial slide gesture, and the action module240selects a particular action based on the determined radius distance. For example, areas or regions716,718, and720with boundaries722,724,726, and728may each correspond to a different action. In a specific example, the action module240selects a particular action corresponding to the region718when the radius distance falls between the boundaries722and726. In this example, the user interface module220causes presentation of a visual indication of the available action for the region718(e.g., a textual or graphical description of what the action does). The purpose of the visual indication is to convey to the user what the currently selected action does to assist the user in deciding whether to perform the currently selected action. The multiple actions may include a wide variety of different actions that can be performed locally at the user device, remotely at a particular server, or a combination thereof. 
For example, the multiple actions include favoriting, liking, tagging, deleting or removing from a list, reordering a list, making a purchase, selecting an option for a particular purchase, sending a message, initiating a chat session, modifying an image captured by the user device, altering a system or user interface preference or option (e.g., a quality of a video render), and so on. Returning toFIG.4, at operation460, the invocation module250performs the selected action in response to detecting a termination of the continuous physical user interaction while the radius distance is within the particular range for the selected action. The invocation module250can perform or facilitate performing the action locally at the user device or remotely by exchanging information with another server, device, or system. In a specific example, the invocation module250establishes a chat session between the user of the user device and another member of the social messaging service. In various embodiments, the user interface module220causes presentation of an indication that indicates a successful or failed completion or invocation of the selected action in response to the invocation module250successfully or unsuccessfully invoking the selected action. FIG.8is a flow diagram800illustrating example operations for presenting a visual indication of an action associated with a gesture. As described above, at operation420, the gesture module230detects an initial point from the user input data. Subsequently, at operation430, the gesture module230detects a current point from the user input data. In some embodiments, the operations ofFIG.8are performed subsequent to the operation430. At operation810, the gesture module230determines when the radius distance of the radial slide gesture transgresses a particular boundary for a particular range among the successive ranges. That is to say, when the path of the radial slide gesture falls within a new range of the successive ranges (shown as “yes” in the diagram800), the user interface module220causes presentation of a visual indication associated with the new range. Conversely, if the gesture module230determines the radius distance has not transgressed the particular boundary, then the gesture navigation system160simply proceeds to operation440(shown as “no” in the diagram800), skipping operation820. At operation820, after the gesture module230determines the radius distance transgresses the particular boundary, the user interface module220causes presentation of a visual indication of the selected action in response to the current point transgressing a boundary of the particular range. For example, the visual indication may indicate a function of an action associated with the new range (e.g., textual or graphical description of the action). In an embodiment, the visual indication remains on the display while the action is available to the user (e.g., until the user moves the touch to transgress another boundary or until the user completes the radial slide gesture by, for example, releasing their finger from the touchscreen). In some embodiments, the user interface module220causes presentation of a visual indication on the user interface of the user device that indicates the selected action cannot be performed or that no action can be performed.
For example, the action module240may determine that there are no actions available for a particular designated user interface element and in response to this determination, the user interface module220presents the visual indication to indicate that no actions can be performed. In another example, the action module240determines that a particular action among the multiple actions is not available to be performed, and in response to this determination, the user interface module220presents a visual indicate to indicate that the particular action cannot be performed. FIG.9is a user interface diagram900depicting an example visual indication920of an action displayed on an example touchscreen910. In an example embodiment, the user interface module220causes presentation of the visual indication920overlaid on top of a user interface of the touchscreen910. In some embodiments, the user interface module220does not obscure a portion of the user interface or a particular user interface element with the visual indication920. For instance, a designated user interface element930may remain unobscured by the visual indication920, as shown in the diagram900. In further embodiments, the user interface module220deactivates a portion of the user interface, or at least one user interface element of the user interface, after detecting the initial point or when the radial slide gesture is initiated to prevent user interaction with the deactivated portion of the user interface. This is to prevent undesired interaction with user interface elements while the user is performing the radial slide gesture. FIG.10is a flow diagram1000illustrating example operations for detecting certain aspects of a gesture. As described above, at operation410, the sensor module260receives the user input data. At operation420, the gesture module230detects the initial point from the user data. In some embodiments, operation420includes the additional operations ofFIG.10. At operation1010, the gesture module230detects a hold gesture at the initial point from the user input data. For instance, the hold gesture can comprise the user touching the touchscreen at a particular location and substantially refraining from moving the current point of the touch for a threshold period of time. At operation1020, the gesture module230determines if a threshold period of time has expired. In various embodiments, after the gesture module230determines the threshold period of time has expired, the radial slide gesture is initiated (shown as “yes” in the diagram1000) or, stated another way, a radial slide mode begins. If the threshold period of time has not expired and the user has either terminated the hold gesture (e.g., lifting the user's finger from the touchscreen) or moved the current point away from the initial point (e.g., moved by more than a hold threshold distance), the gesture navigation system160does not perform subsequent operations and returns to operation410to receive more user input data (shown as “no” in the diagram1000). In some instances, the user performs the radial slide gesture350as a continuation of, or in succession of, the hold gesture (e.g., the user remains in continuous physical contact with the touchscreen throughout the hold gesture and the radial slide gesture350). The purpose of first initiating the radial slide gesture is to prevent undesired user interface interaction and allow the user to normally interact with the underlying user interface. 
Once the radial slide mode is initiated, the gesture module230can detect the current point from the user input data at the operation430, as shown in the diagram1000. To illustrate the concepts ofFIG.10,FIG.11is a user interface diagram1100depicting example data used to detect a radial slide gesture1120at a touchscreen1110. The regions1130,1140, and1150correspond to different actions that the user can select by performing the radial slide gesture1120. In some embodiments, the user can perform the radial slide gesture1120outward from the initial point and then return inward towards the initial point to cycle to a previous action. As described above for other illustrative figures, it will be appreciated that the illustrative elements of the diagram1100are shown for the purposes of understanding the concepts described herein and are not shown to the user. That is to say, the touch data1160is used by the gesture navigation system160internally and the regions, arrows, and graphs of the diagram1100are for illustrative purposes and are not part of a particular user interface. The touch data1160shows example user input data that the sensor module260may receive in response to the gesture1120. Graph line1162(the dotted line in the graph of the touch data1160) shows the change in the radius distance of the radial slide gesture1120versus time. Threshold1164is a time threshold. For example, the gesture module230may first detect a hold gesture prior to detecting an instance of a radial slide gesture (e.g., the radial slide gesture350,1120as described above). As shown by the touch data1160, if the graph line1162remains substantially in the same location for the threshold1164period of time, the gesture module230detects the hold gesture and proceeds to detect the radial slide gesture1120. The thresholds1166,1168, and1170are distance thresholds that correspond to the region1130,1140, and1150. As shown by the touch data1160, the radial slide gesture1120was terminated or completed in-between the distance threshold1168and1170, which corresponds to an action associated with the region1140. FIG.12is a user interface diagram1200depicting an example of performing an action associated with an item1204that is shown on a touchscreen1202using a radial slide gesture1206. The touchscreen1202may display an article, digital magazine story, or another piece of digital media from a digital media channel. In an example embodiment, the user can interact with a particular item from the digital media via the radial slide gesture1206. For example, the user may wish to purchase a particular item shown in the touchscreen1202. In the diagram1200, the user can initiate a purchase flow by touching and holding the item in the touchscreen1202at a location of the item of interest (e.g., the item1204) and subsequently performing the radial slide gesture1206. In an embodiment, different attributes of the purchase (e.g., quantity, color, size, using a particular payment account to pay for the purchase) can be selected based on the radius distance of the radial slide gesture1206. For example, if the user completes the radial slide gesture1206within the boundaries of1208and1210, the user may initiate a purchase flow for the item with least expensive options (e.g., slowest shipping and fewer accessories). 
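A possible sketch of the hold-then-slide tracking illustrated by the touch data1160is given below; the class RadialGestureTracker and its onRangeChanged callback are hypothetical, and the 0.75-second hold threshold and the pixel equivalent of the 0.2-inch hold distance are assumptions based on the examples above:

import kotlin.math.hypot

// Tracks a press-and-hold followed by a radial slide, reporting the region the
// slide currently falls in so a visual indication can be shown or updated.
class RadialGestureTracker(
    private val boundariesPx: List<Float>,            // concentric circle boundaries, innermost first
    private val onRangeChanged: (rangeIndex: Int?) -> Unit,
    private val holdThresholdMs: Long = 750,          // e.g., 0.75 seconds
    private val holdDistancePx: Float = 60f           // e.g., roughly 0.2 inches on a ~300 dpi display
) {
    private var startX = 0f
    private var startY = 0f
    private var startTimeMs = 0L
    private var armed = false                          // true once the hold gesture completes
    private var currentRange: Int? = null

    fun onDown(x: Float, y: Float, timeMs: Long) {
        startX = x; startY = y; startTimeMs = timeMs
        armed = false
        currentRange = null
    }

    fun onMove(x: Float, y: Float, timeMs: Long) {
        val r = hypot(x - startX, y - startY)
        if (!armed) {
            // The hold is not completed if the touch wanders too far before the threshold elapses.
            if (r > holdDistancePx) return
            if (timeMs - startTimeMs >= holdThresholdMs) armed = true
            return
        }
        // Radial slide mode: find which band between consecutive boundaries contains r.
        val index = boundariesPx.indexOfLast { r >= it }.takeIf { it >= 0 }
        if (index != currentRange) {
            currentRange = index
            onRangeChanged(index)   // caller shows or updates the visual indication here
        }
    }

    // Called when the user lifts their finger; returns the index of the region in
    // which the gesture terminated, or null for a cancelled or never-armed gesture.
    fun onUp(): Int? = if (armed) currentRange else null
}

A caller would feed onDown, onMove, and onUp from its touch event stream, use onRangeChanged to show or update the visual indication for the currently selected region, and map the index returned by onUp to the action for the designated user interface element (or to no action when null is returned).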
In another example, if the user completes the radial slide gesture1206outside the boundary1210, the user may initiate a purchase flow for the item with more expensive options (e.g., the fastest shipping or, in the case of concert tickets, better seating). Accordingly, the radial slide gesture1206provides a high degree of interactivity with the digital media without visually obscuring the content of the digital media. FIG.13is a user interface diagram1300depicting an example use case of the techniques described above. In the diagram1300, a user1304performs a physical touch1306at a particular point on a touchscreen display1302. For example, the touchscreen display1302may be displaying digital media such as a digital magazine article, a video, a photograph, a text message, an interactive webpage, or another type of digital media. In these examples, the user1304performs an action associated with the digital media by performing a hold gesture followed by a radial slide gesture1308. For instance, the user1304physically touches anywhere on the digital media and substantially refrains from moving the current point of the physical touch for a threshold period of time (e.g., 0.75 seconds), and then performs the radial slide gesture1308to select a particular action. In a specific example, the user1304invokes an action such as favoriting, sharing, or sending the digital media to friends or contacts on the social messaging service by performing the slide gesture1308to within a particular range such as indicated by favorite1310and share1312. In this way, the user1304can share a particular piece of digital media by directly interacting with the digital media on the touchscreen display1302. FIG.14is a diagram1400illustrating an optional example embodiment of initiating a live video stream of media data being captured in real time by a user device1420of a user1410. Although the diagram1400depicts the gesture navigation system160in the social messaging system130, in other example embodiments, the gesture navigation system160, or a portion of the gesture navigation system160, can be implemented in the user device1420. In embodiments where the user device1420includes a portion of the gesture navigation system160, the user device1420can work alone or in conjunction with the portion of the gesture navigation system160included in a particular application server or included in the social messaging system130. In the diagram1400, the user device1420is capturing media data from a sensor of the user device1420that is communicatively coupled to the social messaging system130via the network104. The media data comprises, for example, audio data alone, video data alone, audio/video data, or other data of the user device1420. For instance, the audio/video data is captured by a camera sensor and a microphone of the user device1420. In various example embodiments, the user device1420records the audio/video data, meaning that the user device1420stores the audio/video data locally at the user device1420, remotely at the social messaging system130, or at a particular third-party server. The user1410can initiate a live stream (e.g., a live broadcast) of the captured audio/video by performing a gesture1440with a physical touch1430on a touchscreen of the user device1420. In some example embodiments, the user1410switches between a recording mode or recording session that records the audio/video data and a streaming mode or live streaming session that live streams the audio/video data to a plurality of other user devices1450.
For instance, the user1410switches from a recording mode to a live streaming mode by performing the gesture1440while the user device1420is currently in the recording mode. In various example embodiments, the live stream of the audio/video data being captured by the device1420is communicated or transmitted to the social messaging system130and subsequently communicated, transmitted, or broadcast to the plurality of other user devices1450via the network104. In an example embodiment, the plurality of other user devices1450includes devices of particular members of the social messaging system130. In some embodiments, the particular members of the social messaging system are friends or contacts of the user1410on the social messaging system130. In other embodiments, the particular members of the social messaging system are subscribed to receive the live stream of the audio/video data being captured by the user device1420(e.g., subscribed to a particular channel where the live stream is available). In further example embodiments, the live stream is publicly available, exclusively available to certain users, or a combination thereof. In various example embodiments, the live stream is being broadcast in substantially real time. A real time stream or live stream is intended to include streams that are delivered (e.g., received and presented to a particular device) to a destination (e.g., the plurality of other user devices1450) after a delay interval (e.g., due to transmission delay or other delays such as being temporarily stored at an intermediate device) between the instant that the audio/video data is captured by the user device1420and a delivery time that the audio/video data is delivered to the destination. For instance, the audio/video data being captured by the user device1420and live streamed to the plurality of other user devices1450can be buffered at the user device1420, at the social messaging system130, or another intermediate device and delivered to the destination after a buffering delay. In some example embodiments, the user device1420is live streaming the audio/video data and recording the audio/video data at the same time. In other example embodiments, the user device1420is live streaming the audio/video data without the audio/video data being recorded or stored. In these embodiments, the gesture navigation system160or the social messaging system130provides the user1410with an option to store the streamed audio/video data during the live streaming session or after the live streaming session is over. For instance, the user1410may initiate live streaming only of the audio/video data and then select an option to store or discard the audio/video data that was live streamed after the live streaming session has stopped. FIG.15is a flow diagram illustrating an example method1500for switching between a recording session and live streaming session. The operations of method1500are performed by components of the gesture navigation system160, and are so described below for the purposes of illustration. At operation1510, similar to operation410described above in connection withFIG.4, the sensor module260receives user input data that indicates a continuous physical user interaction (a gesture) associated with a display screen of the user device. For example, the user performs a gesture or a combination of gestures such as a press and hold gesture followed by a slide or drag gesture on a touch-sensitive display of a mobile device of the user. 
In a specific example, the user performs a hold gesture on a particular user interface element on a touchscreen of the user device. At operation1520, the invocation module250initiates a recording session. For example, the gesture module230extracts a gesture characteristic from the user input data (e.g., a particular user interface element designated by a hold gesture), the action module240selects an action to initiate the recording session based on the extracted gesture characteristic, and then the invocation module250invokes or initiates the recording session. The recording session records or stores media data captured by one or more sensors of the user device such as a microphone and a camera sensor. The media data can comprise audio/video recording, audio only recording, video only recording, or recording of data from another device sensor or other device data (e.g., user interface image data that is currently being displayed on the user device). In an example embodiment, the media data is stored locally on the user device, remotely on a server, or a combination thereof. At operation1530, the sensor module260receives additional user input data that indicates a slide gesture. For example, the user performs the slide gesture in succession to the hold gesture described above in operation1510. At operation1540, the gesture module230extracts a gesture characteristic of the slide gesture from the additional user input data. In an example embodiment, the slide gesture is the radial slide gesture as described above. In this embodiment, the gesture module230extracts the radius distance or radial distance from the slide gesture. In another embodiment, the slide gesture designates a particular user interface element. For example, the gesture module230designates the particular user interface element in response to determining the slide gesture terminated (e.g., the user lifting their finger from the touchscreen or the user refraining from moving the current point of the slide gesture for a threshold period of time) at the particular user interface element. At operation1550, the action module240determines that the extracted gesture characteristic satisfies a condition and the invocation module250initiates a live streaming session. The invocation module250invokes or initiates the live streaming session that broadcasts, transmits, or otherwise communicates the media data being captured by the user device to other user devices (e.g., the plurality of other user devices1450). For example, if the gesture characteristic is a radius distance and the action module240determines the radius distance is within a range corresponding to a live stream action, the invocation module250initiates the live streaming session. In another example, if the gesture characteristic is a designated user interface element corresponding to a live stream action, the invocation module250initiates the live streaming session. FIG.16is a user interface diagram1600depicting an example of initiating a recording session with an option to switch to a live streaming session. In the diagram1600, a user1604performs a physical touch1614of a touchscreen1602. In an example embodiment, the sensor module260receives the user input data resulting from the physical touch1614, the gesture module230extracts a gesture characteristic from the user input data, the action module240identifies a particular action based on the gesture characteristic, and the invocation module250invokes the particular action. 
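Before turning to the specific example of the diagram1600, the operation1510-1550flow can be summarized with a minimal sketch. The class, method, and element names below, the radius range mapped to the live-stream action, and the camera and streamer objects are assumptions introduced for illustration only.

```python
LIVE_STREAM_RANGE = (140.0, 220.0)   # assumed radius range mapped to the live-stream action

class RecordingController:
    """Sketch of operations 1510-1550: hold to record, slide to go live."""

    def __init__(self, camera, streamer):
        self.camera = camera        # assumed object exposing start_recording()
        self.streamer = streamer    # assumed object exposing start_stream()
        self.recording = False
        self.streaming = False

    def on_hold_gesture(self, designated_element):
        # Operations 1510/1520: a hold gesture on the record element starts recording.
        if designated_element == "record_button" and not self.recording:
            self.camera.start_recording()
            self.recording = True

    def on_slide_gesture(self, radius_distance=None, designated_element=None):
        # Operations 1530-1550: the extracted characteristic of the follow-on
        # slide gesture (radius distance or designated element) is checked
        # against the live-stream condition.
        if not self.recording or self.streaming:
            return
        in_range = (radius_distance is not None and
                    LIVE_STREAM_RANGE[0] <= radius_distance < LIVE_STREAM_RANGE[1])
        if in_range or designated_element == "live_stream_button":
            self.streamer.start_stream()   # switch from recording to live streaming
            self.streaming = True
```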
In a specific example, the user1604activates a user interface element1616by performing the physical touch1614at an area encompassed by the user interface element1616. In this example, activating the user interface element1616initiates the recording session or recording mode and the media data captured by the user device is stored or recorded. In an example embodiment, the user interface module220causes presentation of an indication1606that indicates a current mode of the user device. In various example embodiments, after the invocation module250initiates the recording session, the user interface module220causes presentation of various user interface elements1608corresponding to options. For instance, a user interface element1610corresponds to an option to initiate a streaming session during the recording session. In an example embodiment, the user1604performs a slide gesture1612to one of the various user interface elements1608to invoke a particular action. The user interface module220can cause presentation of the various user interface elements1608according to a predefined scheme or dynamically based on the user input data. For example, the user interface module220can arrange the various user interface elements1608to encircle an initial point of a particular gesture. In further example embodiments, the user interface module220may modify the arrangement of the various user interface elements1608based on how close the initial point of the gesture is to the edge of the touchscreen1602or another characteristic extracted from the user input data. In various example embodiments, each of the various user interface elements1608includes an indication of an action. The action module240may dynamically determine the action corresponding to respective ones of the various user interface elements1608(e.g., based on a user interface element designated by an initial point of a particular gesture). FIG.17is a user interface diagram1700depicting an example of switching from a recording session to a streaming session. In the diagram1700, a user1704performs a physical touch1710of a touchscreen1702. As the user1704moves the current point of the physical touch1710towards a user interface element1708, the user interface module220alters or modifies the user interface element1708. For example, the user interface module220changes an opacity, a color, or a size of the user interface element1708based on a distance between the current point of the physical touch1710and the position of the user interface element1708on the touchscreen1702. In further embodiments, the user interface module220causes presentation of information associated with the live streaming session such as connectivity information of the user device (e.g., signal strength, roaming, connection speed, and so on) and may provide an indication of whether the invocation module250can initiate the live streaming session based on the connectivity information. In the diagram1700, the user1704designates the user interface element1708to initiate a streaming session. For example, the user1704performs a slide gesture from user interface element1712to the user interface element1708to switch from the recording session to the streaming session. In some example embodiments, the user interface module220causes presentation of an indication1706of the current mode of the user device such as being in a streaming session. FIG.18is a user interface diagram1800depicting an example of initiating a sustained recording session with a gesture1810. 
In the diagram1800, a user1804performs a gesture1810with a physical touch1808of a touchscreen1802. In an example embodiment, the gesture navigation system160detects various combinations of gestures of the user1804to initiate a recording session and adjust parameters of the recording session. For example, the user1804touching and holding a user interface element1806initiates the recording session while the user1804holds down on the user interface element1806. The user1804terminates the recording session by ending the hold (e.g., lifting the user's1804finger from the touchscreen1802). In some embodiments, the user1804‘locks’ the recording session (a sustained recording session that does not terminate by lifting the user's1804finger from the touchscreen1802) by performing an upward slide gesture, such as the gesture1810, on the touchscreen1802after initiating the recording session. For instance, if the gesture module230detects the user1804performing a slide gesture with a slide distance greater than a particular threshold (an example threshold is indicated by point1812ofFIG.18) then the action module240selects an action to initiate the sustained recording session and the invocation module250initiates the sustained recording session. In still further embodiments, the user1804adjusts, alters, or otherwise modifies parameters or settings of the recording session, such as a zoom level, by performing a slide gesture along an axis (e.g., a horizontal slide, a vertical slide, or a slide along another axis) subsequent to initiating the sustained recording session. For instance, the user1804sliding to point1814zooms out while the user1804sliding to point1816zooms in. In a specific example, the user1804performs a hold gesture of the user interface element1806to initiate the recording session, performs an upward slide gesture to initiate the sustained recording session, and then performs a horizontal slide gesture to zoom in or zoom out. FIG.19is a flow diagram1900illustrating an example method for adjusting, altering, or otherwise modifying parameters, characteristics, or settings of the recording session, the live streaming session, or a media presentation session via a particular gesture. The operations of method1900are performed by components of the gesture navigation system160, and are so described below for the purposes of illustration. At operation1910, similar to operation410and operation1510described above, the sensor module260receives user input data that indicates a continuous physical user interaction or gesture associated with a display screen of the user device. For example, the sensor module260detects the user performing a particular gesture comprising a physically touch, or multiple physical touches, on the touchscreen of the user device. At operation1920, the gesture module230extracts a gesture characteristic of the particular gesture from the user input data. For example, the gesture module230determines an axis distance of the particular gesture such as a distance between an initial point of the gesture and a current point of the gesture along a particular axis (e.g., a horizontal distance, a vertical distance, or a distance along another axis). At operation1930, the action module240determines an action based on the gesture characteristic and the invocation module250invokes the action. 
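One simple realization of this determination, assuming (as in the zoom example that follows) that the selected action adjusts a zoom level in proportion to the axis distance, is sketched below; the constants are illustrative and not taken from the disclosure.

```python
def zoom_from_axis_distance(axis_distance_px, base_zoom=1.0,
                            max_zoom=8.0, px_per_zoom_step=40.0):
    """Map the signed distance between the gesture's initial point and current
    point along the chosen axis to a zoom level; positive distances zoom in,
    negative distances zoom out, clamped to the supported range."""
    zoom = base_zoom + axis_distance_px / px_per_zoom_step
    return max(1.0, min(max_zoom, zoom))

# Example: a 120 px slide in the zoom-in direction from a 2.0x base zoom.
print(zoom_from_axis_distance(120.0, base_zoom=2.0))   # -> 5.0
print(zoom_from_axis_distance(-80.0, base_zoom=2.0))   # -> 1.0 (clamped)
```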
For example, the action can comprise modifying a video characteristic or attribute of a video such as a zoom level, video quality level, camera focus, camera exposure setting, flash settings, switching between available cameras, and so forth. In a specific example, the action module240determines a zoom level according to the axis distance. For instance, the action module240determines that the action comprises an increase in zoom level corresponding to an increase in the axis distance and a decrease in zoom level corresponding to a decrease in the axis distance. FIG.20is a user interface diagram2000depicting an example of initiating the live streaming session and modifying a characteristic of the live streaming session via a particular gesture. In the diagram2000, a user2004initiates a live streaming session by performing a gesture2010comprising a physical touch2012on a touchscreen2002. For example, the user2004may perform the gesture2010, such as the radial slide gesture describe above, to select an option to initiate a recording session2008or a live streaming session2006. In this example, the user2004selects the option to initiate a live streaming session2006by terminating the gesture2010at the point2014(e.g., the user2004lifting their finger from the touchscreen2002at the point2014or holding at the point2014for a threshold period of time). Subsequently, the user2004can perform a slide gesture in succession to the radial slide gesture to adjust a zoom level of the video for the live stream. For instance, the user2004sliding across the touchscreen2002towards the point2016zooms out and sliding towards point2018zooms in. In this way, the gesture navigation system160detects multiple gestures allowing the user2004to adjust various settings of the live streaming session. FIG.21is a user interface diagram2100depicting a further example of adjusting, altering, or otherwise modifying parameters, characteristics, or settings of the recording session, the live streaming session, or a media presentation session via a particular gesture. In the diagram2100, a user2104is performing a particular gesture comprising physical touch2110and physical touch2108. The user2104performs a hold gesture with the physical touch2110by substantially refraining from moving the physical touch2110from point2112. While the user2104is performing the hold gesture with the physical touch2110, the user2104performs a vertical slide gesture, or a slide gesture along another axis, by moving the physical touch2108vertically. In an example embodiment, sliding the physical touch2108towards point2118zooms out while sliding the physical touch2108towards point2116zooms in. The purpose of using the hold gesture in conjunction with the vertical slide gesture is to help prevent undesired user interaction (e.g., an accidental touch of the touchscreen by the user2104). Although, in alternative example embodiments, the user2104performs the slide gesture to effectuate zooming, or another type of adjustment, without performing the hold gesture with the physical touch2110. In these embodiments, the user2104effectuates zooming of the video using a single finger. Certain embodiments are described herein as including logic or a number of components, modules, or mechanisms. Modules can constitute either software modules (e.g., code embodied on a machine-readable medium) or hardware modules. A “hardware module” is a tangible unit capable of performing certain operations and can be configured or arranged in a certain physical manner. 
In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) can be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein. In some embodiments, a hardware module can be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware module can include dedicated circuitry or logic that is permanently configured to perform certain operations. For example, a hardware module can be a special-purpose processor, such as a Field-Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). A hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware module can include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware modules become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) can be driven by cost and time considerations. Accordingly, the phrase “hardware module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. As used herein, “hardware-implemented module” refers to a hardware module. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where a hardware module comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware modules) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time. Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules can be regarded as being communicatively coupled. Where multiple hardware modules exist contemporaneously, communications can be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module can perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. 
A further hardware module can then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules can also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein can be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors constitute processor-implemented modules that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented module” refers to a hardware module implemented using one or more processors. Similarly, the methods described herein can be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method can be performed by one or more processors or processor-implemented modules. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an Application Program Interface (API)). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processors or processor-implemented modules can be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors or processor-implemented modules are distributed across a number of geographic locations. The modules, methods, applications and so forth described in conjunction withFIGS.1-21are implemented in some embodiments in the context of a machine and an associated software architecture. The sections below describe representative software architecture and machine (e.g., hardware) architecture that are suitable for use with the disclosed embodiments. Software architectures are used in conjunction with hardware architectures to create devices and machines tailored to particular purposes. For example, a particular hardware architecture coupled with a particular software architecture will create a mobile device, such as a mobile phone, tablet device, and the like, while yet another combination produces a server computer for use within a cloud computing architecture. Not all combinations of such software and hardware architectures are presented here as those of skill in the art can readily understand how to implement the inventive subject matter in different contexts from the disclosure contained herein. FIG.22is a block diagram2200illustrating a representative software architecture2202, which may be used in conjunction with various hardware architectures herein described.FIG.22is merely a non-limiting example of a software architecture and it will be appreciated that many other architectures may be implemented to facilitate the functionality described herein. 
The software architecture2202may be executing on hardware such as machine2300ofFIG.23that includes, among other things, processors2310, memory/storage2330, and I/O components2350. A representative hardware layer2204is illustrated and can represent, for example, the machine2300ofFIG.23. The representative hardware layer2204comprises one or more processing units2206having associated executable instructions2208. Executable instructions2208represent the executable instructions of the software architecture2202, including implementation of the methods, modules and so forth ofFIGS.1-21. Hardware layer2204also includes memory and storage modules2210, which also have executable instructions2208. Hardware layer2204may also comprise other hardware2212which represents any other hardware of the hardware layer2204, such as the other hardware illustrated as part of machine2300. In the example architecture ofFIG.22, the software architecture2202may be conceptualized as a stack of layers where each layer provides particular functionality. For example, the software architecture2202may include layers such as an operating system2214, libraries2216, frameworks/middleware2218, applications2220and presentation layer2244. Operationally, the applications2220or other components within the layers may invoke application programming interface (API) calls2224through the software stack and receive a response, returned values, and so forth illustrated as messages2226in response to the API calls2224. The layers illustrated are representative in nature and not all software architectures have all layers. For example, some mobile or special purpose operating systems may not provide the frameworks/middleware2218, while others may provide such a layer. Other software architectures may include additional or different layers. The operating system2214may manage hardware resources and provide common services. The operating system2214may include, for example, a kernel2228, services2230, and drivers2232. The kernel2228may act as an abstraction layer between the hardware and the other software layers. For example, the kernel2228may be responsible for memory management, processor management (e.g., scheduling), component management, networking, security settings, and so on. The services2230may provide other common services for the other software layers. The drivers2232may be responsible for controlling or interfacing with the underlying hardware. For instance, the drivers2232may include display drivers, camera drivers, BLUETOOTH® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth depending on the hardware configuration. In an example embodiment, the operating system2214includes sensor service2233that can provide various sensor processing services such as low-level access to touchscreen input data or other user sensor data. The libraries2216may provide a common infrastructure that may be utilized by the applications2220or other components or layers. The libraries2216typically provide functionality that allows other software modules to perform tasks in an easier fashion than to interface directly with the underlying operating system2214functionality (e.g., kernel2228, services2230or drivers2232). The libraries2216may include system libraries2234(e.g., C standard library) that may provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. 
In addition, the libraries2216may include API libraries2236such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as MPEG4, H.264, MP3, AAC, AMR, JPG, or PNG), graphics libraries (e.g., an OpenGL framework that may be used to render 2D and 3D graphic content on a display), database libraries (e.g., SQLite that may provide various relational database functions), web libraries (e.g., WebKit that may provide web browsing functionality), and the like. The libraries2216may also include a wide variety of other libraries2238to provide many other APIs to the applications2220and other software components/modules. In an example embodiment, the libraries2216include input libraries2239that provide input tracking and capture, or otherwise monitor user input such as touchscreen input that can be utilized by the gesture navigation system160. The frameworks/middleware2218(also sometimes referred to as middleware) may provide a higher-level common infrastructure that may be utilized by the applications2220or other software components/modules. For example, the frameworks/middleware2218may provide various graphic user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks/middleware2218may provide a broad spectrum of other APIs that may be utilized by the applications2220or other software components/modules, some of which may be specific to a particular operating system or platform. In an example embodiment, the frameworks/middleware2218include a touch input framework2222and a motion capture framework2223. The touch input framework2222can provide high-level support for touch input functions that can be used in aspects of the gesture navigation system160. Similarly, the motion capture framework2223can provide high-level support for motion capture and other user input detection. The applications2220include built-in applications2240or third party applications2242. Examples of representative built-in applications2240may include, but are not limited to, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, or a game application. Third party applications2242may include any of the built-in applications2240as well as a broad assortment of other applications. In a specific example, the third party application2242(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or other mobile operating systems. In this example, the third party application2242may invoke the API calls2224provided by the mobile operating system such as operating system2214to facilitate functionality described herein. In an example embodiment, the applications2220include a messaging application2243that includes the gesture navigation system160as part of the messaging application2243. In another example embodiment, the applications2220include a stand-alone application2245that includes the gesture navigation system160. The applications2220may utilize built-in operating system functions (e.g., kernel2228, services2230or drivers2232), libraries (e.g., system libraries2234, API libraries2236, and other libraries2238), and frameworks/middleware2218to create user interfaces to interact with users of the system. 
Alternatively, or additionally, in some systems interactions with a user may occur through a presentation layer, such as presentation layer2244. In these systems, the application/module “logic” can be separated from the aspects of the application/module that interact with a user. Some software architectures utilize virtual machines. In the example ofFIG.22, this is illustrated by virtual machine2248. A virtual machine creates a software environment where applications/modules can execute as if they were executing on a hardware machine (such as the machine2300ofFIG.23, for example). The virtual machine2248is hosted by a host operating system (operating system2214inFIG.23) and typically, although not always, has a virtual machine monitor2246, which manages the operation of the virtual machine2248as well as the interface with the host operating system (i.e., operating system2214). A software architecture executes within the virtual machine2248such as an operating system2250, libraries2252, frameworks/middleware2254, applications2256or presentation layer2258. These layers of software architecture executing within the virtual machine2248can be the same as corresponding layers previously described or may be different. FIG.23is a block diagram illustrating components of a machine2300, according to some example embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.23shows a diagrammatic representation of the machine2300in the example form of a computer system, within which instructions2316(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine2300to perform any one or more of the methodologies discussed herein can be executed. For example, the instructions2316can cause the machine2300to execute the flow diagrams ofFIGS.4,8,10,15, and19. Additionally, or alternatively, the instructions2316can implement the communication module210, the user interface module220, the gesture module230, the action module240, the invocation module250, or the sensor module260ofFIG.2, and so forth. The instructions2316transform the general, non-programmed machine into a particular machine programmed to carry out the described and illustrated functions in the manner described. In alternative embodiments, the machine2300operates as a standalone device or can be coupled (e.g., networked) to other machines. In a networked deployment, the machine2300may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine2300can comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions2316, sequentially or otherwise, that specify actions to be taken by the machine2300. 
Further, while only a single machine2300is illustrated, the term “machine” shall also be taken to include a collection of machines2300that individually or jointly execute the instructions2316to perform any one or more of the methodologies discussed herein. The machine2300can include processors2310, memory/storage2330, and I/O components2350, which can be configured to communicate with each other such as via a bus2302. In an example embodiment, the processors2310(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) can include, for example, processor2312and processor2314that may execute instructions2316. The term “processor” is intended to include a multi-core processor that may comprise two or more independent processors (sometimes referred to as “cores”) that can execute instructions contemporaneously. AlthoughFIG.23shows multiple processors2310, the machine2300may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory/storage2330can include a memory2332, such as a main memory, or other memory storage, and a storage unit2336, both accessible to the processors2310such as via the bus2302. The storage unit2336and memory2332store the instructions2316embodying any one or more of the methodologies or functions described herein. The instructions2316can also reside, completely or partially, within the memory2332, within the storage unit2336, within at least one of the processors2310(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine2300. Accordingly, the memory2332, the storage unit2336, and the memory of the processors2310are examples of machine-readable media. As used herein, the term “machine-readable medium” means a device able to store instructions and data temporarily or permanently and may include, but is not limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, optical media, magnetic media, cache memory, other types of storage (e.g., Electrically Erasable Programmable Read-Only Memory (EEPROM)) or any suitable combination thereof. The term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions2316. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., instructions2316) for execution by a machine (e.g., machine2300), such that the instructions, when executed by one or more processors of the machine2300(e.g., processors2310), cause the machine2300to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” excludes signals per se. 
The I/O components2350can include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components2350that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components2350can include many other components that are not shown inFIG.23. The I/O components2350are grouped according to functionality merely for simplifying the following discussion, and the grouping is in no way limiting. In various example embodiments, the I/O components2350can include output components2352and input components2354. The output components2352can include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components2354can include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instruments), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further example embodiments, the I/O components2350can include biometric components2356, motion components2358, environmental components2360, or position components2362among a wide array of other components. For example, the biometric components2356can include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram based identification), and the like. The motion components2358can include acceleration sensor components (e.g., an accelerometer), gravitation sensor components, rotation sensor components (e.g., a gyroscope), and so forth. The environmental components2360can include, for example, illumination sensor components (e.g., a photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., a barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensor components (e.g., machine olfaction detection sensors, gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. 
The position components2362can include location sensor components (e.g., a Global Positioning System (GPS) receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication can be implemented using a wide variety of technologies. The I/O components2350may include communication components2364operable to couple the machine2300to a network2380or devices2370via a coupling2382and a coupling2372, respectively. For example, the communication components2364include a network interface component or other suitable device to interface with the network2380. In further examples, communication components2364include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, BLUETOOTH® components (e.g., BLUETOOTH® Low Energy), WI-FI® components, and other communication components to provide communication via other modalities. The devices2370may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a Universal Serial Bus (USB)). Moreover, the communication components2364can detect identifiers or include components operable to detect identifiers. For example, the communication components2364can include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as a Universal Product Code (UPC) bar code, multi-dimensional bar codes such as a Quick Response (QR) code, Aztec Code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, Uniform Commercial Code Reduced Space Symbology (UCC RSS)-2D bar codes, and other optical codes), acoustic detection components (e.g., microphones to identify tagged audio signals), or any suitable combination thereof. In addition, a variety of information can be derived via the communication components2364, such as location via Internet Protocol (IP) geo-location, location via WI-FI® signal triangulation, location via detecting a BLUETOOTH® or NFC beacon signal that may indicate a particular location, and so forth. In various example embodiments, one or more portions of the network2380can be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a WI-FI® network, another type of network, or a combination of two or more such networks. For example, the network2380or a portion of the network2380may include a wireless or cellular network, and the coupling2382may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other type of cellular or wireless coupling. 
In this example, the coupling2382can implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard setting organizations, other long range protocols, or other data transfer technology. The instructions2316can be transmitted or received over the network2380using a transmission medium via a network interface device (e.g., a network interface component included in the communication components2364) and utilizing any one of a number of well-known transfer protocols (e.g., Hypertext Transfer Protocol (HTTP)). Similarly, the instructions2316can be transmitted or received using a transmission medium via the coupling2372(e.g., a peer-to-peer coupling) to devices2370. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions2316for execution by the machine2300, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Although an overview of the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the present disclosure. Such embodiments of the inventive subject matter may be referred to herein, individually or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single disclosure or inventive concept if more than one is, in fact, disclosed. The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. 
Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. | 101,072 |
11861069 | DETAILED DESCRIPTION The present invention relates to wearable devices, and more particularly, to a wearable camera system. The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. Various modifications to the preferred embodiment and the generic principles and features described herein will be readily apparent to those skilled in the art. Thus, the present invention is not intended to be limited to the embodiments shown but is to be accorded the widest scope consistent with the principles and features described herein. Traditional camera solutions include digital pocket cameras (e.g., DSLRs), mounted cameras, smartphones, and camera eyeglasses. Traditional camera solutions typically utilize one of three methods for framing a shot. A first method utilizes optical viewfinders which suffer from the drawbacks including but not limited to requiring a user's eye to be in close proximity to an optical window which takes time and is inconvenient. A second method utilizes digital viewfinders which suffer from the drawbacks including but not limited to requiring a digital screen, having a screen size and weight that limits how portable any solution can be, requiring software to display and update the screen which can be slow, requiring power and extra hardware for the screen, and providing an unnatural interface that requires the user to look at a digital representation of what they can see with their own eyes in front of them. A third method utilizes no viewfinder so the user cannot frame a shot before taking it. In all of the above methods, a user needs to retrieve a camera from storage, look through a viewfinder or struggle to frame a shot, and find and know how to operate the shutter button/mechanism. The camera is typically sized and weighted to be held in the user's hand. However, there is no standard shutter button/mechanism location across traditional camera solutions and retrieving the camera from storage and looking through a viewfinder takes time. These issues are problematic when a user wants to capture media (photos, videos, etc.) related to a moment very quickly. A system and method in accordance with the present invention addresses the aforementioned issues by providing a wearable camera system (wearable camera device) that is gesture operated and mounted/worn on the wrist of the user. The gesture operated wearable camera device eliminates size, storage, retrieval, and operating knowledge constraints associated with traditional digital cameras. The user of the wearable camera device does not need to retrieve any device or place it into storage (because the device is already worn around the user's wrist), does not need to find and know how to operate the shutter button, and does not need to setup any device by holding or using a mount. In addition, the gesture operated wearable camera device eliminates constraints associated with traditional mounted cameras. The user of the wearable camera device does not need to use a mount that requires additional time and effort and a receptacle and has a limited range of movement, does not need to predetermine the scope of the framing or use pre-mounting techniques, and does not need to rely on digital or optical viewfinders for framing. In addition, the gesture operated wearable camera device eliminates constraints and the limited range of motion associated with traditional smartphone cameras. 
The user of the gesture operated camera system does not need to access or unlock (e.g., using a pattern or passcode) the device before capturing an image, does not need to utilize a specific photo application to capture the image, and does not need to use a device that is not optimized for photo taking. In addition, the gesture operated wearable camera device eliminates constraints associated with traditional camera eyeglasses. The user of the gesture operated wearable camera device does not need to utilize a lens fixed to the user's eye or any type of special eyewear, does not obscure the field of view with various actions, and does not include unintuitive camera triggers such as voice commands and winks. The system and method in accordance with the present invention provides a wearable camera device that utilizes gesture recognition to allow for the user to take high quality photos/video/media quickly and efficiently. The photos and/or video are taken by easily framing the photo at different angles by simply using the user's hand, without the need for a viewfinder, or any contact with hardware in the palm or fingers of the user's hand, or the use of a second/free hand to trigger a shutter button. In one embodiment, the user of the wearable camera device raises an empty hand, gestures with the same hand as if taking a photo with a traditional camera, and captures a photo (or video) within a known framed area. The wearable camera device has the advantage of reducing the amount of time required to prepare, frame, and capture media (a photo/video). In one embodiment, the wearable camera device comprises a wristband device (band component) and a camera that is either coupled to the wristband device (so that the user could potentially upgrade or interchange the type of the camera) or integrated within the wristband device. The wristband device includes a plurality of embedded sensors including but not limited to microelectromechanical systems (MEMS) devices, gyroscopes, accelerometers, pressure sensors, optical sensors, biometric sensors, electromagnetic sensors, and motion sensors that detect user movements and gestures, and the wristband also includes a processor device/unit to analyze and classify the detected gestures. In another embodiment, the wearable camera device comprises a glove-like device that is in contact with the user's fingers and a mounted camera coupled to the glove-like device. In this device, the wearable camera device does not include a wristband device portion and the sensor and processor and computing components are instead housed in the glove-like device that is worn by the user like a typical glove. In one embodiment, the camera is small in size and lightweight. The camera includes a camera lens. In one embodiment, the camera lens is any of thumbnail sized, nano sized, the size of the tip of a pen, the size of a traditional camera lens located on a smartphone, and other small sizes. In another embodiment, the camera lens is smaller than thumbnail sized. In one embodiment, the camera is interchangeable with various types of lenses so that the user can upgrade the wearable camera device. In one embodiment, the wristband device is interchanged with different colors and patterns and jewelry pieces to enable customizability. In one embodiment, the camera with camera lens is coupled to the wristband device and positioned near the wrist in a manner that allows the camera lens to fluidly, automatically, and continuously follow the movement and/or rotation of the user's wrist. 
In one embodiment, the camera lens is flexibly fixed to the wristband device near the outer edge of the user's hand. This allows the camera lens to move in line with the wrist when rotated or when the hand is moved. In another embodiment, the camera lens is rigidly fixed to a portion of the wristband device and is actuated electromechanically by electrical signals from the wristband device that are generated in response to wrist or hand movements that are detected by MEMS devices and sensors embedded within the wristband device. The wristband device recognizes arm, wrist, hand, and finger gestures based on a variety of techniques by monitoring, filtering and detecting, classifying, and processing muscle movements, tendon movements, bone movements, wrist shape changes, hand shape changes, finger shape changes, and/or bioelectromagnetic states, and other user movements and changes to provide for sensor signal detection, feature extraction, and gesture recognition functionalities. In one embodiment, the wristband device utilizes wrist contour biometrics and a contour mapping mechanism as a primary input for gesture recognition. In another embodiment, additional sensors that detect additional inputs and sensor data are utilized by the wristband device to determine the gestures and perform the gesture recognition. In one embodiment, firmware running on an embedded system in the wristband device can monitor, filter, feature extract, classify, and interpret recognized gestures, and then transmit the recognized gestures as camera firmware commands to the camera to control the camera firmware, hardware components, and electromechanical functions of the camera. At a certain time point (N), the user's hand that is wearing the wristband device, the user's fingers, and the user's wrist are all oriented in particular orientations. When the hand, wrist, or fingers of the user are moved or change position, then the muscles/tendons contract and/or relax, and the bones and skin of the user also move or change/shift positions. These changes can result in physical anatomical changes on the surface of the wrist or cause bioelectromagnetic changes, and the changes are recorded by the contour mapping mechanism to control the camera. In one embodiment, the wristband device or band includes sensor arrays that are spaced apart from each other at predetermined distances and with known pitches so that the sensor arrays completely circumvent the user's wrist. In another embodiment, the wristband device includes sensor arrays that do not completely circumvent the user's wrist and instead are focused on a certain location or region. The sensor arrays either measure distance to the surface of the wrist or measure the pressure that the wrist exerts at each sensor position. Analog-to-digital converters (ADCs) then convert sensor signals detected by the sensor arrays to an array of values. The array of values that are collected from the user's wrist by the wristband device is a representation of the state of the user's wrist, hand, and fingers at the time point N. The array of values is then filtered by the wristband device using a plurality of filters including but not limited to median filters and infinite impulse response (IIR) filters to reduce/eliminate noise and motion artifacts, and extract/retain features that meet certain thresholds within a known tolerance criterion (i.e., distance or pressure measurement+/−some acceptable error at each sensor position). 
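A minimal sketch of this sampling-and-filtering stage is shown below. The sensor count, window length, smoothing factor, and tolerance are assumed values, and the reference profile against which features are retained would in practice come from calibration or training data rather than being hard-coded.

```python
from statistics import median

NUM_SENSORS = 16        # assumed number of sensor positions around the wrist
MEDIAN_WINDOW = 5       # assumed median-filter window (samples)
IIR_ALPHA = 0.2         # assumed smoothing factor of a first-order IIR filter

class WristContourFilter:
    """Per-sensor median + first-order IIR filtering of the ADC value array."""

    def __init__(self, num_sensors=NUM_SENSORS):
        self.history = [[] for _ in range(num_sensors)]
        self.smoothed = [0.0] * num_sensors

    def update(self, adc_values):
        """adc_values: one distance or pressure reading per sensor position."""
        filtered = []
        for i, value in enumerate(adc_values):
            self.history[i].append(value)
            self.history[i] = self.history[i][-MEDIAN_WINDOW:]
            med = median(self.history[i])                             # spike/motion-artifact rejection
            self.smoothed[i] += IIR_ALPHA * (med - self.smoothed[i])  # IIR smoothing
            filtered.append(self.smoothed[i])
        return filtered

def extract_features(filtered, reference, tolerance=3.0):
    """Retain only the sensor positions whose filtered value falls within a
    known tolerance of a reference measurement at that position."""
    return {i: v for i, (v, r) in enumerate(zip(filtered, reference))
            if abs(v - r) <= tolerance}
```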
Once the wristband device filters the signals to extract features, the features are classified using sensor data classification. In one embodiment, the wristband device includes a sensor data classification unit that includes pre-defined or pre-determined classifiers that contain feature sets for each recognized gesture (e.g., a particular arrangement of the user's wrist, hand, and/or fingers). In one embodiment, the pre-defined classifiers and feature sets are pre-loaded in the wristband device's firmware from reduced training sets that are previously collected from sufficiently large and diverse training population data. In another embodiment, the pre-defined classifiers and feature sets are collected in real-time from a cloud-based database that stores data related to all of the users that are using the wearable camera device. In another embodiment, the pre-defined classifiers and feature sets are trained based on the current wearer/user of the wearable camera device over a predetermined time period. Training includes but is not limited to machine learning techniques, device feedback, and user feedback. Once the wristband device classifies a set of extracted features as matching at least one known gesture from a plurality of recognized gestures (e.g., user gestures related to controlling the camera of the wearable camera device), firmware of the wristband device issues a command/callback to either the firmware of the wristband device to perform an action by the wristband device (e.g., dismissing an alert or notification, uploading data to cloud storage, etc.) or to the firmware of the camera to perform a camera-related action by the camera (e.g., taking a photo, controlling camera shutter, changing camera modes, actuating the camera lens, etc.). In one embodiment, the wristband device detects an array of sensor data values associated with user movements at a frequency of K. Based on the detected array of values and subsequent feature extraction and gesture recognition, a sequence of commands at time point N are extracted from the recognized gestures and are classified at events N+1/K, N+1/K*2, N+1/K*3, etc. These events are processed continuously in a loop to provide constant command control to the wearable camera device. Other embodiments may use additional sensors including but not limited to biometric electroencephalography (EEG) sensors, magnetoencephalography (MEG) sensors, and electromyography (EMG) sensors in combination with pressure and optical sensors to reduce noise, false positive features, or misclassification of gestures and commands. In one embodiment, the wristband device monitors movements from the user's arm, wrist, hand, fingers, and thumb. When the user is ready to take a photo or video, the user's hand with the wearable camera device is raised and positioned to frame the photo/video. Once the user's hand is raised and the photo/video is framed by the user's hand, a photo/video (media) can be captured by a plurality of finger movements and gestures recognized by the wearable camera device. In one embodiment, the index finger and thumb are extended, approximately 90 degrees from each other, effectively creating one corner of a camera frame. In another embodiment, varying angles and positions between the user's fingers or the index finger and thumb are possible to create the camera frame while increasing usability and minimizing potential user fatigue. 
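The continuous classify-and-dispatch loop running at the sampling frequency K, as described above, might look like the following minimal sketch. The gesture names, command strings, sampling rate, and the `matches`, `read_features`, and `execute` calls are illustrative placeholders, not an actual firmware interface.

```python
import time

# Hypothetical gesture-to-command table; names are illustrative only.
GESTURE_COMMANDS = {
    "frame_corner": "CAMERA_READY",      # index finger and thumb at ~90 degrees
    "finger_contract": "SHUTTER_PRESS",  # simulated button press
    "dismiss_flick": "DISMISS_NOTIFICATION",
}

SAMPLE_RATE_K = 50  # assumed sensor sampling frequency in Hz

def classify(features, classifiers):
    """Return the name of the best-matching pre-defined gesture, or None."""
    for gesture_name, feature_set in classifiers.items():
        if feature_set.matches(features):    # hypothetical per-gesture matcher
            return gesture_name
    return None

def control_loop(sensor, classifiers, wristband, camera):
    """Process events continuously at N, N+1/K, N+2/K, ... to control the device."""
    while True:
        features = sensor.read_features()    # filtered, feature-extracted values
        gesture = classify(features, classifiers)
        if gesture is not None:
            command = GESTURE_COMMANDS.get(gesture)
            if command == "DISMISS_NOTIFICATION":
                wristband.execute(command)   # wristband-side action
            elif command is not None:
                camera.execute(command)      # camera firmware command/callback
        time.sleep(1.0 / SAMPLE_RATE_K)      # next event at the next 1/K interval
```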
For example, the user can hold up all four fingers and extend the thumb approximately 90 degrees from the four fingers to frame the camera or the user can merely hold up one finger to frame the photo/video. One of ordinary skill in the art readily recognizes that a plurality of user movements and gestures can be associated with a plurality of wearable camera device functions and that would be within the spirit and the scope of the present invention. In the embodiment where the user has extended his/her index finger and thumb approximately 90 degrees from each other, the position of the shutter plane is always substantially parallel to the plane created by the index finger and the thumb—even when the user's wrist is rotated. In this embodiment, the wristband device can detect the extended index finger and thumb by monitoring, processing, classifying, and recognizing muscle, tendon, and wrist movements, wrist contours, hand shapes, and movements using feature extraction and gesture recognition to control the camera. Once the wristband device has detected that the user has extended his/her index finger and thumb approximately 90 degrees from each other (or that the user has positioned his/her hand/fingers in another orientation associated with certain gestures), the wristband device transmits an instruction/command/call to the camera to open the shutter and wait in a ready state to take/capture a photo or video. In one embodiment, the user gestures with the index finger and simulates a button press by slightly contracting the extended finger. In another embodiment, the user gestures with his/her finger to “flick” or “point” to the subject. The wristband device recognizes these gestures as a camera trigger gesture by monitoring, classifying, and extracting features from the finger and wrist movements using the plurality of embedded sensors that collects/detects muscle, tendon, bioelectromagnetics, and anatomical contour changes and the processor device that utilizes algorithms (e.g., machine learning classifiers) to analyze and classify the detected movements. In this embodiment, the wristband device triggers the camera to capture a photo or video in response to the detected movements that are classified as a camera trigger gesture. In another embodiment, the wristband device detects other gestures for camera-related functions such as capturing photos/videos, changing/selecting operation modes, zooming in/out, etc. In one embodiment, the operation modes include but are not limited to adjusting the lens position by rotating the wristband device around the wrist. In one embodiment, the wristband device has a light emitting user interface (UI) that is sized to be displayed on the user's wrist and that can be manipulated either by the detected user movements/gestures or by direct user touch gestures on a display screen of the UI. The UI can be either smaller than traditional video displays, similarly sized, or larger. The UI can be an LCD display, LED display, or another type of display unit. In one embodiment, the wearable camera device or the wristband device itself includes a communication device (e.g., WiFi or Bluetooth receiver and transmitter) that includes any of WiFi and Bluetooth communication capabilities so that the sensor movement data that is detected and analyzed as well as the captured photos and videos can be wirelessly communicated to another device (e.g., smartphone, laptop, etc.) or to a cloud-computing storage system by the communication device. 
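A minimal sketch of the ready/trigger behavior described above is shown below as a two-state machine. The gesture labels and the `open_shutter` and `capture` calls are hypothetical stand-ins for whatever the camera firmware actually exposes.

```python
from enum import Enum, auto

class CameraState(Enum):
    IDLE = auto()
    READY = auto()       # shutter open, waiting for a capture gesture

# Illustrative gesture labels; the real firmware identifiers are not specified here.
FRAME_GESTURE = "index_thumb_90"     # index finger and thumb at ~90 degrees
TRIGGER_GESTURE = "index_contract"   # slight contraction of the extended index finger

def handle_gesture(state, gesture, camera):
    """Advance a simple ready/capture state machine driven by recognized gestures."""
    if state is CameraState.IDLE and gesture == FRAME_GESTURE:
        camera.open_shutter()                # enter the ready state, await capture
        return CameraState.READY
    if state is CameraState.READY and gesture == TRIGGER_GESTURE:
        camera.capture()                     # take the photo/video
        return CameraState.IDLE
    return state
```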
The captured photos and videos can be automatically transmitted (or transmitted according to a predetermined schedule) to the device or cloud-computing storage system so that the user's information is seamlessly backed up. In one embodiment, the wristband device can be used in conjunction with a second wristband device on the user's opposing hand. The first and the second wristband devices can communicate different modes, operations, and data to each other and also work in conjunction as a multi-camera system. The multi-camera system can include additional gesture recognitions and features to enable advanced photo and video taking including but not limited to panorama photos/videos and 3D photos/videos. The system and method in accordance with the present invention provide a gesture operated wrist-mounted camera system (wearable camera device) that is an unobtrusive accessory-type wristband camera that can be worn by the user for 24 hours a day so that the user can capture moments at any time or place. Therefore, the wearable camera device is convenient and quickly produces high quality and framed camera shots. The need for a physical viewfinder (either optical or digital), a shutter button, and command and control buttons is eliminated. Readying, framing, sizing, and shutter operations do not require physical buttons. The user can also personalize the wearable camera device by selecting a plurality of different styles, colors, designs, and patterns associated with the wristband. The form factor of the wearable camera device provides camera hardware that is integrated into the wristband device so there is no need for costly hardware (physical or digital screen) or a viewfinder, which reduces the footprint (size, power, complexity). The wristband device is form-fitting and adjustable and can include a small, adjustably positioned camera and lens. In another embodiment, the wristband device includes a display unit that can serve as a viewfinder for added functionality. In one embodiment, the camera is coupled to the wristband device at the junction of the pisiform and ulna. In another embodiment, the camera is coupled to the wristband device at or near the edge of the abductor digiti minimi. These positions allow for the plane of the field of view created naturally by the index finger and thumb (or other user's hand/finger orientations) to be parallel with the camera lens even when the wrist is maneuvered and rotated at any angle (eliminating any potential obstructions). In another embodiment, the position of the camera lens is electromechanically moveable to align with the movement of the hand and wrist. The wearable camera device is able to monitor, recognize, and interpret arm, wrist, hand, and finger gestures, movements, and changes by monitoring, detecting, processing, and classifying muscle movements, tendon movements, hand shapes, and wrist contours using feature extractions and gesture recognition. 
In one embodiment, the gestures that the wearable camera device can detect to initialize, ready the camera, and take photos and/or videos include but are not limited to a semicircle shape, circle shape, OK action, hang loose action, swipe left across index finger, swipe right across index finger, tap or double tap between fingers, tap or double tap across index finger, 90 degree framing, corner box framing, double view framing, framing, switching modes of operation (camera to video, etc.), traversing modes and options, selecting options, triggering a shutter, starting capture, motion capture, encoding, and zooming in/out. The wristband device can include features outside of camera functionality, processing, and wireless communication including but not limited to a clock, timer, and Internet/email capabilities. In one embodiment, the wristband device includes an adjustable or rotatable (around the wrist) camera lens position and multiple lenses on the wristband (e.g., front facing, rear facing, etc.). In one embodiment, the wristband device includes perforations along the outside of the band that can emit light (e.g., light-emitting diodes or LEDs) to notify the user with different patterns, shapes, and designs that act as an interface display to provide feedback information to the user (e.g., photo taken, photo uploaded to cloud storage, error in taking photo, etc.). The LED display and patterns can be programmable by the user or predetermined. In one embodiment, the wristband device is touch enabled to allow for touch gestures to be recognized directly on the wristband device (or the display unit of the wristband device) in addition to muscle, tendon, and bioelectromagnetics recognition. In one embodiment, the wristband device includes the ability to tap, drag, and flick (away) the emitted light-based notifications and objects around the outer portion of the wristband. This allows the user to manipulate and adjust the position of the emitted display, to control the notifications once they are no longer relevant to the user, and to respond to certain notifications with various input, touch, and gesture responses. In one embodiment, the wristband device includes the ability to manipulate modes of operation, options, and the interface display by performing a gesture with only one hand (palm free). In one embodiment, the wristband device includes at least one accelerometer to aid in framing the shots and with orientation. In one embodiment, the wristband device includes a vibration sensor that can vibrate as part of the interface display that provides feedback to the user (in addition to the LED notifications). In one embodiment, the wristband device includes a speaker with audio output as part of the interface display so that additional feedback (in audio format) can be provided. In one embodiment, the wristband device includes a plurality of sensors that are embedded within and connected via circuitry to detect various data from the user and the user's surrounding environment. 
The plurality of sensors can include but are not limited to any of or any combination of MEMS devices, gyroscopes, accelerometers, torque sensors, weight sensors, pressure sensors, magnetometers, temperature sensors, light sensors, cameras, microphones, GPS, wireless detection sensors, altitude sensors, blood pressure sensors, heart rate sensors, biometric sensors, radio frequency identification (RFID), near field communication (NFC), mobile communication, Wi-Fi, strain gauges, fingerprint sensors, smell sensors, gas sensors, chemical sensors, color sensors, sound sensors, acoustic sensors, ultraviolet sensors, electric field sensors, magnetic field sensors, gravity sensors, wind speed sensors, wind direction sensors, compass sensors, geo-locator sensors, polarized light sensors, infrared emitter sensors, and photo-reflective sensors. In one embodiment, the wristband device includes a processor device that analyzes the detected sensor data (from the plurality of sensors) using a sensor data classification unit that utilizes a plurality of algorithmic processes. The plurality of algorithmic processes can include but is not limited to any of or any combination of back propagation, Bayes networks, machine learning, deep learning, neural networks, fuzzy min-max neural networks, hidden Markov chains, hierarchical temporal memory, k nearest neighbor (KNN), AdaBoost, and histogram analysis. In one embodiment, the user utilizes a plurality of wristband devices worn on both hands. In this embodiment, wireless communication and synchronization between the plurality of wristband devices can provide multi-camera functionality including but not limited to any of new framing options, larger framing areas, 3D capability, and 360 degree capability. In one embodiment, the wearable camera device includes a capability of connecting to various networks (public or private). To describe the features of the present invention in more detail, refer now to the following description in conjunction with the accompanying Figures. FIG.1illustrates a system100for capturing media in accordance with an embodiment. The media can include photos, video, and/or other types of media. The system100is a wearable camera device that includes a wristband device150and a camera134coupled to the wristband device150. The camera134comprises a lens, a barrel, and an actuator. In one embodiment, the wristband device150includes a plurality of components including but not limited to any of a power supply102, a random access memory (RAM)104, microcontroller (MCU)106, read-only memory (ROM)108, a clock110, a storage/memory device112(including but not limited to Flash memory), a radio114(including but not limited to Bluetooth and WiFi), an antenna116, a haptic/vibrator sensor118, an accelerometer120, a gyroscope122, an analog-to-digital converter (ADC)124, pre-amplifier/filter/amplifier/bias device126, a plurality of external sensors128(e.g., EKG, EEG, MEG, EMG, pressure, optical, etc.), an in-system programming component (ISP)130, and an internal sensor132. In another embodiment, the embedded accelerometer120and gyroscope122are included in the plurality of external sensors. In another embodiment, the wristband device includes a processor, a memory device, an application, and a transmitter/receiver device. 
The plurality of external sensors128detect user movements (e.g., muscle or tendon movements) and the plurality of components of the wristband device150determine a plurality of gestures by monitoring, classifying, and extracting features from the detected user movements. The plurality of gestures control various actions that are executed by either the wristband device150itself or the camera134including but not limited to taking pictures or videos, scrolling through various options, and scrolling through various modes. FIG.2illustrates a system200for sensor data classification by a wearable camera device in accordance with an embodiment. The system200includes an application component202, an operating system (OS) component204, and a plurality of drivers206. In one embodiment, the application component202includes inputs/outputs (I/O), a digitizer/classifier, a gesture detection component, a sensor data classification unit, and a commands component. In one embodiment, the OS204includes a scheduling/memory management component and a message passing component. In one embodiment, the plurality of drivers206may include communication drivers, sensor drivers, general purpose I/O (GPIO) drivers, and file system (FS) drivers. In one embodiment, the system200detects user movements using the sensors of the wristband device and processes the detected user movements using the application component202and the sensor data classification unit to determine the gestures and associated commands that control the camera. FIG.3illustrates a method300for capturing media in accordance with an embodiment. The media can include photos, video, and/or other types of media. The method300comprises providing a wristband device that includes at least one sensor, via step302, coupling a camera to the wristband device, via step304, determining at least one gesture using the at least one sensor, via step306, and controlling the camera by using the at least one gesture, via step308. In one embodiment, the method further includes detecting, by the at least one sensor, user movements by using any of muscle, tendon, bioelectromagnetics, and anatomical contour changes (using the contour mapping mechanism) of a user. A processor of the wristband device then analyzes the user movements using the sensor data classification unit to extract features from the user movements and to determine at least one gesture from the extracted features using various classifiers. For example, if the detected user movement is determined to be the at least one gesture of a user extending an index finger and thumb approximately 90 degrees from each other, the wristband device will instruct/control the camera to open a shutter and await the photo/video capture. Once the wristband device determines an additional gesture when the user simulates a button press by slightly contracting the index finger, the wristband device transmits an instruction to the camera to trigger the shutter thereby capturing the media (photo/video). In one embodiment, a communication device is coupled to both the wristband device and the camera, wherein the communication device transmits data from both the wristband device (e.g., gesture classifications) and the camera (e.g., photos and videos) to another device that comprises any of a smartphone, a laptop, a desktop, and a cloud-based server system. FIG.4illustrates a method400for capturing media in accordance with another embodiment. The media can include photos, video, and/or other types of media. 
The method400comprises movements of a user using the wearable camera device occurring, via step402, voltage sensors detecting the user movements, via step404, the voltage sensors recording the detected user movements as a signal, via step406, the signal passing through a pre-amplifier, via step408, the pre-amplified signal passing through a plurality of filters to remove noise and additional motion artifacts, via step410, the filtered signal passing through an amplifier, via step412, and the amplified signal passing through an analog-to-digital converter (ADC), via step414. The filtered, amplified, and converted signal has features extracted from it, via step416, and these feature extractions result in gesture determinations and associated instructions, callbacks, and notifications, via step418. The callbacks and notifications are registered by the wearable camera device, via step420, and the instructions or callbacks are transmitted from the wristband device to the camera for execution, via step422, which sets up certain camera controls and functions (e.g., taking a photo/video), via step424. The controls and functions are encoded, via step426, which results in the triggering of the associated camera function (e.g., data capture), via step428, and the wearable camera device then transmits the captured media, via step430. FIG.5illustrates a user point of view of a wearable camera device500in accordance with an embodiment. The wearable camera device500includes a wristband device502that is coupled to a camera (or the camera is embedded within the wristband device502). The wristband device502also includes other hardware components and a muscle, tendon, finger gesture recognizer or sensor detection device. The wearable camera device500is shown from the user's point of view when framing a photo/video and includes a frame of view504that serves as the user's viewfinder (in place of traditional digital or optical viewfinders). In this embodiment, the wearable camera device500does not include a traditional optical or digital viewfinder which allows the user greater flexibility in taking nature photos and videos by using the frame of view504. In another embodiment, a viewfinder is displayed on a display unit/screen (e.g., LCD/LED screen) of the user interface (UI) of the wristband device. In this embodiment, as the user focuses on a subject using his/her fingers/hand to frame the subject, the user can verify the correct frame has been captured by checking the display unit of the wristband device that displays the signal from the camera lens. In the diagram500, the user's index finger and thumb are extended approximately 90 degrees from each other to frame the subject. In another embodiment, varying angles and positions between the user's fingers or the index finger and thumb are possible to create the camera frame while increasing usability and minimizing potential user fatigue. Once the user extends his/her fingers in this shape (index finger and thumb 90 degrees from each other), the wristband's internal sensors detect the user movements and shapes (orientation of the index finger relative to the thumb) as a muscle movement. The detected muscle movement is determined to be a certain gesture that readies the camera focus and shutter. The wearable camera device500then awaits another detected user movement that is determined to be a certain gesture that will trigger another camera action (such as taking the photo/video). 
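For illustration, the FIG.4 signal chain (steps 404 through 416) can be approximated by the following Python sketch. The gains, filter size, ADC range, and the toy feature extraction are assumptions made for the example, not values from the specification.

```python
import numpy as np
from scipy.signal import medfilt

# Illustrative stages for the method 400 signal chain; the gains, filter size, and
# 10-bit ADC range are assumptions for the sketch.

def preamplify(signal, gain=10.0):
    return signal * gain

def remove_artifacts(signal):
    return medfilt(signal, kernel_size=5)       # noise / motion-artifact filtering

def amplify(signal, gain=4.0):
    return signal * gain

def analog_to_digital(signal, full_scale=5.0, bits=10):
    levels = 2 ** bits - 1
    return np.clip(np.round(signal / full_scale * levels), 0, levels).astype(int)

def process_sensor_voltage(raw_voltage):
    """Sensor voltage -> pre-amp -> filter -> amp -> ADC -> feature extraction."""
    staged = amplify(remove_artifacts(preamplify(raw_voltage)))
    digital = analog_to_digital(staged)
    features = np.diff(digital)                 # toy feature-extraction stand-in
    return features
```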
In another embodiment, the user wearing the wearable camera device faces his/her palm outwards and towards the subject with all four fingers raised (and possibly extending the thumb approximately 90 degrees from the four fingers or resting the thumb up against the index finger). In this embodiment, if the user lowered one or more of the four raised fingers, then the wristband device would detect user movements, determine a gesture from the detected user movements, and then transmit a command/instruction to the camera based upon the determined gesture to carry out a camera action including but not limited to triggering a camera shutter action. In another embodiment, the user wears the wearable camera device (or specifically the wristband device portion) at a position that is rotated 180 degrees or the opposite of the normal wearing position. In this embodiment, the camera lens is facing the user thereby allowing the user to take "selfie" style captures using similar user movements and gestures (e.g., lowering one of the fingers that are raised, etc.). FIG.6illustrates a side view of the wearable camera device600in accordance with an embodiment. The wearable camera device600has components similar to the wearable camera device500ofFIG.5including a wristband device602and a camera coupled to the wristband device602. The camera includes a camera lens604focused on an object608. The frame of view606is focused on the object608. The user's index finger and thumb are in the same position (extended approximately 90 degrees from each other) and so the camera is once again in a ready position for when another user movement and associated gesture is determined. FIG.7illustrates a subject point of view of the wearable camera device700in accordance with an embodiment. The wearable camera device700has components similar to the wearable camera device500ofFIG.5including a wristband device702and a camera coupled to the wristband device702. The camera includes a camera lens704. The user's index finger and thumb are in the same position (extended approximately 90 degrees from each other) and so the camera is once again in a ready position for when another user movement and associated gesture is determined. FIG.8illustrates a user point of view of the wearable camera device800in accordance with an embodiment. The wearable camera device800has components similar to the wearable camera device500ofFIG.5including a wristband device802and a camera coupled to or integrated within the wristband device802. The user's index finger and thumb have moved from the position of being extended approximately 90 degrees from each other to a button pressing simulation movement806. In this embodiment, the button pressing simulation movement806is when the user's index finger is slightly lowered. In another embodiment, a different type of user movement can be associated with the button pressing simulation movement806. Once the user's index finger moves, the wristband device802's sensors detect the user movement and the wristband device802's internal components and sensor data classification unit determine that a specific gesture associated with the detected user movement has occurred. The determined gesture prompts the camera to take a picture/photo of an object808in the distance that is framed by a frame of view804. The wristband device of the wearable camera device detects a plurality of sensor signals and determines a plurality of gestures using the plurality of sensor signals. 
The plurality of sensor signals can represent either a single gesture (e.g., “clicking” motion of one finger, etc.) or a set of gestures in a specific sequence (e.g., sliding a finger to the left or right and tapping, etc.) that the firmware of the wristband device constitutes as a single action. Once the gesture is determined from the detected sensor signals, the firmware of the wristband device sends out an associated command to control the camera.FIGS.9-11represent additional sets of gestures that the wristband device can determine as a single action. FIG.9illustrates a user point of view of the wearable camera device900in accordance with an embodiment. The wearable camera device900has components similar to the wearable camera device500ofFIG.5including a wristband device902and a camera coupled to or integrated within the wristband device902. The wristband device902has determined a gesture904that comprises the user tapping or double tapping the thumb. The gesture904triggers the wearable camera device900to display the target/subject of the photo/video on the wristband device902. In this embodiment, the wristband device902includes a user interface display that can display the subject that is being photographed by the camera to ensure that the user has focused the camera correctly upon the target/subject that the user wants to photograph/video. FIG.10illustrates a user point of view of the wearable camera device1000in accordance with an embodiment. The wearable camera device1000has components similar to the wearable camera device500ofFIG.5including a wristband device1002and a camera coupled to or integrated within the wristband device1002. The wristband device1002has determined a gesture1004that comprises the user swiping left with the thumb. The gesture1004triggers the wearable camera device1000to change various modes of the camera and the mode change is displayed on the user interface of the wristband device1002. In this embodiment, the wristband device1002includes a user interface display that can display the varying modes. In one embodiment, the modes are varied using alternating LED patterns. In another embodiment, the modes are varied and the text (e.g., “Photo Taking”, “Video Taking”, “Night-Time Photo”, etc.) is displayed on the user interface display. FIG.11illustrates a user point of view of the wearable camera device1100in accordance with an embodiment. The wearable camera device1100has components similar to the wearable camera device500ofFIG.5including a wristband device1102and a camera coupled to or integrated within the wristband device1102. The wristband device1102has determined a gesture1104that comprises the user swiping right with the thumb. The gesture1104triggers the wearable camera device to also change various modes of the camera (in the opposite direction as swiping left so essentially the user can scroll left and right through various options) and the mode change is displayed on the user interface of the wristband device1102. In this embodiment, the wristband device1102includes a user interface display that can display the varying modes. In one embodiment, the modes are varied using alternating LED patterns. In another embodiment, the modes are varied and the text (e.g., “Photo Taking”, “Video Taking”, “Night-Time Photo”, etc.) is displayed on the user interface display. FIG.12illustrates a user point of view of the wearable camera device1200in accordance with an embodiment. 
The wearable camera device1200has components similar to the wearable camera device500ofFIG.5including a wristband device1202and a camera coupled to or integrated within the wristband device1202. The wristband device1202has determined a gesture1204that comprises the user holding his/her fingers in a "C" shape. The gesture1204triggers the wearable camera device1200to select the mode that the user has scrolled to (by swiping left or right). In another embodiment, the user can input various other hand gestures and shapes that are associated with mode selection. FIG.13illustrates a user point of view of the wearable camera device1300in accordance with an embodiment. The wearable camera device1300has components similar to the wearable camera device500ofFIG.5including a wristband device1302and a camera coupled to or integrated within the wristband device1302. The wristband device1302has determined a gesture1304that comprises the user's fingers snapping. The gesture1304triggers an "OK" action to the wearable camera device1300. In one embodiment, the "OK" action represents the user setting or acknowledging a particular setting or command that may be displayed to the user as feedback on the device interface output (e.g., LEDs, user interface display, etc.). FIG.14illustrates a user point of view of the wearable camera device1400in accordance with an embodiment. The wearable camera device1400has components similar to the wearable camera device ofFIG.5including a wristband device1402and a camera coupled to or integrated within the wristband device1402. The wristband device1402has determined a gesture1404that comprises the user tapping or double tapping the index finger and the middle finger both to the thumb. The gesture1404triggers an "OK" action to the wearable camera device1400. In one embodiment, the "OK" action represents the user setting or acknowledging a particular setting or command that may be displayed to the user as feedback on the device interface output (e.g., LEDs, user interface display, etc.). One of ordinary skill in the art readily recognizes that other user movements can be associated with an "OK" action and that would be within the spirit and scope of the present invention. FIG.15illustrates a user point of view of a multi-wearable camera device system1500in accordance with an embodiment. The multi-wearable camera device system1500includes a first wearable camera device1502coupled to a second wearable camera device1504. The first and the second wearable camera devices1502-1504enhance the user's ability to frame a target/subject1506. In addition, the first and the second wearable camera devices1502-1504communicate with each other and enable the user to select from various dual wristband modes. In one embodiment, the multi-wearable camera device system1500syncs and stitches together image and video capture thereby creating panorama or wide angle capture. In another embodiment, the multi-wearable camera device system1500provides syncing and capturing of 3D image or video captures, 3D stereoscopic vision, and 3D zooming. FIG.16illustrates a user interface display of a wearable camera device1600in accordance with an embodiment. The wearable camera device1600includes a user interface display1602. In one embodiment, the user interface display1602comprises LEDs that display various patterns such as the pattern1604based upon camera settings and notifications. In one embodiment, the LEDs are integrated via laser cut holes cut into the user interface display1602. 
The user can use hand gestures (waving across the user interface display1602) or touch gestures (pressing on the user interface display1602) to respond to the various notifications or delete them. One of ordinary skill in the art readily recognizes that the wearable camera device can include varying types of display units and user interfaces and that would be within the spirit and scope of the present invention. FIG.17illustrates a method1700for capturing media using gesture recognition by a wearable camera device in accordance with an embodiment. The media can include photos, video, and/or other types of media. The method1700represents two time points (N and N+1). The user of the wearable camera device starts in position A (with the index finger and thumb approximately 90 degrees from each other) at time point N and ends in position B (with the index finger slightly lowered in a "clicking" motion) at time point N+1. The wearable camera device detects the user's hand positioning and movements using a plurality of embedded sensors and determines various gestures from the detected positioning/movements, via step1702. To determine the gestures from the detected user movements, the wearable camera device utilizes a contour mapping mechanism that provides a contour map of the anatomical contours of the user's wrist that is wearing the wristband device of the wearable camera device, via step1704. The wristband device (band and embedded sensors) position around the user's wrist is denoted by a solid line and the contour of the user's wrist is denoted by a dotted line. When the user's hand is in position A at time point N, the contour map of the user's wrist is in wrist contour shape one (1) and when the user's hand shifts in position from position A to position B at time point N+1, the contour map of the user's wrist is in wrist contour shape two (2). After determining the change in the contour map via step1704, the wristband device classifies the contour changes using feature extraction and associated classifiers from contour shapes, via step1706. The key features of the classifier are plotted on a graph that depicts a distance in millimeters (mm) on the y-axis and that depicts the sensor position around the circumference of the wrist and based upon the contour map on the x-axis. When the user's hand is in position A at time point N, the plotted graph displays the wrist contour shape1in a first sensor position and when the user's hand is in position B at time point N+1, the plotted graph displays the wrist contour shape2in a second sensor position. After classifying the sensor positions to determine the gesture, the wearable camera device associates the determined gesture with a certain command that is then transmitted to the camera, via step1708. For example, when the wearable camera device determines the first sensor position (that is associated with a first gesture), the camera receives a "camera ready" command (because that command is associated with the first gesture) and when the wearable camera device determines the second sensor position (after the user has changed orientation of his/her hand which is associated with a second gesture), the camera receives a "shutter press" command (because that command is associated with the second gesture) and the photo/video is captured by the wearable camera device. 
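A minimal sketch of the contour-shape classification in FIG.17 follows, matching a measured wrist contour against stored templates and mapping the result to a camera command. The template values, tolerance, and command strings are illustrative placeholders rather than real training data.

```python
import numpy as np

# Hypothetical contour templates: per-sensor distances (mm) around the wrist for the
# two wrist contour shapes of FIG. 17. Real templates would come from training data.
CONTOUR_TEMPLATES = {
    "shape_1_frame":   np.array([4.2, 4.0, 3.8, 3.9, 4.5, 5.1, 5.0, 4.6]),
    "shape_2_trigger": np.array([3.6, 3.4, 3.7, 4.1, 4.9, 5.4, 5.2, 4.8]),
}

GESTURE_TO_COMMAND = {
    "shape_1_frame": "CAMERA_READY",
    "shape_2_trigger": "SHUTTER_PRESS",
}

def classify_contour(measured, max_distance=1.0):
    """Match a measured contour (mm per sensor) to the nearest stored template."""
    best_name, best_dist = None, float("inf")
    for name, template in CONTOUR_TEMPLATES.items():
        dist = np.linalg.norm(measured - template)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None

def command_for_contour(measured):
    gesture = classify_contour(measured)
    return GESTURE_TO_COMMAND.get(gesture)   # e.g., "SHUTTER_PRESS" for shape 2
```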
One of ordinary skill in the art readily recognizes that a variety of contour shapes and sensor positions can be associated with a variety of gestures and subsequently with a variety of commands and that would be within the spirit and scope of the present invention. FIG.18illustrates a method1800for capturing media using gesture recognition by a wearable camera device in accordance with an embodiment. The media can include photos, video, and/or other types of media. The wearable camera device monitors the hand of a user wearing the wearable camera device (that includes a wristband device and a camera coupled to or integrated within the wristband device), via step1802. When the user moves a portion of the hand, the fingers, and/or the wrist (user movements), the user movements cause muscle, skin, tendons, and bone to move as well allowing the wearable camera device to detect the user movements as sensor values, via step1804, by using embedded sensors within the wristband device of the wearable camera device at a time point N, via step1806. The detected sensor values are converted into a signal using an ADC and stored as an array of values (AV), via step1808. After storage, filters (e.g., median filter) are applied to the array of values (AV), via step1810. Steps1802-1810represent the "monitoring" phase of the method1800utilized by the wearable camera device. The monitoring phase is repeated by the wearable camera device to create enough training data for certain gestures (G, G1, etc.), via step1812. Once enough training data is created (or pre-downloaded into the wearable camera device), common features are identified for each gesture (G, G1, etc.) that are each associated with certain gesture classifiers (C, C1, etc.) that are each associated with certain actions (A, A1, etc.), via step1814. The wearable camera device receives the determined gestures and creates a gesture classifier for each, via step1816. Steps1812-1816represent the "classification" phase of the method1800utilized by the wearable camera device. After the classification phase has been completed by the wearable camera device and enough training data has been created, via step1816, the wearable camera device compares the filtered AV values to the gesture classifiers (created via step1816) using matching methodologies including but not limited to machine learning and k-nearest neighbor algorithms (KNN), via step1818. If the features from the filtered AV values match a certain gesture classifier, the wearable camera device calls the action (transmits the command to the camera) associated with that certain gesture classifier, via step1820. For example, if the filtered AV values (or sensor data) match a certain gesture G1that is associated with classifier C1 that is associated with action A1, then the wearable camera device will call and transmit a command/instruction to the camera for action A1. Steps1818-1820represent the "recognition" phase of the method1800utilized by the wearable camera device. After the recognition phase has been completed by the wearable camera device, the action command or call (e.g., camera shutter) is received by the camera, via step1822, and the camera firmware initiates camera encoding of the camera controls and functions, via step1824. The action command or call is encoded, via step1826, and the action is executed (e.g., the camera captures the image), via step1828. 
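The recognition phase (steps 1818-1820) can be illustrated with a small k-nearest neighbor sketch. The training arrays, labels, and the `execute` call are hypothetical placeholders; a real implementation would use the training data gathered during the monitoring and classification phases.

```python
import numpy as np
from collections import Counter

# Hypothetical training set: each row is a filtered array of values (AV) labeled with
# the gesture it was recorded for; the numbers are placeholders.
TRAINING_AV = np.array([
    [4.1, 3.9, 4.4, 5.0],   # gesture G  (frame corner)
    [4.2, 4.0, 4.5, 5.1],   # gesture G
    [3.5, 3.6, 4.8, 5.3],   # gesture G1 (index "click")
    [3.4, 3.7, 4.9, 5.4],   # gesture G1
])
TRAINING_LABELS = ["G", "G", "G1", "G1"]
ACTIONS = {"G": "A", "G1": "A1"}   # each gesture classifier maps to an action

def knn_match(filtered_av, k=3):
    """Steps 1818-1820: match filtered AV values to a gesture classifier via KNN."""
    distances = np.linalg.norm(TRAINING_AV - filtered_av, axis=1)
    nearest = np.argsort(distances)[:k]
    votes = Counter(TRAINING_LABELS[i] for i in nearest)
    gesture, _ = votes.most_common(1)[0]
    return gesture

def recognize_and_act(filtered_av, camera):
    gesture = knn_match(np.asarray(filtered_av))
    action = ACTIONS[gesture]          # e.g., gesture G1 -> action A1
    camera.execute(action)             # call/transmit the associated command
```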
Once the image (or video) is captured, the wearable camera device transmits the media to a local or remote storage, via step1830, and a user can view the media, via step1832. One of ordinary skill in the art readily recognizes that the wearable camera device can associate a plurality of sensor values with gestures, classifiers, and actions and that would be within the spirit and scope of the present invention. FIG.19illustrates a subject point of view of a wearable camera device1900in accordance with an embodiment. The wearable camera device1900includes a wristband device1902(or band) that wraps around the wrist of the user and a camera1904that is coupled to or integrated within the wristband device1902. The camera1904includes a camera lens1906and a camera sub assembly. In one embodiment, the camera lens1906is mechanically and flexibly affixed to the wristband device1902near the bottom of the user's palm. The flexibly fixed camera lens1906mechanically moves with the movements/gestures of the user's wrist and palm. In another embodiment, the camera lens1906is rigidly fixed to a portion of the wristband device1902that is controllable by electromechanical actuation based on movements/gestures. The subject point of view represents the view from a subject that is looking straight at the camera lens1906. The subject point of view shows the palm and various fingers (middle, ring, pinky) of the user. The wristband device1902includes a plurality of components including but not limited to sensors, display unit, hardware platform, battery, storage, and radio. In one embodiment, the wristband device1902rotates around the user's wrist enabling the camera lens1906to either face outward towards the subject or face inward towards the user for "selfie" picture capturing capability. A system and method in accordance with the present invention disclose a wearable camera system (wearable camera device) for capturing media (photo/video). The wearable camera device comprises a wristband device that includes at least one sensor. In one embodiment, the wearable camera device also comprises a camera coupled to the wristband device. In another embodiment, the camera is integrated within the wristband device as one overall device. The camera is controlled by at least one gesture determined from user movements and sensor data detected by the at least one sensor. The wristband device includes additional hardware components including but not limited to a processor that analyzes the user movements detected by the at least one sensor and a memory device that stores the various data (detected user movements, determined gestures, etc.). The at least one sensor detects the user movements by detecting any of muscle, tendon, bioelectromagnetics, and anatomical contour changes of the user. In one embodiment, the processor analyzes the detected user movements using a sensor data classification unit that utilizes filters and algorithms that extract features from the user movements to determine the at least one gesture. The sensor data classification unit utilizes any of back propagation, Bayes networks, neural networks, and machine learning to determine the user gestures. In one embodiment, the at least one sensor is any of a gyroscope, an accelerometer, a pressure sensor, a temperature sensor, and a light sensor. In one embodiment, the camera is controlled without using any of an optical viewfinder or a digital viewfinder and instead uses a natural viewfinder created by the user's natural fingers. 
In one embodiment, the camera includes a lens that is small (e.g., thumbnail sized) and the lens is positioned to follow the rotational movement of the user. This ensures that the lens is never obstructed and is positioned on the user's wrist in a way that enables clear targeting of the photo/video/media subjects that the user wants to capture. The wearable camera device can determine a plurality of gestures that trigger various camera actions. In one embodiment, the gesture includes a user extending an index finger and thumb approximately 90 degrees from each other to create one corner of a camera frame. In this embodiment, the camera shutter's plane is substantially parallel to a plane created by the extending of the index finger and thumb. In another embodiment, the user can determine various hand gestures and actions and correlate each of these hand gestures and actions to specific camera actions. In another embodiment, based upon a continuously updated database or software updates, the wearable camera device determines additional gestures and associated camera actions. In one embodiment, once the user extends the index finger and thumb approximately 90 degrees from each other in a certain gesture and the wearable camera device determines the certain gesture, the wristband device of the wearable camera device instructs/controls/transmits a message to the camera to open the shutter and await a photo/video/media capture gesture from the user. In one embodiment, the user provides a photo/video/media capture gesture by simulating a button press. In another embodiment, the user provides a photo/video/media capture gesture by another gesture that is either predetermined or inputted by the user such as snapping of the fingers. In one embodiment, the at least one gesture or plurality of gestures that the wearable camera device can determine includes any of initializing camera function, semicircle shapes, circle shapes, OK action, hang loose action, swipe left across index finger, swipe right across index finger, tap between fingers, double tap between fingers, switching modes, selecting options, zooming, triggering a shutter to start capture, and triggering a shutter to start motion capture. In one embodiment, the wearable camera device includes a communication device (e.g., transmitter/receiver device) coupled to both the wristband device and the camera (or just one of either the wristband device and the camera). In this embodiment, the communication device transmits data from either or both the wristband device (e.g., user movements, gestures, etc.) and the camera (e.g., photos, videos, media, etc.) to another device that comprises any of a smartphone and a cloud-based server system. In one embodiment, the wristband device includes a user interface display that comprises a plurality of light emitting diodes (LEDs) that produce various patterns associated with camera actions and notifications and alerts for the user. As described above, a system and method in accordance with the present invention utilize a wristband device that includes a plurality of sensors and a processor and a camera mounted to the wristband device to provide a gesture operated wrist-mounted camera system (wearable camera device). The wearable camera device is an unobtrusive accessory-type wristband camera that can be worn by the user for 24 hours a day so that the user can take photos and videos at any time and with ease. 
The wearable camera device is convenient (in accessibility, size, shape, weight, etc.) and quickly produces high quality and framed camera shots and videos once user movements and gestures associated with various commands are detected by the wearable camera device. A system and method for operating a wrist-mounted camera system utilizing gestures has been disclosed. Embodiments described herein can take the form of an entirely hardware implementation, an entirely software implementation, or an implementation containing both hardware and software elements. Embodiments may be implemented in software, which includes, but is not limited to, application software, firmware, resident software, microcode, etc. The steps described herein may be implemented using any suitable controller or processor, and software application, which may be stored on any suitable storage location or computer-readable medium. The software application provides instructions that enable the processor to perform the functions described herein. Furthermore, embodiments may take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The medium may be an electronic, magnetic, optical, electromagnetic, infrared, semiconductor system (or apparatus or device), or a propagation medium (non-transitory). Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and an optical disk. Current examples of optical disks include DVD, compact disk-read-only memory (CD-ROM), and compact disk-read/write (CD-R/W). Although the present invention has been described in accordance with the embodiments shown, one of ordinary skill in the art will readily recognize that there could be variations to the embodiments and those variations would be within the spirit and scope of the present invention. Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims. | 60,637 |
11861070 | DETAILED DESCRIPTION Various implementations and details are described with reference to examples for presenting and controlling graphical elements and virtual elements in AR, VR, XR, or a combination thereof, using hand gestures. For example, a relaxed hand cradles an apparently graspable menu icon, such as a ball. Active hand tracking detects the opening of the hand, causing an opening event which is closely correlated with the physical action of opening the fingers of the hand. Closing the hand causes a closing event. Examples include a method of controlling a graphical element in response to hand gestures detected with an eyewear device. The eyewear device comprises a camera system, an image processing system, and a display. The method includes capturing frames of video data with the camera system and detecting a hand in the captured frames of video data with the image processing system. The method further includes presenting on the display a menu icon at a current icon position, in accordance with the detected current hand location. The method includes detecting a series of hand shapes in the captured frames of video data and determining whether the detected hand shapes match any of a plurality of predefined hand gestures stored in a hand gesture library. In response to a match, the method includes executing an action in accordance with the matching hand gesture. For example, the method includes detecting a first series of hand shapes and then determining, with the image processing system, whether the detected first series of hand shapes matches a first predefined hand gesture (e.g., an opening gesture) among the plurality of predefined hand gestures. In response to a match, the method includes presenting on the display one or more graphical elements adjacent the current icon position. The method further includes detecting a second series of hand shapes, determining whether the detected second series of hand shapes matches a second predefined hand gesture (e.g., a closing gesture), and, in response to a match, removing the one or more graphical elements from the display. Although the various systems and methods are described herein with reference to capturing still images with an eyewear device, the technology described may be applied to selecting and capturing still images from a sequence of frames of video data that were captured by other devices. The following detailed description includes systems, methods, techniques, instruction sequences, and computing machine program products illustrative of examples set forth in the disclosure. Numerous details and examples are included for the purpose of providing a thorough understanding of the disclosed subject matter and its relevant teachings. Those skilled in the relevant art, however, may understand how to apply the relevant teachings without such details. Aspects of the disclosed subject matter are not limited to the specific devices, systems, and methods described because the relevant teachings can be applied or practiced in a variety of ways. The terminology and nomenclature used herein is for the purpose of describing particular aspects only and is not intended to be limiting. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail. 
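For illustration only, the opening/closing menu flow summarized in the overview above can be sketched as follows. The hand-shape labels, gesture library contents, and display calls are assumed placeholder names, not the eyewear device's actual gesture library or API.

```python
# A minimal sketch of the open/close menu flow described above; all identifiers
# are illustrative placeholders rather than the eyewear SDK.
HAND_GESTURE_LIBRARY = {
    "opening": ["fist", "half_open", "open_palm"],    # predefined series of hand shapes
    "closing": ["open_palm", "half_open", "fist"],
}

def match_gesture(detected_shapes):
    """Return the library gesture whose shape sequence matches the detected series."""
    for name, shape_series in HAND_GESTURE_LIBRARY.items():
        if detected_shapes[-len(shape_series):] == shape_series:
            return name
    return None

def update_display(display, icon_position, detected_shapes, elements_visible):
    gesture = match_gesture(detected_shapes)
    if gesture == "opening" and not elements_visible:
        display.show_elements_adjacent(icon_position)   # present graphical elements
        return True
    if gesture == "closing" and elements_visible:
        display.remove_elements()                       # remove graphical elements
        return False
    return elements_visible
```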
The terms “coupled” or “connected” as used herein refer to any logical, optical, physical, or electrical connection, including a link or the like by which the electrical or magnetic signals produced or supplied by one system element are imparted to another coupled or connected system element. Unless described otherwise, coupled or connected elements or devices are not necessarily directly connected to one another and may be separated by intermediate components, elements, or communication media, one or more of which may modify, manipulate, or carry the electrical signals. The term “on” means directly supported by an element or indirectly supported by the element through another element that is integrated into or supported by the element. The term “proximal” is used to describe an item or part of an item that is situated near, adjacent, or next to an object or person; or that is closer relative to other parts of the item, which may be described as “distal.” For example, the end of an item nearest an object may be referred to as the proximal end, whereas the generally opposing end may be referred to as the distal end. The orientations of the eyewear device, other mobile devices, associated components and any other devices incorporating a camera, an inertial measurement unit, or both such as shown in any of the drawings, are given by way of example only, for illustration and discussion purposes. In operation, the eyewear device may be oriented in any other direction suitable to the particular application of the eyewear device; for example, up, down, sideways, or any other orientation. Also, to the extent used herein, any directional term, such as front, rear, inward, outward, toward, left, right, lateral, longitudinal, up, down, upper, lower, top, bottom, side, horizontal, vertical, and diagonal are used by way of example only, and are not limiting as to the direction or orientation of any camera or inertial measurement unit as constructed or as otherwise described herein. Advanced AR technologies, such as computer vision and object tracking, may be used to produce a perceptually enriched and immersive experience. Computer vision algorithms extract three-dimensional data about the physical world from the data captured in digital images or video. Object recognition and tracking algorithms are used to detect an object in a digital image or video, estimate its orientation or pose, and track its movement over time. Hand and finger recognition and tracking in real time is one of the most challenging and processing-intensive tasks in the field of computer vision. The term “pose” refers to the static position and orientation of an object at a particular instant in time. The term “gesture” refers to the active movement of an object, such as a hand, through a series of poses, sometimes to convey a signal or idea. The terms, pose and gesture, are sometimes used interchangeably in the field of computer vision and augmented reality. As used herein, the terms “pose” or “gesture” (or variations thereof) are intended to be inclusive of both poses and gestures; in other words, the use of one term does not exclude the other. The term “bimanual gesture” means and describes a gesture performed with both hands. One hand may be relatively stationary, while the other hand is moving. In some bimanual gestures, both hands appear relatively stationary; the gesture occurs in small movements between the fingers and surfaces of the two hands. 
Although the two hands may operate in relative opposition to perform a bimanual gesture, the term includes gestures made by both hands operating together, in tandem. Additional objects, advantages and novel features of the examples will be set forth in part in the following description, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The objects and advantages of the present subject matter may be realized and attained by means of the methodologies, instrumentalities and combinations particularly pointed out in the appended claims. Reference now is made in detail to the examples illustrated in the accompanying drawings and discussed below. FIG.1Ais a side view (right) of an example hardware configuration of an eyewear device100which includes a touch-sensitive input device or touchpad181. As shown, the touchpad181may have a boundary that is subtle and not easily seen; alternatively, the boundary may be plainly visible or include a raised or otherwise tactile edge that provides feedback to the user about the location and boundary of the touchpad181. In other implementations, the eyewear device100may include a touchpad on the left side. The surface of the touchpad181is configured to detect finger touches, taps, and gestures (e.g., moving touches) for use with a GUI displayed by the eyewear device, on an image display, to allow the user to navigate through and select menu options in an intuitive manner, which enhances and simplifies the user experience. Detection of finger inputs on the touchpad181can enable several functions. For example, touching anywhere on the touchpad181may cause the GUI to display or highlight an item on the image display, which may be projected onto at least one of the optical assemblies180A,180B. Double tapping on the touchpad181may select an item or icon. Sliding or swiping a finger in a particular direction (e.g., from front to back, back to front, up to down, or down to up) may cause the items or icons to slide or scroll in a particular direction; for example, to move to a next item, icon, video, image, page, or slide. Sliding the finger in another direction may slide or scroll in the opposite direction; for example, to move to a previous item, icon, video, image, page, or slide. The touchpad181can be virtually anywhere on the eyewear device100. In one example, an identified finger gesture of a single tap on the touchpad181initiates selection or pressing of a graphical user interface element in the image presented on the image display of the optical assembly180A,180B. An adjustment to the image presented on the image display of the optical assembly180A,180B based on the identified finger gesture can be a primary action which selects or submits the graphical user interface element on the image display of the optical assembly180A,180B for further display or execution. As shown, the eyewear device100includes a right visible-light camera114B. As further described herein, two cameras114A,114B capture image information for a scene from two separate viewpoints. The two captured images may be used to project a three-dimensional display onto an image display for viewing with 3D glasses. The eyewear device100includes a right optical assembly180B with an image display to present images, such as depth images. As shown inFIGS.1A and1B, the eyewear device100includes the right visible-light camera114B. 
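The touchpad181behavior described above (tap to highlight, double tap to select, swipe to scroll) can be summarized in a short sketch; the event fields, direction names, and GUI methods are illustrative assumptions rather than the device's actual interface.

```python
# Illustrative mapping of touchpad input events to GUI navigation actions, in the
# spirit of the touchpad behavior described above; names are placeholders.
def handle_touchpad_event(event, gui):
    if event.kind == "tap":
        gui.highlight_item()                 # touching anywhere highlights an item
    elif event.kind == "double_tap":
        gui.select_item()                    # double tap selects the item or icon
    elif event.kind == "swipe":
        if event.direction in ("front_to_back", "up_to_down"):
            gui.scroll("next")               # advance to the next item/icon/page/slide
        elif event.direction in ("back_to_front", "down_to_up"):
            gui.scroll("previous")           # move to the previous item/icon/page/slide
```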
The eyewear device100can include multiple visible-light cameras114A,114B that form a passive type of three-dimensional camera, such as stereo camera, of which the right visible-light camera114B is located on a right corner110B. As shown inFIGS.1C-D, the eyewear device100also includes a left visible-light camera114A. Left and right visible-light cameras114A,114B are sensitive to the visible-light range wavelength. Each of the visible-light cameras114A,114B have a different frontward facing field of view which are overlapping to enable generation of three-dimensional depth images, for example, right visible-light camera114B depicts a right field of view111B. Generally, a “field of view” is the part of the scene that is visible through the camera at a particular position and orientation in space. The fields of view111A and111B have an overlapping field of view304(FIG.3). Objects or object features outside the field of view111A,111B when the visible-light camera captures the image are not recorded in a raw image (e.g., photograph or picture). The field of view describes an angle range or extent, which the image sensor of the visible-light camera114A,114B picks up electromagnetic radiation of a given scene in a captured image of the given scene. Field of view can be expressed as the angular size of the view cone; i.e., an angle of view. The angle of view can be measured horizontally, vertically, or diagonally. In an example configuration, one or both visible-light cameras114A,114B has a field of view of 100° and a resolution of 480×480 pixels. The “angle of coverage” describes the angle range that a lens of visible-light cameras114A,114B or infrared camera410(seeFIG.2A) can effectively image. Typically, the camera lens produces an image circle that is large enough to cover the film or sensor of the camera completely, possibly including some vignetting (e.g., a darkening of the image toward the edges when compared to the center). If the angle of coverage of the camera lens does not fill the sensor, the image circle will be visible, typically with strong vignetting toward the edge, and the effective angle of view will be limited to the angle of coverage. Examples of such visible-light cameras114A,114B include a high-resolution complementary metal-oxide-semiconductor (CMOS) image sensor and a digital VGA camera (video graphics array) capable of resolutions of 480p (e.g., 640×480 pixels), 720p, 1080p, or greater. Other examples include visible-light cameras114A,114B that can capture high-definition (HD) video at a high frame rate (e.g., thirty to sixty frames per second, or more) and store the recording at a resolution of 1216 by 1216 pixels (or greater). The eyewear device100may capture image sensor data from the visible-light cameras114A,114B along with geolocation data, digitized by an image processor, for storage in a memory. The visible-light cameras114A,114B capture respective left and right raw images in the two-dimensional space domain that comprise a matrix of pixels on a two-dimensional coordinate system that includes an X-axis for horizontal position and a Y-axis for vertical position. Each pixel includes a color attribute value (e.g., a red pixel light value, a green pixel light value, or a blue pixel light value); and a position attribute (e.g., an X-axis coordinate and a Y-axis coordinate). 
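For illustration only, the relationship between the example field of view and resolution given above (100° across 480×480 pixels) can be expressed as an approximate angular resolution. The short Python sketch below assumes a simple pinhole-camera model; the function name and the evaluation at the image center are illustrative choices and are not part of the example configuration itself.

import math

def angular_resolution_deg(fov_deg: float, pixels: int) -> float:
    """Approximate angle of view covered by one pixel, in degrees.

    A pinhole-camera model is assumed: the focal length in pixel units is
    derived from the stated field of view, and the per-pixel angle is
    evaluated at the center of the image, where it is largest.
    """
    # Focal length expressed in pixels for the given field of view.
    focal_px = (pixels / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    # Angle subtended by a single central pixel.
    return math.degrees(math.atan(1.0 / focal_px))

# Example configuration described above: 100 degrees across 480 pixels.
print(round(angular_resolution_deg(100.0, 480), 3), "degrees per central pixel")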
In order to capture stereo images for later display as a three-dimensional projection, the image processor412(shown inFIG.4) may be coupled to the visible-light cameras114A,114B to receive and store the visual image information. The image processor412, or another processor, controls operation of the visible-light cameras114A,114B to act as a stereo camera simulating human binocular vision and may add a timestamp to each image. The timestamp on each pair of images allows display of the images together as part of a three-dimensional projection. Three-dimensional projections produce an immersive, life-like experience that is desirable in a variety of contexts, including virtual reality (VR) and video gaming. FIG.1Bis a perspective, cross-sectional view of a right corner110B of the eyewear device100ofFIG.1Adepicting the right visible-light camera114B of the camera system, and a circuit board.FIG.1Cis a side view (left) of an example hardware configuration of an eyewear device100ofFIG.1A, which shows a left visible-light camera114A of the camera system.FIG.1Dis a perspective, cross-sectional view of a left corner110A of the eyewear device ofFIG.1Cdepicting the left visible-light camera114A of the three-dimensional camera, and a circuit board. Construction and placement of the left visible-light camera114A is substantially similar to the right visible-light camera114B, except the connections and coupling are on the left lateral side170A. As shown in the example ofFIG.1B, the eyewear device100includes the right visible-light camera114B and a circuit board140B, which may be a flexible printed circuit board (PCB). A right hinge126B connects the right corner110B to a right temple125B of the eyewear device100. In some examples, components of the right visible-light camera114B, the flexible PCB140B, or other electrical connectors or contacts may be located on the right temple125B or the right hinge126B. A left hinge126B connects the left corner110A to a left temple125A of the eyewear device100. In some examples, components of the left visible-light camera114A, the flexible PCB140A, or other electrical connectors or contacts may be located on the left temple125A or the left hinge126A. The right corner110B includes corner body190and a corner cap, with the corner cap omitted in the cross-section ofFIG.1B. Disposed inside the right corner110B are various interconnected circuit boards, such as PCBs or flexible PCBs, that include controller circuits for right visible-light camera114B, microphone(s), low-power wireless circuitry (e.g., for wireless short range network communication via Bluetooth™), high-speed wireless circuitry (e.g., for wireless local area network communication via Wi-Fi). The right visible-light camera114B is coupled to or disposed on the flexible PCB140B and covered by a visible-light camera cover lens, which is aimed through opening(s) formed in the frame105. For example, the right rim107B of the frame105, shown inFIG.2A, is connected to the right corner110B and includes the opening(s) for the visible-light camera cover lens. The frame105includes a front side configured to face outward and away from the eye of the user. The opening for the visible-light camera cover lens is formed on and through the front or outward-facing side of the frame105. In the example, the right visible-light camera114B has an outward-facing field of view111B (shown inFIG.3) with a line of sight or perspective that is correlated with the right eye of the user of the eyewear device100. 
The visible-light camera cover lens can also be adhered to a front side or outward-facing surface of the right corner110B in which an opening is formed with an outward-facing angle of coverage, but in a different outwardly direction. The coupling can also be indirect via intervening components. As shown inFIG.1B, flexible PCB140B is disposed inside the right corner110B and is coupled to one or more other components housed in the right corner110B. Although shown as being formed on the circuit boards of the right corner110B, the right visible-light camera114B can be formed on the circuit boards of the left corner110A, the temples125A,125B, or the frame105. FIGS.2A and2Bare perspective views, from the rear, of example hardware configurations of the eyewear device100, including two different types of image displays. The eyewear device100is sized and shaped in a form configured for wearing by a user; the form of eyeglasses is shown in the example. The eyewear device100can take other forms and may incorporate other types of frameworks; for example, a headgear, a headset, or a helmet. In the eyeglasses example, eyewear device100includes a frame105including a left rim107A connected to a right rim107B via a bridge106adapted to be supported by a nose of the user. The left and right rims107A,107B include respective apertures175A,175B, which hold a respective optical element180A,180B, such as a lens and a display device. As used herein, the term “lens” is meant to include transparent or translucent pieces of glass or plastic having curved or flat surfaces that cause light to converge or diverge or that cause little or no convergence or divergence. Although shown as having two optical elements180A,180B, the eyewear device100can include other arrangements, such as a single optical element (or it may not include any optical element180A,180B), depending on the application or the intended user of the eyewear device100. As further shown, eyewear device100includes a left corner110A adjacent the left lateral side170A of the frame105and a right corner110B adjacent the right lateral side170B of the frame105. The corners110A,110B may be integrated into the frame105on the respective sides170A,170B (as illustrated) or implemented as separate components attached to the frame105on the respective sides170A,170B. Alternatively, the corners110A,110B may be integrated into temples (not shown) attached to the frame105. In one example, the image display of optical assembly180A,180B includes an integrated image display. As shown inFIG.2A, each optical assembly180A,180B includes a suitable display matrix177, such as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or any other such display. Each optical assembly180A,180B also includes an optical layer or layers176, which can include lenses, optical coatings, prisms, mirrors, waveguides, optical strips, and other optical components in any combination. The optical layers176A,176B, . . .176N (shown as176A-N inFIG.2Aand herein) can include a prism having a suitable size and configuration and including a first surface for receiving light from a display matrix and a second surface for emitting light to the eye of the user. The prism of the optical layers176A-N extends over all or at least a portion of the respective apertures175A,175B formed in the left and right rims107A,107B to permit the user to see the second surface of the prism when the eye of the user is viewing through the corresponding left and right rims107A,107B. 
The first surface of the prism of the optical layers176A-N faces upwardly from the frame105and the display matrix177overlies the prism so that photons and light emitted by the display matrix177impinge the first surface. The prism is sized and shaped so that the light is refracted within the prism and is directed toward the eye of the user by the second surface of the prism of the optical layers176A-N. In this regard, the second surface of the prism of the optical layers176A-N can be convex to direct the light toward the center of the eye. The prism can optionally be sized and shaped to magnify the image projected by the display matrix177, and the light travels through the prism so that the image viewed from the second surface is larger in one or more dimensions than the image emitted from the display matrix177. In one example, the optical layers176A-N may include an LCD layer that is transparent (keeping the lens open) unless and until a voltage is applied which makes the layer opaque (closing or blocking the lens). The image processor412on the eyewear device100may execute programming to apply the voltage to the LCD layer in order to produce an active shutter system, making the eyewear device100suitable for viewing visual content when displayed as a three-dimensional projection. Technologies other than LCD may be used for the active shutter mode, including other types of reactive layers that are responsive to a voltage or another type of input. In another example, the image display device of optical assembly180A,180B includes a projection image display as shown inFIG.2B. Each optical assembly180A,180B includes a laser projector150, which is a three-color laser projector using a scanning mirror or galvanometer. During operation, an optical source such as a laser projector150is disposed in or on one of the temples125A,125B of the eyewear device100. Optical assembly180B in this example includes one or more optical strips155A,155B, . . .155N (shown as155A-N inFIG.2B) which are spaced apart and across the width of the lens of each optical assembly180A,180B or across a depth of the lens between the front surface and the rear surface of the lens. As the photons projected by the laser projector150travel across the lens of each optical assembly180A,180B, the photons encounter the optical strips155A-N. When a particular photon encounters a particular optical strip, the photon is either redirected toward the user's eye, or it passes to the next optical strip. A combination of modulation of laser projector150, and modulation of optical strips, may control specific photons or beams of light. In an example, a processor controls optical strips155A-N by initiating mechanical, acoustic, or electromagnetic signals. Although shown as having two optical assemblies180A,180B, the eyewear device100can include other arrangements, such as a single or three optical assemblies, or each optical assembly180A,180B may have arranged different arrangement depending on the application or intended user of the eyewear device100. As further shown inFIGS.2A and2B, eyewear device100includes a left corner110A adjacent the left lateral side170A of the frame105and a right corner110B adjacent the right lateral side170B of the frame105. The corners110A,110B may be integrated into the frame105on the respective lateral sides170A,170B (as illustrated) or implemented as separate components attached to the frame105on the respective sides170A,170B. 
Alternatively, the corners110A,110B may be integrated into temples125A,125B attached to the frame105. In another example, the eyewear device100shown inFIG.2Bmay include two projectors, a left projector (not shown) and a right projector (shown as projector150). The left optical assembly180A may include a left display matrix177A (not shown) or a left set of optical strips155′A,155′B, . . .155′N (155prime, A through N, not shown) which are configured to interact with light from the left projector150. Similarly, the right optical assembly180B may include a right display matrix177B (not shown) or a right set of optical strips155″A,155″B, . . .155″N (155double prime, A through N, not shown) which are configured to interact with light from the right projector. In this example, the eyewear device100includes a left display and a right display. FIG.3is a diagrammatic depiction of a three-dimensional scene306, a left raw image302A captured by a left visible-light camera114A, and a right raw image302B captured by a right visible-light camera114B. The left field of view111A may overlap, as shown, with the right field of view111B. The overlapping field of view304represents that portion of the image captured by both cameras114A,114B. The term ‘overlapping’ when referring to field of view means the matrix of pixels in the generated raw images overlap by thirty percent (30%) or more. ‘Substantially overlapping’ means the matrix of pixels in the generated raw images—or in the infrared image of scene—overlap by fifty percent (50%) or more. As described herein, the two raw images302A,302B may be processed to include a timestamp, which allows the images to be displayed together as part of a three-dimensional projection. For the capture of stereo images, as illustrated inFIG.3, a pair of raw red, green, and blue (RGB) images are captured of a real scene306at a given moment in time—a left raw image302A captured by the left camera114A and right raw image302B captured by the right camera114B. When the pair of raw images302A,302B are processed (e.g., by the image processor412), depth images are generated. The generated depth images may be viewed on an optical assembly180A,180B of an eyewear device, on another display (e.g., the image display580on a mobile device401), or on a screen. The generated depth images are in the three-dimensional space domain and can comprise a matrix of vertices on a three-dimensional location coordinate system that includes an X axis for horizontal position (e.g., length), a Y axis for vertical position (e.g., height), and a Z axis for depth (e.g., distance). Each vertex may include a color attribute (e.g., a red pixel light value, a green pixel light value, or a blue pixel light value); a position attribute (e.g., an X location coordinate, a Y location coordinate, and a Z location coordinate); a texture attribute; a reflectance attribute; or a combination thereof. The texture attribute quantifies the perceived texture of the depth image, such as the spatial arrangement of color or intensities in a region of vertices of the depth image. In one example, the element animation system400(FIG.4) includes the eyewear device100, which includes a frame105and a left temple125A extending from a left lateral side170A of the frame105and a right temple125B extending from a right lateral side170B of the frame105. The eyewear device100may further include at least two visible-light cameras114A,114B having overlapping fields of view. 
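Relating back to the depth images described above, one conventional way to generate a vertex from a pair of raw images is stereo triangulation, in which depth is proportional to the camera baseline and focal length and inversely proportional to pixel disparity. The following Python sketch assumes that triangulation approach and uses placeholder values for the focal length, baseline, and principal point; it is not a description of the image processor412itself.

from dataclasses import dataclass

@dataclass
class Vertex:
    # Position attribute (three-dimensional location coordinates).
    x: float
    y: float
    z: float
    # Color attribute (red, green, and blue pixel light values).
    r: int
    g: int
    b: int

def vertex_from_stereo(col, row, disparity_px, rgb,
                       focal_px=500.0, baseline_m=0.06, cx=240.0, cy=240.0):
    """Triangulate one pixel of a left/right raw-image pair into a vertex.

    Standard pinhole stereo triangulation is assumed here; the focal length,
    baseline, and principal point are illustrative placeholders.
    """
    if disparity_px <= 0:
        return None  # No correspondence found for this pixel.
    z = focal_px * baseline_m / disparity_px          # depth (distance)
    x = (col - cx) * z / focal_px                     # horizontal position
    y = (row - cy) * z / focal_px                     # vertical position
    r, g, b = rgb
    return Vertex(x, y, z, r, g, b)

# Example: a pixel at column 300, row 200 with a 12-pixel disparity.
print(vertex_from_stereo(300, 200, 12.0, (128, 64, 32)))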
In one example, the eyewear device100includes a left visible-light camera114A with a left field of view111A, as illustrated inFIG.3. The left camera114A is connected to the frame105or the left temple125A to capture a left raw image302A from the left side of scene306. The eyewear device100further includes a right visible-light camera114B with a right field of view111B. The right camera114B is connected to the frame105or the right temple125B to capture a right raw image302B from the right side of scene306. FIG.4is a functional block diagram of an example element animation system400that includes a wearable device (e.g., an eyewear device100), a mobile device401, and a server system498connected via various networks495such as the Internet. As shown, the element animation system400includes a low-power wireless connection425and a high-speed wireless connection437between the eyewear device100and the mobile device401. As shown inFIG.4, the eyewear device100includes one or more visible-light cameras114A,114B that capture still images, video images, or both still and video images, as described herein. The cameras114A,114B may have a direct memory access (DMA) to high-speed circuitry430and function as a stereo camera. The cameras114A,114B may be used to capture initial-depth images that may be rendered into three-dimensional (3D) models that are texture-mapped images of a red, green, and blue (RGB) imaged scene. The device100may also include a depth sensor213, which uses infrared signals to estimate the position of objects relative to the device100. The depth sensor213in some examples includes one or more infrared emitter(s)215and infrared camera(s)410. The eyewear device100further includes two image displays of each optical assembly180A,180B (one associated with the left side170A and one associated with the right side170B). The eyewear device100also includes an image display driver442, an image processor412, low-power circuitry420, and high-speed circuitry430. The image displays of each optical assembly180A,180B are for presenting images, including still images, video images, or still and video images. The image display driver442is coupled to the image displays of each optical assembly180A,180B in order to control the display of images. The eyewear device100additionally includes one or more speakers (e.g., one associated with the left side of the eyewear device and another associated with the right side of the eyewear device). The speakers may be incorporated into the frame105, temples125, or corners110of the eyewear device100. The one or more speakers are driven by audio processor under control of low-power circuitry420, high-speed circuitry430, or both. The speakers are for presenting audio signals including, for example, a beat track. The audio processor is coupled to the speakers in order to control the presentation of sound. The components shown inFIG.4for the eyewear device100are located on one or more circuit boards, for example a printed circuit board (PCB) or flexible printed circuit (FPC), located in the rims or temples. Alternatively, or additionally, the depicted components can be located in the corners, frames, hinges, or bridge of the eyewear device100. 
Left and right visible-light cameras114A,114B can include digital camera elements such as a complementary metal-oxide-semiconductor (CMOS) image sensor, a charge-coupled device, a lens, or any other respective visible or light capturing elements that may be used to capture data, including still images or video of scenes with unknown objects. As shown inFIG.4, high-speed circuitry430includes a high-speed processor432, a memory434, and high-speed wireless circuitry436. In the example, the image display driver442is coupled to the high-speed circuitry430and operated by the high-speed processor432in order to drive the left and right image displays of each optical assembly180A,180B. High-speed processor432may be any processor capable of managing high-speed communications and operation of any general computing system needed for eyewear device100. High-speed processor432includes processing resources needed for managing high-speed data transfers on high-speed wireless connection437to a wireless local area network (WLAN) using high-speed wireless circuitry436. In some examples, the high-speed processor432executes an operating system such as a LINUX operating system or other such operating system of the eyewear device100and the operating system is stored in memory434for execution. In addition to any other responsibilities, the high-speed processor432executes a software architecture for the eyewear device100that is used to manage data transfers with high-speed wireless circuitry436. In some examples, high-speed wireless circuitry436is configured to implement Institute of Electrical and Electronic Engineers (IEEE) 802.11 communication standards, also referred to herein as Wi-Fi. In other examples, other high-speed communications standards may be implemented by high-speed wireless circuitry436. The low-power circuitry420includes a low-power processor422and low-power wireless circuitry424. The low-power wireless circuitry424and the high-speed wireless circuitry436of the eyewear device100can include short-range transceivers (Bluetooth™ or Bluetooth Low-Energy (BLE)) and wireless wide, local, or wide-area network transceivers (e.g., cellular or Wi-Fi). Mobile device401, including the transceivers communicating via the low-power wireless connection425and the high-speed wireless connection437, may be implemented using details of the architecture of the eyewear device100, as can other elements of the network495. Memory434includes any storage device capable of storing various data and applications, including, among other things, camera data generated by the left and right visible-light cameras114A,114B, the infrared camera(s)410, the image processor412, and images generated for display by the image display driver442on the image display of each optical assembly180A,180B. Although the memory434is shown as integrated with high-speed circuitry430, the memory434in other examples may be an independent, standalone element of the eyewear device100. In certain such examples, electrical routing lines may provide a connection through a chip that includes the high-speed processor432from the image processor412or low-power processor422to the memory434. In other examples, the high-speed processor432may manage addressing of memory434such that the low-power processor422will boot the high-speed processor432any time that a read or write operation involving memory434is needed. 
As shown inFIG.4, the high-speed processor432of the eyewear device100can be coupled to the camera system (visible-light cameras114A,114B), the image display driver442, the user input device491, and the memory434. As shown inFIG.5, the CPU540of the mobile device401may be coupled to a camera system570, a mobile display driver582, a user input layer591, and a memory540A. The server system498may be one or more computing devices as part of a service or network computing system, for example, that include a processor, a memory, and network communication interface to communicate over the network495with an eyewear device100and a mobile device401. The output components of the eyewear device100include visual elements, such as the left and right image displays associated with each lens or optical assembly180A,180B as described inFIGS.2A and2B(e.g., a display such as a liquid crystal display (LCD), a plasma display panel (PDP), a light emitting diode (LED) display, a projector, or a waveguide). The eyewear device100may include a user-facing indicator (e.g., an LED, a loudspeaker, or a vibrating actuator), or an outward-facing signal (e.g., an LED, a loudspeaker). The image displays of each optical assembly180A,180B are driven by the image display driver442. In some example configurations, the output components of the eyewear device100further include additional indicators such as audible elements (e.g., loudspeakers), tactile components (e.g., an actuator such as a vibratory motor to generate haptic feedback), and other signal generators. For example, the device100may include a user-facing set of indicators, and an outward-facing set of signals. The user-facing set of indicators are configured to be seen or otherwise sensed by the user of the device100. For example, the device100may include an LED display positioned so the user can see it, a one or more speakers positioned to generate a sound the user can hear, or an actuator to provide haptic feedback the user can feel. The outward-facing set of signals are configured to be seen or otherwise sensed by an observer near the device100. Similarly, the device100may include an LED, a loudspeaker, or an actuator that is configured and positioned to be sensed by an observer. The input components of the eyewear device100may include alphanumeric input components (e.g., a touch screen or touchpad configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric-configured elements), pointer-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instruments), tactile input components (e.g., a button switch, a touch screen or touchpad that senses the location, force or location and force of touches or touch gestures, or other tactile-configured elements), and audio input components (e.g., a microphone), and the like. The mobile device401and the server system498may include alphanumeric, pointer-based, tactile, audio, and other input components. In some examples, the eyewear device100includes a collection of motion-sensing components referred to as an inertial measurement unit472. The motion-sensing components may be micro-electro-mechanical systems (MEMS) with microscopic moving parts, often small enough to be part of a microchip. The inertial measurement unit (IMU)472in some example configurations includes an accelerometer, a gyroscope, and a magnetometer. 
The accelerometer senses the linear acceleration of the device100(including the acceleration due to gravity) relative to three orthogonal axes (x, y, z). The gyroscope senses the angular velocity of the device100about three axes of rotation (pitch, roll, yaw). Together, the accelerometer and gyroscope can provide position, orientation, and motion data about the device relative to six axes (x, y, z, pitch, roll, yaw). The magnetometer, if present, senses the heading of the device100relative to magnetic north. The position of the device100may be determined by location sensors, such as a GPS unit, one or more transceivers to generate relative position coordinates, altitude sensors or barometers, and other orientation sensors. Such positioning system coordinates can also be received over the wireless connections425,437from the mobile device401via the low-power wireless circuitry424or the high-speed wireless circuitry436. The IMU472may include or cooperate with a digital motion processor or programming that gathers the raw data from the components and compute a number of useful values about the position, orientation, and motion of the device100. For example, the acceleration data gathered from the accelerometer can be integrated to obtain the velocity relative to each axis (x, y, z); and integrated again to obtain the position of the device100(in linear coordinates, x, y, and z). The angular velocity data from the gyroscope can be integrated to obtain the position of the device100(in spherical coordinates). The programming for computing these useful values may be stored in memory434and executed by the high-speed processor432of the eyewear device100. The eyewear device100may optionally include additional peripheral sensors, such as biometric sensors, specialty sensors, or display elements integrated with eyewear device100. For example, peripheral device elements may include any I/O components including output components, motion components, position components, or any other such elements described herein. For example, the biometric sensors may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), to measure bio signals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), or to identify a person (e.g., identification based on voice, retina, facial characteristics, fingerprints, or electrical bio signals such as electroencephalogram data), and the like. The mobile device401may be a smartphone, tablet, laptop computer, access point, or any other such device capable of connecting with eyewear device100using both a low-power wireless connection425and a high-speed wireless connection437. Mobile device401is connected to server system498and network495. The network495may include any combination of wired and wireless connections. The element animation system400, as shown inFIG.4, includes a computing device, such as mobile device401, coupled to an eyewear device100over a network. The element animation system400includes a memory for storing instructions and a processor for executing the instructions. Execution of the instructions of the element animation system400by the processor432configures the eyewear device100to cooperate with the mobile device401. The element animation system400may utilize the memory434of the eyewear device100or the memory elements540A,540B,540C of the mobile device401(FIG.5). 
Also, the element animation system400may utilize the processor elements432,422of the eyewear device100or the central processing unit (CPU)540of the mobile device401(FIG.5). In addition, the element animation system400may further utilize the memory and processor elements of the server system498. In this aspect, the memory and processing functions of the element animation system400can be shared or distributed across the processors and memories of the eyewear device100, the mobile device401, and the server system498. The memory434, in some example implementations, includes or is coupled to a hand gesture library480, as described herein. The process of detecting a hand shape, in some implementations, involves comparing the pixel-level data in one or more captured frames of video data900to the hand gestures stored in the library480until a good match is found. The memory434additionally includes, in some example implementations, an element animation application910, a localization system915, and an image processing system920. In an element animation system400in which a camera is capturing frames of video data900, the element animation application910configures the processor432to control the movement of a series of virtual items700on a display in response to detecting one or more hand shapes or gestures. The localization system915configures the processor432to obtain localization data for use in determining the position of the eyewear device100relative to the physical environment. The localization data may be derived from a series of images, an IMU unit472, a GPS unit, or a combination thereof. The image processing system920configures the processor432to present a captured still image on a display of an optical assembly180A,180B in cooperation with the image display driver442and the image processor412. FIG.5is a high-level functional block diagram of an example mobile device401. Mobile device401includes a flash memory540A which stores programming to be executed by the CPU540to perform all or a subset of the functions described herein. The mobile device401may include a camera570that comprises at least two visible-light cameras (first and second visible-light cameras with overlapping fields of view) or at least one visible-light camera and a depth sensor with substantially overlapping fields of view. Flash memory540A may further include multiple images or video, which are generated via the camera570. As shown, the mobile device401includes an image display580, a mobile display driver582to control the image display580, and a display controller584. In the example ofFIG.5, the image display580includes a user input layer591(e.g., a touchscreen) that is layered on top of or otherwise integrated into the screen used by the image display580. Examples of touchscreen-type mobile devices that may be used include (but are not limited to) a smart phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or other portable device. However, the structure and operation of the touchscreen-type devices is provided by way of example; the subject technology as described herein is not intended to be limited thereto. 
For purposes of this discussion,FIG.5therefore provides a block diagram illustration of the example mobile device401with a user interface that includes a touchscreen input layer891for receiving input (by touch, multi-touch, or gesture, and the like, by hand, stylus, or other tool) and an image display580for displaying content As shown inFIG.5, the mobile device401includes at least one digital transceiver (XCVR)510, shown as WWAN XCVRs, for digital wireless communications via a wide-area wireless mobile communication network. The mobile device401also includes additional digital or analog transceivers, such as short-range transceivers (XCVRs)520for short-range network communication, such as via NFC, VLC, DECT, ZigBee, Bluetooth™, or Wi-Fi. For example, short range XCVRs520may take the form of any available two-way wireless local area network (WLAN) transceiver of a type that is compatible with one or more standard protocols of communication implemented in wireless local area networks, such as one of the Wi-Fi standards under IEEE 802.11. To generate location coordinates for positioning of the mobile device401, the mobile device401can include a global positioning system (GPS) receiver. Alternatively, or additionally the mobile device401can utilize either or both the short range XCVRs520and WWAN XCVRs510for generating location coordinates for positioning. For example, cellular network, Wi-Fi, or Bluetooth™ based positioning systems can generate very accurate location coordinates, particularly when used in combination. Such location coordinates can be transmitted to the eyewear device over one or more network connections via XCVRs510,520. The client device401in some examples includes a collection of motion-sensing components referred to as an inertial measurement unit (IMU)572for sensing the position, orientation, and motion of the client device401. The motion-sensing components may be micro-electro-mechanical systems (MEMS) with microscopic moving parts, often small enough to be part of a microchip. The inertial measurement unit (IMU)572in some example configurations includes an accelerometer, a gyroscope, and a magnetometer. The accelerometer senses the linear acceleration of the client device401(including the acceleration due to gravity) relative to three orthogonal axes (x, y, z). The gyroscope senses the angular velocity of the client device401about three axes of rotation (pitch, roll, yaw). Together, the accelerometer and gyroscope can provide position, orientation, and motion data about the device relative to six axes (x, y, z, pitch, roll, yaw). The magnetometer, if present, senses the heading of the client device401relative to magnetic north. The IMU572may include or cooperate with a digital motion processor or programming that gathers the raw data from the components and compute a number of useful values about the position, orientation, and motion of the client device401. For example, the acceleration data gathered from the accelerometer can be integrated to obtain the velocity relative to each axis (x, y, z); and integrated again to obtain the position of the client device401(in linear coordinates, x, y, and z). The angular velocity data from the gyroscope can be integrated to obtain the position of the client device401(in spherical coordinates). The programming for computing these useful values may be stored in on or more memory elements540A,540B,540C and executed by the CPU540of the client device401. 
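As a rough illustration of the integrations described above for the IMU data, the following Python sketch applies simple Euler integration to a stream of accelerometer and gyroscope samples. The sampling rate, the assumption that gravity has already been removed from the acceleration data, and the absence of filtering are simplifications; a digital motion processor would typically do more.

def integrate_imu(samples, dt):
    """Integrate raw IMU samples into velocity, position, and orientation.

    samples: iterable of (accel_xyz, gyro_xyz) tuples, where accel_xyz is
    linear acceleration in m/s^2 (gravity assumed already removed) and
    gyro_xyz is angular velocity in rad/s about the pitch, roll, and yaw axes.
    dt: sampling interval in seconds.
    """
    velocity = [0.0, 0.0, 0.0]      # m/s along x, y, z
    position = [0.0, 0.0, 0.0]      # m along x, y, z
    orientation = [0.0, 0.0, 0.0]   # accumulated pitch, roll, yaw in radians

    for accel, gyro in samples:
        for axis in range(3):
            velocity[axis] += accel[axis] * dt          # first integration
            position[axis] += velocity[axis] * dt       # second integration
            orientation[axis] += gyro[axis] * dt        # angular velocity -> angle
    return position, velocity, orientation

# Example: 100 samples at 100 Hz with constant forward acceleration and a slow yaw.
samples = [((0.5, 0.0, 0.0), (0.0, 0.0, 0.1))] * 100
print(integrate_imu(samples, dt=0.01))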
The transceivers510,520(i.e., the network communication interface) conforms to one or more of the various digital wireless communication standards utilized by modern mobile networks. Examples of WWAN transceivers510include (but are not limited to) transceivers configured to operate in accordance with Code Division Multiple Access (CDMA) and 3rd Generation Partnership Project (3GPP) network technologies including, for example and without limitation, 3GPP type 2 (or 3GPP2) and LTE, at times referred to as “4G.” For example, the transceivers510,520provide two-way wireless communication of information including digitized audio signals, still image and video signals, web page information for display as well as web-related inputs, and various types of mobile message communications to/from the mobile device401. The mobile device401further includes a microprocessor that functions as a central processing unit (CPU); shown as CPU540inFIG.5. A processor is a circuit having elements structured and arranged to perform one or more processing functions, typically various data processing functions. Although discrete logic components could be used, the examples utilize components forming a programmable CPU. A microprocessor for example includes one or more integrated circuit (IC) chips incorporating the electronic elements to perform the functions of the CPU. The CPU540, for example, may be based on any known or available microprocessor architecture, such as a Reduced Instruction Set Computing (RISC) using an ARM architecture, as commonly used today in mobile devices and other portable electronic devices. Of course, other arrangements of processor circuitry may be used to form the CPU540or processor hardware in smartphone, laptop computer, and tablet. The CPU540serves as a programmable host controller for the mobile device401by configuring the mobile device401to perform various operations, for example, in accordance with instructions or programming executable by CPU540. For example, such operations may include various general operations of the mobile device, as well as operations related to the programming for applications on the mobile device. Although a processor may be configured by use of hardwired logic, typical processors in mobile devices are general processing circuits configured by execution of programming. The mobile device401includes a memory or storage system, for storing programming and data. In the example, the memory system may include a flash memory540A, a random-access memory (RAM)540B, and other memory components540C, as needed. The RAM540B serves as short-term storage for instructions and data being handled by the CPU540, e.g., as a working data processing memory. The flash memory540A typically provides longer-term storage. Hence, in the example of mobile device401, the flash memory540A is used to store programming or instructions for execution by the CPU540. Depending on the type of device, the mobile device401stores and runs a mobile operating system through which specific applications are executed. Examples of mobile operating systems include Google Android, Apple iOS (for iPhone or iPad devices), Windows Mobile, Amazon Fire OS, RIM BlackBerry OS, or the like. The processor432within the eyewear device100may construct a map of the environment surrounding the eyewear device100, determine a location of the eyewear device within the mapped environment, and determine a relative position of the eyewear device to one or more objects in the mapped environment. 
The processor432may construct the map and determine location and position information using a simultaneous localization and mapping (SLAM) algorithm applied to data received from one or more sensors. Sensor data includes images received from one or both of the cameras114A,114B, distance(s) received from a laser range finder, position information received from a GPS unit, motion and acceleration data received from an IMU572, or a combination of data from such sensors, or from other sensors that provide data useful in determining positional information. In the context of augmented reality, a SLAM algorithm is used to construct and update a map of an environment, while simultaneously tracking and updating the location of a device (or a user) within the mapped environment. The mathematical solution can be approximated using various statistical methods, such as particle filters, Kalman filters, extended Kalman filters, and covariance intersection. In a system that includes a high-definition (HD) video camera that captures video at a high frame rate (e.g., thirty frames per second), the SLAM algorithm updates the map and the location of objects at least as frequently as the frame rate; in other words, calculating and updating the mapping and localization thirty times per second. FIG.6depicts an example physical environment600along with elements that are useful when using a SLAM application and other types of tracking applications (e.g., natural feature tracking (NFT)). A user602of eyewear device100is present in an example physical environment600(which, inFIG.6, is an interior room). The processor432of the eyewear device100determines its position with respect to one or more objects604within the environment600using captured images, constructs a map of the environment600using a coordinate system (x, y, z) for the environment600, and determines its position within the coordinate system. Additionally, the processor432determines a head pose (roll, pitch, and yaw) of the eyewear device100within the environment by using two or more location points (e.g., three location points606a,606b,and606c) associated with a single object604a,or by using one or more location points606associated with two or more objects604a,604b,604c.The processor432of the eyewear device100may position a virtual object608(such as the key shown inFIG.6) within the environment600for viewing during an augmented reality experience. The localization system915in some examples uses a virtual marker610aassociated with a virtual object608in the environment600. In augmented reality, markers are registered at locations in the environment to assist devices with the task of tracking and updating the location of users, devices, and objects (virtual and physical) in a mapped environment. Markers are sometimes registered to a high-contrast physical object, such as the relatively dark framed picture604amounted on a lighter-colored wall, to assist cameras and other sensors with the task of detecting the marker. The markers may be preassigned or may be assigned by the eyewear device100upon entering the environment. Markers can be encoded with or otherwise linked to information.
A marker might include position information, a physical code (such as a bar code or a QR code; either visible to the user or hidden), or a combination thereof. A set of data associated with the marker is stored in the memory434of the eyewear device100. The set of data includes information about the marker610a,the marker's position (location and orientation), one or more virtual objects, or a combination thereof. The marker position may include three-dimensional coordinates for one or more marker landmarks616a,such as the corner of the generally rectangular marker610ashown inFIG.6. The marker location may be expressed relative to real-world geographic coordinates, a system of marker coordinates, a position of the eyewear device100, or other coordinate system. The one or more virtual objects associated with the marker610amay include any of a variety of material, including still images, video, audio, tactile feedback, executable applications, interactive user interfaces and experiences, and combinations or sequences of such material. Any type of content capable of being stored in a memory and retrieved when the marker610ais encountered or associated with an assigned marker may be classified as a virtual object in this context. The key608shown inFIG.6, for example, is a virtual object displayed as a still image, either 2D or 3D, at a marker location. In one example, the marker610amay be registered in memory as being located near and associated with a physical object604a(e.g., the framed work of art shown inFIG.6). In another example, the marker may be registered in memory as being at a particular position with respect to the eyewear device100. FIG.10is a flow chart1000depicting an example method of controlling the presentation of a virtual element or graphical element on the display180B of an eyewear device100. Although the steps are described with reference to the eyewear device100described herein, other implementations of the steps described, for other types of devices, will be understood by one of skill in the art from the description herein. One or more of the steps shown and described may be performed simultaneously, in a series, in an order other than shown and described, or in conjunction with additional steps. Some steps may be omitted or, in some applications, repeated. Block1002inFIG.10describes an example step of capturing frames of video data900with the camera system114of an eyewear device100. In some implementations, the camera system114includes one or more cameras114A,114B, as described herein, for capturing either still images or frames of video data900. The eyewear device100in this example includes an image processing system920, a localization system915, and one or more displays180A,180B. For example, as shown inFIG.7, the eyewear device100includes a semi-transparent image display180B which, as described herein, may include a semi-transparent lens layer and a display matrix layer configured to present images on the lens of the eyewear device. Graphical and virtual elements700,705,710(seeFIG.8) are presented as an overlay relative to the physical environment600. The effect, as shown, allows the viewer to see and interact with the presented elements700while the surrounding environment600also remains visible through the display180B. In some implementations, the high-speed processor432of the eyewear device100stores the frames of video data900captured with the camera system114as the wearer moves through a physical environment600.
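By way of illustration, the temporary storage of captured frames of video data900might be organized as a small bounded buffer so that recent frames remain available for analysis as the wearer moves. The class below is a Python sketch with hypothetical names; the capacity and data layout are assumptions, not requirements of the method.

import collections
import time

class FrameBuffer:
    """Bounded, timestamped store for captured frames of video data.

    Holding only the most recent frames keeps memory use fixed while still
    making a short history available for analysis. The capacity and the use
    of a deque are illustrative choices.
    """
    def __init__(self, capacity=30):
        self._frames = collections.deque(maxlen=capacity)

    def add(self, frame_pixels):
        # Each stored entry pairs the raw pixel data with a capture timestamp.
        self._frames.append((time.monotonic(), frame_pixels))

    def recent(self, count=1):
        """Return the most recent `count` (timestamp, frame) pairs, newest last."""
        return list(self._frames)[-count:]

buffer = FrameBuffer(capacity=30)
buffer.add([[0, 0, 0]])   # placeholder "frame" standing in for camera pixel data
print(len(buffer.recent(5)))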
As described herein and shown inFIG.7, the camera system114typically has a camera field of view904that captures images and video beyond the limits of the display180B. The camera system114, in some implementations, includes one or more high-resolution, digital cameras equipped with a CMOS image sensor capable of capturing high-definition still images and high-definition video at relatively high frame rates (e.g., thirty frames per second or more). Each frame of digital video includes depth information for a plurality of pixels in the image. In this aspect, the camera system114serves as a high-definition scanner by capturing a detailed input image of the physical environment. The camera system114, in some implementations, includes a pair of high-resolution digital cameras114A,114B coupled to the eyewear device100and spaced apart to acquire a left-camera raw image and a right-camera raw image, as described herein. When combined, the raw images form an input image that includes a matrix of three-dimensional pixel locations. The example method, at block1002, in some implementations, includes storing the captured frames of video data900in memory434on the eyewear device100, at least temporarily, such that the frames are available for analysis. Block1004describes an example step of detecting a hand651in the captured frames of video data900with the image processing system920. In some example implementations, the image processing system920analyzes the pixel-level data in the captured frames of video data900to determine whether the frame includes a human hand and, if so, whether the frame includes the upturned palm or palmar surface of the hand, as illustrated inFIG.7. The process of detecting a hand651includes detecting a current hand location681in three-dimensional coordinates relative to the display180B or to another known position, such as the eyewear location840, as shown. FIG.7is a perspective illustration of an example hand651at a current hand location681. The process of detecting at block1004, in some implementations, is accomplished by the image processing system920. The hand651may be predefined to be the left hand, as shown. In some implementations, the system includes a process for selecting and setting the hand, right or left, which will serve as the hand651to be detected. Those skilled in the art will understand that the process of detecting and tracking includes detecting the hand, over time, in various postures, in a set or series of captured frames of video data900. In this context, the detecting process at block1004refers to and includes detecting a hand in as few as one frame of video data, as well as detecting the hand, over time, in a subset or series of frames of video data. Accordingly, in some implementations, the process at block1004includes detecting a hand651in a particular posture in one or more of the captured frames of video data900. In other implementations, the process at block1004includes detecting the hand, over time, in various postures, in a subset or series of captured frames of video data900, which are described herein as a series of preliminary hand shapes651. In this aspect, the still images of hands651,652,653shown in the figures refer to and include such illustrated hands either as a still image or as part of a series of hand shapes. Block1006inFIG.10describes an example step of presenting a menu icon700on the display180B. The menu icon700is presented at a current icon position701, as shown inFIG.7.
The current icon position701is defined in relation to, and in accordance with, the detected current hand location681, such that the menu icon700moves on the display as the hand location681moves in the physical environment600, over time, as detected and tracked in the captured frames of video data900. Although the example steps are described with reference to a menu icon700, the process may be applied and used with other icons and graphical elements unrelated to a menu. In some implementations, the menu icon700is a virtual element that is sized and shaped to be apparently graspable by a hand, such as the round ball-like three-dimensional polyhedron shown inFIG.8. In this aspect, presenting a virtual menu icon700that is apparently graspable invites the user, intuitively, to perform a grasping or cradling gesture. The menu icon700may include a ball, polygon, circle, polyhedron, or other shape; regular or irregular; rendered as a two-dimensional shape or as a three-dimensional object. The menu icon700may be presented along with a menu label705, as shown. In some implementations, a menu icon700has been presented on the display180B prior to the process of detecting a hand. For example, another system or application that is currently running on the eyewear device100may include a series of actions which logically proceed to the presentation of a menu icon700on the display. The menu icon700may be presented at a default position, such as the center of the display180B. At such a point, the running application may access the processes described herein, starting in some implementations with detecting a hand651at block1004and then at block1006presenting the menu icon700at a current icon position701relative to the detected current hand location681. In other implementations, the menu icon700is not presented on the display180B unless and until the hand651is detected, in any position or posture, in at least one frame of video data at block1004. In this example, detecting a hand651results in presenting the menu icon700on the display180B. In other example implementations, the menu icon700is not presented on the display180B unless and until, at block1004, the hand651is detected in a particular posture or hand shape. In some implementations, the menu icon700is not presented on the display180B unless and until, at block1004, a preliminary series of hand shapes651is detected. In this example, the preliminary series of hand shapes651includes a sequence of hand shapes in the captured frames of video data900, such as the hand shapes that include an upturned palmar surface with relaxed fingers, as shown inFIG.7. After the detecting step, the process then includes determining whether the detected preliminary series of hand shapes matches a preliminary predefined hand gesture851(e.g., a cradling gesture) from among a plurality of predefined hand gestures850stored in a hand gesture library480, as described herein. If the detected preliminary series of hand shapes651matches the preliminary predefined hand gesture851, then the process of presenting a menu icon700is executed. Block1008describes an example step of detecting a first series of hand shapes652in the captured frames of video data900with the image processing system920. The image processing system920analyzes the pixel-level data in the captured frames of video data900to track the motion of the hand.
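Relating back to block1006, the dependence of the current icon position701on the detected current hand location681can be illustrated with the short Python sketch below. A pinhole projection with placeholder intrinsics stands in here for whatever display-registration transform an implementation actually uses; the only point illustrated is that the icon position is recomputed from the current hand location, so the icon follows the hand.

def icon_position_from_hand(hand_xyz, focal_px=400.0, center=(240.0, 240.0)):
    """Map a detected three-dimensional hand location to a display position.

    The focal length and display center are illustrative placeholders.
    """
    x, y, z = hand_xyz
    if z <= 0:
        return center                       # hand not in front of the camera
    u = center[0] + focal_px * x / z        # horizontal display coordinate
    v = center[1] + focal_px * y / z        # vertical display coordinate
    return (u, v)

# As the hand location changes frame to frame, the icon position tracks it.
for hand_location in [(0.05, -0.02, 0.4), (0.08, -0.02, 0.4)]:
    print(icon_position_from_hand(hand_location))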
FIG.8is a perspective illustration of an example first series of hand shapes652in which the hand performs an opening gesture (e.g., the fingers are opening relative to the palm). The first series of hand shapes652in some implementations includes one or more fingers extending from a relaxed position (e.g., shown inFIG.7) to a hyperextended position relative to the palm, as shown inFIG.8. The process in some implementations includes detecting the series of current finger or fingertip locations in three-dimensional coordinates relative to the current hand location681or to another known position, such as the display180B or the current eyewear location840. As used herein, the term hyperextended refers to and includes one or more fingers of the hand in an extended orientation relative to the palm. The extent of the hyperextension may be defined as one or more fingers located within a predefined threshold distance or angle relative to a plane defined by the palm of the hand. The example process at block1010includes determining whether the detected first series of hand shapes652matches any one of a plurality of predefined hand gestures850stored in the hand gesture library480. The data stored in the captured frames of video data900is compared to the predefined hand gestures650stored in the library of hand gestures480. Any of a variety of other predefined hand gestures850may be established and stored in the hand gesture library480. In the example shown inFIG.8, the image processing system920analyzes the pixel-level data in the captured frames of video data900to determine whether the hand652is performing an opening gesture. The predefined opening gesture, in this example, includes a sequence of hand poses in which one or more fingers is hyperextended relative to the palmar surface. The hand gesture library480includes a large number of poses and gestures, including descriptions of a hand in various positions and orientations. The stored poses and gestures are suitable for ready comparison to a hand shape that is detected in an image. A hand gesture record stored in the library480in some implementations includes three-dimensional coordinates for a number of landmarks on the hand, including the wrist, the fifteen interphalangeal joints, and the five fingertips, as well as other skeletal and soft-tissue landmarks. A hand gesture record stored in the library480may also include text identifiers, point of view annotations, directional references (e.g., palmar, dorsal, lateral), rotational references (e.g., stable, flexing, extending, pronating, supinating), and other data and descriptors related to each predefined hand gesture850. Each hand gesture record stored in the library480may include a set of exemplary three-dimensional coordinates for each joint and the tip, a hand position identifier (e.g., neutral hand), and a finger position identifier (e.g., index, flexed, partial). For a hand gesture (e.g., a series of hand poses observed over time), a hand gesture record stored in the library480may include a set of exemplary three-dimensional coordinates for each joint and the tip at each location of the index finger, over a particular time interval (e.g., two seconds or longer), a hand motion identifier (e.g., pronating, supinating, stable), and a finger motion identifier (e.g., index flexing and extending continually). 
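For illustration, a record in the hand gesture library480of the kind described above might be laid out as follows. The Python dataclass and its field names are hypothetical; they simply gather the categories of data listed above (landmark coordinates, identifiers, and directional and rotational references).

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Coordinate = Tuple[float, float, float]   # x, y, z for one landmark

@dataclass
class HandGestureRecord:
    """Illustrative layout for one record in the hand gesture library."""
    gesture_id: str                               # e.g., "opening"
    hand_position_id: str                         # e.g., "neutral hand"
    finger_position_id: str                       # e.g., "index, flexed, partial"
    directional_refs: List[str] = field(default_factory=list)   # palmar, dorsal, lateral
    rotational_refs: List[str] = field(default_factory=list)    # flexing, pronating, ...
    # One coordinate set per sampled instant: landmark name -> (x, y, z).
    # Landmarks include the wrist, the interphalangeal joints, and the fingertips.
    coordinate_sets: List[Dict[str, Coordinate]] = field(default_factory=list)
    min_duration_s: float = 0.0                   # minimum duration for time-based gestures

record = HandGestureRecord(gesture_id="opening", hand_position_id="neutral hand",
                           finger_position_id="index, flexed, partial")
record.coordinate_sets.append({"wrist": (0.0, 0.0, 0.4), "thumb_tip": (0.04, 0.01, 0.38)})
print(record.gesture_id, len(record.coordinate_sets))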
For the opening gesture, the record stored in the hand gesture library480, in some implementations, includes a gesture identifier (e.g., opening), a first motion identifier (e.g., fingers hyperextended to within a predefined threshold distance or angle relative to the palm), a minimum duration (e.g., one second), and a subsequent motion identifier (e.g., relaxing, returning toward the palm), along with a series of exemplary three-dimensional coordinates for each hand and finger landmark during the time interval (e.g., twenty coordinate sets, every five milliseconds). The process at block1010, in some implementations, includes comparing the detected first series of hand shapes652captured in the video data900, over a period of time, on a pixel-by-pixel level, to the plurality of predefined opening hand shapes that are stored in the hand gesture library480until a match is identified. As used herein, the term match is meant to include substantial matches or near matches, which may be governed by a predetermined confidence value associated with possible or candidate matches. The detected hand shape data may include three-dimensional coordinates for the wrist, up to fifteen interphalangeal joints, up to five fingertips, and other skeletal or soft-tissue landmarks found in a captured frame. In some examples, the detecting process includes calculating the sum of the geodesic distances between the detected hand shape fingertip coordinates and a set of fingertip coordinates for each hand gesture stored in the library480. A sum that falls within a configurable threshold accuracy value represents a match. Referring again toFIG.10, the example process at block1012includes presenting on the display180B one or more graphical elements710adjacent the current icon position701in accordance with the matching first predefined hand gesture852. As used herein, the one or more graphical elements710means and includes any collection of graphical elements presented on a display, including but not limited to virtual objects associated with VR experiences and graphical elements such as icons, thumbnails, taskbars, and menu items. For example, the graphical elements710A,710B,710C inFIG.8represent selectable menu items including maps, photos, and friends, and may include element labels705A,705B,705C, as shown. The one or more graphical elements710are presented on the display180B at positions that are adjacent to the current menu icon position701. For example, the graphical elements710in some implementations are located a predefined default distance away from the current icon position701. When the current icon position701changes, the locations of the graphical elements710also change, so that the graphical elements710and the menu icon700are persistently displayed together as a grouping and appear to move together. Moreover, because the current menu icon position701is correlated with the current hand location681(at block1004), the graphical elements710and the menu icon700move as the hand moves. In this aspect, the graphical elements710are apparently anchored to the hand location681(as opposed to remaining anchored to the display180B). In one aspect, the physical process of opening the fingers of the hand is in accordance, intuitively, with the virtual process of opening the menu icon700on the display. The opening motion of the fingers corresponds to the opening of the menu icon700.
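The matching test described for block1010, in which the summed distances between detected and stored fingertip coordinates are compared against a configurable threshold, can be sketched as follows. This is a simplified illustration rather than the patent's algorithm: Euclidean distance stands in for the geodesic distance, and the five-fingertip coordinate format, the threshold value, and the toy gesture library are assumptions.

```python
# A minimal sketch of the matching test for block 1010: sum the distances
# between detected fingertip coordinates and each stored gesture's exemplar
# fingertips, and accept the best sum that falls within a configurable
# threshold. Euclidean distance stands in for the geodesic distance here.

import math
from typing import Dict, List, Optional, Tuple

Point = Tuple[float, float, float]

def fingertip_distance_sum(detected: List[Point], exemplar: List[Point]) -> float:
    """Sum of pointwise distances between two five-fingertip coordinate sets."""
    return sum(math.dist(d, e) for d, e in zip(detected, exemplar))

def match_hand_shape(detected: List[Point],
                     library: Dict[str, List[Point]],
                     threshold: float = 0.05) -> Optional[str]:
    """Return the identifier of the best-matching stored gesture, or None."""
    best_id, best_sum = None, float("inf")
    for gesture_id, exemplar in library.items():
        s = fingertip_distance_sum(detected, exemplar)
        if s < best_sum:
            best_id, best_sum = gesture_id, s
    return best_id if best_sum <= threshold else None

# Example with toy coordinates (metres): a detected shape close to "opening".
library = {
    "opening": [(0.00, 0.08, 0.30), (0.02, 0.09, 0.30), (0.04, 0.09, 0.30),
                (0.06, 0.08, 0.30), (0.08, 0.07, 0.30)],
    "closing": [(0.00, 0.02, 0.30), (0.02, 0.02, 0.30), (0.04, 0.02, 0.30),
                (0.06, 0.02, 0.30), (0.08, 0.02, 0.30)],
}
detected = [(0.00, 0.079, 0.30), (0.02, 0.091, 0.30), (0.04, 0.088, 0.30),
            (0.06, 0.081, 0.30), (0.08, 0.069, 0.30)]
print(match_hand_shape(detected, library))   # -> "opening"
```

The threshold plays the role of the predetermined confidence value mentioned above: a looser threshold admits near matches, while a tighter one demands closer agreement.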
In accordance with the opening gesture, the process of presenting the graphical elements710in some implementations includes animating a progression of each element along a path extending away from the menu icon700. For example,FIG.8illustrates a first graphical element710A and a first path720A that extends away from the menu icon700. The animated progression, in some implementations, includes presenting the first graphical element710A at a series of incremental locations along the first path720A, thereby simulating a progressive emerging of the first graphical element710A from the menu icon700.FIG.8also shows a second graphical element710B and a second path720B; and a third graphical element710C and a third path720C. In some implementations, the apparent speed of the animated progression is correlated with the detected first series of hand shapes652. In this aspect, the faster the fingers open, the faster the animated progression occurs. The paths720A,720B,720C in some implementations extend in a generally radial direction relative to a ball-shaped menu icon700, as shown inFIG.8. The paths720A,720B,720C are similar in length and the graphical elements710A,710B,710C move incrementally along their respective paths together, nearly in unison. In other example implementations, one or more of the graphical elements710is correlated with the detected motion of a particular finger on the hand. In this example, the graphical elements710move incrementally along their respective paths720separately, according to the detected current position of a particular finger of the hand. For example, the first graphical element710A shown inFIG.8is presented at a series of incremental locations along the first path720A in accordance with the detected current position of the thumb. Because the thumb is located on the left side of the hand652, it would be naturally associated with the leftmost first graphical element710A. The faster the thumb opens, the faster the animated progression of the first graphical element710A takes place along the first path720A. Moreover, if the thumb pauses or retreats, the first graphical element710A would pause or retreat, in accordance with the detected current thumb location. Block1014describes an example step of detecting a second series of hand shapes653in the captured frames of video data900with the image processing system920. The image processing system920analyzes the pixel-level data in the captured frames of video data900to track the motion of the hand. FIG.9is a perspective illustration of an example second series of hand shapes653in which the hand performs a closing gesture (e.g., making a fist). The second series of hand shapes653in some implementations includes one or more fingers moving toward the palm to make a fist, as shown inFIG.9. The process in some implementations includes detecting the series of current finger or fingertip locations in three-dimensional coordinates relative to the current hand location681or to another known position, such as the display180B or the current eyewear location840. The example process at block1016includes determining whether the detected second series of hand shapes653matches any one of a plurality of predefined hand gestures850stored in the hand gesture library480. 
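A minimal sketch of the animated progression described above follows. It assumes each graphical element is assigned a radial path (an angle and a length in display pixels) and that the extension of its associated finger is reported as a value between 0 (relaxed) and 1 (hyperextended); none of these names or values come from the patent, and the mapping from finger extension to path progress is deliberately simplistic.

```python
# A minimal sketch of the animated progression described above: each graphical
# element advances along a radial path away from the menu icon, with its
# progress driven by how far the associated finger has opened.

from typing import Dict, Tuple
import math

Vec2 = Tuple[float, float]

def element_position(icon_pos: Vec2, angle_deg: float,
                     path_length: float, progress: float) -> Vec2:
    """Place an element a fraction `progress` (0..1) along a radial path."""
    progress = max(0.0, min(1.0, progress))
    a = math.radians(angle_deg)
    return (icon_pos[0] + path_length * progress * math.cos(a),
            icon_pos[1] + path_length * progress * math.sin(a))

def layout_elements(icon_pos: Vec2,
                    finger_extension: Dict[str, float],
                    paths: Dict[str, Tuple[float, float]]) -> Dict[str, Vec2]:
    """Map the extension of each element's associated finger (0 = relaxed,
    1 = hyperextended) to that element's position along its path.
    `paths` gives (angle in degrees, length in pixels) per element."""
    return {
        element: element_position(icon_pos, angle, length,
                                  finger_extension.get(element, 0.0))
        for element, (angle, length) in paths.items()
    }

# Example: three elements on radial paths; the finger tied to "maps" opened fastest,
# so that element has progressed farthest from the icon.
paths = {"maps": (135.0, 120.0), "photos": (90.0, 120.0), "friends": (45.0, 120.0)}
extension = {"maps": 0.8, "photos": 0.5, "friends": 0.2}
print(layout_elements((320.0, 240.0), extension, paths))
```

Recomputing the layout on every frame, and letting the per-finger extension values fall back toward zero as the fingers close, would produce the reverse animation described for the closing gesture below.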
The image processing system920analyzes the pixel-level data in the captured frames of video data900over a period of time and compares the data about the detected second series of hand shapes653to the predefined hand gestures650stored in the library480until a match is identified. For the closing gesture, the record stored in the hand gesture library480, in some implementations, includes a gesture identifier (e.g., closing), a motion identifier (e.g., fingers closing toward the palm to make a fist), a minimum duration (e.g., one second), and a subsequent motion identifier (e.g., relaxing), along with a series of exemplary three-dimensional coordinates for each hand and finger landmark during the time interval (e.g., twenty coordinate sets, every five milliseconds). The example process at block1018includes removing the one or more graphical elements710from the display180B in accordance with the matching second predefined hand gesture853. The graphical elements710disappear from the display180B when the detected hand closes into a fist. In this aspect, the physical process of closing the fingers of the hand is in accordance, intuitively, with the virtual process of closing the menu represented by the menu icon700on the display. In accordance with the closing gesture, the process of removing the graphical elements710in some implementations includes animating a regression of each element along a path extending toward the menu icon700. For example,FIG.9illustrates a first path720A along which the first graphical element710A appeared to move as it regressed or withdrew toward and into the menu icon700. The animated regression, in some implementations, includes presenting the first graphical element710A at a series of incremental locations along the first path720A, thereby simulating a progressive retreating or collapsing of the first graphical element710A back into the menu icon700. The second and third paths720B,720C are also shown inFIG.9. In some implementations, the apparent speed of the animated regression is correlated with the detected second series of hand shapes653. In this aspect, the faster the fingers close, the faster the animated regression occurs. The paths720A,720B,720C in some implementations are similar in length and the graphical elements710A,710B,710C move incrementally along their respective paths together, nearly in unison, toward the menu icon700. In other example implementations, one or more of the graphical elements710is correlated with the detected motion of a particular finger on the hand and moves in accordance with the detected current position of that finger, as described herein. The menu icon700in some implementations remains presented on the display180B, as shown inFIG.9. The process steps and methods described herein may be repeated. For example, a subsequent series of hand shapes may match an opening gesture, resulting in another presentation of the graphical elements710as described. The process steps and methods described herein may be terminated when a hand is detected in a palm down position (e.g., revealing the dorsal surface of the hand), when the hand is partly or completely removed from the camera's field of view904, or when a predefined stopping gesture is detected. In some implementations, the presenting process steps (at blocks1006and1012) are executed such that the menu icon700and the graphical elements710are presented on the display180B in accordance with the current eyewear location840relative to the current hand location681.
In this example implementation, the sizes, shapes, and orientations of the menu icon700and the graphical elements710vary depending on the relative motion between the eyewear device100(at the current eyewear location840) and the hand (at the current hand location681). A localization system915on the eyewear device100in some implementations configures the processor432on the eyewear100to obtain localization data for use in determining the current eyewear location840relative to the detected hand location681. The localization data may be derived from the captured frames of video data900, an IMU unit472, a GPS unit, or a combination thereof. The localization system915may construct a virtual map of various elements within the camera field of view904using a SLAM algorithm, as described herein, updating the map and the location of objects at least as frequently as the frame rate of the camera system114(e.g., calculating and updating the mapping and localization of the current eyewear location840as frequently as thirty times per second, or more). The process of localization includes an example step of calculating a correlation between the detected current hand location681and the current eyewear location840. The term correlation refers to and includes one or more vectors, matrices, formulas, or other mathematical expressions sufficient to define the three-dimensional distance between the detected current hand location681and the eyewear display180B, in accordance with the current eyewear location840. The current eyewear location840is tied to or persistently associated with the display180B which is supported by the frame of the eyewear device100. In this aspect, the correlation performs the function of calibrating the motion of the eyewear100with the motion of the hand650. Because the localization process at block1010occurs continually and frequently, the correlation is calculated continually and frequently, resulting in accurate and near real-time tracking of the current hand location681relative to the current eyewear location840. In another example implementation, the processes at block1010and block1016of determining whether a detected series of hand shapes matches any of the predefined hand gestures850involve using a machine-learning algorithm to compare the pixel-level data about the hand shape in one or more captured frames of video data to a collection of images that include hand gestures. Machine learning refers to an algorithm that improves incrementally through experience. By processing a large number of different input datasets, a machine-learning algorithm can develop improved generalizations about particular datasets, and then use those generalizations to produce an accurate output or solution when processing a new dataset. Broadly speaking, a machine-learning algorithm includes one or more parameters that will adjust or change in response to new experiences, thereby improving the algorithm incrementally; a process similar to learning. In the context of computer vision, mathematical models attempt to emulate the tasks accomplished by the human visual system, with the goal of using computers to extract information from an image and achieve an accurate understanding of the contents of the image. Computer vision algorithms have been developed for a variety of fields, including artificial intelligence and autonomous navigation, to extract and analyze data in digital images and video.
Deep learning refers to a class of machine-learning methods that are based on or modeled after artificial neural networks. An artificial neural network is a computing system made up of a number of simple, highly interconnected processing elements (nodes), which process information by their dynamic state response to external inputs. A large artificial neural network might have hundreds or thousands of nodes. A convolutional neural network (CNN) is a type of neural network that is frequently applied to analyzing visual images, including digital photographs and video. The connectivity pattern between nodes in a CNN is typically modeled after the organization of the human visual cortex, which includes individual neurons arranged to respond to overlapping regions in a visual field. A neural network that is suitable for use in the determining process described herein is based on one of the following architectures: VGG16, VGG19, ResNet50, Inception V3, Xception, or other CNN-compatible architectures. In the machine-learning example, at block1010and block1016, the processor432determines whether a detected series of bimanual hand shapes substantially matches a predefined hand gesture using a machine-trained algorithm referred to as a hand feature model. The processor432is configured to access the hand feature model, trained through machine learning, and applies the hand feature model to identify and locate features of the hand shape in one or more frames of the video data. In one example implementation, the trained hand feature model receives a frame of video data which contains a detected hand shape and abstracts the image in the frame into layers for analysis. Data in each layer is compared to hand gesture data stored in the hand gesture library480, layer by layer, based on the trained hand feature model, until a good match is identified. In one example, the layer-by-layer image analysis is executed using a convolutional neural network. In a first convolution layer, the CNN identifies learned features (e.g., hand landmarks, sets of joint coordinates, and the like). In a second convolution layer, the image is transformed into a plurality of images, in which the learned features are each accentuated in a respective sub-image. In a pooling layer, the sizes and resolution of the images and sub-images are reduced in order to isolate portions of each image that include a possible feature of interest (e.g., a possible palm shape, a possible finger joint). The values and comparisons of images from the non-output layers are used to classify the image in the frame. Classification, as used herein, refers to the process of using a trained model to classify an image according to the detected hand shape. For example, an image may be classified as a "touching action" if the detected series of bimanual hand shapes matches the touching gesture stored in the library480. Any of the functionality described herein for the eyewear device100, the mobile device401, and the server system498can be embodied in one or more computer software applications or sets of programming instructions, as described herein. According to some examples, "function," "functions," "application," "applications," "instruction," "instructions," or "programming" are program(s) that execute functions defined in the programs.
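Returning to the layer-by-layer convolutional analysis described above, the following PyTorch sketch shows one way a small convolutional classifier for cropped hand-shape frames could be structured. The input size, layer widths, and the three example classes are assumptions for illustration; this is not the hand feature model or one of the named architectures (VGG16, ResNet50, and so on), and it requires the PyTorch library to run.

```python
# A minimal PyTorch sketch of a convolutional classifier for cropped hand-shape
# frames, in the spirit of the layer-by-layer analysis described above. The
# 1x64x64 grayscale input, layer widths, and class count are assumptions.

import torch
import torch.nn as nn

class HandShapeCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # first convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling reduces resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1), # second convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: classify a batch of four 64x64 grayscale hand crops (random data here).
model = HandShapeCNN()
frames = torch.rand(4, 1, 64, 64)
probs = torch.softmax(model(frames), dim=1)
print(probs.argmax(dim=1))   # predicted class index per frame
```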
Various programming languages can be employed to develop one or more of the applications, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, a third-party application (e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may include mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application can invoke API calls provided by the operating system to facilitate functionality described herein. Hence, a machine-readable medium may take many forms of tangible storage medium. Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer devices or the like, such as may be used to implement the client device, media gateway, transcoder, etc. shown in the drawings. Volatile storage media include dynamic memory, such as main memory of such a computer platform. Tangible transmission media include coaxial cables; copper wire and fiber optics, including the wires that comprise a bus within a computer system. Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media therefore include for example: a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards paper tape, any other physical storage medium with patterns of holes, a RAM, a PROM and EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code or data. Many of these forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution. Except as stated immediately above, nothing that has been stated or illustrated is intended or should be interpreted to cause a dedication of any component, step, feature, object, benefit, advantage, or equivalent to the public, regardless of whether it is or is not recited in the claims. It will be understood that the terms and expressions used herein have the ordinary meaning as is accorded to such terms and expressions with respect to their corresponding respective areas of inquiry and study except where specific meanings have otherwise been set forth herein. Relational terms such as first and second and the like may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “includes,” “including,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises or includes a list of elements or steps does not include only those elements or steps but may include other elements or steps not expressly listed or inherent to such process, method, article, or apparatus. 
An element preceded by “a” or “an” does not, without further constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element. Unless otherwise stated, any and all measurements, values, ratings, positions, magnitudes, sizes, and other specifications that are set forth in this specification, including in the claims that follow, are approximate, not exact. Such amounts are intended to have a reasonable range that is consistent with the functions to which they relate and with what is customary in the art to which they pertain. For example, unless expressly stated otherwise, a parameter value or the like may vary by as much as plus or minus ten percent from the stated amount or range. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various examples for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed examples require more features than are expressly recited in each claim. Rather, as the following claims reflect, the subject matter to be protected lies in less than all features of any single disclosed example. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter. While the foregoing has described what are considered to be the best mode and other examples, it is understood that various modifications may be made therein and that the subject matter disclosed herein may be implemented in various forms and examples, and that they may be applied in numerous applications, only some of which have been described herein. It is intended by the following claims to claim any and all modifications and variations that fall within the true scope of the present concepts. | 91,092 |
11861071 | DETAILED DESCRIPTION Illustrative embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. These embodiments are provided so that the present disclosure will be understood more thoroughly and its scope fully conveyed to a person skilled in the art. Although illustrative embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure can be implemented in various forms and should not be limited by the embodiments illustrated herein. Virtual reality technology is a computer simulation technology that creates a virtual world for a user to experience. It uses a computer to generate a simulated environment and immerses users in that environment. Virtual reality technology uses data from real life to generate electronic signals by means of computer technology and combines them with various output devices to transform them into phenomena that people can perceive. These phenomena may be real objects in reality, or objects that cannot be seen by the naked eye but are expressed through three-dimensional models. The virtual reality equipment in the present disclosure may refer to VR glasses. VR glasses use a head-mounted display device to isolate the user's vision and hearing from the outside world and guide the user to feel present in the virtual environment. Their display principle is that separate screens display images for the left and right eyes respectively, and a three-dimensional impression is formed once the eyes acquire this differing information. For convenience of description, the present disclosure will be described below by taking VR glasses as a specific application example of the virtual reality equipment. FIG.1shows a schematic flowchart of a local perspective method of a virtual reality equipment according to an embodiment of the present disclosure. Referring toFIG.1, the local perspective method of the virtual reality equipment according to an embodiment of the present disclosure comprises the following steps S110to S130. In step S110, a user's hand action is identified. In the local perspective display of the virtual reality equipment, a user's hand action may first be identified. As shown inFIG.2, generally, conventional VR glasses are equipped with a binocular camera at the external front end of the glasses, which is used to collect external environment information and capture the user's posture and motion information, such as hand action information. In conventional virtual reality application scenarios, computer vision technology is usually used for hand action identification. The results of hand action identification are often used for hand-based user interface operations or for gesture-based games. In the embodiments of the present disclosure, the information collected by the camera installed in the conventional VR glasses may also be used to identify the user's hand action, so as to perform local perspective display according to the hand action. It should be noted that, in addition to the above binocular camera for collecting hand action information, a monocular camera or other types of cameras may also be used. The specific type of camera can be flexibly set by a person skilled in the art according to the actual needs, which is not specifically limited here.
When using computer vision technology for hand action identification, the following methods may be used. Firstly, the hand action features and hand action model are designed, the hand action samples are used to extract the features, the hand action model is trained, and finally the hand action model is established. On this basis, a new hand action image is collected by the binocular camera and preprocessed; then, the hand action image is segmented so as to accurately extract the human hand part in the image; then, the hand action feature is extracted; finally, the input hand actions are classified and identified by using the previously established hand action model. Of course, besides the above identification method, a person skilled in the art can also select other methods for hand action identification according to actual needs, which is not specifically limited here. In addition, the above identification of the user's hand action can be real-time identification to facilitate timely response to the user's needs. Of course, for the purpose of saving power of the equipment, the hand action may be identified at a preset time interval. The specific frequency used to identify the hand action can be flexibly set by a person skilled in the art according to the actual needs, which is not specifically limited here. In step S120, if the user's hand action satisfies a preset trigger action, a local perspective function of the virtual reality equipment is triggered. After obtaining the user's hand action, it is necessary to further determine whether the user's hand action is an action to trigger the local perspective function of VR glasses. Therefore, the user's hand action identified may be matched with the preset trigger action. If the matching is successful, the local perspective function of VR glasses can be triggered at this point. The type of preset trigger action may be flexibly set by a person skilled in the art according to the actual needs, and is not specifically limited here. It should be noted that “triggering the local perspective function of the virtual reality equipment” in this step can be understood as that only the local perspective function of VR glasses is triggered, and VR glasses have not actually entered the perspective state, that is, at present the user cannot see the real scene, and subsequent steps are needed to determine the local perspective display area in the virtual scene. Of course, it can also be understood that the VR glasses have entered the perspective state, and at present the user can see the real scene, but in order to avoid too much influence on the user's immersive experience, the local perspective display area in the virtual scene can be re-determined through subsequent steps. In step S130, under the local perspective function, the local perspective display area in the virtual scene is determined according to the position of the user's hand action, so as to display a real scene in the local perspective display area. When determining the local perspective display area in the virtual scene, the position of the hand action can be determined by using the user's hand action obtained in the above steps, and then the local perspective display area can be determined according to the specific position of the user's hand action. As shown inFIG.3, the user can see the real scene through the local perspective display area, while the user can still see the virtual scene at remaining parts other than the local perspective area. 
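The overall flow of steps S110 to S130 can be sketched in a few lines of Python. The helper functions below are placeholders standing in for the camera, the trained identification model, and the renderer; the trigger-action names, the sampling interval, and the dictionary-based "frames" are assumptions made only for illustration.

```python
# A minimal sketch of the S110-S130 flow described above: identify the hand
# action at a fixed interval, trigger the local perspective function when the
# action matches a preset trigger, and then compute the perspective display
# area from the hand position. Helper functions are illustrative placeholders.

import time

def identify_hand_action(frame):
    """Placeholder: run segmentation + feature extraction + classification."""
    return frame.get("action"), frame.get("hand_position")

def compute_display_area(action, hand_position):
    """Placeholder: derive a circular or triangular area from finger positions."""
    return {"shape": "circle" if action == "one_hand_c" else "triangle",
            "anchor": hand_position}

def run_local_perspective(camera_frames, trigger_actions, interval_s=0.1):
    perspective_on = False
    for frame in camera_frames:
        action, hand_position = identify_hand_action(frame)        # step S110
        if not perspective_on and action in trigger_actions:        # step S120
            perspective_on = True
        if perspective_on and hand_position is not None:            # step S130
            area = compute_display_area(action, hand_position)
            yield area                       # the real scene is shown inside this area
        time.sleep(interval_s)               # identify at a preset interval to save power

# Example with two synthetic "frames".
frames = [{"action": "none", "hand_position": None},
          {"action": "one_hand_c", "hand_position": (0.1, 0.0, 0.4)}]
for area in run_local_perspective(frames, {"one_hand_c", "two_hand_triangle"}, 0.0):
    print(area)
```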
The local perspective method of the virtual reality equipment according to the embodiments of the present disclosure can determine the range of the area to be perspectively displayed by using the user's hand action. Compared with the conventional global perspective solution, it can be applicable to more and richer use scenarios, and can greatly improve the user's use experience. In an embodiment of the present disclosure, the preset trigger action includes a one-hand trigger action. The step of, under the local perspective function, determining the local perspective display area in the virtual scene according to the position of the user's hand action comprises: if the user's one-hand action satisfies the one-hand trigger action, determining positions of an index finger and a thumb of the user's one-hand action; and generating a circular perspective display area in the virtual scene according to the positions of the index finger and the thumb of the user's one-hand action. In the embodiments of the present disclosure, the preset trigger action may be a one-hand trigger action. As shown inFIG.4which shows a schematic diagram of a one-hand trigger action, the palm of the user's one-hand is bent inward, and the thumb is opposite to the other four fingers to make an action similar to the “C” shape. In order to generate a more accurate local perspective display area later, if the user's hand action identified satisfies the above one-hand trigger action, the positions of the index finger and the thumb corresponding to the user's one-hand action may be further determined, and then according to the positions of the index finger and the thumb, a circular perspective display area as shown inFIG.5is formed between the index finger and the thumb of one-hand. For example, in the scenario when the user wants to use the mobile phone or take a water cup, the real scene captured by the camera on the VR glasses will be perspectively displayed in the above circular perspective display area. The user can operate the mobile phone or pick up the water cup through the circular perspective display area, and the circular perspective display area can move with the movement of the user's hand. In an embodiment of the present disclosure, the preset trigger action includes a two-hand trigger action. The step of, under the local perspective function, determining the local perspective display area in the virtual scene according to the position of the user's hand action comprises: if the user's two-hand action satisfies the two-hand trigger action, determining positions of two index fingers and two thumbs of the user's two-hand action; and generating a triangular perspective display area in the virtual scene according to the positions of the two index fingers and the two thumbs of the user's two-hand action. In the embodiments of the present disclosure, the preset trigger action may also be a two-hand trigger action. As shown inFIG.6which shows a schematic diagram of a two-hand trigger action, the user's left thumb is in contact with the right thumb, the left index finger is in contact with the right index finger, all of them are located on a same plane, and other fingers may be bent and retracted or expanded, so that the area surrounded by the left thumb, the right thumb, the left index finger and the right index finger is a triangular area. 
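One possible way to generate the circular perspective display area from the detected index finger and thumb positions is sketched below: the circle is taken as the one whose diameter is the segment joining the two fingertips, expressed in two-dimensional display coordinates. This particular geometric construction and the example pixel values are assumptions, not the patent's method.

```python
# A minimal sketch of generating the circular perspective area from the
# one-hand "C" trigger: the circle's diameter is taken as the segment between
# the index fingertip and the thumb tip, in 2-D display coordinates.

import math
from typing import Tuple

Point2D = Tuple[float, float]

def circular_perspective_area(index_tip: Point2D, thumb_tip: Point2D):
    """Return (center, radius) of the perspective circle between the fingers."""
    cx = (index_tip[0] + thumb_tip[0]) / 2.0
    cy = (index_tip[1] + thumb_tip[1]) / 2.0
    radius = math.dist(index_tip, thumb_tip) / 2.0
    return (cx, cy), radius

def contains(center: Point2D, radius: float, pixel: Point2D) -> bool:
    """True if a display pixel lies inside the circle, i.e., shows the real scene."""
    return math.dist(center, pixel) <= radius

# Example: index finger above, thumb below, roughly 120 px apart on the display.
center, radius = circular_perspective_area((300.0, 180.0), (320.0, 298.0))
print(center, round(radius, 1), contains(center, radius, (310.0, 240.0)))
```

Because the circle is recomputed from the fingertip positions on every identification pass, the perspective area naturally follows the hand as it moves, as described above.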
In order to generate a more accurate local perspective display area later, if the user's two-hand action identified satisfies the above two-hand trigger action, the positions of the two index fingers and two thumbs corresponding to the user's two-hand action can be further determined, and then according to the positions of the two index fingers and two thumbs, a triangular perspective display area as shown inFIG.7is formed between the positions of two index fingers and two thumbs. For example, in the scenario when the user needs to find something, perspective display may need to be performed in a larger range. The above triangular perspective display area will perspectively display the real scene captured by the camera on VR glasses. As the user's hands move towards both sides, the range of triangular perspective display area will gradually increase, so that users can find the things in time. In an embodiment of the present disclosure, in addition to determining the local perspective display area in the virtual scene based on the two trigger actions listed above, other trigger actions may also be flexibly set according to the actual needs. For example, the user may draw a track having a defined shape in front of his/her eyes, and the area surrounded by the track can be regarded as the area where the user wants to perform perspective display. For example, if the track drawn by the user is a square track, the area surrounded by the square track can be perspectively displayed in the virtual scene formed by VR glasses. In an embodiment of the present disclosure, in order to prevent the user from triggering the local perspective display function of VR glasses by mistake, when the user's hand action satisfies the preset trigger action, more complicated trigger conditions may be further set. For example, the duration of the user's hand trigger action identified can be counted. If a preset time threshold is exceeded, it is considered that the user wants to trigger the local perspective display function of VR glasses. Alternatively, the number of times of performing the user's hand trigger action may be counted. If it reaches a preset number of times of performing, it is considered that the user wants to trigger the local perspective display function of VR glasses. Regarding how to specifically set the trigger conditions of the local perspective function, it can be flexibly set by a person skilled in the art according to the actual situation, which will not be listed here one by one. In an embodiment of the present disclosure, the method further comprises: determining whether the position of the user's hand action has changed; and if it has changed, updating the local perspective display area in the virtual scene according to a changed position of the user's hand action. In the actual application scenarios, the user's hand position may change in real time. When the hand position changes greatly, if the local perspective display area is still determined according to the user's hand position before the change, it may occur that the local perspective display area cannot be fully matched with the user's hand. That is, the user may not be able to see what they want to see in the local perspective display area, or can only see part of it. Therefore, in the embodiment of the present disclosure, the position change of the user's hand action may be detected in real time. 
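The stricter trigger conditions described above, holding the trigger action for a preset duration or performing it a preset number of times, can be sketched as a small gate object. The threshold values and the frame-based timing below are illustrative assumptions.

```python
# A minimal sketch of the stricter trigger conditions described above: the
# local perspective function is only triggered when the trigger action has
# been held for a preset duration, or performed a preset number of times.

class TriggerGate:
    def __init__(self, hold_threshold_s=0.8, count_threshold=2):
        self.hold_threshold_s = hold_threshold_s
        self.count_threshold = count_threshold
        self.hold_start = None
        self.performed_count = 0

    def update(self, action_matches_trigger: bool, timestamp_s: float) -> bool:
        """Feed one identification result; return True when the gate opens."""
        if action_matches_trigger:
            if self.hold_start is None:
                self.hold_start = timestamp_s
                self.performed_count += 1       # counts discrete performances
            held = timestamp_s - self.hold_start
            return (held >= self.hold_threshold_s
                    or self.performed_count >= self.count_threshold)
        self.hold_start = None                  # action released; reset the hold timer
        return False

# Example: the trigger action is held across frames sampled every 0.3 s.
gate = TriggerGate()
for t, matched in [(0.0, True), (0.3, True), (0.6, True), (0.9, True)]:
    print(t, gate.update(matched, t))           # opens once 0.8 s of hold is reached
```

The same gating idea applies to the turning-off action described below, so that a briefly clenched hand does not accidentally exit the perspective state.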
When the position change of the user's hand action has been detected, the local perspective display area may be re-determined according to the changed position of the user's hand action. In an embodiment of the present disclosure, the method further comprises: if the user's hand action satisfies a preset turning-off action, turning off the local perspective function of the virtual reality equipment. In the actual application scenarios, the user's demand to trigger the local perspective display function of VR glasses may be only temporary, such as temporarily answering a phone, temporarily drinking a cup of water, etc. Therefore, in order to ensure that the user can quickly return to the immersive experience of the virtual scene from the local perspective display function state, it may also be detected whether the user has made a hand action to turn off the local perspective function of VR glasses. If it is detected that the user's hand action matches the preset turning-off action, the local perspective display function of VR glasses may be turned off at this point. As shown inFIG.8, which shows a schematic diagram of a preset turning-off action. The user can turn off the local perspective display function of VR glasses by making an action of clenching his hands in front of his eyes. Of course, in addition to turning-off the local perspective display area in the virtual scene based on the preset turning-off action shown inFIG.8, other turning-off actions can also be flexibly set according to the actual needs, which are not specifically limited here. In an embodiment of the present disclosure, similar to the trigger conditions of the local perspective display function, in order to prevent the user from turning-off the local perspective display function by mistake, more complicated turning-off conditions may be further set when the user's hand action satisfies the preset turning-off action. For example, the duration of the user's hand turning-off action identified can be counted. If a preset time threshold is exceeded, it is considered that the user wants to turn off the local perspective display function of VR glasses. Alternatively, the number of times of performing the user's hand turning-off action may be counted. If it reaches a preset number of times of performing, it is considered that the user wants to turn off the local perspective display function of VR glasses. Regarding how to specifically set the turning-off conditions of the local perspective function, it can be flexibly set by a person skilled in the art according to the actual situation, which will not be listed here one by one. An embodiment of the present disclosure also provides a local perspective device of a virtual reality equipment, which belongs to the same technical concept as the local perspective method of the virtual reality equipment.FIG.9shows a block diagram of a local perspective device of a virtual reality equipment according to an embodiment of the present disclosure. Referring toFIG.9, the local perspective device of the virtual reality equipment900comprises a hand action identification unit910, a local perspective function triggering unit920and a local perspective display area determination unit930. The hand action identification unit910is for identifying a user's hand action. The local perspective function triggering unit920is for triggering a local perspective function of the virtual reality equipment if the user's hand action satisfies a preset trigger action. 
The local perspective display area determination unit930is for, under the local perspective function, determining a local perspective display area in a virtual scene according to a position of the user's hand action, so as to display a real scene in the local perspective display area. In an embodiment of the present disclosure, the preset trigger action includes a one-hand trigger action, and the local perspective display area determination unit930is specifically for: if the user's one-hand action satisfies the one-hand trigger action, determining positions of the index finger and the thumb of the user's one-hand action; and generating a circular perspective display area in the virtual scene according to the positions of the index finger and the thumb of the user's one-hand action. In an embodiment of the present disclosure, the preset trigger action includes a two-hand trigger action, and the local perspective display area determination unit930is specifically for: if the user's two-hand action satisfies the two-hand trigger action, determining positions of two index fingers and two thumbs of the user's two-hand action; and generating a triangular perspective display area in the virtual scene according to the positions of two index fingers and two thumbs of the user's two-hand action. In an embodiment of the present disclosure, the device further comprises: a position change determination unit for determining whether the position of the user's hand action has changed; and a local perspective display area updating unit for updating the local perspective display area in the virtual scene according to a changed position of the user's hand action if the position of the user's hand action has changed. In an embodiment of the present disclosure, the device further comprises a local perspective function turning-off unit for turning off the local perspective function of the virtual reality equipment if the user's hand action satisfies a preset turning-off action. FIG.10is a schematic diagram of the structure of a virtual reality equipment. Referring toFIG.10, at the hardware level, the virtual reality equipment comprises: a memory, a processor, and optionally an interface module, a communication module, etc. The memory may include an internal memory, such as a high-speed random access memory (RAM), or a non-volatile memory, such as at least one disk memory, etc. Of course, the virtual reality equipment may also include other hardware as required. The processor, the interface module, the communication module and the memory may be interconnected through an internal bus. The internal bus may be an ISA (industry standard architecture) bus, a PCI (peripheral component interconnect) bus, an EISA (extended industry standard architecture) bus, etc. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of representation, only one bidirectional arrow is used inFIG.10, but it does not mean that there is only one bus or one type of bus. The memory is used to store computer executable instructions. The memory provides the computer executable instructions to the processor through the internal bus.
The processor executes the computer executable instructions stored in the memory and is specifically used to implement the following operations: identifying a user's hand action; triggering a local perspective function of the virtual reality equipment if the user's hand action satisfies a preset trigger action; and under the local perspective function, determining a local perspective display area in a virtual scene according to a position of the user's hand action, so as to display a real scene in the local perspective display area. The functions performed by the local perspective device of the virtual reality equipment disclosed in the embodiment shown inFIG.9of the present disclosure can be applied to the processor or implemented by the processor. The processor may be an integrated circuit chip having signal processing capabilities. In the implementation process, the steps of the method described above can be completed by integrated logic circuits (in the form of hardware) or instructions (in the form of software) in the processor. The processor may be a general-purpose processor including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application specific dedicated integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, which can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this specification. The general-purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed in the embodiments of this specification can be directly embodied as hardware and executed by a decoding processor, or executed by a combination of hardware in the decoding processor and software modules. The software module can be located in a storage medium well known in the art such as random access memory, flash memory, read-only memory, programmable read-only memory, or electrically erasable programmable memory, registers, etc. The storage medium is located in the memory, and the processor reads the information in the memory and cooperates with its hardware to complete the steps of the above method. The virtual reality equipment can also perform the steps performed by the local perspective method of the virtual reality equipment inFIG.1, and realize the function of the local perspective method of the virtual reality equipment in the embodiment shown inFIG.1, which will not be repeated here in the embodiment of the present disclosure. An embodiment of the present disclosure further provides a computer readable storage medium, which stores one or more programs. When executed by the processor, the one or more programs implement the local perspective method of the virtual reality equipment as stated above. Specifically, it is used to execute the following operations: identifying a user's hand action; triggering a local perspective function of the virtual reality equipment if the user's hand action satisfies a preset trigger action; and under the local perspective function, determining a local perspective display area in a virtual scene according to a position of the user's hand action, so as to display a real scene in the local perspective display area. A person skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, a system, or a computer program product. 
Thus, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROMs, optical memories, etc.) having computer-usable program code recorded thereon. The present disclosure is described with reference to flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to the embodiments of the present disclosure. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of the flows and/or blocks in the flowcharts and/or block diagrams may be implemented by computer program instructions. The computer program instructions may be provided to a processor of a general purpose computer, a special purpose computer, an embedded processor, or other programmable data processing device to generate a machine for implementing the functions specified in one or more flows of a flowchart or and/or one or more blocks of a block diagram by instructions executed by the processor of the computer or the other programmable data processing device. These computer program instructions may also be stored in a computer readable memory capable of guiding a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer readable memory generate a manufactured product including an instruction device that implements the functions specified in one or more flows of a flowchart or and/or one or more blocks of a block diagram. These computer program instructions may also be loaded on a computer or other programmable data processing device so that a series of operation steps are performed on the computer or other programmable device to produce computer implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart or and/or one or more blocks of a block diagram. In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include non-permanent memory, random access memory (RAM) and/or nonvolatile memory in computer readable media, such as read only memory (ROM) or flash RAM. The memory is an example of computer readable media. Computer readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology. Information can be computer readable instructions, data structures, modules of programs or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technologies, read only disc read only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cartridge, magnetic tape magnetic disk storage or other magnetic storage device, or any other non-transmission medium, which can be used to store information that can be accessed by computing devices. 
As defined herein, computer readable media does not include temporary computer readable media, such as modulated data signals and carriers. It should be noted that the terms “comprise”, “include” or any other variations thereof are non-exclusive or open-ended, so that a process, method, article, or device including a series of elements includes not only those elements listed but also includes unspecified elements as well as elements that are inherent to such a process, method, article, or device. In the case that there is no more limitation, the phrase “comprising a . . . ” does not exclude that the process, method, article, or device including the named element further includes additional named element. The above only describes preferred embodiments of the present disclosure and is not intended to limit the scope of the present disclosure. For a person skilled in the art, the present disclosure may have various changes and changes. Any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the present disclosure should all fall into the protection scope of the present disclosure. | 28,692 |
11861072 | The use of cross-hatching or shading in the accompanying figures is generally provided to clarify the boundaries between adjacent elements and also to facilitate legibility of the figures. Accordingly, neither the presence nor the absence of cross-hatching or shading conveys or indicates any preference or requirement for particular materials, material properties, element proportions, element dimensions, commonalities of similarly illustrated elements, or any other characteristic, attribute, or property for any element illustrated in the accompanying figures. Additionally, it should be understood that the proportions and dimensions (either relative or absolute) of the various features and elements (and collections and groupings thereof) and the boundaries, separations, and positional relationships presented therebetween, are provided in the accompanying figures merely to facilitate an understanding of the various embodiments described herein and, accordingly, may not necessarily be presented or illustrated to scale, and are not intended to indicate any preference or requirement for an illustrated embodiment to the exclusion of embodiments described with reference thereto. DETAILED DESCRIPTION Reference will now be made in detail to representative embodiments illustrated in the accompanying drawings. It should be understood that the following description is not intended to limit the embodiments to one preferred embodiment. To the contrary, it is intended to cover alternatives, modifications, and equivalents as can be included within the spirit and scope of the described embodiments as defined by the appended claims. The following description relates to the configuration and operation of SMI-based gesture input systems—i.e., systems that can identify gestures made by a user using signals received from one or more SMI sensors. An SMI sensor can be used to optically measure the relative motion (displacement) between the SMI sensor and a target (e.g., a surface or object), with sub-wavelength resolution. When displacement measurements are associated with measurement times, the velocity of the target may also be measured. Furthermore, by modulating the SMI sensor with a known wavelength modulation (e.g., a triangular modulation), the absolute distance from the SMI sensor to the target may be measured. In augmented reality (AR), virtual reality (VR), and mixed reality (MR) applications, as well as other applications, it can be useful to track a user's finger movement(s) and/or identify a user's gestures (e.g., gestures made with one or more fingers, a hand, an arm, etc.). In some applications, it is useful for a user to be able to provide input to a system by interacting with a surface (e.g., making a gesture on any random surface, such as a tabletop, wall, or piece of paper), or by making a gesture in free space. In such applications, an SMI-based gesture input system may be used to track a user's finger movements with reference to any surface, including, in some cases, the surface of another finger, the user's palm, and so on. Described herein are SMI-based gesture input systems and devices that can be worn or held by a user. Some of the systems include a singular wearable or handheld device. Other systems may include two or more wearable and/or handheld devices. The systems may be provided with more or fewer SMI sensors, which generally enable finer or lower resolution tracking, or more or less complex gesture detection/identification. 
For example, with one SMI sensor, scrolling along a single axis may be detected. With two SMI sensors, user motion in a plane may be tracked. With three or more SMI sensors, movements in x, y, and z directions may be tracked. Motion tracking with six degrees of freedom may also be tracked with three or more SMI sensors, and in some cases, by modulating the SMI sensors in particular or different ways. In comparison to traditional optical tracking methods, such as optical flow and speckle tracking, an SMI-based tracking method can reject ambient light (e.g., sunlight or other ambient light) and track motion with six degrees of freedom without a need for a supplemental sensor for determining the distance to a target surface. An SMI-based gesture input system can also be used in a dark room (e.g., a room with no ambient light). These and other techniques are described with reference toFIGS.1-14. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes only and should not be construed as limiting. Directional terminology, such as “top”, “bottom”, “upper”, “lower”, “front”, “back”, “over”, “under”, “beneath”, “left”, “right”, etc. may be used with reference to the orientation of some of the components in some of the figures described below. Because components in various embodiments can be positioned in a number of different orientations, directional terminology is used for purposes of illustration only and is in no way limiting. The directional terminology is intended to be construed broadly, and therefore should not be interpreted to preclude components being oriented in different ways. The use of alternative terminology, such as “or”, is intended to indicate different combinations of the alternative elements. For example, A or B is intended to include, A, or B, or A and B. FIG.1shows an example SMI-based gesture input system100. The system100includes a device housing102, a set of one or more SMI sensors104mounted within the device housing102, a processing system106mounted within the device housing102, and/or a communications interface108mounted within the device housing102. The device housing102may take various forms, and in some cases may be configured to be worn or held by a user110. When the device housing102is configured to be worn by a user110, the device housing102may define a wearable device such as a finger ring, a full or partial glove, a sleeve, etc. When the device housing102is configured to be held by a user110, the device housing102may define a stylus, another writing implement (e.g., a pen or a pencil), an arbitrary object, etc. In any case, the device housing102may be made from various materials, such as plastic, metal, or ceramic materials. The device housing102may in some cases include multiple parts, such as first and second rings that snap together or are otherwise fastened (e.g., by adhesive or solder), first and second half-circle tubes that snap together or are otherwise fastened (e.g., by adhesive or solder), or one or more pieces defining an open partial circle, which open partial circle has one or more open ends plugged by a cap. Each of the SMI sensors104may include an electromagnetic radiation source. The electromagnetic radiation source may include a resonant cavity from which a beam of electromagnetic radiation112is emitted. 
The beam of electromagnetic radiation112may include a coherent (or partially coherent) mix of 1) electromagnetic radiation generated by the electromagnetic radiation source, and 2) electromagnetic radiation that is received into the resonant cavity of the electromagnetic radiation source after reflecting or backscattering from a surface114. Each of the SMI sensors104may include a photodetector that generates an SMI signal116containing information about a relationship between the SMI sensor104and the surface114. The SMI signal116generated by an SMI sensor104contains information corresponding to information contained in the electromagnetic radiation waveform received by the SMI sensor104. Alternatively, an SMI sensor104may output, as an SMI signal116, a measurement of the current or junction voltage of its electromagnetic radiation source. The one or more SMI sensors104may emit a set of one or more beams of electromagnetic radiation112. Different beams112may be emitted in different directions. In some cases, some or all of the beams112may be emitted in directions that extend away from a first surface of the user110(e.g., away from a surface of the user110on which the device housing102is worn). Some (or all) of the beams112may be emitted toward a second surface (e.g., the surface114). The SMI signals116generated by the set of one or more SMI sensors104may not only contain information about the relationships between individual SMI sensor(s)104and the surface114, but also information about a relationship between the device housing102and the surface114, and thus information about a position, orientation, or movement of the user110that is wearing or holding the device housing102. The processing system106may include, for example, one or more analog-to-digital converters118(ADCs) for digitizing the SMI signals116output by the SMI sensors104(e.g., an ADC118per SMI sensor104), a processor120, and/or other components. The processing system106may in some cases include filters, amplifiers, or other discrete circuits for processing the SMI signal(s)116. The processor120may take various forms, such as that of a microprocessor, microcontroller, application-specific integrated circuit (ASIC), and so on. The processor120may be configured to extract the relationship between the device housing102and the surface114from digitized samples of the one or more SMI signals116. When the system100includes only one SMI sensor104, or when the processor120uses only one SMI signal116, the processor120may determine, for example, motion of the device housing102(and thus a motion of the user110) along an axis of the SMI sensor's emitted beam112(e.g., in an x, y, or z direction of a Cartesian coordinate system). When the system100includes only two SMI sensors104, or when the processor120uses only two SMI signals116, the processor120may determine, for example, motion of the device housing102(and thus a motion of the user110) in a plane (e.g., in an xy, xz, or yz plane of a Cartesian coordinate system, assuming the beams112are tilted with respect to (i.e., neither perpendicular nor parallel to) the plane). When the system100includes at least three SMI sensors104, or when the processor120uses at least three SMI signals116, the processor120may determine, for example, motion of the device housing102(and thus a motion of the user110) in free space (e.g., in an xyz space of a Cartesian coordinate system). 
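Purely as an illustrative sketch, and not as part of the original disclosure, the following Python code shows one way the per-beam measurements just described could be combined: each SMI sensor is treated as observing the projection of the device's velocity onto its beam axis, and a least-squares solve recovers motion along one axis, in a plane, or in three dimensions depending on how many independent beams are available. The beam directions and rate values below are hypothetical.

```python
import numpy as np

def velocity_from_smi(beam_dirs, beam_rates):
    """Estimate device velocity from per-beam SMI displacement rates.

    beam_dirs:  (N, 3) array of unit vectors along each sensor's emitted beam.
    beam_rates: (N,) array of line-of-sight velocities (m/s) derived from each
                SMI signal (e.g., from Doppler peak frequencies).

    Each SMI sensor only observes the projection of the true velocity v onto
    its beam axis (beam_dirs @ v = beam_rates). With one beam, only motion
    along that axis is observable; with two non-parallel beams, motion in the
    plane they span; with three independent beams, full 3-D motion.
    """
    A = np.asarray(beam_dirs, dtype=float)
    b = np.asarray(beam_rates, dtype=float)
    v, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimum-norm least-squares solve
    return v

# Hypothetical example: three roughly orthogonal beams.
dirs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
rates = np.array([0.012, -0.003, 0.020])   # m/s along each beam
print(velocity_from_smi(dirs, rates))      # -> approximately [0.012, -0.003, 0.02]
```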
When the system100includes two or three SMI sensors104, the beams112emitted by the SMI sensors104preferably have orthogonal axes, which decouple the SMI signals116to improve sensitivity and minimize error, and which simplify the processing burden (i.e., computation burden) placed on the processor120. However, the beams112need not have orthogonal axes if the angles between the beams112and direction(s) of displacement being measured is/are known. When the system100generates more SMI signals116than are needed by the processor120, or when the system100includes more than three SMI sensors104, the processor120may analyze digitized samples of multiple SMI signals116, and identify (based at least in part on the analyzing) at least one of the multiple SMI signals116from which to extract the relationship between the device housing102and the surface114. In the latter case, it is acknowledged that the device housing102may in some cases be positioned in different ways, such that its SMI sensors104may emit beams of electromagnetic radiation112in directions that are not useful, or in directions that result in different beams112impinging on different surfaces. The processor120may therefore analyze the digitized samples of multiple SMI signals116to determine which SMI signals116seem to contain useful information about the same surface (e.g., the processor120may be programmed to assume that SMI signals116indicating that a surface is within a threshold distance are being generated by SMI sensors104facing toward a user's palm or other nearby body part, and then ignore these SMI signals116). Alternatively, the system's user110may position the device housing102so that its SMI sensors104emit beams of electromagnetic radiation112in useful directions. In some embodiments, the processor120may be configured to transmit information indicating the relationship between the device housing102and the surface114using the communications interface108. The information may be transmitted to a remote device. In some cases, the transmitted information may include a sequence of time-dependent measurements, or a sequence of time-dependent positions, orientations, or movements. In other cases, the processor120may be configured to identify one or more gestures made by the user110and transmit indications of the one or more gestures (which indications are a form of information indicating the relationship between the device housing102and the surface114). The processor120may identify a gesture of the user110by comparing a sequence of changes in one or more SMI signals116obtained from one or more SMI sensors104to one or more stored sequences that have been associated with one or more gestures. For example, the processor120may compare a sequence of changes in an SMI signal116to a stored sequence corresponding to a press or lunge, and upon determining a match (or that the sequences are similar enough to indicate a match), the processor120may indicate that the user110has made a press or lunge gesture. Similarly, upon comparing sequences of changes in a set of SMI signals116to a stored set of sequences corresponding to the user110writing a letter “A,” or to a stored set of sequences corresponding to the user110making a circular motion, and determining a match to one of these gestures, the processor120may indicate that the user110has drawn a letter “A” or made a circular gesture. 
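As a minimal illustration of the comparison against stored sequences described above, and not the specific implementation of processor120, the sketch below resamples an SMI-derived sequence to a fixed length, normalizes it, and reports the stored gesture whose template is nearest in Euclidean distance. The gesture names, template shapes, and matching threshold are hypothetical.

```python
import numpy as np

def _normalize(seq, length=64):
    """Resample a 1-D sequence to a fixed length and scale to zero mean, unit norm."""
    seq = np.asarray(seq, dtype=float)
    x_old = np.linspace(0.0, 1.0, len(seq))
    x_new = np.linspace(0.0, 1.0, length)
    resampled = np.interp(x_new, x_old, seq)
    resampled -= resampled.mean()
    norm = np.linalg.norm(resampled)
    return resampled / norm if norm > 0 else resampled

def identify_gesture(sample_stream, templates, threshold=0.5):
    """Return the name of the stored gesture closest to the sampled SMI sequence,
    or None if nothing matches closely enough.

    sample_stream: 1-D sequence of changes derived from an SMI signal.
    templates:     dict mapping gesture name -> stored reference sequence.
    """
    probe = _normalize(sample_stream)
    best_name, best_dist = None, np.inf
    for name, template in templates.items():
        dist = np.linalg.norm(probe - _normalize(template))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# Hypothetical templates for a "press" and a "circle" gesture.
templates = {
    "press": np.concatenate([np.linspace(0, 1, 30), np.linspace(1, 0, 30)]),
    "circle": np.sin(np.linspace(0, 2 * np.pi, 60)),
}
observed = np.sin(np.linspace(0, 2 * np.pi, 80)) + 0.05 * np.random.randn(80)
print(identify_gesture(observed, templates))  # -> "circle" (typically)
```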
In addition to or in lieu of comparing sequences of changes in one or more SMI signals116to stored sequences of changes, the processor120may determine, from the sequence(s) of changes in one or more SMI signals116, a set of time-dependent positions, orientations, movement vectors, or other pieces of information, in 1-, 2-, or 3-dimensions, and may compare this alternative information to stored information that has been associated with one or more predetermined gestures. When determining motion of the device housing102with respect to the surface114, there is ambiguity between displacement and rotation when using only three sequences of time-dependent measurements. This is because characterization of motion, in a Cartesian coordinate system, requires characterization of six degrees of freedom (6 DoF). Characterization of 6 DoF requires characterization of six unknowns, which consequently requires six sequences of time-dependent measurements—e.g., not only measurements of displacement along three axes (x, y, and z axes), but rotation about each of the three axes (e.g., yaw, pitch, and roll). In other words, the processor120cannot solve for six unknowns using only three sequences of time-dependent measurements. To provide three additional sequences of time-dependent measurements, the processor120may use SMI signals116obtained by six different SMI sensors104, which emit beams112directed in six different directions toward the surface114. Alternatively, the processor120may obtain two or more sequences of time-dependent measurements, from each SMI sensor104in a smaller number of SMI sensors104. For example, the processor120may alternately modulate an input of each SMI sensor104, in a set of three SMI sensors104, using a sinusoidal waveform and a triangular waveform, and obtain a sequence of time-dependent measurements for each type of modulation from each of the three SMI sensors104(e.g., the processor120may modulate an input of each SMI sensor104using a sinusoidal waveform during a first set of time periods, and modulate the input of each SMI sensor104using a triangular waveform during a second set of time periods). Modulation of the inputs using the triangular waveform can provide an absolute distance measurement, which may not be obtainable using sinusoidal waveform modulation. The communications interface108may include a wired and/or wireless communications interface (e.g., a Bluetooth®, Bluetooth® Low Energy (BLE), Wi-Fi, or Universal Serial Bus (USB) interface) usable for communications with a remote device (e.g., a mobile phone, electronic watch, tablet computer, or laptop computer). FIGS.2and3show examples of SMI-based gesture input systems, which systems may be embodiments of the system described with reference toFIG.1.FIG.2shows an example SMI-based gesture input system that takes the form of a closed ring200. The closed ring200may be configured to receive a user's finger202(i.e., the closed ring200may be a finger ring). A set of SMI sensors204housed within the closed ring200may emit beams of electromagnetic radiation206through apertures and/or window elements that are transparent to the wavelength(s) of the emitted beams206. By way of example, the closed ring200includes three SMI sensors204that emit orthogonal beams of electromagnetic radiation206. In alternative embodiments, the closed ring200may include more or fewer SMI sensors204that emit orthogonal or non-orthogonal beams of electromagnetic radiation206. 
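Returning briefly to the six-degrees-of-freedom discussion above, the following is a hedged sketch, under a rigid-body assumption, of how six line-of-sight rate measurements with known beam origins and directions could be solved for translational and angular velocity. The geometry and measurement values are hypothetical and are not taken from the figures.

```python
import numpy as np

def six_dof_rates(origins, directions, los_rates):
    """Recover translational velocity v and angular velocity w of a rigid body
    from six (or more) SMI line-of-sight rate measurements.

    origins:    (N, 3) beam origin offsets from the device center (meters).
    directions: (N, 3) unit vectors along each emitted beam.
    los_rates:  (N,) measured line-of-sight rates (m/s).

    Model: each sensor observes d_i . (v + w x r_i), which is linear in (v, w):
           [d_i, r_i x d_i] @ [v; w] = los_rate_i
    """
    origins = np.asarray(origins, float)
    directions = np.asarray(directions, float)
    rows = np.hstack([directions, np.cross(origins, directions)])  # (N, 6)
    sol, *_ = np.linalg.lstsq(rows, np.asarray(los_rates, float), rcond=None)
    return sol[:3], sol[3:]   # v (m/s), w (rad/s)

# Hypothetical geometry: six beams from three origins, two directions each.
origins = np.repeat(np.array([[0.01, 0, 0], [0, 0.01, 0], [0, 0, 0.01]]), 2, axis=0)
directions = np.array([[1, 0, 0], [0, 1, 0],
                       [0, 1, 0], [0, 0, 1],
                       [0, 0, 1], [1, 0, 0]], float)
true_v, true_w = np.array([0.02, 0.0, -0.01]), np.array([0.0, 0.5, 0.0])
los = directions @ true_v + np.einsum('ij,ij->i', directions, np.cross(true_w, origins))
v_est, w_est = six_dof_rates(origins, directions, los)
print(v_est, w_est)   # recovers the assumed translation and rotation rates
```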
FIG.3shows an example SMI-based gesture input system that takes the form of an open ring300. The open ring300may be configured to receive a user's finger302(e.g., the open ring300may be a finger ring). The open ring300may include SMI sensors304that are disposed to emit beams of electromagnetic radiation306from along its ring body308and/or from one or both ends310,312of its ring body308(e.g., from a cap at an end310,312of its ring body308). By way of example, the open ring300includes three SMI sensors304that emit orthogonal beams of electromagnetic radiation306. In alternative embodiments, the open ring300may include more or fewer SMI sensors304that emit orthogonal or non-orthogonal beams of electromagnetic radiation306. Although the SMI sensors304are shown near both ends310,312of the open ring300inFIG.3, all of the SMI sensors304(or more or fewer SMI sensors304) may alternatively be disposed near one end of the open ring300. An open ring, as shown inFIG.3, can be useful in that it may not obstruct the inner surfaces of a user's hand, which in some cases may improve the user's ability to grip an object, feel a texture on a surface, or receive a haptic output provided via a surface. In some embodiments, the wearable device described with reference to any ofFIGS.1-3may determine the absolute distance, direction, and velocity of a surface with respect to an SMI sensor by triangularly modulating an input to the SMI sensor, as described with reference toFIGS.10and11. Displacement of the surface may then be obtained by integrating velocity. In some embodiments, the wearable device can determine displacement and direction of the surface with respect to an SMI sensor (in the time domain) using I/Q demodulation, as described with reference toFIG.12. Absolute distance can then be obtained using triangular modulation. In some cases, a wearable device such as a finger ring may include a deformable or compressible insert that enables the finger ring to be worn farther from, or closer to, a user's fingertip. In some cases, a finger ring may be rotated by a user, so that it may alternately sense a surface below a user's hand, a surface of an object held by the user, an adjacent finger, and so on. In some cases, a wearable device may include sensors in addition to SMI sensors, such as an inertial measurement unit (IMU). In some cases, the additional sensor(s) may also be used to characterize motion. A wearable device may also contain a haptic engine to provide haptic feedback to a user, a battery, or other components. FIG.4shows a wearable device400having a set of SMI sensors402from which a processor of the device400may select a subset404to determine a relationship between the wearable device400and a surface406. Alternatively, a processor of the device400may use SMI signals generated by different subsets404,408of the SMI sensors402to determine relationships between the wearable device400and different surfaces406,410(e.g., a tabletop406and a finger410of the user adjacent the finger on which the device400is worn). By way of example, the wearable device400is shown to be a closed finger ring (e.g., a wearable device having a form factor similar to the form factor of the closed ring described with reference toFIG.2). In alternative embodiments, the device400may take other forms. InFIG.4, the SMI sensors402are grouped in subsets of three SMI sensors402, and the subsets are located at different positions around the circumference of the device400. 
In other embodiments, the subsets of SMI sensors402may have different numbers of SMI sensors402(including only one SMI sensor402, in some cases). In some embodiments, the SMI sensors402may not be arranged in discrete subsets, and a processor of the device400may analyze SMI signals received from the SMI sensors402and dynamically identify one or more subsets of SMI sensors402in response to analyzing the SMI signals. The processor may also determine that one or more of the SMI sensors are not generating useful SMI signals and exclude those SMI sensors from inclusion in any subset (and in some cases, may not use those SMI sensors until a change in their SMI signals is identified). In some embodiments of the device400(or in embodiments of other devices described herein), the device400may include one or more sensors for determining an orientation of the device400with respect to its user (e.g., with respect to the finger on which the device400is worn, one or more adjacent fingers, the user's palm, and so on) or a surface (e.g., a tabletop, piece of paper, wall, surface of the user's body, and so on). The sensors may include, for example, one or more of proximity sensors, contact sensors, pressure sensors, accelerometers, IMUs, and so on. FIG.5shows another example SMI-based gesture input system500. In contrast to the system described with reference toFIG.1, the system500may include more than one device. For example, the system500may include a wearable device502that is configured to be worn by a user, and an object504that is configured to be held by the user. In some embodiments, the wearable device502may be constructed similarly to the wearable device described with reference toFIG.1, and may include a device housing506, a set of one or more SMI sensors508mounted within the device housing506, a processing system510mounted within the device housing506, and/or a communications interface512mounted within the device housing102. The device housing506, SMI sensors508, processing system510, and/or communications interface512may be configured similarly to the same components described with reference toFIG.1. In some embodiments, the wearable device502may be a finger ring, as described, for example, with reference toFIG.2or3. In some embodiments, the object504may be shaped as one or more of a stylus, a pen, a pencil, a marker, or a paintbrush. The object504may also take other forms. In some cases, one or more of the SMI sensors508in the wearable device502may emit beams of electromagnetic radiation514that impinge on the object504. As the object504is moved by the user, such as to write or draw, a relationship between the wearable device502and the object504may change. The processing system510may extract information about the time-varying relationship between the wearable device502and the object504(and/or information about a time-varying relationship between the wearable device502and a surface other than a surface of the object504), from the SMI signals of the SMI sensors508, and in some cases may identify one or more gestures made by the user. In some cases, the gestures may include a string of alphanumeric characters (one or more characters) written by the user. In these cases, the processing system510may be configured to identify, from the information about the time-varying relationship between the wearable device502and the object504, the string of alphanumeric characters. 
The SMI sensors508may also or alternatively be used to determine whether a user is holding the object504, as well as to track or predict motion of the object504. For example, if the object504is a writing implement (e.g., a pen), the SMI signals generated by the SMI sensors508can be analyzed to determine whether a user is holding the object504, and in some cases whether the user is holding the object504loosely or tightly. The processing system510can determine from the presence of the object504, and/or the user's grip and/or movement of the object504, whether the user is about to write, gesture, etc. The processing system510can then fully wake the wearable device502in response to the presence, grip, and/or movement of the object504; or begin recording motion of the object504and/or identifying letters, gestures, and so on made by the user with the object504. In some embodiments, the processing system510may switch the wearable device502to a first mode, in which the SMI sensors508are used to track movement with respect to a tabletop or the user, when the object504is not detected; and switch the wearable device502to a second mode, in which the SMI sensors508are used to track movement of the object504, when the object504is detected. In some embodiments, the SMI sensors508may track motion of the object504by tracking motion of the wearable device502with respect to a tabletop or other surface (i.e., a surface other than a surface of the object504). This is because the user's holding of the object504may influence how the user holds their hand or moves their finger, which hand/finger positions or movements with respect to a non-object surface may be indicative of how the user is moving the object504(e.g., indicative of the letters or gestures the user is making with the object504). In some cases, the wearable device502may effectively turn any object, including a dumb or non-electronic object, into a smart pen or the like. In some cases, the wearable device502may have relatively more SMI sensors508, as described, for example, with reference toFIG.4. In some cases, the object504may have one or more SMI sensors516therein, in addition to the wearable device502having one or more SMI sensors508therein. When provided, the SMI sensors516may be used similarly to the SMI sensors508included in the wearable device502, and may determine a relationship of the object504to the wearable device, the user's skin (i.e., a surface of the user), or a remote surface (e.g., the surface518). The SMI sensors516may be positioned along the body of the object504(e.g., proximate where a user might hold the object504) or near a tip of the object504(e.g., proximate a pointing, writing, or drawing tip) of the object504. In some embodiments, the object504may include a processing system and/or communications interface for communicating SMI signals generated by the SMI sensors516, or information related to or derived therefrom, to the wearable device502. Alternatively or additionally, the processing system and/or communications interface may receive SMI signals, or information related to or derived therefrom, from the wearable device502. The wearable device502and object504may communicate wirelessly, or may be connected by an electrical cord, cable, and/or wire(s). In some embodiments, the processing system510of the wearable device502may bear most of the processing burden (e.g., identifying gestures). 
In other embodiments, the processing system of the object504may bear most of the processing burden, or the processing burden may be shared. In other embodiments, the object504may include all of the system's SMI sensors and processing system. FIG.6shows an example of the system described with reference toFIG.5, in which the wearable device502is a finger ring and the object504is shaped as one or more of a stylus, a pen, a pencil, a marker, or a paintbrush. In some cases, an SMI-based gesture input system may include more than one wearable device and/or more than one handheld device. For example,FIG.7shows an alternative embodiment of the system described with reference toFIG.5, in which the object504is also a wearable device. By way of example, both the wearable device502and the object504are shown to be finger rings. Finger rings worn on a user's thumb and index finger, for example, may be used to identify gestures such as a pinch, zoom, rotate, and so on. An SMI-based gesture input system, such as one of the systems described with reference toFIGS.1-7, may in some cases be used to provide input to an AR, VR, or MR application. An SMI-based gesture input system may also be used as an anchor for another system. For example, in a camera-based gesture input system, it is difficult to determine whether the camera or a user's hand (or finger) is moving. An SMI-based gesture input system may replace a camera-based gesture input system, or may provide anchoring information to a camera-based gesture input system. FIG.8Ashows a first example SMI sensor800that may be used in one or more of the SMI-based gesture input systems described with reference toFIGS.1-7. In this example, the SMI sensor800may include a VCSEL802with an integrated resonant cavity (or intra-cavity) photodetector (RCPD)804. FIG.8Bshows a second example SMI sensor810that may be used in one or more of the SMI-based gesture input systems described with reference toFIGS.1-7. In this example, the SMI sensor810may include a VCSEL812with an extrinsic on-chip RCPD814. As an example, the RCPD814may form a disc around the VCSEL812. FIG.8Cshows a third example SMI sensor820that may be used in one or more of the SMI-based gesture input systems described with reference toFIGS.1-7. In this example, the SMI sensor820may include a VCSEL822with an extrinsic off-chip photodetector824. FIG.8Dshows a fourth example SMI sensor830that may be used in one or more of the SMI-based gesture input systems described with reference toFIGS.1-7. In this example, the SMI sensor830may include a dual-emitting VCSEL832with an extrinsic off-chip photodetector834. For example, the top emission may be emitted towards optics and/or another target and the bottom emission may be provided to the extrinsic off-chip photodetector834. FIGS.9A-9Dshow different beam-shaping or beam-steering optics that may be used with any of the SMI sensors described with reference toFIGS.1-8D.FIG.9Ashows beam-shaping optics900(e.g., a lens or collimator) that collimates the beam of electromagnetic radiation902emitted by an SMI sensor904. A collimated beam may be useful when the range supported by a device is relatively greater (e.g., when a device has a range of approximately ten centimeters).FIG.9Bshows beam-shaping optics910(e.g., a lens) that focuses the beam of electromagnetic radiation912emitted by an SMI sensor914. 
Focusing beams of electromagnetic radiation may be useful when the range supported by a device is limited (for example, to a few centimeters).FIG.9Cshows beam-steering optics920(e.g., a lens or set of lenses) that directs the beams of electromagnetic radiation922emitted by a set of SMI sensors924such that the beams922converge. Alternatively, the SMI sensors924may be configured or oriented such that their beams converge without the optics920. In some embodiments, the beam-steering optics920may include or be associated with beam-shaping optics, such as the beam-shaping optics described with reference toFIG.9A or9B.FIG.9Dshows beam-steering optics930(e.g., a lens or set of lenses) that directs the beams of electromagnetic radiation932emitted by a set of SMI sensors934such that the beams932diverge. Alternatively, the SMI sensors934may be configured or oriented such that their beams diverge without the optics930. In some embodiments, the beam-steering optics930may include or be associated with beam-shaping optics, such as the beam-shaping optics described with reference toFIG.9A or9B. FIG.10shows a triangular bias procedure1000for determining velocity and absolute distance of a surface (or object) using self-mixing interferometry. The procedure1000may be used by one or more of the systems or devices described with reference toFIGS.1-7, to modulate an SMI sensor using a triangular waveform, as described, for example, with reference toFIG.1. At an initial stage1002, an initial signal is generated, such as by a digital or analog signal generator. At stage1006-1, the generated initial signal is processed as needed to produce the triangle waveform modulation current1102that is applied to a VCSEL (seeFIG.11). Stage1006-1can include, as needed, operations of a DAC (such as when the initial signal is an output of a digital step generator), low-pass filtering (such as to remove quantization noise from the DAC), and voltage-to-current conversion. The application of the modulation current1102to the VCSEL induces an SMI output1118(i.e., a change in an interferometric property of the VCSEL). It will be assumed for simplicity of discussion that the SMI output1118is from a photodetector, but in other embodiments it may be from another component. At initial stage1004inFIG.10, the SMI output1118is received. At stage1006-2, initial processing of the SMI output1118is performed as needed. Stage1006-2may include high-pass filtering or digital subtraction. At stage1008, a processor may equalize the received signals in order to match their peak-to-peak values, mean values, root-mean-square values, or any other characteristic values, if necessary. For example, the SMI output1118may include a predominant triangle waveform component that matches the modulation current1102, together with a smaller and higher frequency component due to changes in the interferometric property. High-pass filtering may be applied to the SMI output1118to obtain the component signal related to the interferometric property. Also, this stage may involve separating and/or subtracting the parts of the SMI output1118and the modulation current1102corresponding to the ascending and to the descending time intervals of the modulation current1102. This stage may include sampling the separated information. At stages1010and1012, a separate fast Fourier transform (FFT) may be first performed on the parts of the processed SMI output1118corresponding to the ascending and to the descending time intervals. The two FFT spectra may be analyzed at stage1014. 
At stage1016, the FFT spectra may be further processed, such as to remove artifacts and reduce noise. Such further processing can include peak detection and Gaussian fitting around the detected peak for increased frequency precision. From the processed FFT spectra data, information regarding the absolute distance can be obtained at stage1018. FIG.11shows a block diagram of a system (e.g., part or all of the processing system described with reference toFIGS.1-7) that may implement the spectrum analysis of the method described above with respect toFIG.10. In the exemplary system shown, the system generates an initial digital signal and processes it as needed to produce a modulation current1102as an input to the VCSEL1110. In an illustrative example, an initial step signal may be produced by a digital generator to approximate a triangle function. The digital output values of the digital generator are used in the digital-to-analog converter (DAC)1104. The resulting voltage signal may then be filtered by the low-pass filter1106to remove quantization noise. Alternatively, an analog signal generator based on an integrator can be used to generate an equivalent voltage signal directly. The filtered voltage signal is then an input to a voltage-to-current converter1108to produce the desired modulation current1102in a form for input to the VCSEL1110. As described above, movement of a target can cause changes in an interferometric parameter, such as a parameter of the VCSEL1110or of a photodetector operating in the system. The changes can be measured to produce an SMI output1118. In the embodiment shown, it will be assumed the SMI output1118is measured by a photodetector. For the modulation current1102having the triangle waveform, the SMI output1118may be a triangle wave of a similar period combined with a smaller and higher frequency signal related to the interferometric property. In some cases, the SMI output1118may not be perfectly linear, even though the modulation current1102is linear. This may be a result of the bias current versus light output curve of the VCSEL1110being non-linear (e.g., due to non-idealities, such as self-heating effects). The SMI output1118is first passed into the high-pass filter1120, which can effectively convert the major ascending and descending ramp components of the SMI output1118to DC offsets. As the SMI output1118may typically be a current, the transimpedance amplifier1122can produce a corresponding voltage output (with or without amplification) for further processing. The voltage output can then be sampled and quantized by the ADC block1124. Before immediately applying a digital FFT to the output of the ADC block1124, it can be helpful to apply equalization. The initial digital signal values from the digital generator used to produce the modulation current1102are used as input to the digital high-pass filter1112to produce a digital signal to correlate with the output of the ADC block1124. An adjustable gain can be applied by the digital variable gain block1114to the output of the digital high-pass filter1112. The output of the digital variable gain block1114is used as one input to the digital equalizer and subtractor block1116. The other input to the digital equalizer and subtractor block1116is the output of the ADC block1124. The two signals are differenced, and used as part of a feedback to adjust the gain provided by the digital variable gain block1114. 
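Before continuing with the equalization details below, here is a rough, self-contained sketch of the spectrum-analysis idea of FIG. 10: the rising and falling halves of one period of a triangle-modulated SMI signal are transformed separately, and the two peak frequencies are combined into an absolute distance and a directional velocity. The calibration constant, wavelength, sign convention, and signal values are hypothetical placeholders rather than parameters of the circuits described here.

```python
import numpy as np

def peak_frequency(segment, sample_rate):
    """Return the dominant non-DC frequency (Hz) in a real-valued segment."""
    segment = segment - np.mean(segment)
    spectrum = np.abs(np.fft.rfft(segment * np.hanning(len(segment))))
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

def distance_and_velocity(smi_samples, sample_rate, wavelength, dist_per_hz):
    """Estimate absolute distance and line-of-sight velocity from one period of a
    triangle-modulated SMI signal (rising half followed by falling half).

    wavelength:  emission wavelength (m), used to convert the Doppler component
                 of the beat frequency into a velocity.
    dist_per_hz: distance corresponding to 1 Hz of beat frequency for the chosen
                 modulation slope (a hypothetical calibration constant).
    The sign of the velocity depends on the direction of motion and the chosen
    convention; feedback-level effects are ignored in this sketch.
    """
    half = len(smi_samples) // 2
    f_up = peak_frequency(smi_samples[:half], sample_rate)    # ascending ramp
    f_down = peak_frequency(smi_samples[half:], sample_rate)  # descending ramp
    distance = dist_per_hz * (f_up + f_down) / 2.0            # common (range) part
    velocity = (wavelength / 2.0) * (f_down - f_up) / 2.0     # Doppler part
    return distance, velocity

# Hypothetical one-period example: 2 kHz beat on the rising ramp, 3 kHz on the falling ramp.
fs = 100_000.0
t = np.arange(0, 0.01, 1 / fs)
period = np.concatenate([np.cos(2 * np.pi * 2000 * t), np.cos(2 * np.pi * 3000 * t)])
print(distance_and_velocity(period, fs, wavelength=940e-9, dist_per_hz=1e-5))
```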
Equalization and subtraction may be used to clean up any remaining artifacts from the triangle that may be present in the SMI output1118. For example, if there is a slope error or nonlinearity in the SMI output1118, the digital high-pass filter1112may not fully eliminate the triangle and artifacts may remain. In such a situation, these artifacts may show up as low frequency components after the FFT and make the peak detection difficult for nearby objects. Applying equalization and subtraction may partially or fully remove these artifacts. Once an optimal correlation is obtained by the feedback, an FFT, indicated by block1128, can then be applied to the components of the output of the ADC block1124corresponding to the rising and descending side of the triangle wave. From the FFT spectra obtained, absolute distance and/or directional velocity may be inferred using the detected peak frequencies on the rising and descending sides, as discussed above and indicated by block1126. The method just described, and its variations, involve applying a spectrum analysis to an SMI output. However, it is understood that this is an example. In other implementations, absolute distances may be determined directly from a time domain SMI output, without applying a spectrum analysis. Various configurations are possible and contemplated without departing from the scope of the present disclosure. FIG.12shows a sinusoidal bias procedure1200for determining displacement of a surface (or object) using quadrature demodulation with self-mixing interferometry. The procedure1200may be used by one or more of the systems or devices described with reference toFIGS.1-7, to modulate an SMI sensor using a sinusoidal waveform, as described, for example, with reference toFIG.1. As explained in more detail below,FIG.12shows components which generate and apply a sinusoidally modulated bias current to a VCSEL. The sinusoidal bias current can generate in a photodetector1216an output current depending on the frequency of the sinusoidal bias and the displacement to the target. In the circuit ofFIG.12, the photodetector's1216output current is digitally sampled and then multiplied with a first sinusoid at the frequency of the original sinusoidal modulation of the bias current, and a second sinusoid at double that original frequency. The two separate multiplied outputs are then each low-pass filtered and the phase of the interferometric parameter may be calculated. Thereafter the displacement is determined using at least the phase. The DC voltage generator1202is used to generate a constant bias voltage. A sine wave generator1204may produce an approximately single frequency sinusoid signal, to be combined with the constant bias voltage. As shown inFIG.12, the sine wave generator1204is a digital generator, though in other implementations it may produce an analog sine wave. The low-pass filter1206-1provides filtering of the output of the DC voltage generator1202to reduce undesired varying of the constant bias voltage. The bandpass filter1206-2can be used to reduce noise, quantization or other distortions, or frequency components away from the intended modulation frequency, ωm, in the output of the sine wave generator1204. The circuit adder1208combines the low-pass filtered constant bias voltage and the bandpass filtered sine wave to produce on link1209a combined voltage signal which, in the embodiment ofFIG.12, has the form V0+Vmsin(ωmt). 
This voltage signal is used as an input to the voltage-to-current converter1210to produce a current to drive the lasing action of the VCSEL1214. The current from the voltage-to-current converter1210on the line1213can have the form I0+Imsin(ωmt). The VCSEL1214is thus driven to emit a laser light modulated as described above. Reflections of the modulated laser light may then be received back within the lasing cavity of VCSEL1214and cause self-mixing interference. The resulting emitted optical power of the VCSEL1214may be modified due to self-mixing interference, and this modification can be detected by the photodetector1216. As described above, in such cases the photocurrent output of the photodetector1216on the link1215can have the form: iPD=i0+imsin(ωmt)+γ cos(φ0+φmsin(ωmt)). As the I/Q components to be used in subsequent stages are based on just the third term, the first two terms can be removed or reduced by the differential transimpedance amplifier and anti-aliasing (DTIA/AA) filter1218. To do such a removal/reduction, a proportional or scaled value of the first two terms is produced by the voltage divider1212. The voltage divider1212can use as input the combined voltage signal on the link1209produced by the circuit adder1208. The output of the voltage divider1212on link1211can then have the form: α(V0+Vmsin(ωmt)). The photodetector current and this output of the voltage divider1212can be the inputs to the DTIA/AA filter1218. The output of the DTIA/AA filter1218can then be, at least mostly, proportional to the third term of the photodetector current. The output of the DTIA/AA filter1218may then be quantized for subsequent calculation by the ADC block1220. Further, the output of the ADC block1220may have a residual signal component proportional to the sine wave originally generated by the sine wave generator1204. To filter this residual signal component, the originally generated sine wave can be scaled (such as by the indicated factor of β) at multiplier block1224-3, and then subtracted from the output of ADC block1220at subtraction block1222. The filtered output on link1221may have the form: A+B sin(ωmt)+C cos(2ωmt)+D sin(3ωmt)+ . . . , from the Fourier expansion of the γ cos(φ0+φmsin(ωmt)) term discussed above. The filtered output can then be used for extraction of the I/Q components by mixing. The digital sine wave originally generated by sine wave generator1204onto link1207is mixed (multiplied) by the multiplier block1224-1with the filtered output on link1221. This product is then low-pass filtered at block1228-1to obtain the Q component discussed above, possibly after scaling with a number that is related to the amount of frequency modulation of the laser light and distance to the target. Also, the originally generated digital sine wave is used as input into the squaring/filtering block1226to produce a digital cosine wave at a frequency double that of the originally produced digital sine wave. The digital cosine wave is then mixed (multiplied) at the multiplier block1224-2with the filtered output of the ADC block1220on link1221. This product is then low-pass filtered at block1228-2to obtain the I component discussed above, possibly after scaling with a number that is related to the amount of frequency modulation of the laser light and distance to the target. The Q and the I components are then used by the phase calculation component1230to obtain the phase from which the displacement of the target can be calculated, as discussed above. 
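The quadrature-demodulation chain just described can be summarized numerically as follows. This is only an illustrative model and not the circuit of FIG. 12: the digitized photodetector samples are mixed with a sine at the modulation frequency and with a cosine at twice that frequency, each product is low-pass filtered to give the Q and I components, and the unwrapped phase of (I, Q) is scaled to displacement. The filter parameters are hypothetical, and the sign and Bessel-function scale factors tied to the modulation depth are neglected.

```python
import numpy as np

def lowpass(x, taps=101, cutoff=0.02):
    """Simple FIR low-pass filter; cutoff is a fraction of the sample rate."""
    n = np.arange(taps) - (taps - 1) / 2.0
    h = np.sinc(2 * cutoff * n) * np.hamming(taps)
    h /= h.sum()
    return np.convolve(x, h, mode="same")

def iq_displacement(pd_samples, sample_rate, f_mod, wavelength):
    """Recover relative target displacement from a sinusoidally modulated SMI
    photocurrent containing a term of the form cos(phi0 + phi_m*sin(2*pi*f_mod*t)).

    The I and Q amplitudes are assumed to be equalized; sign conventions and
    Bessel-function scaling are ignored in this sketch.
    """
    t = np.arange(len(pd_samples)) / sample_rate
    q = lowpass(pd_samples * np.sin(2 * np.pi * f_mod * t))        # mix at f_mod
    i = lowpass(pd_samples * np.cos(2 * np.pi * 2 * f_mod * t))    # mix at 2*f_mod
    phase = np.unwrap(np.arctan2(q, i))     # interferometric phase phi0 over time
    # One 2*pi cycle of phi0 corresponds to lambda/2 of motion along the beam.
    return phase * wavelength / (4 * np.pi)
```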
One skilled in the art will appreciate that while the embodiment shown inFIG.12makes use of the digital form of the originally generated sine wave produced by sine wave generator1204onto link1207, in other embodiments the originally generated sine wave may be an analog signal and mixed with an analog output of the DTIA/AA filter1218. In other embodiments, the voltage divider1212may be a variable voltage divider. In still other embodiments, the voltage divider1212may be omitted and the DTIA/AA filter1218may be a single-ended DTIA/AA filter. In such embodiments, subtraction may be done only digitally at subtraction block1222. In yet other embodiments, the subtraction block1222may be omitted and no subtraction of the modulation current may be performed. The circuit ofFIG.12can be adapted to implement the modified I/Q method described above that uses Q′∝Lowpass{IPD×sin(3ωmt)}. Some such circuit adaptations can include directly generating both mixing signals sin(2ωmt) and sin(3ωmt), and multiplying each with the output of the ADC block1220, and then applying respective low-pass filtering, such as by the blocks1228-1,1228-2. The DTIA/AA filter1218may then be replaced by a filter to remove or greatly reduce the entire component of IPDat the original modulation frequency ωm. One skilled in the art will recognize other circuit adaptations for implementing this modified I/Q method. For example, the signal sin(3ωmt) may be generated by multiplying the signal on link1207by the output of squaring/filtering block1226, and subsequently performing bandpass filtering to reject frequency components other than sin(3ωmt). In additional and/or alternative embodiments, the I/Q time domain based methods just described may be used with the spectrum based methods of the first family of embodiments. The spectrum methods of the first family can be used at certain times to determine the absolute distance to the target, and provide a value of L0. Thereafter, during subsequent time intervals, any of the various I/Q methods just described may be used to determine ΔL. In additional and/or alternative embodiments, the spectrum methods based on triangle wave modulation of a bias current of a VCSEL may be used as a guide for the I/Q time domain methods. The I/Q methods operate optimally in the case that J1(b)=J2(b), so that the I and Q components have the same amplitude. However, b depends on the distance L. An embodiment may apply a triangle wave modulation to the VCSEL's bias current to determine a distance to a point of interest. Then this distance is used to find the optimal peak-to-peak sinusoidal modulation of the bias current to use in an I/Q approach. Such a dual method approach may provide improved signal-to-noise ratio and displacement accuracy obtained from the I/Q method. FIG.13shows an example method1300of identifying a type of gesture. The method1300may be performed, for example, by any of the processing systems or processors described herein. At block1302, the method1300may include emitting a beam of electromagnetic radiation from each SMI sensor in a set of one or more SMI sensors disposed in a wearable device. Alternatively, a beam of electromagnetic radiation may be emitted from each SMI sensor in a set of one or more SMI sensors disposed in a handheld device. At block1304, the method1300may include sampling an SMI signal generated by each SMI sensor to produce a time-varying sample stream for each SMI sensor. 
At block1306, the method1300may include determining, using a processor of the wearable device and the time-varying sample stream of at least one SMI sensor in the set of one or more SMI sensors, a movement of the wearable device (or handheld device) with respect to a surface. The operation(s) at block1306may also or alternatively include determining a position and/or orientation of the wearable device (or handheld device) with respect to the surface. At block1308, the method1300may include transmitting information indicative of the movement of the wearable device (or handheld device) from the wearable device (or handheld device) to a remote device. In some embodiments, the method1300may include modulating an input to an SMI sensor (or to each SMI sensor) using a triangular waveform or a sinusoidal waveform. In some embodiments, the method1300may include modulating an input to an SMI sensor (or to each SMI sensor) using 1) a first type of modulation when producing a first subset of samples in the time-varying sample stream for the SMI sensor, and 2) a second type of modulation when producing a second subset of samples in the time-varying sample stream for the SMI sensor, where the first type of modulation is different from the second type of modulation (e.g., triangular versus sinusoidal modulation). In some embodiments of the method1300, the at least one SMI sensor may include three SMI sensors, and determining the movement of the wearable device (or handheld device) with respect to the surface may include determining the movement of the wearable device in 6 DoF. In some embodiments of the method1300, the set of one or more SMI sensors includes multiple SMI sensors, and the method1300may include analyzing the time-varying sample streams produced for the multiple SMI sensors, and identifying, based at least in part on the analyzing, the at least one SMI sensor used to determine the movement of the wearable device (or handheld device) with respect to the surface. In some embodiments of the method1300, the at least one SMI sensor may be a first subset of one or more SMI sensors, and the surface may be a first surface. In these embodiments, the method1300may include determining, using the processor of the wearable device (or handheld device) and the time-varying sample stream of a second subset of one or more SMI sensors in the set of one or more SMI sensors, a movement of the wearable device (or handheld device) with respect to a second surface. FIG.14shows a sample electrical block diagram of an electronic device1400, which electronic device may in some cases be implemented as any of the devices described with reference toFIGS.1-7and13. The electronic device1400may include an electronic display1402(e.g., a light-emitting display), a processor1404, a power source1406, a memory1408or storage device, a sensor system1410, or an input/output (I/O) mechanism1412(e.g., an input/output device, input/output port, or haptic input/output interface). The processor1404may control some or all of the operations of the electronic device1400. The processor1404may communicate, either directly or indirectly, with some or all of the other components of the electronic device1400. For example, a system bus or other communication mechanism1414can provide communication between the electronic display1402, the processor1404, the power source1406, the memory1408, the sensor system1410, and the I/O mechanism1412. 
The processor1404may be implemented as any electronic device capable of processing, receiving, or transmitting data or instructions, whether such data or instructions is in the form of software or firmware or otherwise encoded. For example, the processor1404may include a microprocessor, a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a controller, or a combination of such devices. As described herein, the term “processor” is meant to encompass a single processor or processing unit, multiple processors, multiple processing units, or other suitably configured computing element or elements. In some cases, the processor1404may provide part or all of the processing systems or processors described with reference to any ofFIGS.1-7and10-13. It should be noted that the components of the electronic device1400can be controlled by multiple processors. For example, select components of the electronic device1400(e.g., the sensor system1410) may be controlled by a first processor and other components of the electronic device1400(e.g., the electronic display1402) may be controlled by a second processor, where the first and second processors may or may not be in communication with each other. The power source1406can be implemented with any device capable of providing energy to the electronic device1400. For example, the power source1406may include one or more batteries or rechargeable batteries. Additionally or alternatively, the power source1406may include a power connector or power cord that connects the electronic device1400to another power source, such as a wall outlet. The memory1408may store electronic data that can be used by the electronic device1400. For example, the memory1408may store electrical data or content such as, for example, audio and video files, documents and applications, device settings and user preferences, timing signals, control signals, and data structures or databases. The memory1408may include any type of memory. By way of example only, the memory1408may include random access memory, read-only memory, Flash memory, removable memory, other types of storage elements, or combinations of such memory types. The electronic device1400may also include one or more sensor systems1410positioned almost anywhere on the electronic device1400. In some cases, the sensor systems1410may include one or more SMI sensors, positioned as described with reference to any ofFIGS.1-13. The sensor system(s)1410may be configured to sense one or more types of parameters, such as but not limited to, vibration; light; touch; force; heat; movement; relative motion; biometric data (e.g., biological parameters) of a user; air quality; proximity; position; connectedness; and so on. By way of example, the sensor system(s)1410may include an SMI sensor, a heat sensor, a position sensor, a light or optical sensor, an accelerometer, a pressure transducer, a gyroscope, a magnetometer, a health monitoring sensor, and an air quality sensor, and so on. Additionally, the one or more sensor systems1410may utilize any suitable sensing technology, including, but not limited to, interferometric, magnetic, capacitive, ultrasonic, resistive, optical, acoustic, piezoelectric, or thermal technologies. The I/O mechanism1412may transmit or receive data from a user or another electronic device. 
The I/O mechanism1412may include the electronic display1402, a touch sensing input surface, a crown, one or more buttons (e.g., a graphical user interface "home" button), one or more cameras (including an under-display camera), one or more microphones or speakers, one or more ports such as a microphone port, and/or a keyboard. Additionally or alternatively, the I/O mechanism1412may transmit electronic signals via a communications interface, such as a wireless, wired, and/or optical communications interface. Examples of wireless and wired communications interfaces include, but are not limited to, cellular and Wi-Fi communications interfaces. The foregoing description, for purposes of explanation, uses specific nomenclature to provide a thorough understanding of the described embodiments. However, it will be apparent to one skilled in the art, after reading this description, that the specific details are not required in order to practice the described embodiments. Thus, the foregoing descriptions of the specific embodiments described herein are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. It will be apparent to one of ordinary skill in the art, after reading this description, that many modifications and variations are possible in view of the above teachings. | 56,679 |
11861073 | DETAILED DESCRIPTION Aspects of this disclosure involve recognition of gestures performed by an athlete in order to invoke certain functions related to an athletic performance monitoring device. Gestures may be recognized from athletic data that includes, in addition to gesture information, athletic data representative of one or more athletic activities being performed by an athlete/user. The athletic data may be actively or passively sensed and/or stored in one or more non-transitory storage mediums, and used to generate an output, such as for example, calculated athletic attributes, feedback signals to provide guidance, and/or other information. These, and other aspects, will be discussed in the context of the following illustrative examples of a personal training system. In the following description of the various embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration various embodiments in which aspects of the disclosure may be practiced. It is to be understood that other embodiments may be utilized and structural and functional modifications may be made without departing from the scope and spirit of the present disclosure. Further, headings within this disclosure should not be considered as limiting aspects of the disclosure and the example embodiments are not limited to the example headings. I. Example Personal Training System A. Illustrative Networks Aspects of this disclosure relate to systems and methods that may be utilized across a plurality of networks. In this regard, certain embodiments may be configured to adapt to dynamic network environments. Further embodiments may be operable in differing discrete network environments.FIG.1illustrates an example of a personal training system100in accordance with example embodiments. Example system100may include one or more interconnected networks, such as the illustrative body area network (BAN)102, local area network (LAN)104, and wide area network (WAN)106. As shown inFIG.1(and described throughout this disclosure), one or more networks (e.g., BAN102, LAN104, and/or WAN106), may overlap or otherwise be inclusive of each other. Those skilled in the art will appreciate that the illustrative networks102-106are logical networks that may each comprise one or more different communication protocols and/or network architectures and yet may be configured to have gateways to each other or other networks. For example, each of BAN102, LAN104and/or WAN106may be operatively connected to the same physical network architecture, such as cellular network architecture108and/or WAN architecture110. For example, portable electronic device112, which may be considered a component of both BAN102and LAN104, may comprise a network adapter or network interface card (NIC) configured to translate data and control signals into and from network messages according to one or more communication protocols, such as the Transmission Control Protocol (TCP), the Internet Protocol (IP), and the User Datagram Protocol (UDP) through one or more of architectures108and/or110. These protocols are well known in the art, and thus will not be discussed here in more detail. Network architectures108and110may include one or more information distribution network(s), of any type(s) or topology(s), alone or in combination(s), such as for example, cable, fiber, satellite, telephone, cellular, wireless, etc. 
and as such, may be variously configured such as having one or more wired or wireless communication channels (including but not limited to: WiFi®, Bluetooth®, Near-Field Communication (NFC) and/or ANT technologies). Thus, any device within a network ofFIG.1, (such as portable electronic device112or any other device described herein) may be considered inclusive to one or more of the different logical networks102-106. With the foregoing in mind, example components of an illustrative BAN and LAN (which may be coupled to WAN106) will be described. 1. Example Local Area Network LAN104may include one or more electronic devices, such as for example, computer device114. Computer device114, or any other component of system100, may comprise a mobile terminal, such as a telephone, music player, tablet, netbook or any portable device. In other embodiments, computer device114may comprise a media player or recorder, desktop computer, server(s), a gaming console, such as for example, a Microsoft® XBOX, Sony® PlayStation, and/or a Nintendo® Wii gaming consoles. Those skilled in the art will appreciate that these are merely example devices for descriptive purposes and this disclosure is not limited to any console or computing device. Those skilled in the art will appreciate that the design and structure of computer device114may vary depending on several factors, such as its intended purpose. One example implementation of computer device114is provided inFIG.2, which illustrates a block diagram of computing device200. Those skilled in the art will appreciate that the disclosure ofFIG.2may be applicable to any device disclosed herein. Device200may include one or more processors, such as processor202-1and202-2(generally referred to herein as “processors202” or “processor202”). Processors202may communicate with each other or other components via an interconnection network or bus204. Processor202may include one or more processing cores, such as cores206-1and206-2(referred to herein as “cores206” or more generally as “core206”), which may be implemented on a single integrated circuit (IC) chip. Cores206may comprise a shared cache208and/or a private cache (e.g., caches210-1and210-2, respectively). One or more caches208/210may locally cache data stored in a system memory, such as memory212, for faster access by components of the processor202. Memory212may be in communication with the processors202via a chipset216. Cache208may be part of system memory212in certain embodiments. Memory212may include, but is not limited to, random access memory (RAM), read only memory (ROM), and include one or more of solid-state memory, optical or magnetic storage, and/or any other medium that can be used to store electronic information. Yet other embodiments may omit system memory212. System200may include one or more I/O devices (e.g., I/O devices214-1through214-3, each generally referred to as I/O device214). I/O data from one or more I/O devices214may be stored at one or more caches208,210and/or system memory212. Each of I/O devices214may be permanently or temporarily configured to be in operative communication with a component of system100using any physical or wireless communication protocol. Returning toFIG.1, four example I/O devices (shown as elements116-122) are shown as being in communication with computer device114. Those skilled in the art will appreciate that one or more of devices116-122may be stand-alone devices or may be associated with another device besides computer device114. 
For example, one or more I/O devices may be associated with or interact with a component of BAN102and/or WAN106. I/O devices116-122may include, but are not limited to, athletic data acquisition units, such as for example, sensors. One or more I/O devices may be configured to sense, detect, and/or measure an athletic parameter from a user, such as user124. Examples include, but are not limited to: an accelerometer, a gyroscope, a location-determining device (e.g., GPS), light (including non-visible light) sensor, temperature sensor (including ambient temperature and/or body temperature), sleep pattern sensors, heart rate monitor, image-capturing sensor, moisture sensor, force sensor, compass, angular rate sensor, and/or combinations thereof among others. In further embodiments, I/O devices116-122may be used to provide an output (e.g., audible, visual, or tactile cue) and/or receive an input, such as a user input from athlete124. Example uses for these illustrative I/O devices are provided below; however, those skilled in the art will appreciate that such discussions are merely descriptive of some of the many options within the scope of this disclosure. Further, reference to any data acquisition unit, I/O device, or sensor is to be interpreted as disclosing an embodiment that may have one or more I/O device, data acquisition unit, and/or sensor disclosed herein or known in the art (either individually or in combination). Information from one or more devices (across one or more networks) may be used (or be utilized) in the formation of a variety of different parameters, metrics or physiological characteristics including but not limited to: motion parameters, or motion data, such as speed, acceleration, distance, steps taken, direction, relative movement of certain body portions or objects to others, or other motion parameters which may be expressed as angular rates, rectilinear rates or combinations thereof; physiological parameters, such as calories, heart rate, sweat detection, effort, oxygen consumed, and oxygen kinetics; and other metrics which may fall within one or more categories, such as: pressure, impact forces, information regarding the athlete, such as height, weight, age, demographic information, and combinations thereof. System100may be configured to transmit and/or receive athletic data, including the parameters, metrics, or physiological characteristics collected within system100or otherwise provided to system100. As one example, WAN106may comprise server111. Server111may have one or more components of system200ofFIG.2. In one embodiment, server111comprises at least a processor and a memory, such as processor206and memory212. Server111may be configured to store computer-executable instructions on a non-transitory computer-readable medium. The instructions may comprise athletic data, such as raw or processed data collected within system100. System100may be configured to transmit data, such as energy expenditure points, to a social networking website or host such a site. Server111may be utilized to permit one or more users to access and/or compare athletic data. As such, server111may be configured to transmit and/or receive notifications based upon athletic data or other information. Returning to LAN104, computer device114is shown in operative communication with a display device116, an image-capturing device118, sensor120and exercise device122, which are discussed in turn below with reference to example embodiments. 
In one embodiment, display device116may provide audio-visual cues to athlete124to perform a specific athletic movement. The audio-visual cues may be provided in response to computer-executable instructions executed on computer device114or any other device, including a device of BAN102and/or WAN. Display device116may be a touchscreen device or otherwise configured to receive a user-input. In one embodiment, data may be obtained from image-capturing device118and/or other sensors, such as sensor120, which may be used to detect (and/or measure) athletic parameters, either alone or in combination with other devices, or stored information. Image-capturing device118and/or sensor120may comprise a transceiver device. In one embodiment, sensor128may comprise an infrared (IR), electromagnetic (EM) or acoustic transceiver. For example, image-capturing device118and/or sensor120may transmit waveforms into the environment, including towards the direction of athlete124and receive a "reflection" or otherwise detect alterations of those released waveforms. Those skilled in the art will readily appreciate that signals corresponding to a multitude of different data spectrums may be utilized in accordance with various embodiments. In this regard, devices118and/or120may detect waveforms emitted from external sources (e.g., not system100). For example, devices118and/or120may detect heat being emitted from user124and/or the surrounding environment. Thus, image-capturing device126and/or sensor128may comprise one or more thermal imaging devices. In one embodiment, image-capturing device126and/or sensor128may comprise an IR device configured to perform range phenomenology. In one embodiment, exercise device122may be any device configurable to permit or facilitate the athlete124performing a physical movement, such as, for example, a treadmill, step machine, etc. There is no requirement that the device be stationary. In this regard, wireless technologies permit portable devices to be utilized, thus a bicycle or other mobile exercising device may be utilized in accordance with certain embodiments. Those skilled in the art will appreciate that equipment122may be or comprise an interface for receiving an electronic device containing athletic data performed remotely from computer device114. For example, a user may use a sporting device (described below in relation to BAN102) and, upon returning home or to the location of equipment122, download athletic data into element122or any other device of system100. Any I/O device disclosed herein may be configured to receive activity data. 2. Body Area Network BAN102may include two or more devices configured to receive, transmit, or otherwise facilitate the collection of athletic data (including passive devices). Exemplary devices may include one or more data acquisition units, sensors, or devices known in the art or disclosed herein, including but not limited to I/O devices116-122. Two or more components of BAN102may communicate directly, yet in other embodiments, communication may be conducted via a third device, which may be part of BAN102, LAN104, and/or WAN106. One or more components of LAN104or WAN106may form part of BAN102. In certain implementations, whether a device, such as portable device112, is part of BAN102, LAN104, and/or WAN106, may depend on the athlete's proximity to one or more access points that permit communication with mobile cellular network architecture108and/or WAN architecture110. 
User activity and/or preference may also influence whether one or more components are utilized as part of BAN102. Example embodiments are provided below. User124may be associated with (e.g., possess, carry, wear, and/or interact with) any number of devices, such as portable device112, shoe-mounted device126, wrist-worn device128and/or a sensing location, such as sensing location130, which may comprise a physical device or a location that is used to collect information. One or more devices112,126,128, and/or130may not be specially designed for fitness or athletic purposes. Indeed, aspects of this disclosure relate to utilizing data from a plurality of devices, some of which are not fitness devices, to collect, detect, and/or measure athletic data. In certain embodiments, one or more devices of BAN102(or any other network) may comprise a fitness or sporting device that is specifically designed for a particular sporting use. As used herein, the term "sporting device" includes any physical object that may be used or implicated during a specific sport or fitness activity. Exemplary sporting devices may include, but are not limited to: golf balls, basketballs, baseballs, soccer balls, footballs, power balls, hockey pucks, weights, bats, clubs, sticks, paddles, mats, and combinations thereof. In further embodiments, exemplary fitness devices may include objects within a sporting environment where a specific sport occurs, including the environment itself, such as a goal net, hoop, backboard, portions of a field, such as a midline, outer boundary marker, base, and combinations thereof. In this regard, those skilled in the art will appreciate that one or more sporting devices may also be part of (or form) a structure and vice-versa; a structure may comprise one or more sporting devices or be configured to interact with a sporting device. For example, a first structure may comprise a basketball hoop and a backboard, which may be removable and replaced with a goal post. In this regard, one or more sporting devices may comprise one or more sensors, such as one or more of the sensors discussed above in relation toFIGS.1-3, that may provide information utilized, either independently or in conjunction with other sensors, such as one or more sensors associated with one or more structures. For example, a backboard may comprise a first sensor configured to measure a force applied by a basketball upon the backboard and a direction of that force, and the hoop may comprise a second sensor to detect a force. Similarly, a golf club may comprise a first sensor configured to detect grip attributes on the shaft and a second sensor configured to measure impact with a golf ball. Looking to the illustrative portable device112, it may be a multi-purpose electronic device that, for example, includes a telephone or digital music player, including IPOD®, IPAD®, or iPhone® brand devices available from Apple, Inc. of Cupertino, California or Zune® or Microsoft® Windows devices available from Microsoft of Redmond, Washington. As known in the art, digital media players can serve as an output device, input device, and/or storage device for a computer. Device112may be configured as an input device for receiving raw or processed data collected from one or more devices in BAN102, LAN104, or WAN106. In one or more embodiments, portable device112may comprise one or more components of computer device114. 
For example, portable device112may include a display116, image-capturing device118, and/or one or more data acquisition devices, such as any of the I/O devices116-122discussed above, with or without additional components, so as to comprise a mobile terminal. a. Illustrative Apparel/Accessory Sensors In certain embodiments, I/O devices may be formed within or otherwise associated with user's124clothing or accessories, including a watch, armband, wristband, necklace, shirt, shoe, or the like. These devices may be configured to monitor athletic movements of a user. It is to be understood that they may detect athletic movement during user's124interactions with computer device114and/or operate independently of computer device114(or any other device disclosed herein). For example, one or more devices in BAN102may be configured to function as an all-day activity monitor that measures activity regardless of the user's proximity or interactions with computer device114. It is to be further understood that the sensory system302shown inFIG.3and the device assembly400shown inFIG.4, each of which is described in the following paragraphs, are merely illustrative examples. i. Shoe-Mounted Device In certain embodiments, device126shown inFIG.1may comprise footwear which may include one or more sensors, including but not limited to those disclosed herein and/or known in the art.FIG.3illustrates one example embodiment of a sensor system302providing one or more sensor assemblies304. Assembly304may comprise one or more sensors, such as, for example, an accelerometer, gyroscope, location-determining components, force sensors, and/or any other sensor disclosed herein or known in the art. In the illustrated embodiment, assembly304incorporates a plurality of sensors, which may include force-sensitive resistor (FSR) sensors306; however, other sensor(s) may be utilized. Port308may be positioned within a sole structure309of a shoe, and is generally configured for communication with one or more electronic devices. Port308may optionally be provided to be in communication with an electronic module310, and the sole structure309may optionally include a housing311or other structure to receive the module310. The sensor system302may also include a plurality of leads312connecting the FSR sensors306to the port308, to enable communication with the module310and/or another electronic device through the port308. Module310may be contained within a well or cavity in a sole structure of a shoe, and the housing311may be positioned within the well or cavity. In one embodiment, at least one gyroscope and at least one accelerometer are provided within a single housing, such as module310and/or housing311. In at least a further embodiment, one or more sensors are provided that, when operational, are configured to provide directional information and angular rate data. The port308and the module310include complementary interfaces314,316for connection and communication. In certain embodiments, at least one force-sensitive resistor306shown inFIG.3may contain first and second electrodes or electrical contacts318,320and a force-sensitive resistive material322disposed between the electrodes318,320to electrically connect the electrodes318,320together. When pressure is applied to the force-sensitive material322, the resistivity and/or conductivity of the force-sensitive material322changes, which changes the electrical potential between the electrodes318,320. 
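By way of a non-limiting illustration, the following Python sketch models one way a force-sensitive resistor such as sensor306could be read. The voltage-divider arrangement, supply voltage, and fixed-resistor value are assumptions made only for this example and are not drawn from the disclosure.

# Minimal sketch of reading a force-sensitive resistor (FSR) such as sensor 306.
# Assumes a simple voltage-divider arrangement and illustrative constants;
# neither is specified in the disclosure above.

V_SUPPLY = 3.3        # supply voltage across the divider (volts, assumed)
R_FIXED = 10_000.0    # fixed divider resistor (ohms, assumed)

def fsr_resistance(adc_voltage: float) -> float:
    """Infer the FSR's resistance from the divider's measured voltage."""
    if adc_voltage <= 0.0:
        return float("inf")          # no measurable conduction -> no force
    return R_FIXED * (V_SUPPLY - adc_voltage) / adc_voltage

def estimated_force(adc_voltage: float) -> float:
    """Map resistance to a relative force value (conductance-based heuristic)."""
    r = fsr_resistance(adc_voltage)
    return 0.0 if r == float("inf") else 1.0 / r   # conductance rises with force

# Example: a higher divider voltage implies lower FSR resistance, i.e. more force.
print(estimated_force(0.5) < estimated_force(2.0))   # True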
The change in resistance can be detected by the sensor system302to detect the force applied on the sensor316. The force-sensitive resistive material322may change its resistance under pressure in a variety of ways. For example, the force-sensitive material322may have an internal resistance that decreases when the material is compressed. Further embodiments may utilize "volume-based resistance," which may be implemented through "smart materials." As another example, the material322may change the resistance by changing the degree of surface-to-surface contact, such as between two pieces of the force sensitive material322or between the force sensitive material322and one or both electrodes318,320. In some circumstances, this type of force-sensitive resistive behavior may be described as "contact-based resistance." ii. Wrist-Worn Device As shown inFIG.4, device400(which may resemble or comprise sensory device128shown inFIG.1) may be configured to be worn by user124, such as around a wrist, arm, ankle, neck or the like. Device400may include an input mechanism, such as a depressible input button402configured to be used during operation of the device400. The input button402may be operably connected to a controller404and/or any other electronic components, such as one or more of the elements discussed in relation to computer device114shown inFIG.1. Controller404may be embedded or otherwise part of housing406. Housing406may be formed of one or more materials, including elastomeric components and comprise one or more displays, such as display408. The display may be considered an illuminable portion of the device400. The display408may include a series of individual lighting elements or light members such as LED lights410. The lights may be formed in an array and operably connected to the controller404. Device400may include an indicator system412, which may also be considered a portion or component of the overall display408. Indicator system412can operate and illuminate in conjunction with the display408(which may have pixel member414) or completely separate from the display408. The indicator system412may also include a plurality of additional lighting elements or light members, which may also take the form of LED lights in an exemplary embodiment. In certain embodiments, indicator system412may provide a visual indication of goals, such as by illuminating a portion of lighting members of indicator system412to represent accomplishment towards one or more goals. Device400may be configured to display data expressed in terms of activity points or currency earned by the user based on the activity of the user, either through display408and/or indicator system412. A fastening mechanism416can be disengaged wherein the device400can be positioned around a wrist or portion of the user124and the fastening mechanism416can be subsequently placed in an engaged position. In one embodiment, fastening mechanism416may comprise an interface, including but not limited to a USB port, for operative interaction with computer device114and/or devices, such as devices120and/or112. In certain embodiments, fastening member may comprise one or more magnets. In one embodiment, fastening member may be devoid of moving parts and rely entirely on magnetic forces. In certain embodiments, device400may comprise a sensor assembly (not shown inFIG.4). The sensor assembly may comprise a plurality of different sensors, including those disclosed herein and/or known in the art. 
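As a non-limiting illustration of the goal-progress indication described above, in which a portion of the indicator system's lighting members is illuminated to represent accomplishment toward a goal, the following Python sketch maps activity points to a number of lit elements. The 20-element count and the point figures are assumptions for illustration only.

# A minimal sketch of illuminating a portion of indicator-system lighting
# members in proportion to progress toward an activity-point goal. The LED
# count and point values are illustrative assumptions.

def leds_to_illuminate(activity_points: float, goal_points: float,
                       total_leds: int = 20) -> int:
    """Number of lighting members to turn on for the current progress."""
    if goal_points <= 0:
        return 0
    progress = min(activity_points / goal_points, 1.0)
    return round(progress * total_leds)

print(leds_to_illuminate(650, 1000))   # 13 of 20 lighting members lit
print(leds_to_illuminate(1200, 1000))  # 20 (goal reached, fully lit)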
In an example embodiment, the sensor assembly may comprise or permit operative connection to any sensor disclosed herein or known in the art. Device400and/or its sensor assembly may be configured to receive data obtained from one or more external sensors. iii. Apparel and/or Body Location Sensing Element130ofFIG.1shows an example sensory location which may be associated with a physical apparatus, such as a sensor, data acquisition unit, or other device. Yet in other embodiments, it may be a specific location of a body portion or region that is monitored, such as via an image capturing device (e.g., image capturing device118). In certain embodiments, element130may comprise a sensor, such that elements130aand130bmay be sensors integrated into apparel, such as athletic clothing. Such sensors may be placed at any desired location of the body of user124. Sensors130a/bmay communicate (e.g., wirelessly) with one or more devices (including other sensors) of BAN102, LAN104, and/or WAN106. In certain embodiments, passive sensing surfaces may reflect waveforms, such as infrared light, emitted by image-capturing device118and/or sensor120. In one embodiment, passive sensors located on user's124apparel may comprise generally spherical structures made of glass or other transparent or translucent surfaces which may reflect waveforms. Different classes of apparel may be utilized in which a given class of apparel has specific sensors configured to be located proximate to a specific portion of the user's124body when properly worn. For example, golf apparel may include one or more sensors positioned on the apparel in a first configuration and yet soccer apparel may include one or more sensors positioned on apparel in a second configuration. FIG.5shows illustrative locations for sensory input (see, e.g., sensory locations130a-130o). In this regard, sensors may be physical sensors located on/in a user's clothing, yet in other embodiments, sensor locations130a-130omay be based upon identification of relationships between two moving body parts. For example, sensor location130amay be determined by identifying motions of user124with an image-capturing device, such as image-capturing device118. Thus, in certain embodiments, a sensor may not physically be located at a specific location (such as one or more of sensor locations130a-130o), but is configured to sense properties of that location, such as with image-capturing device118or other sensor data gathered from other locations. In this regard, the overall shape or portion of a user's body may permit identification of certain body parts. Regardless of whether an image-capturing device, a physical sensor located on the user124, and/or data from other devices (such as sensory system302, device assembly400, and/or any other device or sensor disclosed herein or known in the art) is utilized, the sensors may sense a current location of a body part and/or track movement of the body part. In one embodiment, sensory data relating to location130mmay be utilized in a determination of the user's center of gravity (a.k.a., center of mass). For example, relationships between location130aand location(s)130f/130lwith respect to one or more of location(s)130m-130omay be utilized to determine if a user's center of gravity has been elevated along the vertical axis (such as during a jump) or if a user is attempting to "fake" a jump by bending and flexing their knees. In one embodiment, sensor location130nmay be located at about the sternum of user124. 
Likewise, sensor location130omay be located proximate to the navel of user124. In certain embodiments, data from sensor locations130m-130omay be utilized (alone or in combination with other data) to determine the center of gravity for user124. In further embodiments, relationships between multiple sensor locations, such as sensors130m-130o, may be utilized in determining orientation of the user124and/or rotational forces, such as twisting of user's124torso. Further, one or more locations may be utilized as a center of moment location. For example, in one embodiment, one or more of location(s)130m-130omay serve as a point for a center of moment location of user124. In another embodiment, one or more locations may serve as a center of moment of specific body parts or regions. FIG.6depicts a schematic block diagram of a sensor device600that is configured to recognize one or more gestures in accordance with certain embodiments. As shown, sensor device600may be embodied with (and/or in operative communication with) elements configurable to recognize one or more gestures from sensor data received by/output by the sensor device600. In accordance with one embodiment, a recognized gesture may trigger execution of one or more processes in accordance with one or more operational modes of sensor device600, in addition to bringing about a reduction in power consumption by one or more integral components. Illustrative sensor device600is shown as having a sensor602, a filter604, an activity processor606, a gesture recognition processor608, a memory610, a power supply612, a transceiver614, and an interface616. However, one of ordinary skill in the art will realize thatFIG.6is merely one illustrative example of sensor device600, and that sensor device600may be implemented using a plurality of alternative configurations, without departing from the scope of the processes and systems described herein. For example, it will be readily apparent to one of ordinary skill that activity processor606and gesture recognition processor608may be embodied as a single processor, or embodied as one or more processing cores of a single multi-core processor, among others. In other embodiments, processors606and608may be embodied using dedicated hardware, or shared hardware that may be localized (on a common motherboard, within a common server, and the like), or may be distributed (across multiple network-connected servers, and the like). Additionally, sensor device600may include one or more components of computing system200ofFIG.2, wherein sensor device600may be considered to be part of a larger computer device, or may itself be a stand-alone computer device. Accordingly, in one implementation, sensor device600may be configured to perform, partially or wholly, the processes of controller404fromFIG.4. In such an implementation, sensor device600may be configured to, among other things, recognize one or more gestures performed by a user of a wrist-worn device400. In response, the wrist-worn device400may execute one or more processes to, among others, adjust one or more data analysis conditions or settings associated with one or more operational modes, recognize one or more activities being performed by the user, or bring about a reduction in power consumption by a wrist-worn device400, or combinations thereof. In one implementation, power supply612may comprise a battery. 
Alternatively, power supply612may be a single cell deriving power from stored chemical energy (a group of multiple such cells commonly referred to as a battery), or may be implemented using one or more of a combination of other technologies, including solar cells, capacitors, which may be configured to store electrical energy harvested from the motion of device400in which sensor device600may be positioned, a supply of electrical energy by "wireless" induction, or a wired supply of electrical energy from a power mains outlet, such as a universal serial bus (USB 1.0/1.1/2.0/3.0 and the like) outlet, and the like. It will be readily understood to one of skill that the systems and methods described herein may be suited to reducing power consumption from these, and other power supply612embodiments, without departing from the scope of the description. In one implementation, sensor602of sensor device600may include one or more accelerometers, gyroscopes, location-determining devices (GPS), light sensors, temperature sensors, heart rate monitors, image-capturing sensors, microphones, moisture sensors, force sensors, compasses, angular rate sensors, and/or combinations thereof, among others. As one example embodiment comprising an accelerometer, sensor602may be a three-axis (x-, y-, and z-axis) accelerometer implemented as a single integrated circuit, or "chip", wherein acceleration in one or more of the three axes is detected as a change in capacitance across a silicon structure of a microelectromechanical system (MEMS) device. Accordingly, a three-axis accelerometer may be used to resolve an acceleration in any direction in three-dimensional space. In one particular embodiment, sensor602may include a STMicroelectronics LIS3DH 3-axis accelerometer package that outputs a digital signal corresponding to the magnitude of acceleration in one or more of the three axes to which the accelerometer is aligned. One of ordinary skill will understand that sensor602may output a digital, or pulse-width modulated signal, corresponding to a magnitude of acceleration. The digital output of sensor602, such as one incorporating an accelerometer for example, may be received as a time-varying frequency signal, wherein a frequency of the output signal corresponds to a magnitude of acceleration in one or more of the three axes to which the sensor602is sensitive. In alternative implementations, sensor602may output an analog signal as a time-varying voltage corresponding to the magnitude of acceleration in one or more of the three axes to which the sensor602is sensitive. Furthermore, it will be understood that sensor602may be a single-axis, or two-axis accelerometer, without departing from the scope of the embodiments described herein. In yet other implementations, sensor602may represent one or more sensors that output an analog or digital signal corresponding to the physical phenomena/input to which the sensor602is responsive. Optionally, sensor device600may include a filter604, wherein filter604may be configured to selectively remove certain frequencies of an output signal from sensor602. In one implementation, filter604is an analog filter with filter characteristics of low-pass, high-pass, or band-pass, or filter604is a digital filter, and/or combinations thereof. The output of sensor602may be transmitted to filter604, wherein, in one implementation, the output of an analog sensor602will be in the form of a continuous, time-varying voltage signal with changing frequency and amplitude. 
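As a brief, non-limiting illustration of the three-axis accelerometer described above, the following Python sketch resolves each (x, y, z) sample into a single acceleration magnitude of the kind used in the later processing; the sample values and units are assumed for the example.

# Minimal sketch of turning three-axis accelerometer samples (such as those a
# sensor like 602 might report) into magnitude values for later processing.

import math
from typing import Iterable, List, Tuple

Sample = Tuple[float, float, float]   # (ax, ay, az) in g

def magnitudes(samples: Iterable[Sample]) -> List[float]:
    """Resolve each three-axis sample into a single acceleration magnitude."""
    return [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]

print(magnitudes([(0.0, 0.0, 1.0), (0.6, 0.8, 0.0)]))   # [1.0, 1.0]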
In one implementation, the amplitude of the voltage signal corresponds to a magnitude of acceleration, and the frequency of the output signal corresponds to the number of changes in acceleration per unit time. However, the output of sensor602may alternatively be a time-varying voltage signal corresponding to one or more different sensor types. Furthermore, the output of sensor602may be an analog or digital signal represented by, among others, an electrical current, a light signal, and a sound signal, or combinations thereof. Filter604may be configured to remove those signals corresponding to frequencies outside of a range of interest for gesture recognition, and/or activity recognition by a gesture monitoring device, such as device400. For example, filter604may be used to selectively remove high frequency signals over, for example, 100 Hz, which represent motion of sensor602at a frequency beyond human capability. In another implementation, filter604may be used to remove low-frequency signals from the output of sensor602such that signals with frequencies lower than those associated with a user gesture are not processed further by sensor device600. Filter604may be referred to as a “pre-filter”, wherein filter604may remove one or more frequencies from a signal output of sensor602such that activity processor606does not consume electrical energy processing data that is not representative of a gesture or activity performed by the user. In this way, pre-filter604may reduce overall power consumption by sensor device600or a system of which sensor device600is part of. In one implementation, the output of filter604is transmitted to both activity processor606and gesture recognition processor608. When sensor device600is powered-on in a first state and electrical energy is supplied from power supply612, both activity processor606and gesture recognition processor608may receive a continuous-time output signal from sensor602, wherein the output signal may be filtered by filter604before being received by activity processor606and gesture recognition processor608. In another implementation, the sensor data received by gesture recognition processor608is not filtered by filter604whereas sensor data received by activity processor606has been filtered by filter604. In yet another implementation, when sensor device600is powered-on in a second state, activity processor606and gesture recognition processor608receive an intermittent signal from sensor602. Those skilled in the art will also appreciate that one or more processors (e.g., processor606and/or608) may analyze data obtained from a sensor other than sensor602. Memory610, which may be similar to system memory212fromFIG.2, may be used to store computer-executable instructions for carrying out one or more processes executed by activity processor606and/or gesture recognition processor608. Memory610may include, but is not limited to, random access memory (RAM), read only memory (ROM), and include one or more of solid-state memory, optical or magnetic storage, and/or any other medium that can be used to store electronic information. Memory610is depicted as a single and separate block inFIG.6, but it will be understood that memory610may represent one or more memory types which may be the same, or differ from one another. Additionally, memory610may be omitted from sensor device600such that the executed instructions are stored on the same integrated circuit as one or more of activity processor606and gesture recognition processor608. 
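By way of a hedged illustration of the pre-filtering described above for filter604, the following Python sketch applies a first-order low-pass stage that attenuates content above a cutoff (the text mentions roughly 100 Hz as motion beyond human capability). The sampling rate, cutoff, and single-stage design are assumptions; the disclosure also allows analog, high-pass, or band-pass variants.

# A minimal pre-filtering sketch in the spirit of filter 604: a first-order
# IIR low-pass (exponential smoothing) that attenuates content above a cutoff.
# The 1 kHz sampling rate and the single-stage design are assumptions.

import math
from typing import Iterable, List

def low_pass(samples: Iterable[float], fs: float, cutoff: float) -> List[float]:
    """First-order IIR low-pass filter."""
    rc = 1.0 / (2.0 * math.pi * cutoff)
    dt = 1.0 / fs
    alpha = dt / (rc + dt)
    out: List[float] = []
    prev = 0.0
    for x in samples:
        prev = prev + alpha * (x - prev)
        out.append(prev)
    return out

# Example: a 200 Hz component sampled at 1 kHz is attenuated far more than a
# 2 Hz component representative of an arm-swing frequency.
fs = 1000.0
slow = [math.sin(2 * math.pi * 2 * n / fs) for n in range(1000)]
fast = [math.sin(2 * math.pi * 200 * n / fs) for n in range(1000)]
print(max(map(abs, low_pass(slow, fs, 100.0))) > max(map(abs, low_pass(fast, fs, 100.0))))  # True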
Gesture recognition processor608may, in one implementation, have a structure similar to processor202fromFIG.2, such that gesture recognition processor608may be implemented as part of a shared integrated-circuit, or microprocessor device. In another implementation, gesture recognition processor608may be configured as an application-specific integrated circuit (ASIC), which may be shared with other processes, or dedicated to gesture recognition processor608alone. Further, it will be readily apparent to those of skill that gesture recognition processor608may be implemented using a variety of other configurations, such as using discrete analog and/or digital electronic components, and may be configured to execute the same processes as described herein, without departing from the spirit of the implementation depicted inFIG.6. Similarly, activity processor606may be configured as an ASIC, or as a general-purpose processor202fromFIG.2, such that both activity processor606and gesture recognition processor608may be implemented using physically-separate hardware, or sharing part or all of their hardware. Activity processor606may be configured to execute processes to recognize one or more activities being carried out by a user, and to classify the one or more activities into one or more activity categories. In one implementation, activity recognition may include quantifying steps taken by the user based upon motion data, such as by detecting arm swing peaks and bounce peaks in the motion data. The quantification may be done based entirely upon data collected from a single device worn on the user's arm, such as, for example, proximate to the wrist. In one embodiment, motion data is obtained from an accelerometer. Accelerometer magnitude vectors may be obtained for a time frame, and values, such as an average value of the magnitude vectors for the time frame, may be calculated. The average value (or any other value) may be utilized to determine whether magnitude vectors for the time frame meet an acceleration threshold to qualify for use in calculating step counts for the respective time frame. Acceleration data meeting a threshold may be placed in an analysis buffer. A search range of acceleration frequencies related to an expected activity may be established. Frequencies of the acceleration data within the search range may be analyzed in certain implementations to identify one or more peaks, such as a bounce peak and an arm swing peak. In one embodiment, a first frequency peak may be identified as an arm swing peak if it is within an estimated arm swing range and further meets an arm swing peak threshold. Similarly, a second frequency peak may be determined to be a bounce peak if it is within an estimated bounce range and further meets a bounce peak threshold. Furthermore, systems and methods may determine whether to utilize the arm swing data, bounce data, and/or other data or portions of data to quantify steps or other motions. The number of peaks, such as arm swing peaks and/or bounce peaks may be used to determine which data to utilize. In one embodiment, systems and methods may use the number of peaks (and types of peaks) to choose a step frequency and step magnitude for quantifying steps. In still further embodiments, at least a portion of the motion data may be classified into an activity category based upon the quantification of steps. 
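The following Python sketch is a hedged illustration of the step-quantification idea described above: buffer acceleration magnitudes that meet a threshold, examine their frequency content, and accept a candidate arm-swing peak and bounce peak only when each falls inside an estimated range and exceeds a peak threshold. All ranges and thresholds are illustrative assumptions, not values from the disclosure or the '103 application.

# Hedged sketch of frequency-peak-based step quantification. Ranges and
# thresholds are illustrative assumptions only.

import numpy as np

def dominant_peak(freqs, spectrum, lo, hi):
    """Return (frequency, power) of the strongest bin inside [lo, hi] Hz."""
    mask = (freqs >= lo) & (freqs <= hi)
    if not mask.any():
        return None
    idx = np.argmax(spectrum * mask)
    return freqs[idx], spectrum[idx]

def quantify_steps(magnitudes, fs, duration_s,
                   accel_threshold=1.1, swing_range=(0.5, 1.5),
                   bounce_range=(1.5, 3.5), peak_threshold=5.0):
    buf = np.asarray(magnitudes, dtype=float)
    if buf.mean() < accel_threshold:          # frame too quiet to count steps
        return 0
    spectrum = np.abs(np.fft.rfft(buf - buf.mean()))
    freqs = np.fft.rfftfreq(len(buf), d=1.0 / fs)
    swing = dominant_peak(freqs, spectrum, *swing_range)
    bounce = dominant_peak(freqs, spectrum, *bounce_range)
    # Prefer the bounce peak (one bounce per step) when credible; otherwise
    # fall back to twice the arm-swing frequency.
    if bounce and bounce[1] >= peak_threshold:
        step_freq = bounce[0]
    elif swing and swing[1] >= peak_threshold:
        step_freq = 2.0 * swing[0]
    else:
        return 0
    return int(round(step_freq * duration_s))

# Simulated 10 s of walking: ~1 Hz arm swing and ~2 Hz bounce riding on gravity.
fs, t = 50.0, np.arange(0, 10, 1 / 50.0)
mag = 1.0 + 0.3 * np.sin(2 * np.pi * 1.0 * t) + 0.4 * np.sin(2 * np.pi * 2.0 * t) + 0.2
print(quantify_steps(mag, fs, 10.0))   # roughly 20 steps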
In one embodiment, the sensor signals (such as accelerometer frequencies) and the calculations based upon sensor signals (e.g., a quantity of steps) may be utilized in the classification of an activity category, such as either walking or running, for example. In certain embodiments, if data cannot be categorized as being within a first category (e.g., walking) or group of categories (e.g., walking and running), a first method may analyze collected data. For example, in one embodiment, if detected parameters cannot be classified, then a Euclidean norm equation may be utilized for further analysis. In one embodiment, an average magnitude vector norm (square root of the sum of the squares) of obtained values may be utilized. In yet another embodiment, a different method may analyze at least a portion of the data following classification within a first category or groups of categories. In one embodiment, a step algorithm may be utilized. Classified and unclassified data may be utilized to calculate an energy expenditure value in certain embodiments. Exemplary systems and methods that may be implemented to recognize one or more activities are described in U.S. patent application Ser. No. 13/744,103, filed Jan. 17, 2013, the entire content of which is hereby incorporated by reference herein in its entirety for any and all non-limiting purposes. In certain embodiments, activity processor606may be utilized in executing one or more of the processes described herein, including those described in the '103 application. The processes used to classify the activity of a user may compare the data received from sensor602to a stored data sample that is characteristic of a particular activity, wherein one or more characteristic data samples may be stored in memory610. Gesture recognition processor608may be configured to execute one or more processes to recognize, or classify, one or more gestures performed by a user, such as a user of device400of which sensor device600may be a component. In this way, a user may perform one or more gestures in order to make selections related to the operation of sensor device600. Accordingly, a user may avoid interacting with sensor device600via one or more physical buttons, which may be cumbersome and/or impractical to use during physical activity. Gesture recognition processor608may receive data from sensor602, and from this received data, recognize one or more gestures based on, among others, a motion pattern of sensor device600, a pattern of touches of sensor device600, an orientation of sensor device600, and a proximity of sensor device600to a beacon, or combinations thereof. For example, gesture recognition processor608may receive acceleration data from sensor602, wherein sensor602is embodied as an accelerometer. In response to receipt of this acceleration data, gesture recognition processor608may execute one or more processes to compare the received data to a database of motion patterns. A motion pattern may be a sequence of acceleration values that are representative of a specific motion by a user. In response to finding a motion pattern corresponding to sensor data received, gesture recognition processor608may execute one or more processes to change an operational mode of activity processor606from a first operational mode to a second operational mode. An operational mode may be a group of one or more processes that generally define the manner in which sensor device600operates. 
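As a hedged illustration of the fallback analysis described above, the following Python sketch first attempts to place a frame of motion data into a category (e.g., walking versus running) using the step quantification, and when the frame cannot be classified it falls back to an average magnitude-vector norm (square root of the sum of the squares). The category boundaries are assumed values for illustration only.

# Illustrative sketch of classification with a Euclidean-norm fallback. The
# step-rate boundaries and the 1.3 g norm threshold are assumptions.

import math
from typing import List, Optional, Tuple

def average_vector_norm(samples: List[Tuple[float, float, float]]) -> float:
    norms = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    return sum(norms) / len(norms)

def classify_frame(steps_per_minute: Optional[int],
                   samples: List[Tuple[float, float, float]]) -> str:
    if steps_per_minute is not None:
        if steps_per_minute >= 140:        # assumed boundary
            return "running"
        if steps_per_minute >= 60:
            return "walking"
    # Unclassified: use the Euclidean-norm analysis as a secondary signal.
    return "active" if average_vector_norm(samples) > 1.3 else "sedentary"

print(classify_frame(110, []))                                   # walking
print(classify_frame(None, [(0.1, 0.2, 1.0), (0.0, 0.1, 1.0)]))  # sedentary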
For instance, operational modes may include, among others, a hibernation mode of activity processor606, an activity recognition mode of activity processor606, and a sensor selection mode of gesture recognition processor608, or combinations thereof. Furthermore, it will be readily understood that a motion pattern may be a sequence of values corresponding to sensor types other than accelerometers. For example, a motion pattern may be a sequence of, among others: gyroscope values, force values, light intensity values, sound volume/pitch/tone values, or location values, or combinations thereof. For the exemplary embodiment of sensor602as an accelerometer, a motion pattern may be associated with, among others, a movement of a user's arm in a deliberate manner representative of a gesture. For example, a gesture may invoke the execution of one or more processes, by sensor device600, to display a lap time to a user. The user may wear the wrist-worn device400on his/her left wrist, wherein wrist-worn device400may be positioned with a display408on the top of the wrist. Accordingly, the "lap-time gesture" may include "flicking," or shaking of the user's left wrist through an angle of approximately 90° and back to an initial position. Gesture recognition processor608may recognize this flicking motion as a lap-time gesture, and in response, display a lap time to the user on display408. An exemplary motion pattern associated with the lap-time gesture may include, among others, a first acceleration period with an associated acceleration value below a first acceleration threshold, a second acceleration period corresponding to a sudden increase in acceleration as the user begins flicking his/her wrist from an initial position, and a third acceleration period corresponding to a sudden change in acceleration as the user returns his/her wrist from an angle approximately 90° from the initial position. It will be readily apparent to those of skill that motion patterns may include many discrete "periods," or changes in sensor values associated with a gesture. Furthermore, a motion pattern may include values from multiple sensors of a same, or different types. In order to associate data from sensor602with one or more motion patterns, gesture recognition processor608may execute one or more processes to compare absolute sensor values, or changes in sensor values, to stored sensor values associated with one or more motion patterns. Furthermore, gesture recognition processor608may determine that a sequence of sensor data from sensor602corresponds to one or more motion patterns if one or more sensor values within received sensor data are: above/below one or more threshold values, within an acceptable range of one or more stored sensor values, or equal to one or more stored sensor values, or combinations thereof. It will be readily apparent to those of skill that motion patterns may be used to associate gestures performed by a user with many different types of processes to be executed by sensor device600. For example, a gesture may include motion of a user's left and right hands into a "T-shape" position and holding both hands in this position for a predetermined length of time. Gesture recognition processor608may receive sensor data associated with this gesture, and execute one or more processes to compare the received sensor data to one or more stored motion patterns. 
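The following Python sketch illustrates, in hedged form, the comparison described above: a motion pattern expressed as an ordered sequence of acceptable value ranges, one range per "period" of the gesture, in the spirit of the lap-time flick. The pattern values and minimum sample counts are hypothetical.

# A minimal sketch of matching received accelerometer data against a stored
# motion pattern expressed as ordered per-period (low, high) ranges.

from typing import List, Sequence, Tuple

Range = Tuple[float, float]   # (low, high) bounds on acceleration magnitude, in g

def matches_pattern(samples: Sequence[float], pattern: List[Tuple[Range, int]]) -> bool:
    """True when the samples walk through each period's range in order.

    Each pattern entry is ((low, high), min_samples): the signal must stay in
    [low, high] for at least min_samples consecutive readings before the next
    period begins.
    """
    i = 0
    for (low, high), min_samples in pattern:
        count = 0
        while i < len(samples) and low <= samples[i] <= high:
            count += 1
            i += 1
        if count < min_samples:
            return False
    return True

# Hypothetical "flick" pattern: quiet, spike up, return to quiet.
flick = [((0.8, 1.2), 3), ((1.8, 3.0), 2), ((0.8, 1.2), 2)]
print(matches_pattern([1.0, 1.0, 1.0, 2.2, 2.5, 1.0, 1.0], flick))  # True
print(matches_pattern([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], flick))  # False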
The gesture recognition processor608may determine that the received sensor data corresponds to a “timeout” motion pattern. In response, gesture recognition processor608may instruct activity processor606to execute one or more processes associated with a “timeout” operational mode. For example, the “timeout” operational mode may include reducing power consumption by activity processor606by decreasing a sampling rate at which activity processor606receives data from sensor602. In another example, a gesture may include motion of a user's arms into a position indicative of stretching the upper body after an athletic workout. Again, gesture recognition processor608may receive sensor data associated with this gesture, and execute one or more processes to compare this received data to one or more stored motion patterns. The gesture recognition processor608, upon comparison of the received sensor data to the one or more stored motion patterns, may determine that the received sensor data corresponds to a “stretching” motion pattern. In response, gesture recognition processor608may instruct activity processor606to execute one or more processes associated with a “stretching” operational mode. This “stretching” operational mode may include processes to cease activity recognition of one or more athletic activities performed prior to a stretching gesture. In one implementation, gestures may be recognized by gesture recognition processor608after execution of one or more “training mode” processes by gesture recognition processor608. During a training mode, gesture recognition processor608may store one or more data sets corresponding to one or more motion patterns. In particular, gesture recognition processor608may instruct a user to perform a “training gesture” for a predetermined number of repetitions. For each repetition, gesture recognition processor608may receive data from one or more sensors602. Gesture recognition processor608may compare the sensor data received for each training gesture, and identify one or more characteristics that are common to multiple gestures. These common characteristics may be stored as one or more sequences of sensor value thresholds, or motion patterns. For example, during a training mode in which a “tell time” gesture is to be analyzed, gesture recognition processor608may instruct a user to carry out a specific motion three times. The specific motion may include, among others, positioning the user's left arm substantially by his/her side and in a vertical orientation, moving the left arm from a position substantially by the user's side to a position substantially horizontal and pointing straight out in front of user, and bending the user's left arm at the elbow such that the user's wrist is approximately in front of the user's chin. Gesture recognition processor608may execute one or more processes to identify sensor data that is common to the three “tell time” training gestures carried out by the user during the training mode, and store these common characteristics as a motion pattern associated with a “tell time” gesture. Gesture recognition processor608may further store one or more processes to be carried out upon recognition of the “tell time” gesture, which may include displaying a current time to the user on a display408. In this way, if the user's motion corresponds to the “tell time” gesture in the future, as determined by the gesture recognition processor608, a current time may be displayed to the user. 
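As a hedged sketch of the training-mode step described above, the following Python code collects several repetitions of a training gesture, aligns them to a common length, and stores the per-sample mean plus a tolerance band as the "motion pattern." The alignment and tolerance approach are assumptions; the disclosure only states that characteristics common to the repetitions are identified and stored.

# Hedged sketch of building a gesture template from repeated training gestures.
# The crude resampling and mean +/- k*std tolerance are illustrative choices.

from statistics import mean, pstdev
from typing import List, Tuple

def resample(signal: List[float], length: int) -> List[float]:
    """Crudely resample a repetition to a fixed number of points."""
    return [signal[int(i * len(signal) / length)] for i in range(length)]

def build_template(repetitions: List[List[float]], length: int = 20,
                   k: float = 2.0) -> List[Tuple[float, float]]:
    """Return per-sample (low, high) bounds: mean +/- k standard deviations."""
    aligned = [resample(rep, length) for rep in repetitions]
    template = []
    for i in range(length):
        column = [rep[i] for rep in aligned]
        m, s = mean(column), pstdev(column)
        template.append((m - k * s - 0.05, m + k * s + 0.05))  # small slack
    return template

reps = [[1.0, 1.1, 2.4, 2.5, 1.0], [1.0, 1.0, 2.3, 2.6, 1.1], [0.9, 1.1, 2.5, 2.4, 1.0]]
template = build_template(reps, length=5)
print(len(template), template[2])   # 5 bounds; the middle bound brackets ~2.4 g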
In another implementation, gesture recognition processor608may recognize a gesture from sensor data based on a pattern of touches of sensor device600. In one implementation, a pattern of touches may be generated by a user as a result of tapping on the exterior casing of device400. This tapping motion may be detected by one or more sensors602. In one embodiment, the tapping may be detected as one or more spikes in a data output from an accelerometer. In this way, gesture recognition processor608may associate a tapping pattern with one or more processes to be executed by activity processor606. For example, gesture recognition processor608may receive sensor data from an accelerometer representative of one or more taps of the casing of device400. In response, gesture recognition processor608may compare the received accelerometer data to one or more tapping patterns stored in memory610, wherein a tapping pattern may include one or more accelerometer value thresholds. The gesture recognition processor608may determine that the received data from an accelerometer corresponds to one or more tapping patterns if, for example, the received sensor data contains multiple “spikes,” or peaks in the acceleration data with values corresponding to those stored in the tapping patterns, and within a predetermined time period of one another. For example, gesture recognition processor608may determine that data received from an accelerometer corresponds to a tapping pattern if the received sensor data contains two acceleration value peaks with average values over a threshold of 2.0 g (g=acceleration due to gravity), and within 500 ms of one another. In another implementation, a pattern of touches may be generated by a user swiping one or more capacitive sensors in operative communication with sensor device600. In this way, a pattern of touches may be comprised of movement of one or more of a user's fingers according to a predetermined pattern across the one or more capacitive sensors. In yet another implementation, gesture recognition processor608may recognize a gesture based upon an orientation of sensor device600within device400. An orientation of sensor device600may be received from, among others, sensor602embodied as an accelerometer, a gyroscope, or a magnetic field sensor, or combinations thereof. In this way, gesture recognition processor608may receive data from sensor602representative of an orientation of sensor device600, and associate this sensor data with an orientation gesture. In turn, this orientation gesture may invoke gesture recognition processor608to execute one or more processes to select an operational mode for activity processor606. In one example, device400is positioned on a user's wrist. Device400may be oriented such that display408is positioned on top of the user's wrist. In this instance, the “top” of the user's wrist may be defined as the side of the user's wrist substantially in the same plane as the back of the user's hand. In this example, an orientation gesture may be associated with a user rotating his/her wrist, and accordingly device400, such that display408faces substantially downwards. In response to recognition of this orientation gesture, gesture recognition processor608may execute one or more processes to, among others, increase the sampling rate of activity processor606in preparation for a period of vigorous activity. 
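By way of a non-limiting illustration of the double-tap check described above, the following Python sketch looks for two acceleration peaks whose values exceed 2.0 g and that occur within 500 ms of one another. Only the 2.0 g and 500 ms figures come from the text; the local-maximum peak test is an assumption.

# Minimal sketch of detecting a tapping pattern: two peaks over 2.0 g within
# 500 ms of one another.

from typing import List, Tuple

def find_peaks(samples: List[float], times_ms: List[float],
               threshold_g: float = 2.0) -> List[Tuple[float, float]]:
    peaks = []
    for i in range(1, len(samples) - 1):
        if samples[i] > threshold_g and samples[i] >= samples[i - 1] and samples[i] >= samples[i + 1]:
            peaks.append((times_ms[i], samples[i]))
    return peaks

def is_double_tap(samples: List[float], times_ms: List[float],
                  window_ms: float = 500.0) -> bool:
    peaks = find_peaks(samples, times_ms)
    return any(t2 - t1 <= window_ms for (t1, _), (t2, _) in zip(peaks, peaks[1:]))

t = [0, 20, 40, 60, 80, 100, 320, 340, 360, 380]
a = [1.0, 1.0, 2.6, 1.2, 1.0, 1.0, 1.1, 2.4, 1.1, 1.0]
print(is_double_tap(a, t))   # True: peaks at 40 ms and 340 ms, 300 ms apart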
In another example, an orientation gesture may be associated with the orientation of a user's hands on the handlebars of a road bicycle, wherein a first grip orientation gesture may be associated with sprinting while on a road bicycle, and a second grip orientation gesture may be associated with uphill climbing on a road bicycle, among others. Furthermore, it will be readily apparent to one of ordinary skill that many more orientation gestures may be defined without departing from the spirit of the disclosure described herein. In another embodiment, gesture recognition processor608may recognize a gesture associated with the proximity of sensor device600to a beacon. A beacon may be an electronic device, such as a transceiver, which is detectable when within a predetermined range of sensor device600. A beacon may emit a short-range signal that identifies one or more pieces of information associated with the beacon, wherein a beacon may represent, for example, the starting point of a marathon/running race, a distance marker along the length of the marathon, or the finish point of the marathon. The signal associated with a beacon may be transmitted using a wireless technology/protocol including, among others: Wi-Fi, Bluetooth, or a cellular network, or combinations thereof. The signal emitted from a beacon may be received by transceiver614of sensor device600. Upon receipt of a beacon signal, the transceiver614may communicate data to gesture recognition processor608. In response, gesture recognition processor608may identify the received data as a proximity gesture. In this example, the identified proximity gesture may be associated with one or more processes configured to update progress times associated with a user's marathon run. In yet another embodiment, a proximity gesture may be associated with a sensor device600coming into close proximity with, among others, another user, or an object. In this way, a proximity gesture may be used, for example, to execute one or more processes based on multiple individuals competing as part of a sports team, or based on a runner coming into close proximity with a starting block equipped with a beacon on a running track, and the like. FIG.7is a schematic block diagram of a gesture recognition training process700. This gesture recognition training process700may be executed as, among others, a "training mode" by the gesture recognition processor608. In particular, process700begins at block702, wherein a training mode is initiated by the gesture recognition processor608. The training mode may be initiated in response to initialization of sensor device600for a first time, or at any time during use of sensor device600, in order to save new gesture patterns into memory610. Accordingly, these saved gesture patterns may be recognized by gesture recognition processor608during "normal" operation of device600, wherein normal operation of device600may be defined as any time during which device600is powered-on and not executing a training process700. During the gesture recognition training process700, the gesture recognition processor608may instruct a user to perform multiple successive repetitions of a training gesture. In one embodiment, the motions associated with a gesture may be defined by the user, while in another embodiment, the motions may be prescribed by the gesture recognition processor608to be performed by the user. 
Block704of process700includes, among others, the user performing the multiple successive repetitions of a training gesture. In one implementation, the number of successive repetitions of the training gesture may range from 1 to 10, but it will be readily apparent to those of skill that any number of repetitions of the training gesture may be employed during the training process700. Gesture recognition processor608may store one or more samples of the performed training gestures in memory610. Characteristics common to one or more of the training gestures may be identified by the gesture recognition processor608at block708of process700. Specifically, block708represents one or more comparison processes executed by gesture recognition processor608to identify sensor data points that characterize the performed training gestures. These characteristics may be, among others, peaks in acceleration data, or changes in gyroscope data points above a threshold value, and the like. Block708may also include a comparison of one or more training gestures sampled at different sampling rates. In this way, and for a given training gesture, gesture recognition processor608may identify a sampling rate that is below an upper sampling rate associated with activity processor606. At this lower sampling rate, the training gesture may still be recognized as if data from sensor602was sampled at the upper sampling rate. Gesture recognition processor608may store the lower sampling rate in combination with the gesture sample. Subsequently, and upon recognition, by gesture recognition processor608, of the gesture from sensor data received during normal operation of sensor device600, gesture recognition processor608may instruct activity processor606to sample the data at the lower sampling rate, and thereby reduce power consumption by activity processor606. Block710represents the storage of one or more gesture samples in memory610. Gesture recognition processor608may poll a database of stored gesture samples upon receipt of data from sensor602during normal operation of sensor device600. A gesture sample may be stored as a sequence of data points corresponding to one or more sensor values associated with one or more sensor types. Additionally, a gesture sample may be associated with one or more processes, such that upon recognition, by gesture recognition processor608, of a gesture from received sensor data, the gesture recognition processor608may instruct activity processor606to execute the one or more associated processes. These associated processes may include processes to transition sensor device600from a first operational mode into a second operational mode, among others. FIG.8is a schematic block diagram of a gesture recognition process800. Gesture recognition process800may be, in one implementation, performed by gesture recognition processor608. Process800is executed by gesture recognition processor608in response to a receipt of data from a sensor602. This receipt of sensor data is represented by block802. As previously disclosed, a data output from a sensor602may be analog or digital. Furthermore, data output from a sensor602may be in the form of a data stream, such that the data output is continuous, or the data output may be intermittent. The data output from the sensor602may be comprised of one or more data points, wherein a data point may include, among others, an identification of the sensor type from which it was generated, and one or more values associated with a reading from the sensor type. 
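As a hedged illustration of the sampling-rate search described above for block708, the following Python sketch progressively down-samples a recorded training gesture and keeps the lowest rate at which a given recognizer still accepts it. The recognizer is passed in as a callable and the specific down-sampling factors are assumptions.

# Hedged sketch of finding the lowest sampling rate at which a training
# gesture remains recognizable, so the activity processor can later sample
# more slowly and consume less power.

from typing import Callable, List, Optional, Sequence

def downsample(signal: List[float], factor: int) -> List[float]:
    return signal[::factor]

def lowest_workable_rate(signal: List[float], upper_rate_hz: float,
                         recognizes: Callable[[List[float]], bool],
                         factors: Sequence[int] = (1, 2, 4, 8)) -> Optional[float]:
    best = None
    for factor in factors:
        if recognizes(downsample(signal, factor)):
            best = upper_rate_hz / factor   # gesture still recognizable here
        else:
            break                           # coarser sampling loses the gesture
    return best

# Toy recognizer: the gesture counts as recognized if its peak survives.
recognizer = lambda s: max(s) > 2.0
gesture = [1.0, 1.2, 2.5, 2.6, 2.4, 1.1, 1.0, 1.0]
print(lowest_workable_rate(gesture, 100.0, recognizer))   # 25.0 (factor 4)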
Process800may include buffering of one or more data points received from a sensor602. This is represented by block804, wherein a buffer circuit, or one or more buffer processes, may be used to temporarily store one or more received data points. In this way, gesture recognition processor608, or activity processor606, may poll a buffer to analyze data received from the sensor602. In one implementation, gesture recognition processor608compares the data received from sensor602to one or more stored motion patterns. This is represented by block806of process800. In one embodiment, gesture recognition processor608identifies a sensor type from which data has been received. In response, gesture recognition processor608polls memory610for stored motion patterns associated with the identified sensor type. Upon response from polled memory610of those one or more stored motion patterns associated with the identified sensor type, gesture recognition processor608may iteratively search through the stored motion patterns for a sequence of sensor values that corresponds to the received data. Gesture recognition processor608may determine that the received data corresponds to a stored sequence of sensor values associated with a motion pattern if, among others, the received data is within a range of the stored sequence of sensor values. In another embodiment, gesture recognition processor608does not poll memory610for motion patterns associated with an identified sensor type, and instead, performs an iterative search for stored motion patterns corresponding to received sensor data. In another implementation, gesture recognition processor608may execute one or more processes to compare the data received from sensor602to one or more stored touch patterns. This is represented by block808of process800. The one or more stored touch patterns may be associated with, among others, a sequence of taps of the outer casing of device400of which sensor device600is a component. These touch patterns may be stored in a database in memory610, such that gesture recognition processor608may poll this touch pattern database upon receipt of sensor data from sensor602. In one embodiment, gesture recognition processor608may identify one or more peaks in the data output from sensor602, wherein the one or more peaks in the data output may be representative of one or more respective "taps" of sensor device600. In response, gesture recognition processor608may poll memory610for one or more touch patterns with one or more peaks corresponding to the received output data from sensor602. In another implementation, and at block810of process800, gesture recognition processor608may recognize a gesture based on an orientation of sensor device600. Gesture recognition processor608may detect an orientation of sensor device600based on data received from a sensor602, wherein an orientation may be explicit from data received from a sensor602embodied as, among others, an accelerometer, gyroscope, or a magnetic field sensor, or combinations thereof. In yet another implementation, gesture recognition processor608may recognize a gesture based on a detected proximity of sensor device600to a beacon. This is represented by block812of process800. In one embodiment, sensor602may receive a signal representing a proximity of sensor device600to a beacon via transceiver614. Gesture recognition processor608may execute one or more processes to select an operational mode of sensor device600, and specifically, activity processor606. 
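The following Python sketch is a non-limiting illustration of the buffer-then-match flow of blocks804-806: buffer incoming data points and, when asked, poll an in-memory store of motion patterns keyed by sensor type and check the buffered values against each candidate's allowed ranges. The data structures are assumptions made for illustration.

# A minimal sketch of buffering sensor data points (block 804) and polling a
# store of motion patterns keyed by sensor type for a match (block 806).

from collections import deque
from typing import Deque, Dict, List, Tuple

Pattern = List[Tuple[float, float]]   # per-sample (low, high) bounds

class GestureMatcher:
    def __init__(self, patterns: Dict[str, Dict[str, Pattern]], buffer_size: int = 64):
        self.patterns = patterns                       # sensor type -> name -> pattern
        self.buffer: Deque[Tuple[str, float]] = deque(maxlen=buffer_size)

    def receive(self, sensor_type: str, value: float) -> None:
        self.buffer.append((sensor_type, value))       # buffer the data point

    def match(self, sensor_type: str) -> List[str]:
        values = [v for s, v in self.buffer if s == sensor_type]
        hits = []
        for name, pattern in self.patterns.get(sensor_type, {}).items():
            window = values[-len(pattern):]
            if len(window) == len(pattern) and all(
                    lo <= v <= hi for v, (lo, hi) in zip(window, pattern)):
                hits.append(name)                      # pattern found
        return hits

matcher = GestureMatcher({"accelerometer": {"flick": [(0.8, 1.2), (1.8, 3.0), (0.8, 1.2)]}})
for v in [1.0, 1.0, 2.2, 1.0]:
    matcher.receive("accelerometer", v)
print(matcher.match("accelerometer"))   # ['flick']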
This selection of an operational mode is represented by block816of process800. Furthermore, the selection of an operational mode may be in response to the recognition of a gesture, wherein the gesture may be recognized by gesture recognition processor608based on the one or more processes associated with blocks806,808,810, and812. In one embodiment, activity processor606may execute one or more processes associated with a first operational mode upon initialization of sensor device600. In another embodiment, a first operational mode may be communicated by gesture recognition processor608to activity processor606as a default operational mode. Upon recognition of a gesture, gesture recognition processor608may instruct activity processor606to execute one or more processes associated with a second operational mode. One of ordinary skill will recognize that an operational mode may include many different types of processes to be executed by one or more components of sensor device600. In one example, an operational mode may include one or more processes to instruct activity processor606to receive data from one or more additional/alternative sensors. In this way, upon recognition of a gesture, activity processor606may be instructed to change the number, or type of sensors from which to receive data in order to recognize one or more activities. An operational mode may also include one or more processes to specify a sampling rate at which activity processor606is to sample data from sensor602, among others. In this way, upon recognition of a gesture, by gesture recognition processor608, activity processor606may be instructed to sample data at a sampling rate associated with a second operational mode. This sampling rate may be lower than an upper sampling rate possible for activity processor606, such that a lower sampling rate may be associated with lower power consumption by activity processor606. Block814of process800represents one or more processes to filter data received from a sensor602. Data may be filtered by filter604, wherein filter604may act as a "pre-filter." By pre-filtering, filter604may allow activity processor606to remain in a hibernation, or low power state until received data is above a threshold value. Accordingly, filter604may communicate a "wake" signal to activity processor606upon receipt of data corresponding to, or above, a threshold value. Upon selection of an operational mode, activity processor606may analyze data received from sensor602. This analysis is represented by block818, wherein activity processor606may execute one or more processes to recognize one or more activities being performed by a user. Additionally, the data received by activity processor606from sensor602may be received simultaneously by gesture recognition processor608, as represented by the parallel process path from block814to block818. FIG.9is a schematic block diagram of an operational mode selection process900. Block902represents a receipt of data from sensor602. In one implementation, gesture recognition processor608may buffer the received data, as described by block904. Subsequently, gesture recognition processor608may execute one or more processes to recognize one or more gestures associated with the received data, as indicated by block908, and as discussed in relation to process800fromFIG.8. Data received at block902may simultaneously be communicated to activity processor606, wherein the received data may be filtered at block906, before being passed to activity processor606at block910. 
Activity processor606may execute one or more processes to recognize one or more activities from the received sensor data at block910, wherein this activity recognition is carried out in parallel to the gesture recognition of gesture recognition processor608. Block912of process900represents a selection of an operational mode by gesture recognition processor608. The selection of an operational mode may be based on one or more recognized gestures from block908, as described in relation to block816from process800, but additionally considers the one or more recognized activities from block910. In this way, a second operational mode may be selected based on one or more recognized gestures, and additionally, tailored to one or more recognized activities being performed by a user of sensor device600. Exemplary embodiments allow a user to quickly and easily change the operational mode in which a sensor device, such as an apparatus configured to be worn around an appendage of a user, operates, by performing a particular gesture. This may be flicking the wrist, tapping the device, orienting the device in a particular manner, for example, or any combination thereof. In some embodiments the operational mode may be a power-saving mode, or a mode in which particular data is displayed or output. This may be particularly beneficial to a user who is participating in a physical activity where it would be difficult, dangerous, or otherwise undesirable to press a combination of buttons, or manipulate a touch-screen, for example. For example, if a user begins to run a marathon, it is advantageous that a higher sampling rate operational mode can be entered into by performing a gesture, rather than pressing a start button, or the like. Further, since operational modes can be changed by the user performing a gesture, it is not necessary to provide the sensor device with a wide array of buttons or a complex touch-screen display. This may reduce the complexity and/or cost and/or power consumption of the device, and/or improve its reliability and/or durability. Furthermore, in some embodiments the sensor device may recognize that a physical activity has commenced or ended. This may be recognized by a gesture and/or activity recognition. This automatic recognition may result in the operational mode being changed in response. For example, if the sensor device recognizes or determines that physical activity has ended, it may enter an operational mode in which the power consumption is reduced. This may result in improved battery life, which may be particularly important for a portable or wearable device. The sensor device600may include a classifying module configured to classify the captured acceleration data as one of a plurality of gestures. The sensor device600may also include an operational mode selection module configured to select an operational mode for the processor based on at least the classified gesture. These modules may form part of gesture recognition processor608. The sensor device600may include an activity recognition module configured to recognize an activity based on the acceleration data. This module may form part of the activity processor606. In any of the above aspects, the various features may be implemented in hardware, or as software modules running on one or more processors. Features of one aspect may be applied to any of the other aspects.
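The combined selection at block912, which considers both the recognized gesture and the activity recognized in parallel, can be illustrated with a minimal sketch. The (gesture, activity) table and the mode names below are assumptions chosen for the example, not content from the specification.

```python
# Illustrative sketch: selecting an operational mode from both the recognized
# gesture (block 908) and the activity recognized in parallel (block 910).
from typing import Optional

MODE_TABLE = {
    ("wrist_flick", "running"): "high_rate_run_tracking",
    ("wrist_flick", "walking"): "standard_tracking",
    ("double_tap", "running"): "lap_marker",
    ("double_tap", None): "power_save",
}

def select_mode(gesture: str, activity: Optional[str], current_mode: str) -> str:
    """Pick a mode tailored to both the gesture and the ongoing activity;
    fall back to a gesture-only entry, then to the current mode."""
    return (MODE_TABLE.get((gesture, activity))
            or MODE_TABLE.get((gesture, None))
            or current_mode)

print(select_mode("wrist_flick", "running", "standard_tracking"))  # high_rate_run_tracking
print(select_mode("double_tap", "walking", "standard_tracking"))   # power_save
```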
There may also be provided a computer program or a computer program product for carrying out any of the methods described herein, and a computer readable medium having stored thereon a program for carrying out any of the methods described herein. A computer program may be stored on a computer-readable medium, or it could, for example, be in the form of a signal such as a downloadable data signal provided from an Internet website, or it could be in any other form. For the avoidance of doubt, the present application extends to the subject-matter described in the following numbered paragraphs (referred to as “Para” or “Paras”): Para 1. A computer-implemented method of operating a device configured to be worn by a user and including an accelerometer, the method comprising:(a) operating the device in a first operational mode;(b) obtaining acceleration data representing movement of an appendage of the user using the accelerometer;(c) classifying the acceleration data obtained in (b) as one of a plurality of gestures;(d) entering a second operational mode based upon at least the classified gesture;(e) obtaining acceleration data representing movement of an appendage of the user using the accelerometer;(f) classifying the acceleration data obtained in (e) as one of a plurality of gestures. Para 2. The computer-implemented method of Para 1, wherein the gesture is classified based on a motion pattern of the device. Para 3. The computer-implemented method of Para 1 or 2, wherein the gesture is classified based on a pattern of touches of the device by the user. Para 4. The computer-implemented method of Para 3, wherein the pattern of touches is a series of taps of the device by the user. Para 5. The computer-implemented method of any of Paras 1-4, wherein the gesture is classified based on an orientation of the device. Para 6. The computer-implemented method of any of Paras 1-5, wherein the gesture is classified based on a proximity of the device to a beacon. Para 7. The computer-implemented method of Para 6, wherein the device is a first sensor device, and the beacon is associated with a second device on a second user. Para 8. The computer-implemented method of Para 6 or 7, wherein the beacon is associated with a location, and the device is registered at the location based on the proximity of the device to the beacon. Para 9. The computer-implemented method of any of Paras 1-8, further comprising:comparing a first value of acceleration data obtained using the accelerometer against a plurality of threshold values;determining that the first value of acceleration data corresponds to a first threshold value within the plurality of threshold values; andwherein the classification of the acceleration data as a gesture is based upon the correspondence of the first value of acceleration data to the first threshold. Para 10. A non-transitory computer-readable medium comprising executable instructions that when executed cause a computer device to perform the method as described in any of Paras 1 to 9. Para 11.
A unitary apparatus configured to be worn around an appendage of a user, comprising:a sensor configured to capture acceleration data from the appendage of the user;a processor configured to receive the captured acceleration data;a classifying module configured to classify the captured acceleration data as one of a plurality of gestures;an operational mode selection module configured to select an operational mode for the processor based on at least the classified gesture, wherein the processor samples data from the accelerometer based on the operational mode. Para 12. The unitary apparatus of Para 11, wherein the operational mode selection module is configured to select a sampling rate at which data is sampled from the sensor based on the classified gesture. Para 13. The unitary apparatus of Para 11 or 12, wherein the operational mode is a hibernation mode such that the processor uses a low level of power. Para 14. The unitary apparatus of any of Paras 11-12, further comprising:an activity recognition module configured to recognize an activity based on the acceleration data;wherein the operational mode selection module is configured to select an operational mode based on at least the recognized activity and the classified gesture. Para 15. The unitary apparatus of Paras 11-14, further comprising:a second sensor configured to capture motion data from the user; andwherein the processor selects to receive motion data from the second sensor data based on the classified gesture. Para 16. The unitary apparatus of any of Paras 11-15, wherein the sensor, or second sensor, is one selected from a group comprising: an accelerometer, a gyroscope, a force sensor, a magnetic field sensor, a global positioning system sensor, and a capacitance sensor. Para 17. The unitary apparatus of any of Paras 11-16, wherein the unitary apparatus is a wristband. Para 18. A non-transitory computer-readable medium comprising executable instructions that when executed cause a computer device to function as a unitary apparatus as described in any of Paras 11 to 17. Para 19. A computer-implemented method of operating a device including a sensor, the method comprising:receiving motion data of a user from the sensor;identifying a gesture from the received motion data;adjusting an operational mode of the device based on the gesture identified. Para 20. The computer-implemented method of Para 19, wherein the sensor is one selected from a group comprising: an accelerometer, a gyroscope, a force sensor, a magnetic field sensor, a global positioning system sensor, and a capacitance sensor. Para 21. The computer-implemented method of Para 19 or 20, wherein the gesture is identified based on a motion pattern of the sensor device. Para 22. The computer-implemented method of any of Paras 19-21, wherein the gesture is identified based on a pattern of touches of the sensor device by the user. Para 23. The computer-implemented method of any of Paras 19-22, wherein the gesture is identified based on an orientation of the sensor device. Para 24. A non-transitory computer-readable medium comprising executable instructions that when executed cause a computer device to perform the method as described in any of Paras 19 to 23. | 79,236 |
11861074 | DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS The function keys of the present utility model contain the conventional function keys, editing keys, and F1-F12 shortcut keys. The above-mentioned shortcut modules of the third shortcut module area can be arranged freely at the left portion, the right portion, or combined left and right portions, according to need. Embodiment 1 Referring toFIG.1, a keyboard of the present application is applied to a notebook. A keyboard includes a first operating area2, a second primary key area3, and a third shortcut module area4arranged on a C surface, which is a conventional keyboard surface position of a notebook. The second primary key area3is located on the upper portion of the first operating area2, includes three rows of alphabet keys, and also includes function keys set on the left and right sides of the alphabet key rows. The second primary key area3is limited to the three rows of alphabet keys, with an upper-to-lower width of 5.5 cm. The first operating area2is located in the middle of the lower side of the second primary key area3, and is configured for the space bar, and further includes function keys such as the ctrl, Fn, Alt, and windows keys arranged at the lower side of the space bar. A bottom of the second primary key area3and the left and right sides of the first operating area2constitute the left and right palm support positions21, which are configured for supporting the hands. Since the keyboard is used on the notebook, the palm support position21is in an inner casing of the notebook. The third shortcut module area4is located on the upper portion of the second primary key area3, and the third shortcut module area4includes the left and right sections corresponding to the left and right palm support positions of the first operating area2, with the left section provided with a 9-site-squared number module41, and the right section provided with the mouse pointer control touch panel42. The third shortcut module area4further includes some commonly used function keys (symbol keys) and direction key modules. In particular, the distance from the lower edge of the mouse pointer control touch panel42to the bottom edge of the second primary key area3is about 5.5 cm. The mouse pointer control touch panel has a display function. When the palm is on the palm support position, the thumb of the palm can operate the space bar or a function key of the first operating area2, and the four fingers except the thumb can reach over the second primary key area3to the shortcut modules and keys of the shortcut module area, thus enabling the operator to operate all keys and modules of the keyboard without moving the palms, or with only slight up-and-down lifting of the palms, improving work efficiency. Embodiment 2 As shown inFIGS.2and5, a keyboard includes a first operating area2, a second primary key area3, and a third shortcut module area4; the first operating area2, the second primary key area3, and the third shortcut module area4are integrally formed with a keyboard carrier1. The second primary key area3is located on the upper portion of the first operating area2, includes three rows of alphabet keys set by a conventional method, and also includes function keys and symbol keys set on the left and right sides of the alphabet key rows. The second primary key area3is limited to three rows of alphabet keys, with an upper-to-lower width of 6 cm.
The first operating area2is located in the middle of the lower side of the second primary key area3, and is configured for the space bar. The first operating area2further includes function keys such as ctrl, Fn, Alt, and windows keys. The bottom of the second primary key area3and the left and right sides of the first operating area2constitute the left and right palm support positions21, and a hand pallet is attached to each palm support position21; the hand pallets may be integrally formed with the keyboard carrier, and the left and right hand pallets are used for supporting the hands. The third shortcut module area4is located on the upper portion of the second primary key area3, and the third shortcut module area4has the left and right sections corresponding to the left and right palm support positions in the first operating area2, with the left section having a 9-site-squared number module41and the right section having the direction key module42composed of up, down, left and right direction keys. The third shortcut module area4further includes function keys (F1-F12, and editing keys) distributed in the shortcut module area. In particular, the bottoms of the 9-site-squared number module and the direction key module are spaced from the second primary key area by a distance of about 6 cm. The utility model mainly limits the second primary key area3to three rows of alphabet keys, shortening the upper-to-lower width of the primary key area, and utilizes a shorter first operating area provided in the lower portion of the second primary key area, together with the second primary key area, to form left and right palm support positions, with the third shortcut module area arranged at the upper part of the second primary key area corresponding to the left and right palm support positions, so that when the palm is on the palm support position, the thumb of the palm can operate the space bar or a function key of the first operating area, while the other four fingers can reach and operate the shortcut modules and keys of the third shortcut module area, allowing the operator to reach and operate all keys and modules of the keyboard without moving the palms, or with only slight up-and-down movement of the palms, improving work efficiency. Embodiment 3 As shown inFIG.3, the keyboard carrier of the first operating area, the second primary key area, and the third shortcut module area is divided into left and right portions, and the keys on the keyboard are separated into the left and right portions, respectively, to be operated by the left and right hands, respectively. Embodiment 4 The keyboard carrier of the third shortcut module area is disposed at an angle relative to a horizontal plane, the angle being in a range of −45° to 45°, so that the operation better conforms to the user's usage habits. Embodiment 5 As shown inFIG.4, the three rows of alphabet keys of the second primary key area can be arranged in a conventional manner and divided into left and right key regions along the boundary between T, G, B and Y, H, N, and the keys of the left and right key regions are arranged in the shape of a sector, which is more in line with the principles of ergonomics and more convenient for users. | 6,596
11861075 | DETAILED DESCRIPTION One aspect of the present disclosure describes a personalized emoji dictionary, such as for use with emoji-first messaging. Text messaging is automatically converted to emojis by an emoji-first application so that only emojis are communicated from one client device to another client device. Each client device has a personalized emoji library of emojis that are mapped to words, which libraries are customizable and unique to the users of the client devices, such that the users can communicate secretly in code. Upon receipt of a string of emojis, a user can select the emoji string to convert to text if desired, such as by tapping the displayed received emoji string, for a predetermined period of time. This disclosure provides a more engaging user experience. The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products illustrative of examples of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various examples of the disclosed subject matter. It will be evident, however, to those skilled in the art, that examples of the disclosed subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail. FIG.1is a block diagram illustrating a system100, according to some examples, configured to enable users of client devices to communicate with one another using only emojis, referred to in this disclosure as emoji-first messaging. Text created by users is automatically converted to emojis based on customizable libraries. The system100includes two or more client devices110. The client device110includes, but is not limited to, a mobile phone, eyewear, desktop computer, laptop, portable digital assistants (PDA), smart phone, tablet, ultrabook, netbook, laptop, multi-processor system, microprocessor-based or programmable consumer electronic, game console, set-top box, computer in a vehicle, or any other communication device that a user may utilize to access the system100. The client devices110include a display displaying information, e.g., in the form of user interfaces. In further examples, the client device110includes one or more of touch screens, accelerometers, gyroscopes, cameras, microphones, global positioning system (GPS) devices, and so forth. The client device110may be a device of a user that is used to access and utilize an online social platform. For example, client device110is a device of a given user who uses a client application114on an online social platform, a gaming platform, and communication applications. Client device110accesses a website, such as an online social platform hosted by a server system108. The user inputs login credentials associated with the user. Server system108receives the request and provides access to the online social platform. A user of the client device110launches and engages a client application114hosted by the server system108, which in one example is a messaging application. The client device110includes an emoji-first module116including a processor running client code for performing the emoji-first messaging on the client device110. The emoji-first module116automatically converts text words entered by a user on a client device110to generate a string of one or more emojis based on a customizable library118. 
The library118contains a list of emojis matched to one or more words of text. The messaging client application114communicates the emoji string between client devices110. When a user of another client device110having the same customizable library118receives the generated emoji string, it displays the string of emojis on a device display, and the user can optionally select converting the received string of emojis to text, such as by tapping on the emoji string. One or more users may be a person, a machine, or other means of interacting with the client device110. In examples, the user may not be part of the system100but may interact with the system100via the client device110or other means. For instance, the user may provide input (e.g., touch screen input, alphanumeric input, verbal input, or visual input) to the client device110and the input may be communicated to other entities in the system100(e.g., third-party servers128, server system108, etc.) via a network102(e.g., the Internet). In this instance, the other entities in the system100, in response to receiving the input from the user, may communicate information to the client device110via the network102to be presented to the user. In this way, the user interacts with the various entities in the system100using the client device110. One or more portions of the network102may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the public switched telephone network (PSTN), a cellular telephone network, a wireless network, a WiFi network, a 4G LTE network, another type of network, or a combination of two or more such networks. The client device110may access the various data and applications provided by other entities in the system100via a web client112(e.g., a browser) or one or more client applications114. The client device110may include one or more client application(s)114(also referred to as “apps”) such as, but not limited to, a web browser, messaging application, multi-player gaming application, electronic mail (email) application, an e-commerce site application, a mapping or location application, and the like. In some examples, one or more client application(s)114are included in a given one of the client device110, and configured to locally provide the user interface and at least some of the functionalities, with the client application(s)114configured to communicate with other entities in the system100(e.g., third-party server(s)128, server system108, etc.), on an as-needed basis, for data processing capabilities not locally available (e.g., to access location information, to authenticate a user, etc.). Conversely, one or more client application(s)114may not be included in the client device110, and then the client device110may use its web browser to access the one or more applications hosted on other entities in the system100(e.g., third-party server(s)128, server system108, etc.). The server system108provides server-side functionality via the network102(e.g., the Internet or wide area network (WAN)) to: one or more third party server(s)128, and one or more client devices110. The server system108includes an application server104including an application program interface (API) server120, a web server122, and one or more personalized font modules124, that may be communicatively coupled with one or more database(s)126. 
The one or more database(s)126may be storage devices that store data related to users of the server system108, applications associated with the server system108, cloud services, and so forth. The one or more database(s)126may further store information related to third-party server(s)128, third-party application(s)130, client device110, client application(s)114, users, and so forth. In one example, the one or more database(s)126may be cloud-based storage. The server system108may be a cloud computing environment, according to some examples. The server system108, and any servers associated with the server system108, may be associated with a cloud-based application, in one example. The emoji-first module116is stored on the client device110and/or server108to optimize processing efficiency. In some examples, all modules for performing a specific task are stored on the device/server performing that action. In other examples, some modules for performing a task are stored on the client device110and other modules for performing that task are stored on the server108and/or other devices. In some examples, modules may be duplicated on the client device110and the server108. The one or more third-party application(s)130, executing on third-party server(s)128may interact with the server system108via API server120via a programmatic interface provided by the API server120. For example, one or more of the third-party applications130may request and utilize information from the server system108via the API server120to support one or more features or functions on a website hosted by the third party or an application hosted by the third party. The third-party application(s)130, for example, may provide software version analysis functionality that is supported by relevant functionality and data in the server system108. FIG.2provides an overview of one example of communicating using the emoji-first module116among a plurality of client devices110a-nusing messaging application114. The client devices110a-ninFIG.2each include the emoji-first module116and messaging application114, and a respective display screen200a-nconfigured to display the messaging. The display screen200a-nis also referred to as a “chat” interface. The emoji-first module116is an application, such as an iOS app, that enables emoji-first communication between two people in a close relationship, leveraging their closeness and history with each other to foster a shared emoji vocabulary between them. Each user creates an account and specifies a single partner with whom they will use the emoji-first module116. The chat interface200allows the user pair to send and receive emoji-first messages between them, such that the messages comprise of only emojis, such as shown inFIGS.3A-3D. The emoji-first module116automatically and dynamically converts/translates all text into emojis on the fly as the user types, as shown inFIG.3B, by using automatic text-to-emoji mapping. The users have the option to personalize the emoji-first messages they send by defining their own text-to-emoji mappings, as shown inFIG.3C, which mappings are stored in library118. Emojis can be selectively associated with words that are different than the mapping provided by Unicode CLDR data. The chat interface200allows users to exchange emoji-first messages with their partners. 
That is, the users receive sequences of emoji, referred to as strings, representing a text message without being accompanied by text at first, though they may choose to view the message in text later by tapping the messages. As shown inFIG.3BandFIG.3C, the emoji-first messages appear in bubbles, where yellow messages indicate sent, and grey messages indicate received. The user can type, personalize, and preview their message using the three boxes302,304and306in the chat interface200. Once the user is satisfied with a message they have created in the chat interface200, they can send the message by tapping the send button310which is represented by a yellow circle containing an upward arrow. When a message is received, the receiving user only sees the emoji string at first in both the iOS notification312in the chat interface200as shown inFIG.3A, and in the chat interface200as shown at314inFIG.3B, where the user can tap the emoji string to reveal the fully translated message's corresponding text. Upon tapping, the revealed message will display for a predetermined time, such as 5 seconds in one example, which is helpful to maintain privacy. Referring toFIG.4, there is shown a method400of operating the emoji-first application116on client device110in view ofFIGS.3A-3D. The method400is performed by a processor of the client device110, shown as processor630inFIG.6as will be discussed more shortly. At block402, the recipient, referred to as “Friend1”, always sees the received emoji string first, shown as the notification message312inFIG.3A. The notification message312from “Friend2” includes only a string of emojis that are found in the libraries118of each client device110. The emoji-first application116sends users standard iOS push notifications whenever the user receives a message from their partner. At block404, Friend1can long press message312from Friend2to toggle between the emoji string and text words as shown at314inFIG.3B. The library118is used to map the received emojis into words. Responsively, Friend1can type a reply to Friend2in box302, and the emoji-first application116automatically and fully translates the reply to a string of emojis on the fly as shown in box304. Box306is tappable and allows Friend1to modify the automatic mappings. Box302is the “main” box that users type into, and they use their client device's standard text-based keyboard to do so. Users can enter emoji here as well through their smartphone's keyboard. At block406, as shown inFIG.3C, the word “craving” in box302is mapped to a single emoji, indicated by matching the width of the bubble above this word in box304. The width (in pixels) of the bubble around each emoji mapping in box304matches the width (in pixels) of the corresponding input token from box302. To personalize the emoji mapping for the word “craving”, Friend1selects that emoji in box306to choose a new mapping for the word “craving.” This topmost box306acts as a final preview of the emoji message that will be sent, without the artificial spacing between emoji that box304shows. Box306is also interactive. The user can select substrings of emoji within box306, as shown inFIG.3Cto open the “personalization menu” in box308and replace the emoji substring with their own emoji string mapping. Shown at308is a set of possible emojis that are presented to Friend1, such that Friend1can choose from the set to establish the personalized emoji for the word “craving”.
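As a concrete illustration of the on-the-fly text-to-emoji translation and the personalized override just described, the following is a minimal sketch; the dictionaries, emoji choices, and function names are assumptions made for the example and are not the library's actual data or the patented implementation. Unknown tokens are simply passed through here, whereas the application described above produces emoji-only messages.

```python
# Illustrative sketch: on-the-fly text-to-emoji translation with a shared,
# personalizable library, plus reverse translation for a received string.
DEFAULT_MAPPINGS = {          # e.g., seeded from standard emoji keyword data
    "pizza": "🍕",
    "tonight": "🌙",
    "craving": "🤤",
}
PERSONAL_MAPPINGS = {}        # pair-specific overrides added via the personalization menu

def add_personal_mapping(word: str, emoji: str) -> None:
    """Store a custom text-to-emoji mapping shared by the user pair."""
    PERSONAL_MAPPINGS[word.lower()] = emoji

def text_to_emoji(message: str) -> str:
    """Translate each word as the user types; personal overrides win."""
    out = []
    for token in message.split():
        key = token.lower().strip(".,!?")
        out.append(PERSONAL_MAPPINGS.get(key) or DEFAULT_MAPPINGS.get(key, token))
    return " ".join(out)

def emoji_to_text(emoji_string: str) -> str:
    """Reverse lookup used when the recipient taps a received message."""
    reverse = {v: k for k, v in {**DEFAULT_MAPPINGS, **PERSONAL_MAPPINGS}.items()}
    return " ".join(reverse.get(tok, tok) for tok in emoji_string.split())

add_personal_mapping("craving", "🍩")          # personalized override for "craving"
print(text_to_emoji("craving pizza tonight"))  # 🍩 🍕 🌙
print(emoji_to_text("🍩 🍕 🌙"))               # craving pizza tonight
```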
At block408, as shown inFIG.3D, the chosen emoji mapping now appears in box306and becomes part of the pair's shared vocabulary which is stored in library118. Both friends can view and modify the shared vocabulary in library118anytime, thereby providing a personalized and modifiable shared vocabulary. Referring toFIG.5, there is shown an example screen500of the chat interface200showing library118displayed on a client device display200, referred to in this disclosure as an Emotionary. This screen500shows the shared vocabulary of text-to-emoji mappings (from text strings to emoji strings) that a user and their partner has created over time. The library118serves as an emoji dictionary that both partners can contribute to and draw from in their communication. The on-the-fly text-to-emoji mapping algorithm400ofFIG.4uses this library118as described. There are two portions, the user's text-to-emoji mappings shown on top, and their partner's text-to-emoji mappings shown on bottom. The mappings can be sorted alphabetically or by creation date by the user. Users can add new mappings to the library118in two ways. The first way is via the “Add” button502on the screen, and the second way is through the display200itself, by simply changing any automatically generated text-to-emoji mapping. The combined library118allows users to utilize both their and their partner's mappings when typing messages. The emoji-first application116prioritizes a user's own library118during message translation whenever their partner has a competing mapping. FIG.6is a high-level functional block diagram of an example client device110including a client device that communicates via network102with server system108ofFIG.1. Display200is a touch screen type display, although other non-touch type displays can be used. Examples of touch screen type client devices110that may be used include (but are not limited to) a smart phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, eyewear, or other portable device. However, the structure and operation of the touch screen type client devices is provided by way of example, and the subject technology as described herein is not intended to be limited thereto. For purposes of this discussion,FIG.6therefore provides a block diagram illustration of the example client device110having a touch screen display for displaying content and receiving user input as (or as part of) the user interface. Client device110also includes a camera(s)670, such as visible light camera(s), and a microphone680. The activities that are the focus of discussions here involve the emoji-first messaging, and also the personalized library of emojis that are shared between two users of client devices110. The emoji-first application116and the library118may be stored in memory640for execution by CPU630, such as flash memory640A or RAM memory640B. As shown inFIG.6, the client device110includes at least one digital transceiver (XCVR)610, shown as WWAN XCVRs, for digital wireless communications via a wide area wireless mobile communication network102. The client device110also includes additional digital or analog transceivers, such as short range XCVRs620for short-range network communication, such as via NFC, VLC, DECT, ZigBee, Bluetooth™, or WiFi. 
For example, short range XCVRs620may take the form of any available two-way wireless local area network (WLAN) transceiver of a type that is compatible with one or more standard protocols of communication implemented in wireless local area networks, such as one of the Wi-Fi standards under IEEE 802.11, 4G LTE and 5G. To generate location coordinates for positioning of the client device110, the client device110can include a global positioning system (GPS) receiver (not shown). Alternatively, or additionally, the client device110can utilize either or both the short range XCVRs620and WWAN XCVRs610for generating location coordinates for positioning. For example, cellular network, WiFi, or Bluetooth™ based positioning systems can generate very accurate location coordinates, particularly when used in combination. Such location coordinates can be transmitted to the eyewear device over one or more network connections via XCVRs620. The transceivers610,620(network communication interface) conforms to one or more of the various digital wireless communication standards utilized by modern mobile networks. Examples of WWAN transceivers610include (but are not limited to) transceivers configured to operate in accordance with Code Division Multiple Access (CDMA) and 3rd Generation Partnership Project (3GPP) network technologies including, for example and without limitation, 3GPP type 2 (or 3GPP2) and LTE, at times referred to as “4G”, and 5G. For example, the transceivers610,620provide two-way wireless communication of information including digitized audio signals, still image and video signals, web page information for display as well as web related inputs, and various types of mobile message communications to/from the client device110for user identification strategies. Several of these types of communications through the transceivers610,620and a network, as discussed previously, relate to protocols and procedures in support of communications with the server system108for obtaining and storing friend device capabilities. Such communications, for example, may transport packet data via the short range XCVRs620over the wireless connections of network102to and from the server system108as shown inFIG.1. Such communications, for example, may also transport data utilizing IP packet data transport via the WWAN XCVRs610over the network (e.g., Internet)102shown inFIG.1. Both WWAN XCVRs610and short range XCVRs620connect through radio frequency (RF) send-and-receive amplifiers (not shown) to an associated antenna (not shown). The client device110further includes microprocessor630, shown as a CPU, sometimes referred to herein as the host controller. A processor is a circuit having elements structured and arranged to perform one or more processing functions, typically various data processing functions. Although discrete logic components could be used, the examples utilize components forming a programmable CPU. A microprocessor for example includes one or more integrated circuit (IC) chips incorporating the electronic elements to perform the functions of the CPU. The processor630, for example, may be based on any known or available microprocessor architecture, such as a Reduced Instruction Set Computing (RISC) using an ARM architecture, as commonly used today in client devices and other portable electronic devices. Other processor circuitry may be used to form the CPU630or processor hardware in smartphone, laptop computer, and tablet. 
The microprocessor630serves as a programmable host controller for the client device110by configuring the device to perform various operations, for example, in accordance with instructions or programming executable by processor630. For example, such operations may include various general operations of the client device110, as well as operations related to emoji-first messaging using emoji-first application116, and also personalized libraries118mapping emojis to text between two or more users. Although a processor may be configured by use of hardwired logic, typical processors in client devices are general processing circuits configured by execution of programming. The client device110includes a memory or storage device system, for storing data and programming. In the example, the memory system may include a flash memory640A and a random access memory (RAM)640B. The RAM640B serves as short term storage for instructions and data being handled by the processor630, e.g., as a working data processing memory. The flash memory640A typically provides longer term storage. Hence, in the example of client device110, the flash memory640A is used to store programming or instructions for execution by the processor630. Depending on the type of device, the client device110stores and runs a mobile operating system through which specific applications, including application114, are executed. Examples of mobile operating systems include Google Android®, Apple iOS® (I-Phone or iPad devices), Windows Mobile®, Amazon Fire OS®, RIM BlackBerry® operating system, or the like. The terms and expressions used herein are understood to have the ordinary meaning as is accorded to such terms and expressions with respect to their corresponding respective areas of inquiry and study except where specific meanings have otherwise been set forth herein. Relational terms such as first and second and the like may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “includes,” “including,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises or includes a list of elements or steps does not include only those elements or steps but may include other elements or steps not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “a” or “an” does not, without further constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various examples for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed examples require more features than are expressly recited in each claim. Rather, as the following claims reflect, the subject matter to be protected lies in less than all features of any single disclosed example. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter. The examples illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed.
Other examples may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various examples is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. Additional objects, advantages and novel features of the examples will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The objects and advantages of the present subject matter may be realized and attained by means of the methodologies, instrumentalities and combinations particularly pointed out in the appended claims. | 25,012 |
11861076 | DETAILED DESCRIPTION Various embodiments will be described with reference to details discussed below, and the accompanying drawings will illustrate the various embodiments. The following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of various embodiments. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments disclosed herein. Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in conjunction with the embodiment can be included in at least one embodiment. The appearances of the phrases “in one embodiment” and “an embodiment” in various places in the specification do not necessarily all refer to the same embodiment. References to an “operable connection” or “operably connected” mean that a particular device is able to communicate with one or more other devices. The devices themselves may be directly connected to one another or may be indirectly connected to one another through any number of intermediary devices, such as in a network topology. In general, embodiments disclosed herein relate to methods and systems for providing computer implemented services. To provide the computer implemented services, user input may be obtained. To obtain the user input, a human interface device may be used. The human interface device may be actuated by a user, and the actuations may be translated into magnetic fields detectable by a sensing system. The sensing system may sense the magnetic fields and obtain information reflecting changes in the position and/or orientation of a magnet of the human interface device that generates the magnetic fields. Thus, information reflecting actuations of the human interface device by the user may be encoded in the magnetic fields and may be sensed. The obtained information may then be used to identify, for example, user input provided by the user. For example, the information regarding changes in the position and/or orientation of the magnet may be translated into user input. The user input may then be used to drive computer implemented services. For example, the user input may be provided by the user to activate certain functionalities, change functionalities, terminate functionalities, and/or invoke desired activities by a data processing system. By using a magnet and mechanical linkage to the magnet, the human interface device may not need to be powered, may include fewer components thereby reducing the likelihood of component failures, may be made lighter/smaller thereby reducing loads placed on users of user input devices, etc. By doing so, a system in accordance with embodiments disclosed herein may have improved portability and usability when compared to other types of devices used to obtain user input that may be powered. Thus, embodiments disclosed herein may address, among others, the technical challenge of loads placed on users during acquisition of user input and mechanical or electrical failure of devices tasked with obtaining user input. In an embodiment, a human interface device is provided.
The human interface device may include a body movable through application of force by a user; a magnet positioned with the body, the magnet emanating a magnetic field distribution that extends into an ambient environment proximate to the human interface device; a button mechanically coupled to the magnet via a first mechanical linkage, the first mechanical linkage being adapted to rotate the magnet in a first plane when the button is actuated by the user; and a scroll control mechanically coupled to the magnet via a second mechanical linkage, the second mechanical linkage being adapted to rotate the magnet in a second plane when the scroll control is actuated by the user. The human interface device may be unpowered. The first plane and the second plane may not be coplanar or parallel. The first plane may be substantially perpendicular (e.g., within 5° of being perpendicular) to the second plane. The first plane may be substantially orthogonal (e.g., within 5° of being orthogonal) to the second plane. The first mechanical linkage may include a support element extending from the button to the body, the support element suspending the button above the body by a first distance, and the support element flexing when the button is actuated by the user to rotate the magnet in the first plane. The second mechanical linkage may include a cradle that houses the magnet; and a suspension element extending from the button toward the body by a second distance that is smaller than the first distance and positioned to suspend the cradle between the button and body. The scroll control may be directly attached to the cradle, and the suspension element flexing when the scroll control is actuated by the user to rotate the magnet in the second plane. The human interface device may also include a second button mechanically coupled to the magnet via the second mechanical linkage, the first mechanical linkage rotating the magnet in a first direction when the button is actuated and a second direction when the second button is actuated. The first mechanical linkage may be further adapted to return the magnet to a predetermined position while neither of the button and the second button are actuated. The second mechanical linkage may be further adapted to return the magnet to the predetermined position while the scroll control is not actuated. The button, the scroll control, and the second button may be positioned on a top surface of the human interface device. The suspension element may be adapted to flex to a first degree, the support element is adapted to flex to a second degree, and the first degree is larger than the second degree. The human interface device may further include an actuation element extending from the button toward the body; and a sensory feedback element positioned between the body and the actuation element, the actuation element adapted to: generate an auditory signal and/or haptic feedback when the suspension element flexes to the first degree, and limit an extent of rotation of the magnet in the first plane. The extent of rotation of the magnet in the second plane may be limited by an extent to which the scroll control is exposed above the button and the second button. In an embodiment, a user input system is provided. The user input system may include a human interface device as discussed above and a sensing system adapted to measure the magnetic field emanating from the magnet. In an embodiment, a data processing system is provided.
The data processing system may include a user input system as discussed above, a processor, and a memory coupled to the processor to store instructions, which when executed by the processor, cause the processor to perform operations for obtaining user input using data obtained from the sensing system. In an embodiment, a non-transitory media is provided. The non-transitory media may include instructions that when executed by a processor cause the processor to perform operations for obtaining user input using data obtained from the sensing system, as discussed above. Turning toFIG.1, a block diagram illustrating a system in accordance with an embodiment is shown. The system shown inFIG.1may provide computer implemented services. The computer implemented services may include any type and quantity of computer implemented services. For example, the computer implemented services may include data storage services, instant messaging services, database services, and/or any other type of service that may be implemented with a computing device. To provide the computer implemented services, user input may be obtained. The user input may indicate, for example, how the computer implemented services are to be provided. The user input may include any type and quantity of information. To obtain the user input, a user may perform physical actions such as, for example, pressing buttons, moving structures, etc. These physical actions may be sensed by various devices, and the sensing may be interpreted (e.g., translated) into the user input (e.g., data). However, sensing physical actions by a user may involve use of sensors and/or devices that may consume power. The weight of the devices and forces applied by sources of the consumed power (e.g., batteries, cords to power supplies, etc.) may place a load (e.g., mechanical) on the user attempting to perform the physical actions. The mechanical load may fatigue the user, reduce the portability of the devices (e.g., mouses), and/or may be undesirable for other reasons. In general, embodiments disclosed herein may provide methods, systems, and/or devices for obtaining user input and/or using the obtained user input to provide computer implemented services. To provide the computer implemented services, a system may include data processing system100. Data processing system100may include hardware components usable to provide the computer implemented services. For example, data processing system100may be implemented using a computing device such as a laptop computer, desktop computer, portable computer, and/or other types of computing devices. Data processing system100may host software that may use user input to provide the computer implemented services. For example, the software may provide user input fields and/or other elements through which the user may provide information to manage and/or use the computer implemented services provided by data processing system100. To obtain the information from the user, data processing system100may obtain signals and/or data from sensing system102(e.g., via operable connection106). Data processing system100may interpret (e.g., translate) the signals (e.g., may be analog, data processing system100may include an analog to digital converter) and/or data (e.g., digital data) to obtain the user input. Sensing system102may track (e.g., by sensing108) and/or provide information regarding tracking of human interface device104, and provide the signals and/or data to data processing system100.
A user may physically interact with human interface device104, thereby allowing the signals and/or data provided by sensing system102to include information regarding the physical actions of the user. For example, if a user moves human interface device104, sensing system102may track the change in position and/or motion of human interface device104and provide signals and/or data reflecting the changes in position and/or motion. Similarly, if a user actuates an actuatable portion (e.g., buttons) of human interface device, sensing system102may track the actuation of the actuatable portion and provide signals and/or data reflecting the actuation. To track human interface device104, sensing system102may include one or more sensors that sense a magnetic field emanating from human interface device104. The sensors may use the sensed magnetic field to track a location (absolute or relative) and orientation (absolute or relative) of a magnet embedded in human interface device104. The sensors may generate the signals and/or data provided by sensing system102to data processing system100. The sensors may sense the magnitude and/or direction of the magnetic field that passes proximate to each sensor. By knowing the relative placements of the sensors with respect to one another, the position and/or orientation of the magnet may be known (e.g., the magnetic field may be treated as emanating from a magnet with known dimensions and/or strength). Sensing system102may also include, for example, analog to digital converters, digital signal processing devices or other signal processing devices, and/or other devices for generating the signals and/or data based on information obtained via the sensors. Human interface device104may be implemented with a physical device that a user may actuate in one or more ways. For example, human interface device104may (i) be moveable, (ii) may include one or more buttons, (iii) may include one or more scroll controls, and/or (iv) may include other actuatable elements. Actuating human interface device104may change the orientation and/or position of the magnet with respect to the sensors of sensing system102. For example, when human interface device104is moved away from sensing system102, the strength of the magnetic field emanating from the magnet as sensed by sensors of sensing system102may decrease. Similarly, when buttons or other actuatable elements of human interface device104are actuated, the magnet may be rotated (e.g., in one or more planes) thereby changing the direction of the magnetic field sensed by sensors of sensing system102. Refer toFIGS.2A-2Cfor additional details regarding sensing of human interface device104. Human interface device104may be a passive device. For example, human interface device104may not consume power, include batteries or sensors, etc. Consequently, human interface device104may be of smaller size, lower weight, and/or may provide other advantages when compared to active devices such as a computer mouse. Refer toFIGS.2C-2Ofor additional details regarding human interface device104. Data processing system100may perform a lookup or other type of operation to translate the signals and/or data from sensing system102into user input. Once obtained, the user input may be used to drive downstream processes. When providing its functionality, data processing system100may perform all, or a portion, of the method illustrated inFIG.3.
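The lookup or translation step mentioned above can be illustrated with a short sketch that turns tracked magnet state into user-input events. The thresholds, event names, and simple decision rules below are assumptions chosen for illustration; the specification does not prescribe this particular translation.

```python
# Illustrative sketch: translating tracked magnet state (position plus
# rotation in two planes) into user-input events for downstream software.
from dataclasses import dataclass
from typing import List

@dataclass
class MagnetState:
    x: float                 # position on the sensing surface (arbitrary units)
    y: float
    tilt_plane1_deg: float   # rotation caused by pressing a button
    tilt_plane2_deg: float   # rotation caused by actuating the scroll control

CLICK_TILT_DEG = 5.0    # assumed minimum rotation to register a button press
SCROLL_TILT_DEG = 3.0   # assumed minimum rotation to register a scroll step

def translate(prev: MagnetState, curr: MagnetState) -> List[str]:
    """Produce a list of user-input events from two consecutive magnet states."""
    events = []
    dx, dy = curr.x - prev.x, curr.y - prev.y
    if dx or dy:
        events.append(f"cursor_move dx={dx:.2f} dy={dy:.2f}")
    if curr.tilt_plane1_deg >= CLICK_TILT_DEG > prev.tilt_plane1_deg:
        events.append("left_button_down")
    elif curr.tilt_plane1_deg <= -CLICK_TILT_DEG < prev.tilt_plane1_deg:
        events.append("right_button_down")
    if abs(curr.tilt_plane2_deg) >= SCROLL_TILT_DEG:
        events.append("scroll_up" if curr.tilt_plane2_deg > 0 else "scroll_down")
    return events

prev = MagnetState(0.0, 0.0, 0.0, 0.0)
curr = MagnetState(1.5, -0.4, 6.0, 0.0)
print(translate(prev, curr))  # ['cursor_move dx=1.50 dy=-0.40', 'left_button_down']
```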
Data processing system100may be implemented using a computing device (also referred to as a data processing system) such as a host or a server, a personal computer (e.g., desktops, laptops, and tablets), a “thin” client, a personal digital assistant (PDA), a Web enabled appliance, a mobile phone (e.g., Smartphone), an embedded system, local controllers, an edge node, and/or any other type of data processing device or system. For additional details regarding computing devices, refer toFIG.4. Any of the components illustrated inFIG.1may be operably connected to each other (and/or components not illustrated). For example, sensing system102may be operably connected to data processing system100via a wired (e.g., USB) or wireless connection. However, in some embodiments, human interface device104may not be operably connected to other devices (e.g., may be a passive device), but may be sensed by sensing system102via sensing108. For example, during sensing108, a static magnetic field emanating from human interface device104may be sensed by sensing system102. While illustrated inFIG.1as including a limited number of specific components, a system in accordance with an embodiment may include fewer, additional, and/or different components than those illustrated therein. To further clarify embodiments disclosed herein, diagrams illustrating sensing of human interface device104in accordance with an embodiment are shown inFIGS.2A-2C. Turning toFIG.2A, an isometric diagram of human interface device104and sensing system102in accordance with an embodiment is shown. To obtain user input, human interface device104may include body220, and a number of actuatable elements (e.g.,222-226). Body220may be implemented with a structure upon which other elements may be affixed. For example, body220may be implemented with a plastic injection molded component or other structure. Body220may have a flat bottom that may allow human interface device104to slide along a surface on which it is positioned. Thus, one form of actuation of human interface device104may be for a person to grip body220and apply force to it to move it along the surface (thereby repositioning it with respect to sensing elements of sensing system102, discussed below). To obtain user input (in addition to via repositioning), the actuatable elements may include buttons222-224and a scroll control226. Generally, the actuatable elements may be positioned on a top surface of human interface device104, but may be positioned elsewhere (e.g., on side surfaces). The actuatable elements may be actuatable by a person through application of force. Refer toFIGS.2H-2I,2M-2Ofor additional details regarding actuation of the actuatable elements by application of force. Buttons222-224may be implemented, for example, with surfaces that may be actuated through application of pressure downward. Application of the pressure may cause the button to move towards body220. A return mechanism may return the buttons to a resting position while force is not applied to them. Likewise, scroll control226may be implemented, for example, with a structural protrusion that may be actuated through application of pressure downward. In contrast to buttons222-224, scroll control226may be actuated differently through application of pressure to different portions of scroll control226. A return mechanism may return the scroll control226to a resting position while force is not applied to it.
Application of force to body220may reposition human interface device104with respect to sensing elements of sensing system102. In contrast, application of force to the actuatable elements may change an orientation of a magnet embedded inside of body220with respect to the sensing elements. For example, application of force to the respective buttons222-224may rotate the magnet forwards or backwards, respectively, in a first plane. Likewise, application of force to scroll control226may rotate the magnet forwards or backwards in a second plane, depending on where force is applied to scroll control226. The rotation and/or repositioning of the magnet may modify the magnetic field applied to the sensing elements of sensing system102. Refer toFIGS.2B-2Cfor additional details regarding the magnetic field emanating from human interface device104. Refer toFIG.2Dfor additional details regarding the magnet embedded in human interface device104. Like body220, the actuatable elements may generally be formed from plastic injection molded parts or other types of plastic and/or molded parts. To obtain user input, sensing system102may include any number of sensing elements (e.g.,202). The sensing elements may be sensors that monitor a magnitude and direction of a magnetic field, and generate signals or data to reflect these quantities. While not shown, sensing system102may include a signal processing chain (e.g., any number of signal conditioning and processing devices) that may condition and process the signals generated by the sensing elements to obtain information regarding the location and/or orientation of the magnet embedded in human interface device104. InFIG.2A, sensing system102is illustrated in the form of a pad or other structure upon which human interface device104may be positioned (the dashed line to the top left of the drawing indicates that the structure may continue on beyond that which is explicitly illustrated). However, sensing system102may be implemented with other types of structures. Additionally, while the sensing elements are illustrated with example positions, it will be appreciated that the sensing elements may be positioned differently without departing from embodiments disclosed herein. Turning toFIGS.2B-2C, diagrams illustrating a magnet and sensing element202in accordance with an embodiment are shown. As noted above, human interface device104may include magnet230. Magnet230may project a magnetic field. In these figures, the magnetic field is illustrated using lines with arrows superimposed over the midpoints of the lines. The direction of the arrow indicates the orientation of the field. As seen inFIG.2B, when magnet230is proximate (e.g., within a predetermined distance range, which may vary depending on the strength of magnet230and sensitivity level of sensing element202) to sensing element202, the magnetic field may be of sufficient strength to be measurable by sensing element202. Sensing element202may utilize any sensing technology to measure the magnitude and/or the orientation of the magnetic field at its location. Due to the field distribution of magnet230, the magnitude and orientation of the magnetic field at the location of sensing element202may be usable to identify, in part, the location and orientation of magnet230. For example, when magnet230is rotated as shown inFIG.2Cfrom the orientation as shown inFIG.2B, the direction and/or magnitude of the magnetic field at the location of sensing element202may change.
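As discussed above and elaborated in the next paragraph, measuring the field at several sensing elements with known placements allows the magnet's position and orientation to be inferred. The sketch below, which reuses the same hypothetical dipole model and sensor layout as the earlier example, frames that inverse step as a nonlinear least-squares fit with SciPy; it is an illustration only, and the parameterization, starting guess, and assumed magnet strength are not taken from the specification.

import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi

def dipole_field(magnet_pos, moment, sensor_pos):
    # Ideal point-dipole field at sensor_pos (same model as the earlier sketch).
    r = sensor_pos - magnet_pos
    dist = np.linalg.norm(r)
    r_hat = r / dist
    return MU0 / (4 * np.pi * dist**3) * (3 * r_hat * np.dot(moment, r_hat) - moment)

SENSORS = np.array([[0.00, 0.00, 0.0], [0.05, 0.00, 0.0],
                    [0.00, 0.05, 0.0], [0.05, 0.05, 0.0]])
MOMENT_MAG = 0.1  # assumed known magnet strength (A*m^2)

def residuals(params, measured):
    # params: magnet x, y, z plus tilt (theta) and azimuth (phi) of the moment.
    pos, theta, phi = params[:3], params[3], params[4]
    moment = MOMENT_MAG * np.array([np.sin(theta) * np.cos(phi),
                                    np.sin(theta) * np.sin(phi),
                                    np.cos(theta)])
    predicted = np.array([dipole_field(pos, moment, s) for s in SENSORS])
    return (predicted - measured).ravel()

# Synthesize "measurements" from a known pose, then recover that pose.
true_pos = np.array([0.02, 0.03, 0.01])
true_moment = MOMENT_MAG * np.array([np.sin(0.1), 0.0, np.cos(0.1)])  # slight tilt
measured = np.array([dipole_field(true_pos, true_moment, s) for s in SENSORS])

fit = least_squares(residuals, x0=[0.025, 0.025, 0.02, 0.05, 0.0], args=(measured,))
print("estimated position:", fit.x[:3])
print("estimated tilt (rad):", fit.x[3])

In practice such a fit would run continuously on streaming samples, seeded with the previous estimate, and its outputs (a position plus rotation angles) correspond to the quantities the data processing system translates into user input.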
By identifying the magnitude and orientation of the magnetic field at a number of locations (e.g., corresponding to different sensing elements), the position and orientation of magnet230may be identified. To utilize the location and orientation of the magnet embedded in human interface device104to obtain user input, magnet230may be mechanically coupled to the actuatable elements and body220. Turning toFIGS.2D-2O, diagrams illustrating mechanical coupling between magnet230and various portions of human interface device104in accordance with an embodiment are shown. InFIG.2D, a diagram of a portion of human interface device104in accordance with an embodiment is shown. The view may be looking upwards towards an underside of buttons222-224shown inFIG.2A. To mechanically couple magnet230to buttons222-224, scroll control226, and body220, human interface device104may include two mechanical linkages. A first mechanical linkage may connect magnet230to buttons222-224and scroll control226(not shown inFIG.2D) and a second mechanical linkage may connect the buttons222-224to body220(not shown inFIG.2D). The first mechanical linkage may include cradle232and suspension elements (e.g.,234). Cradle232may be implemented with a structure in which magnet230is positioned. For example, the structure may include two posts on opposite sides of magnet230. Magnet230may be fixedly attached (e.g., via adhesive or other means) to the posts. Each of the posts may be attached to a corresponding suspension element and the scroll control. For example, a top of each of the posts may be attached to scroll control226, and a bottom of each of the posts may be attached to a different suspension element. Suspension element234may be implemented with a post, bar, or other mechanical structure. The structure may extend from a bottom surface of one of the buttons by a first distance (e.g., that is less than a second distance over which support element236extends, discussed below). The extended end of the structure may attach to cradle232. Suspension element234may suspend cradle232, magnet230, and scroll control226with respect to buttons222-224and body220. Suspension element234may also facilitate rotation of magnet230in a first plane. For example, when force is applied to scroll control226, the force may be transmitted to the suspension elements attaching cradle232to the buttons. The force may cause the suspension elements to flex thereby allowing for rotation of cradle232and magnet230(attached to cradle232). Suspension element234may be formed with an elastic material, and may include specific mechanical features (e.g., thickness, relief elements, etc.) to facilitate the flex and automatic return to a neutral position (as illustrated inFIG.2D) of suspension element234. Consequently, when force is no longer applied to scroll control226, magnet230may be automatically returned to the neutral position (at least with respect to rotation in the first plane). The second mechanical linkage may include support element236. Support element236may be implemented with a post, bar, or other mechanical structure. The structure may extend from a bottom surface of one or both of buttons222-224by a second distance that is greater than the first distance. As will be discussed in greater detail below, the extended end of support element236may be fixed to body220thereby suspending buttons222-224, cradle232, magnet230, suspension element234, and scroll control226with respect to body220.
Support element236may also facilitate rotation of magnet230in a second plane. For example, when force is applied to one of buttons222-224, the force may be transmitted to the support elements. The force may cause the support elements to flex thereby allowing for rotation of cradle232and magnet230(attached to cradle232) in the second plane. The first plane and the second plane may be substantially (e.g., within a few degrees such as 3°) perpendicular or orthogonal to one another. Support element236may be formed with an elastic material, and may include specific mechanical features (e.g., thickness, relief elements, etc.) to facilitate the flex and automatic return to the neutral position (as illustrated inFIG.2D) of support element236. Consequently, when force is no longer applied to buttons222-224, magnet230may be automatically returned to the neutral position (at least with respect to rotation in the second plane). To limit the degree of rotation in the second plane and provide a user with sensory feedback for buttons222-224, actuation elements (e.g.,238) may be positioned with buttons222-224. The actuation elements may be implemented with a post, bar, or other mechanical structure. The structure may extend from a bottom surface of one of buttons222-224by a third distance that is less than the first distance and the second distance. As will be discussed in greater detail below, the actuation elements may be positioned with other structures to limit the degree of flex of the support elements and to generate auditory signals (e.g., clicking noises) for users of human interface device104. Haptic feedback may also be generated. For example, the haptic feedback may be sensed by the appendage used by the user to actuate the button. InFIG.2D, the structures positioned with buttons222-224have been illustrated using varying infill patterns. These infill patterns are maintained when these same structures are illustrated inFIGS.2E-2O. To further clarify the operation of human interface device104, cross section views of the device in accordance with an embodiment are shown inFIGS.2F-2I and2K-2O. Top views of human interface device104in accordance with an embodiment are shown inFIGS.2E and2Jto illustrate the locations of the planes in which the cross section views are taken. Turning toFIG.2E, a first top view diagram of human interface device104in accordance with an embodiment is shown. As seen inFIG.2E, the top of human interface device104may be substantially covered with buttons222-224and scroll control226. To actuate buttons222-224, force may be applied downward into the page on any portion of the respective button. The direction of rotation of magnet230may correspond to the respective buttons (e.g., opposite directions to one another). To actuate scroll control226, downward force may be applied to scroll control226. However, the location to which the force is applied may dictate the direction of the rotation of the magnet. With respect toFIG.2E, if force is applied to the top half of scroll control226, then magnet230may rotate in a first direction. In contrast, if force is applied to the bottom half of scroll control226, then magnet230may rotate in a second direction. InFIG.2E, two planes (i.e., Plane A and Plane B) are illustrated using respective dashed lines. The diagrams shown inFIGS.2F and2Hmay correspond to Plane A, while the diagrams shown inFIGS.2G and2Imay correspond to Plane B. Turning toFIG.2F, a first cross section diagram of human interface device104in accordance with an embodiment is shown.
InFIG.2F, magnet230is not part of the cross section. However, for illustrative purposes, the outline of magnet230is superimposed. As seen inFIG.2F, when positioned with body220, support element236may suspend buttons222-224and actuation elements (e.g.,238) above body220. Sensory feedback elements (e.g.,239) may be positioned between body220and corresponding actuation elements. As will be illustrated inFIG.2H, actuation of either button may cause a corresponding actuation element to contact one of the sensory feedback elements. The position of the sensory feedback elements may limit the degree of rotation of magnet230and cause sensory feedback element239to generate an auditory signal (e.g., a sound) when an actuation element contacts it. Sensory feedback element239may be implemented using a structure such as a noise making component that generates a sound when pressure is applied to one of its surfaces. The auditory signal may alert a user of human interface device104that sufficient force has been applied to a button for user input to be discerned by a data processing system. To position support element236, a positioning element237may be positioned with one end of support element236and body220. Positioning element237may be implemented, for example, with a portion of plastic or other material in which the end of support element236may be positioned. Turning toFIG.2G, a second cross section diagram of human interface device104in accordance with an embodiment is shown. As seen inFIG.2G, while supported by support element236, suspension elements (e.g.,234) may suspend cradle232, magnet230, and scroll control226above body220. Consequently, when force is applied to either button (e.g.,222,224), cradle232and magnet230may rotate (clockwise or counterclockwise, depending on which button is pressed). Turning toFIG.2H, a third cross section diagram of human interface device104in accordance with an embodiment is shown. The diagram shown inFIG.2Hmay be similar to that shown inFIG.2F. As seen inFIG.2H, when force is applied to button222, support element236may flex thereby allowing magnet230to rotate counterclockwise in this example. The direction of rotation may be clockwise while button224is pressed. However, the degree of rotation may be limited by sensory feedback element239and actuation element238. For example, the degree of rotation may be limited to 6°. When the limit is reached, sensory feedback element239may both prevent additional rotation and provide an auditory signal. Sensory feedback element239may also provide a second auditory signal when actuation element238rotates away from sensory feedback element239once pressure on button222is released. In this manner, two auditory signals may be provided to a user to guide use of human interface device104. Turning toFIG.2I, a fourth cross section diagram of human interface device104in accordance with an embodiment is shown. The diagram shown inFIG.2Imay be similar to that shown inFIG.2G. As seen inFIG.2I, when force is applied to button222, cradle232and magnet230may freely rotate without impinging on body220or other structures. However, as noted with respect toFIG.2H, the degree of rotation may be limited by sensory feedback element239and actuation element238. Turning toFIG.2J, a second top view diagram of human interface device104in accordance with an embodiment is shown. With respect toFIG.2E, human interface device104has been rotated 90° counterclockwise inFIG.2J.
InFIG.2J, two planes (i.e., Plane E and Plane F) are illustrated using respective dashed lines. The diagrams shown inFIGS.2K,2M-2Nmay correspond to Plane F, while the diagrams shown inFIGS.2L and2Omay correspond to Plane E. Plane E may be aligned with one instance of actuation element238while Plane F may be aligned with magnet230. While not in Plane E, the outline of magnet230has been added toFIGS.2L and2Ousing a dashed line for illustrative purposes. Turning toFIG.2K, a fifth cross section diagram of human interface device104in accordance with an embodiment is shown. As seen inFIG.2K, when positioned with body220, support element236may suspend buttons222-224, magnet230, cradle232, and scroll control226above body220. For example, scroll control226may be positioned on cradle232, and may extend above buttons222-224thereby allowing a user to apply pressure to it. Cradle232may be attached to the buttons via suspension elements (e.g.,234), which may be attached to respective buttons. By being suspended, magnet230may be free to rotate clockwise or counterclockwise depending on the portion of scroll control226to which force is applied. Turning toFIG.2L, a sixth cross section diagram of human interface device104in accordance with an embodiment is shown. As seen inFIG.2L, cradle232may be attached to the buttons via suspension elements (e.g.,234), which may be attached to respective buttons. Consequently, magnet230may be suspended via this mechanical linkage. Turning toFIG.2M, a seventh cross section diagram of human interface device104in accordance with an embodiment is shown. The diagram shown inFIG.2Mmay be similar to that shown inFIG.2K. As seen inFIG.2M, when force is applied to a front portion of scroll control226, suspension element234(shown inFIG.2O) may flex thereby allowing magnet230to rotate counterclockwise in this example. The direction of rotation may be clockwise if force is applied to the back side of scroll control226. The degree of rotation of magnet230may be limited by the surface of the buttons that may form the rest of the top surface of human interface device104. However, the degree of rotation in this plane may be greater than the degree of rotation in the plane shown inFIG.2G. For example, turning toFIG.2N, an eighth cross section diagram of human interface device104in accordance with an embodiment is shown. As seen inFIG.2N, the degree of rotation of magnet230may be greater in this plane than that shown inFIG.2G. For example, the degree of rotation may be up to 10°. As will be discussed with respect toFIG.3, the degree of rotation may be used to identify different types of user input that a user is attempting to convey through actuation of scroll control226. Additionally, as seen inFIG.2N, magnet230is suspended through suspension element234. Turning toFIG.2O, a ninth cross section diagram of human interface device104in accordance with an embodiment is shown. As seen inFIG.2O, when force is applied to scroll control226, suspension element234may flex thereby allowing cradle232and magnet230attached to it to rotate. As noted above, the degree of rotation in this plane may only be limited by the surface of the buttons (e.g.,222,224). Thus, magnet230may rotate up to, for example, 10° in this plane (in contrast to the rotation of 6° in the other plane). It will be appreciated that the extent of the rotation in each of the planes may vary without departing from embodiments disclosed herein.
WhileFIGS.2A-2Ohave been illustrated as including specific numbers and types of components, it will be appreciated that any of the devices depicted therein may include fewer, additional, and/or different components without departing from embodiments disclosed herein. As discussed above, the components ofFIG.1may perform various methods to provide computer implemented services using user input.FIG.3illustrates a method that may be performed by the components ofFIG.1. In the diagram discussed below and shown inFIG.3, any of the operations may be repeated, performed in different orders, and/or performed in parallel with, or in a manner partially overlapping in time with, other operations. Turning toFIG.3, a flow diagram illustrating a method of obtaining user input in accordance with an embodiment is shown. The method may be performed by data processing system100, sensing system102, human interface device104, and/or other components of the system ofFIG.1. At operation300, an orientation and/or position of a magnet in a human interface device is sensed. The orientation and/or position may be sensed by (i) obtaining measurements of a magnetic field emanating from the magnet, and (ii) computing the position and/or orientation based on the measurements. At operation302, a command is identified based on the orientation and/or position of the magnet. The command may be identified, for example, by comparing the position and/or orientation to a past position and/or orientation. The command may be identified by (i) identifying an orientation of the magnet in a first plane, (ii) identifying an orientation of the magnet in a second plane, and (iii) identifying the location of the magnet with respect to a sensing system. The orientation of the magnet in the first plane may be used to perform a lookup based on a degree and direction of rotation of the magnet in the first plane. For example, if positively rotated by an amount exceeding a threshold, then the command may be identified as a left click of a pointing device. In another example, if negatively rotated by the amount exceeding the threshold, then the command may be identified as a right click of the pointing device. The orientation of the magnet in the second plane may be used to perform a lookup based on a degree and direction of rotation of the magnet in the second plane. For example, if positively rotated by an amount exceeding a threshold, then the command may be identified as scrolling in a first direction and a rate of the scrolling may be identified (e.g., scaled) based on a degree of excess of the rotation beyond the threshold. In another example, if negatively rotated by an amount exceeding the threshold, then the command may be identified as scrolling in a second direction (opposite of the first, or another direction) and a rate of the scrolling may be identified (e.g., scaled) based on a degree of excess of the rotation beyond the threshold. The thresholds of rotation for the two planes may be similar or different. For example, the threshold for the second plane may be smaller than the first (e.g., thereby providing for a larger scaling range of the rate of scrolling), or may be larger (e.g., thereby limiting the scaling range of the rate of scrolling). The command may also be identified by, for example, using the position of the human interface device to identify a change in focus of the user (e.g., a mouse location on a display).
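A compact way to picture the lookup just described for operation 302 is the following Python sketch. It is an interpretation of the description rather than the patent's implementation; the threshold values and the linear scroll-rate scaling are assumptions chosen to mirror the examples in the text (a click plane limited to roughly 6° and a scroll plane allowing larger rotation).

def identify_command(rot_first_plane_deg, rot_second_plane_deg,
                     click_threshold_deg=5.0, scroll_threshold_deg=2.0,
                     scroll_rate_scale=0.5):
    # Translate magnet rotation (in degrees) in the two planes into commands.
    # First plane: positive rotation past the threshold -> left click,
    # negative rotation past the threshold -> right click.
    # Second plane: rotation past its threshold -> scrolling, with the rate
    # scaled by how far the rotation exceeds the threshold.
    commands = []
    if rot_first_plane_deg >= click_threshold_deg:
        commands.append(("left_click", None))
    elif rot_first_plane_deg <= -click_threshold_deg:
        commands.append(("right_click", None))
    excess = abs(rot_second_plane_deg) - scroll_threshold_deg
    if excess > 0:
        direction = "scroll_up" if rot_second_plane_deg > 0 else "scroll_down"
        commands.append((direction, scroll_rate_scale * excess))
    return commands

# Example: a fully pressed button (about 6 degrees) plus a slight scroll actuation.
print(identify_command(6.0, 3.5))   # [('left_click', None), ('scroll_up', 0.75)]
print(identify_command(-6.0, 0.0))  # [('right_click', None)]

Using separate thresholds per plane reflects the passage above: a smaller scroll threshold leaves more rotation range over which the scroll rate can be scaled.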
The combination of the focus of the user and the user input (e.g., based on the user clicking a button, depressing a scroll wheel, etc.) may then be used to identify, for example, a function of an application or other type of functionality to be initiated or otherwise performed. At operation304, the command is performed. The command may be performed, for example, by an operating system passing through or otherwise providing information regarding the command to an application or other consumer of the user input. The consumer may then take action based on the command. For example, a data processing system may host an operating system, drivers, and/or other executing entities that may take responsibility for translating signals/data from a sensing system into commands or other types of user input. The method may end following operation304. Thus, using the method illustrated inFIG.3, embodiments disclosed herein may facilitate obtaining user input and using the user input to provide computer implemented services. By obtaining the user input via a passive device (at least with respect to user input), a human interface device in accordance with embodiments disclosed herein may be of lower complexity thereby improving the likelihood of continued operation, may not be dependent on power sources, may not require as large of physical loads to be exerted by users, and may provide other benefits. Any of the components illustrated inFIGS.1-2Omay be implemented with one or more computing devices. Turning toFIG.4, a block diagram illustrating an example of a data processing system (e.g., a computing device) in accordance with an embodiment is shown. For example, system400may represent any of data processing systems described above performing any of the processes or methods described above. System400can include many different components. These components can be implemented as integrated circuits (ICs), portions thereof, discrete electronic devices, or other modules adapted to a circuit board such as a motherboard or add-in card of the computer system, or as components otherwise incorporated within a chassis of the computer system. Note also that system400is intended to show a high level view of many components of the computer system. However, it is to be understood that additional components may be present in certain implementations and furthermore, different arrangement of the components shown may occur in other implementations. System400may represent a desktop, a laptop, a tablet, a server, a mobile phone, a media player, a personal digital assistant (PDA), a personal communicator, a gaming device, a network router or hub, a wireless access point (AP) or repeater, a set-top box, or a combination thereof. Further, while only a single machine or system is illustrated, the term “machine” or “system” shall also be taken to include any collection of machines or systems that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. In one embodiment, system400includes processor401, memory403, and devices405-407via a bus or an interconnect410. Processor401may represent a single processor or multiple processors with a single processor core or multiple processor cores included therein. Processor401may represent one or more general-purpose processors such as a microprocessor, a central processing unit (CPU), or the like. 
More particularly, processor401may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processor401may also be one or more special-purpose processors such as an application specific integrated circuit (ASIC), a cellular or baseband processor, a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, a graphics processor, a network processor, a communications processor, a cryptographic processor, a co-processor, an embedded processor, or any other type of logic capable of processing instructions. Processor401, which may be a low power multi-core processor socket such as an ultra-low voltage processor, may act as a main processing unit and central hub for communication with the various components of the system. Such processor can be implemented as a system on chip (SoC). Processor401is configured to execute instructions for performing the operations discussed herein. System400may further include a graphics interface that communicates with optional graphics subsystem404, which may include a display controller, a graphics processor, and/or a display device. Processor401may communicate with memory403, which in one embodiment can be implemented via multiple memory devices to provide for a given amount of system memory. Memory403may include one or more volatile storage (or memory) devices such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or other types of storage devices. Memory403may store information including sequences of instructions that are executed by processor401, or any other device. For example, executable code and/or data of a variety of operating systems, device drivers, firmware (e.g., input output basic system or BIOS), and/or applications can be loaded in memory403and executed by processor401. An operating system can be any kind of operating systems, such as, for example, Windows® operating system from Microsoft®, Mac OS®/iOS® from Apple, Android® from Google®, Linux®, Unix®, or other real-time or embedded operating systems such as VxWorks. System400may further include IO devices such as devices (e.g.,405,406,407,408) including network interface device(s)405, optional input device(s)406, and other optional IO device(s)407. Network interface device(s)405may include a wireless transceiver and/or a network interface card (NIC). The wireless transceiver may be a WiFi transceiver, an infrared transceiver, a Bluetooth transceiver, a WiMax transceiver, a wireless cellular telephony transceiver, a satellite transceiver (e.g., a global positioning system (GPS) transceiver), or other radio frequency (RF) transceivers, or a combination thereof. The NIC may be an Ethernet card. Input device(s)406may include a mouse, a touch pad, a touch sensitive screen (which may be integrated with a display device of optional graphics subsystem404), a pointer device such as a stylus, and/or a keyboard (e.g., physical keyboard or a virtual keyboard displayed as part of a touch sensitive screen). For example, input device(s)406may include a touch screen controller coupled to a touch screen. 
The touch screen and touch screen controller can, for example, detect contact and movement or break thereof using any of a plurality of touch sensitivity technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with the touch screen. IO devices407may include an audio device. An audio device may include a speaker and/or a microphone to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and/or telephony functions. Other IO devices407may further include universal serial bus (USB) port(s), parallel port(s), serial port(s), a printer, a network interface, a bus bridge (e.g., a PCI-PCI bridge), sensor(s) (e.g., a motion sensor such as an accelerometer, gyroscope, a magnetometer, a light sensor, compass, a proximity sensor, etc.), or a combination thereof. IO device(s)407may further include an imaging processing subsystem (e.g., a camera), which may include an optical sensor, such as a charged coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, utilized to facilitate camera functions, such as recording photographs and video clips. Certain sensors may be coupled to interconnect410via a sensor hub (not shown), while other devices such as a keyboard or thermal sensor may be controlled by an embedded controller (not shown), dependent upon the specific configuration or design of system400. To provide for persistent storage of information such as data, applications, one or more operating systems and so forth, a mass storage (not shown) may also couple to processor401. In various embodiments, to enable a thinner and lighter system design as well as to improve system responsiveness, this mass storage may be implemented via a solid state device (SSD). However, in other embodiments, the mass storage may primarily be implemented using a hard disk drive (HDD) with a smaller amount of SSD storage to act as a SSD cache to enable non-volatile storage of context state and other such information during power down events so that a fast power up can occur on re-initiation of system activities. Also a flash device may be coupled to processor401, e.g., via a serial peripheral interface (SPI). This flash device may provide for non-volatile storage of system software, including a basic input/output software (BIOS) as well as other firmware of the system. Storage device408may include computer-readable storage medium409(also known as a machine-readable storage medium or a computer-readable medium) on which is stored one or more sets of instructions or software (e.g., processing module, unit, and/or processing module/unit/logic428) embodying any one or more of the methodologies or functions described herein. Processing module/unit/logic428may represent any of the components described above. Processing module/unit/logic428may also reside, completely or at least partially, within memory403and/or within processor401during execution thereof by system400, memory403and processor401also constituting machine-accessible storage media. Processing module/unit/logic428may further be transmitted or received over a network via network interface device(s)405. Computer-readable storage medium409may also be used to store some software functionalities described above persistently. 
While computer-readable storage medium409is shown in an exemplary embodiment to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The terms “computer-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of embodiments disclosed herein. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, or any other non-transitory machine-readable medium. Processing module/unit/logic428, components and other features described herein can be implemented as discrete hardware components or integrated in the functionality of hardware components such as ASICS, FPGAs, DSPs or similar devices. In addition, processing module/unit/logic428can be implemented as firmware or functional circuitry within hardware devices. Further, processing module/unit/logic428can be implemented in any combination hardware devices and software components. Note that while system400is illustrated with various components of a data processing system, it is not intended to represent any particular architecture or manner of interconnecting the components; as such details are not germane to embodiments disclosed herein. It will also be appreciated that network computers, handheld computers, mobile phones, servers, and/or other data processing systems which have fewer components or perhaps more components may also be used with embodiments disclosed herein. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as those set forth in the claims below, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. Embodiments disclosed herein also relate to an apparatus for performing the operations herein. Such a computer program is stored in a non-transitory computer readable medium. A non-transitory machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). 
For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices). The processes or methods depicted in the preceding figures may be performed by processing logic that comprises hardware (e.g. circuitry, dedicated logic, etc.), software (e.g., embodied on a non-transitory computer readable medium), or a combination of both. Although the processes or methods are described above in terms of some sequential operations, it should be appreciated that some of the operations described may be performed in a different order. Moreover, some operations may be performed in parallel rather than sequentially. Embodiments disclosed herein are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of embodiments disclosed herein. In the foregoing specification, embodiments have been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of the embodiments disclosed herein as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. | 53,382 |
11861077 | DESCRIPTION OF EMBODIMENTS The following description sets forth exemplary methods, parameters, and the like. It should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure but is instead provided as a description of exemplary embodiments. There is a need for electronic devices that provide efficient methods and interfaces for interacting with the devices without touching display screens or other physical input mechanisms. Such techniques can reduce the cognitive burden on a user who interacts with the devices, thereby enhancing productivity. Further, such techniques can reduce processor and battery power by allowing operations to be performed more quickly and efficiently. Below,FIGS.1A-1B,2,3,4A-4B, and5A-5Bprovide a description of exemplary devices with which a user interacts.FIGS.6A-6Sillustrate exemplary user interfaces for interacting with the devices.FIGS.12A-12Bare flow diagrams illustrating methods of performing one or more operations with the devices, in accordance with some embodiments. The user interfaces inFIGS.6A-6Sare used to illustrate the processes described below, including the processes inFIGS.12A-12B.FIGS.7A-7Qillustrate exemplary user interfaces for interacting with the devices.FIGS.13A-13Bare flow diagrams illustrating methods of performing one or more operations with the devices, in accordance with some embodiments. The user interfaces inFIGS.7A-7Qare used to illustrate the processes described below, including the processes inFIGS.13A-13B.FIGS.8A-8Iillustrate exemplary user interfaces for interacting with the devices.FIG.14is a flow diagram illustrating methods of performing one or more operations with the devices, in accordance with some embodiments. The user interfaces inFIGS.8A-8Iare used to illustrate the processes described below, including the processes inFIG.14.FIGS.9B-9Hillustrate exemplary user interfaces for interacting with the devices.FIG.15is a flow diagram illustrating methods of performing one or more operations with the devices, in accordance with some embodiments. The user interfaces inFIGS.9B-9Hare used to illustrate the processes described below, including the processes inFIG.15.FIGS.10A-10Pillustrate exemplary user interfaces for interacting with the devices.FIGS.16A-16Bare flow diagrams illustrating methods of performing one or more operations with the devices, in accordance with some embodiments. The user interfaces inFIGS.10A-10Pare used to illustrate the processes described below, including the processes inFIGS.16A-16B.FIGS.11A-11Dillustrate exemplary user interfaces for interacting with the devices.FIGS.17A-17Bare flow diagrams illustrating methods of performing one or more operations with the devices, in accordance with some embodiments. The user interfaces inFIGS.11A-11Dare used to illustrate the processes described below, including the processes inFIGS.17A-17B. Although the following description uses terms “first,” “second,” etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first touch could be termed a second touch, and, similarly, a second touch could be termed a first touch, without departing from the scope of the various described embodiments. The first touch and the second touch are both touches, but they are not the same touch.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context. Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touchpads), are, optionally, used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad). In the discussion that follows, an electronic device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the electronic device optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick. The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application. The various applications that are executed on the device optionally use at least one common physical user-interface device, such as the touch-sensitive surface. 
One or more functions of the touch-sensitive surface as well as corresponding information displayed on the device are, optionally, adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture (such as the touch-sensitive surface) of the device optionally supports the variety of applications with user interfaces that are intuitive and transparent to the user. Attention is now directed toward embodiments of portable devices with touch-sensitive displays.FIG.1Ais a block diagram illustrating portable multifunction device100with touch-sensitive display system112in accordance with some embodiments. Touch-sensitive display112is sometimes called a “touch screen” for convenience and is sometimes known as or called a “touch-sensitive display system.” Device100includes memory102(which optionally includes one or more computer-readable storage mediums), memory controller122, one or more processing units (CPUs)120, peripherals interface118, RF circuitry108, audio circuitry110, speaker111, microphone113, input/output (I/O) subsystem106, other input control devices116, and external port124. Device100optionally includes one or more optical sensors164. Device100optionally includes one or more contact intensity sensors165for detecting intensity of contacts on device100(e.g., a touch-sensitive surface such as touch-sensitive display system112of device100). Device100optionally includes one or more tactile output generators167for generating tactile outputs on device100(e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system112of device100or touchpad355of device300). These components optionally communicate over one or more communication buses or signal lines103. As used in the specification and claims, the term “intensity” of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface. The intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). Intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average) to determine an estimated force of a contact. Similarly, a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface. 
In some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). In some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of a contact as an attribute of a user input allows for user access to additional device functionality that may otherwise not be accessible by the user on a reduced-size device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or a physical/mechanical control such as a knob or a button). As used in the specification and claims, the term “tactile output” refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user's sense of touch. For example, in situations where the device or the component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a “down click” or “up click” of a physical actuator button. In some cases, a user will feel a tactile sensation such as an “down click” or “up click” even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as “roughness” of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an “up click,” a “down click,” “roughness”), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user. It should be appreciated that device100is only one example of a portable multifunction device, and that device100optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. 
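The substitute-measurement approach described above (using contact area or capacitance changes as a proxy for contact force) can be pictured as a conversion step followed by a threshold comparison. The Python sketch below is illustrative only; the linear mapping, coefficients, and threshold are assumptions rather than values from the specification, and a shipping device would rely on per-unit calibration data.

def estimated_pressure(contact_area_mm2, capacitance_delta_pf,
                       area_coeff=0.8, cap_coeff=1.5, offset=0.0):
    # Convert proxy measurements into an estimated pressure value.
    # A linear combination is assumed purely for illustration.
    return area_coeff * contact_area_mm2 + cap_coeff * capacitance_delta_pf + offset

def exceeds_intensity_threshold(contact_area_mm2, capacitance_delta_pf,
                                pressure_threshold=50.0):
    # Return True when the estimated pressure crosses the intensity threshold.
    return estimated_pressure(contact_area_mm2, capacitance_delta_pf) >= pressure_threshold

# Example: a firm press with a larger contact patch crosses the threshold.
print(exceeds_intensity_threshold(45.0, 12.0))  # True  (0.8*45 + 1.5*12 = 54.0)
print(exceeds_intensity_threshold(20.0, 5.0))   # False (0.8*20 + 1.5*5  = 23.5)

Equivalently, the proxy values could be compared directly against a threshold expressed in proxy units, which is the first alternative the passage above describes.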
The various components shown inFIG.1Aare implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application-specific integrated circuits. Memory102optionally includes high-speed random access memory and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller122optionally controls access to memory102by other components of device100. Peripherals interface118can be used to couple input and output peripherals of the device to CPU120and memory102. The one or more processors120run or execute various software programs and/or sets of instructions stored in memory102to perform various functions for device100and to process data. In some embodiments, peripherals interface118, CPU120, and memory controller122are, optionally, implemented on a single chip, such as chip104. In some other embodiments, they are, optionally, implemented on separate chips. RF (radio frequency) circuitry108receives and sends RF signals, also called electromagnetic signals. RF circuitry108converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry108optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. RE circuitry108optionally communicates with networks, such as the Internet, also referred to as the World Wide Web (WWW), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices by wireless communication. The RF circuitry108optionally includes well-known circuitry for detecting near field communication (NFC) fields, such as by a short-range communication radio. The wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document. Audio circuitry110, speaker111, and microphone113provide an audio interface between a user and device100. 
Audio circuitry110receives audio data from peripherals interface118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker111. Speaker111converts the electrical signal to human-audible sound waves. Audio circuitry110also receives electrical signals converted by microphone113from sound waves. Audio circuitry110converts the electrical signal to audio data and transmits the audio data to peripherals interface118for processing. Audio data is, optionally, retrieved from and/or transmitted to memory102and/or RF circuitry108by peripherals interface118. In some embodiments, audio circuitry110also includes a headset jack (e.g.,212,FIG.2). The headset jack provides an interface between audio circuitry110and removable audio input/output peripherals, such as output-only headphones or a headset with both output (e.g., a headphone for one or both ears) and input (e.g., a microphone). I/O subsystem106couples input/output peripherals on device100, such as touch screen112and other input control devices116, to peripherals interface118. I/O subsystem106optionally includes display controller156, optical sensor controller158, intensity sensor controller159, haptic feedback controller161, and one or more input controllers160for other input or control devices. The one or more input controllers160receive/send electrical signals from/to other input control devices116. The other input control devices116optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some alternate embodiments, input controller(s)160are, optionally, coupled to any (or none) of the following, a keyboard, an infrared port, a USB port, and a pointer device such as a mouse. The one or more buttons (e.g.,208,FIG.2) optionally include an up/down button for volume control of speaker111and/or microphone113. The one or more buttons optionally include a push button (e.g.,206,FIG.2). A quick press of the push button optionally disengages a lock of touch screen112or optionally begins a process that uses gestures on the touch screen to unlock the device, as described in U.S. patent application Ser. No. 11/322,549, “Unlocking a Device by Performing Gestures on an Unlock Image,” filed Dec. 23, 2005, U.S. Pat. No. 7,657,849, which is hereby incorporated by reference in its entirety. A longer press of the push button (e.g.,206) optionally turns power to device100on or off. The functionality of one or more of the buttons are, optionally, user-customizable. Touch screen112is used to implement virtual or soft buttons and one or more soft keyboards. Touch-sensitive display112provides an input interface and an output interface between the device and a user. Display controller156receives and/or sends electrical signals from/to touch screen112. Touch screen112displays visual output to the user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively termed “graphics”). In some embodiments, some or all of the visual output optionally corresponds to user-interface objects. Touch screen112has a touch-sensitive surface, sensor, or set of sensors that accepts input from the user based on haptic and/or tactile contact. 
Touch screen112and display controller156(along with any associated modules and/or sets of instructions in memory102) detect contact (and any movement or breaking of the contact) on touch screen112and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch screen112. In an exemplary embodiment, a point of contact between touch screen112and the user corresponds to a finger of the user. Touch screen112optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch screen112and display controller156optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen112. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone® and iPod Touch® from Apple Inc. of Cupertino, California. A touch-sensitive display in some embodiments of touch screen112is, optionally, analogous to the multi-touch sensitive touchpads described in the following U.S. Pat. No. 6,323,846 (Westerman et al.), U.S. Pat. No. 6,570,557 (Westerman et al.), and/or U.S. Pat. No. 6,677,932 (Westerman), and/or U.S. Patent Publication 2002/0015024A1, each of which is hereby incorporated by reference in its entirety. However, touch screen112displays visual output from device100, whereas touch-sensitive touchpads do not provide visual output. A touch-sensitive display in some embodiments of touch screen112is described in the following applications: (1) U.S. patent application Ser. No. 11/381,313, “Multipoint Touch Surface Controller,” filed May 2, 2006; (2) U.S. patent application Ser. No. 10/840,862, “Multipoint Touchscreen,” filed May 6, 2004; (3) U.S. patent application Ser. No. 10/903,964, “Gestures For Touch Sensitive Input Devices,” filed Jul. 30, 2004; (4) U.S. patent application Ser. No. 11/048,264, “Gestures For Touch Sensitive Input Devices,” filed Jan. 31, 2005; (5) U.S. patent application Ser. No. 11/038,590, “Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices,” filed Jan. 18, 2005; (6) U.S. patent application Ser. No. 11/228,758, “Virtual Input Device Placement On A Touch Screen User Interface,” filed Sep. 16, 2005; (7) U.S. patent application Ser. No. 11/228,700, “Operation Of A Computer With A Touch Screen Interface,” filed Sep. 16, 2005; (8) U.S. patent application Ser. No. 11/228,737, “Activating Virtual Keys Of A Touch-Screen Virtual Keyboard,” filed Sep. 16, 2005; and (9) U.S. patent application Ser. No. 11/367,749, “Multi-Functional Hand-Held Device,” filed Mar. 3, 2006. All of these applications are incorporated by reference herein in their entirety. Touch screen112optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen has a video resolution of approximately 160 dpi. The user optionally makes contact with touch screen112using any suitable object or appendage, such as a stylus, a finger, and so forth. 
In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user. In some embodiments, in addition to the touch screen, device100optionally includes a touchpad (not shown) for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad is, optionally, a touch-sensitive surface that is separate from touch screen112or an extension of the touch-sensitive surface formed by the touch screen. Device100also includes power system162for powering the various components. Power system162optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)) and any other components associated with the generation, management and distribution of power in portable devices. Device100optionally also includes one or more optical sensors164.FIG.1Ashows an optical sensor coupled to optical sensor controller158in I/O subsystem106. Optical sensor164optionally includes charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistors. Optical sensor164receives light from the environment, projected through one or more lenses, and converts the light to data representing an image. In conjunction with imaging module143(also called a camera module), optical sensor164optionally captures still images or video. In some embodiments, an optical sensor is located on the back of device100, opposite touch screen display112on the front of the device so that the touch screen display is enabled for use as a viewfinder for still and/or video image acquisition. In some embodiments, an optical sensor is located on the front of the device so that the user's image is, optionally, obtained for video conferencing while the user views the other video conference participants on the touch screen display. In some embodiments, the position of optical sensor164can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that a single optical sensor164is used along with the touch screen display for both video conferencing and still and/or video image acquisition. Device100optionally also includes one or more contact intensity sensors165.FIG.1Ashows a contact intensity sensor coupled to intensity sensor controller159in I/O subsystem106. Contact intensity sensor165optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor165receives contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some embodiments, at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system112). 
In some embodiments, at least one contact intensity sensor is located on the back of device100, opposite touch screen display112, which is located on the front of device100. Device100optionally also includes one or more proximity sensors166.FIG.1Ashows proximity sensor166coupled to peripherals interface118. Alternately, proximity sensor166is, optionally, coupled to input controller160in I/O subsystem106. Proximity sensor166optionally performs as described in U.S. patent application Ser. No. 11/241,839, “Proximity Detector In Handheld Device”; Ser. No. 11/240,788, “Proximity Detector In Handheld Device”; Ser. No. 11/620,702, “Using Ambient Light Sensor To Augment Proximity Sensor Output”; Ser. No. 11/586,862, “Automated Response To And Sensing Of User Activity In Portable Devices”; and Ser. No. 11/638,251, “Methods And Systems For Automatic Configuration Of Peripherals,” which are hereby incorporated by reference in their entirety. In some embodiments, the proximity sensor turns off and disables touch screen112when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call). Device100optionally also includes one or more tactile output generators167.FIG.1Ashows a tactile output generator coupled to haptic feedback controller161in I/O subsystem106. Tactile output generator167optionally includes one or more electroacoustic devices such as speakers or other audio components and/or electromechanical devices that convert energy into linear motion such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts electrical signals into tactile outputs on the device). Tactile output generator167receives tactile feedback generation instructions from haptic feedback module133and generates tactile outputs on device100that are capable of being sensed by a user of device100. In some embodiments, at least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system112) and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device100) or laterally (e.g., back and forth in the same plane as a surface of device100). In some embodiments, at least one tactile output generator is located on the back of device100, opposite touch screen display112, which is located on the front of device100. Device100optionally also includes one or more accelerometers168.FIG.1Ashows accelerometer168coupled to peripherals interface118. Alternately, accelerometer168is, optionally, coupled to an input controller160in I/O subsystem106. Accelerometer168optionally performs as described in U.S. Patent Publication No. 20050190059, “Acceleration-based Theft Detection System for Portable Electronic Devices,” and U.S. Patent Publication No. 20060017692, “Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer,” both of which are incorporated by reference herein in their entirety. In some embodiments, information is displayed on the touch screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. 
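As a rough illustration of the portrait/landscape decision described above, the sketch below derives a coarse orientation from a single three-axis accelerometer sample by comparing the gravity components along the device's x and y axes. The type names, the threshold, and the tie-breaking rule are assumptions made for the example, not details taken from this description.

```swift
import Foundation

// Illustrative sketch only: derives a coarse display orientation from a
// three-axis accelerometer sample by comparing the gravity components on the
// device's x and y axes. The threshold and type names are assumptions.
enum DisplayOrientation {
    case portrait
    case landscape
}

struct AccelerometerSample {
    let x: Double
    let y: Double
    let z: Double
}

func orientation(for sample: AccelerometerSample,
                 previous: DisplayOrientation) -> DisplayOrientation {
    // When the device lies nearly flat, x and y are both small; keep the
    // previous orientation instead of flapping between the two views.
    let planarMagnitude = (sample.x * sample.x + sample.y * sample.y).squareRoot()
    guard planarMagnitude > 0.3 else { return previous }

    // Gravity dominates along y when the device is upright (portrait) and
    // along x when it is turned on its side (landscape).
    return abs(sample.y) >= abs(sample.x) ? .portrait : .landscape
}

// Example: device held upright, so gravity is mostly along -y.
let upright = AccelerometerSample(x: 0.05, y: -0.98, z: 0.10)
print(orientation(for: upright, previous: .portrait)) // portrait
```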
Device100optionally includes, in addition to accelerometer(s)168, a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device100. In some embodiments, the software components stored in memory102include operating system126, communication module (or set of instructions)128, contact/motion module (or set of instructions)130, graphics module (or set of instructions)132, text input module (or set of instructions)134, Global Positioning System (GPS) module (or set of instructions)135, and applications (or sets of instructions)136. Furthermore, in some embodiments, memory102(FIG.1A) or370(FIG.3) stores device/global internal state157, as shown inFIGS.1A and3. Device/global internal state157includes one or more of: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views or other information occupy various regions of touch screen display112; sensor state, including information obtained from the device's various sensors and input control devices116; and location information concerning the device's location and/or attitude. Operating system126(e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components. Communication module128facilitates communication with other devices over one or more external ports124and also includes various software components for handling data received by RF circuitry108and/or external port124. External port124(e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod® (trademark of Apple Inc.) devices. Contact/motion module130optionally detects contact with touch screen112(in conjunction with display controller156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module130includes various software components for performing various operations related to detection of contact, such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module130receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. 
These operations are, optionally, applied to single contacts (e.g., one finger contacts) or to multiple simultaneous contacts (e.g., “multitouch”/multiple finger contacts). In some embodiments, contact/motion module130and display controller156detect contact on a touchpad. In some embodiments, contact/motion module130uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether a user has “clicked” on an icon). In some embodiments, at least a subset of the intensity thresholds are determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device100). For example, a mouse “click” threshold of a trackpad or touch screen display can be set to any of a large range of predefined threshold values without changing the trackpad or touch screen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click “intensity” parameter). Contact/motion module130optionally detects a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, a gesture is, optionally, detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (liftoff) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (liftoff) event. Graphics module132includes various known software components for rendering and displaying graphics on touch screen112or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual property) of graphics that are displayed. As used herein, the term “graphics” includes any object that can be displayed to a user, including, without limitation, text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations, and the like. In some embodiments, graphics module132stores data representing graphics to be used. Each graphic is, optionally, assigned a corresponding code. Graphics module132receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller156. Haptic feedback module133includes various software components for generating instructions used by tactile output generator(s)167to produce tactile outputs at one or more locations on device100in response to user interactions with device100. 
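To make the tap and swipe contact patterns described above concrete, the following sketch classifies a finger-down/finger-drag/finger-up sequence as either a tap or a swipe. The event names, the 10-point slop distance, and the classification rule are assumptions chosen for this example; they are not values from this description.

```swift
import Foundation

// Illustrative sketch: classify a touch sequence as a tap or a swipe from its
// finger-down / finger-drag / finger-up contact pattern. Thresholds and the
// distance metric are assumptions for the example.
struct TouchEvent {
    enum Phase { case fingerDown, fingerDrag, fingerUp }
    let phase: Phase
    let x: Double
    let y: Double
}

enum Gesture { case tap, swipe, unknown }

func classify(_ events: [TouchEvent], tapSlop: Double = 10.0) -> Gesture {
    guard let down = events.first, down.phase == .fingerDown,
          let up = events.last, up.phase == .fingerUp else { return .unknown }

    // A tap lifts off at (substantially) the same position as the finger-down
    // event; a swipe travels farther and includes dragging sub-events.
    let distance = ((up.x - down.x) * (up.x - down.x) +
                    (up.y - down.y) * (up.y - down.y)).squareRoot()
    let dragged = events.dropFirst().dropLast().contains { $0.phase == .fingerDrag }
    return (distance <= tapSlop && !dragged) ? .tap : .swipe
}

let tap = [TouchEvent(phase: .fingerDown, x: 100, y: 200),
           TouchEvent(phase: .fingerUp, x: 102, y: 201)]
let swipe = [TouchEvent(phase: .fingerDown, x: 100, y: 200),
             TouchEvent(phase: .fingerDrag, x: 160, y: 205),
             TouchEvent(phase: .fingerUp, x: 220, y: 210)]
print(classify(tap))   // tap
print(classify(swipe)) // swipe
```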
Text input module134, which is, optionally, a component of graphics module132, provides soft keyboards for entering text in various applications (e.g., contacts137, e-mail140, IM141, browser147, and any other application that needs text input). GPS module135determines the location of the device and provides this information for use in various applications (e.g., to telephone module138for use in location-based dialing; to camera module143as picture/video metadata; and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets). Applications136optionally include the following modules (or sets of instructions), or a subset or superset thereof:
Contacts module137(sometimes called an address book or contact list);
Telephone module138;
Video conference module139;
E-mail client module140;
Instant messaging (IM) module141;
Workout support module142;
Camera module143for still and/or video images;
Image management module144;
Video player module;
Music player module;
Browser module147;
Calendar module148;
Widget modules149, which optionally include one or more of: weather widget149-1, stocks widget149-2, calculator widget149-3, alarm clock widget149-4, dictionary widget149-5, and other widgets obtained by the user, as well as user-created widgets149-6;
Widget creator module150for making user-created widgets149-6;
Search module151;
Video and music player module152, which merges video player module and music player module;
Notes module153;
Map module154; and/or
Online video module155.
Examples of other applications136that are, optionally, stored in memory102include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication. In conjunction with touch screen112, display controller156, contact/motion module130, graphics module132, and text input module134, contacts module137is, optionally, used to manage an address book or contact list (e.g., stored in application internal state192of contacts module137in memory102or memory370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es) or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers or e-mail addresses to initiate and/or facilitate communications by telephone138, video conference module139, e-mail140, or IM141; and so forth. In conjunction with RF circuitry108, audio circuitry110, speaker111, microphone113, touch screen112, display controller156, contact/motion module130, graphics module132, and text input module134, telephone module138is, optionally, used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contacts module137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed. As noted above, the wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies. 
In conjunction with RF circuitry108, audio circuitry110, speaker111, microphone113, touch screen112, display controller156, optical sensor164, optical sensor controller158, contact/motion module130, graphics module132, text input module134, contacts module137, and telephone module138, video conference module139includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions. In conjunction with RF circuitry108, touch screen112, display controller156, contact/motion module130, graphics module132, and text input module134, e-mail client module140includes executable instructions to create, send, receive, and manage e-mail in response to user instructions. In conjunction with image management module144, e-mail client module140makes it very easy to create and send e-mails with still or video images taken with camera module143. In conjunction with RF circuitry108, touch screen112, display controller156, contact/motion module130, graphics module132, and text input module134, the instant messaging module141includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages or using XMPP, SIMPLE, or IMPS for Internet-based instant messages), to receive instant messages, and to view received instant messages. In some embodiments, transmitted and/or received instant messages optionally include graphics, photos, audio files, video files and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS). As used herein, “instant messaging” refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS). In conjunction with RF circuitry108, touch screen112, display controller156, contact/motion module130, graphics module132, text input module134, GPS module135, map module154, and music player module, workout support module142includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals); communicate with workout sensors (sports devices); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data. In conjunction with touch screen112, display controller156, optical sensor(s)164, optical sensor controller158, contact/motion module130, graphics module132, and image management module144, camera module143includes executable instructions to capture still images or video (including a video stream) and store them into memory102, modify characteristics of a still image or video, or delete a still image or video from memory102. In conjunction with touch screen112, display controller156, contact/motion module130, graphics module132, text input module134, and camera module143, image management module144includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images. 
In conjunction with RF circuitry108, touch screen112, display controller156, contact/motion module130, graphics module132, and text input module134, browser module147includes executable instructions to browse the Internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages. In conjunction with RF circuitry108, touch screen112, display controller156, contact/motion module130, graphics module132, text input module134, e-mail client module140, and browser module147, calendar module148includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) in accordance with user instructions. In conjunction with RF circuitry108, touch screen112, display controller156, contact/motion module130, graphics module132, text input module134, and browser module147, widget modules149are mini-applications that are, optionally, downloaded and used by a user (e.g., weather widget149-1, stocks widget149-2, calculator widget149-3, alarm clock widget149-4, and dictionary widget149-5) or created by the user (e.g., user-created widget149-6). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets). In conjunction with RF circuitry108, touch screen112, display controller156, contact/motion module130, graphics module132, text input module134, and browser module147, the widget creator module150is, optionally, used by a user to create widgets (e.g., turning a user-specified portion of a web page into a widget). In conjunction with touch screen112, display controller156, contact/motion module130, graphics module132, and text input module134, search module151includes executable instructions to search for text, music, sound, image, video, and/or other files in memory102that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions. In conjunction with touch screen112, display controller156, contact/motion module130, graphics module132, audio circuitry110, speaker111, RF circuitry108, and browser module147, video and music player module152includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present, or otherwise play back videos (e.g., on touch screen112or on an external, connected display via external port124). In some embodiments, device100optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.). In conjunction with touch screen112, display controller156, contact/motion module130, graphics module132, and text input module134, notes module153includes executable instructions to create and manage notes, to-do lists, and the like in accordance with user instructions. 
In conjunction with RF circuitry108, touch screen112, display controller156, contact/motion module130, graphics module132, text input module134, GPS module135, and browser module147, map module154is, optionally, used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data on stores and other points of interest at or near a particular location, and other location-based data) in accordance with user instructions. In conjunction with touch screen112, display controller156, contact/motion module130, graphics module132, audio circuitry110, speaker111, RF circuitry108, text input module134, e-mail client module140, and browser module147, online video module155includes instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen or on an external, connected display via external port124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, instant messaging module141, rather than e-mail client module140, is used to send a link to a particular online video. Additional description of the online video application can be found in U.S. Provisional Patent Application No. 60/936,562, “Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos,” filed Jun. 20, 2007, and U.S. patent application Ser. No. 11/968,067, “Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos,” filed Dec. 31, 2007, the contents of which are hereby incorporated by reference in their entirety. Each of the above-identified modules and applications corresponds to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments. For example, video player module is, optionally, combined with music player module into a single module (e.g., video and music player module152,FIG.1A). In some embodiments, memory102optionally stores a subset of the modules and data structures identified above. Furthermore, memory102optionally stores additional modules and data structures not described above. In some embodiments, device100is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device100, the number of physical input control devices (such as push buttons, dials, and the like) on device100is, optionally, reduced. The predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally include navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device100to a main, home, or root menu from any user interface that is displayed on device100. In such embodiments, a “menu button” is implemented using a touchpad. In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad. 
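The touchpad-as-menu-button behavior described above can be pictured as popping a navigation stack back to its root whenever the touchpad is touched, so the device returns to the home menu from whatever user interface is displayed. The stack model and names in the sketch below are illustrative assumptions only.

```swift
import Foundation

// Sketch of the "menu button implemented using a touchpad" behavior described
// above: a touch on the touchpad returns the device to the home menu from any
// user interface. The navigation-stack model and names are assumptions.
struct NavigationStack {
    private(set) var views: [String]

    init(root: String) {
        views = [root]
    }

    mutating func push(_ view: String) {
        views.append(view)
    }

    // Navigate to the main, home, or root menu from any displayed interface.
    mutating func popToRoot() {
        views.removeLast(views.count - 1)
    }

    var current: String { views[views.count - 1] }
}

var navigation = NavigationStack(root: "home menu")
navigation.push("Mail")
navigation.push("Mail > message")
navigation.popToRoot() // what a touch on the touchpad "menu button" would trigger
print(navigation.current) // home menu
```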
FIG.1Bis a block diagram illustrating exemplary components for event handling in accordance with some embodiments. In some embodiments, memory102(FIG.1A) or370(FIG.3) includes event sorter170(e.g., in operating system126) and a respective application136-1(e.g., any of the aforementioned applications137-151,155,380-390). Event sorter170receives event information and determines the application136-1and application view191of application136-1to which to deliver the event information. Event sorter170includes event monitor171and event dispatcher module174. In some embodiments, application136-1includes application internal state192, which indicates the current application view(s) displayed on touch-sensitive display112when the application is active or executing. In some embodiments, device/global internal state157is used by event sorter170to determine which application(s) is (are) currently active, and application internal state192is used by event sorter170to determine application views191to which to deliver event information. In some embodiments, application internal state192includes additional information, such as one or more of: resume information to be used when application136-1resumes execution, user interface state information that indicates information being displayed or that is ready for display by application136-1, a state queue for enabling the user to go back to a prior state or view of application136-1, and a redo/undo queue of previous actions taken by the user. Event monitor171receives event information from peripherals interface118. Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display112, as part of a multi-touch gesture). Peripherals interface118transmits information it receives from I/O subsystem106or a sensor, such as proximity sensor166, accelerometer(s)168, and/or microphone113(through audio circuitry110). Information that peripherals interface118receives from I/O subsystem106includes information from touch-sensitive display112or a touch-sensitive surface. In some embodiments, event monitor171sends requests to the peripherals interface118at predetermined intervals. In response, peripherals interface118transmits event information. In other embodiments, peripherals interface118transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration). In some embodiments, event sorter170also includes a hit view determination module172and/or an active event recognizer determination module173. Hit view determination module172provides software procedures for determining where a sub-event has taken place within one or more views when touch-sensitive display112displays more than one view. Views are made up of controls and other elements that a user can see on the display. Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application. 
For example, the lowest level view in which a touch is detected is, optionally, called the hit view, and the set of events that are recognized as proper inputs are, optionally, determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture. Hit view determination module172receives information related to sub-events of a touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module172identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (e.g., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit view is identified by the hit view determination module172, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view. Active event recognizer determination module173determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module173determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module173determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views. Event dispatcher module174dispatches the event information to an event recognizer (e.g., event recognizer180). In embodiments including active event recognizer determination module173, event dispatcher module174delivers the event information to an event recognizer determined by active event recognizer determination module173. In some embodiments, event dispatcher module174stores in an event queue the event information, which is retrieved by a respective event receiver182. In some embodiments, operating system126includes event sorter170. Alternatively, application136-1includes event sorter170. In yet other embodiments, event sorter170is a stand-alone module, or a part of another module stored in memory102, such as contact/motion module130. In some embodiments, application136-1includes a plurality of event handlers190and one or more application views191, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view191of the application136-1includes one or more event recognizers180. Typically, a respective application view191includes a plurality of event recognizers180. In other embodiments, one or more of event recognizers180are part of a separate module, such as a user interface kit (not shown) or a higher level object from which application136-1inherits methods and other properties. In some embodiments, a respective event handler190includes one or more of: data updater176, object updater177, GUI updater178, and/or event data179received from event sorter170. Event handler190optionally utilizes or calls data updater176, object updater177, or GUI updater178to update the application internal state192. 
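Returning to the hit-view rule described above (the lowest view in the hierarchy whose area contains the initiating sub-event), the following sketch walks a small view tree and returns the deepest view containing a given point. The View type, the use of absolute frames, and the traversal order are assumptions made for this example.

```swift
import Foundation

// Sketch of hit-view determination: pick the lowest (deepest) view whose
// bounds contain the point where the initiating sub-event occurred.
final class View {
    let name: String
    let frame: (x: Double, y: Double, width: Double, height: Double) // absolute coordinates (an assumption)
    let subviews: [View]

    init(_ name: String,
         frame: (x: Double, y: Double, width: Double, height: Double),
         subviews: [View] = []) {
        self.name = name
        self.frame = frame
        self.subviews = subviews
    }

    func contains(_ point: (x: Double, y: Double)) -> Bool {
        return point.x >= frame.x && point.x < frame.x + frame.width &&
               point.y >= frame.y && point.y < frame.y + frame.height
    }
}

// Returns the deepest view containing the point, or nil if none does.
func hitView(in root: View, at point: (x: Double, y: Double)) -> View? {
    guard root.contains(point) else { return nil }
    // Prefer a subview hit over the parent: the hit view is the lowest view
    // in the hierarchy that should handle the sub-event.
    for subview in root.subviews {
        if let deeper = hitView(in: subview, at: point) { return deeper }
    }
    return root
}

let keyboard = View("keyboard", frame: (0, 300, 320, 180),
                    subviews: [View("key-A", frame: (10, 310, 30, 40))])
let content = View("content", frame: (0, 0, 320, 480), subviews: [keyboard])

print(hitView(in: content, at: (15, 320))?.name ?? "none") // key-A
print(hitView(in: content, at: (5, 100))?.name ?? "none")  // content
```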
Alternatively, one or more of the application views191include one or more respective event handlers190. Also, in some embodiments, one or more of data updater176, object updater177, and GUI updater178are included in a respective application view191. A respective event recognizer180receives event information (e.g., event data179) from event sorter170and identifies an event from the event information. Event recognizer180includes event receiver182and event comparator184. In some embodiments, event recognizer180also includes at least a subset of metadata183, and event delivery instructions188(which optionally include sub-event delivery instructions). Event receiver182receives event information from event sorter170. The event information includes information about a sub-event, for example, a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as location of the sub-event. When the sub-event concerns motion of a touch, the event information optionally also includes speed and direction of the sub-event. In some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device. Event comparator184compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator184includes event definitions186. Event definitions186contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others. In some embodiments, sub-events in an event (187) include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event 1 (187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first liftoff (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second liftoff (touch end) for a predetermined phase. In another example, the definition for event 2 (187-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display112, and liftoff of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers190. In some embodiments, event definition187includes a definition of an event for a respective user-interface object. In some embodiments, event comparator184performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display112, when a touch is detected on touch-sensitive display112, event comparator184performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler190, the event comparator uses the result of the hit test to determine which event handler190should be activated. 
For example, event comparator184selects an event handler associated with the sub-event and the object triggering the hit test. In some embodiments, the definition for a respective event (187) also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer's event type. When a respective event recognizer180determines that the series of sub-events do not match any of the events in event definitions186, the respective event recognizer180enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture. In some embodiments, a respective event recognizer180includes metadata183with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata183includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata183includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy. In some embodiments, a respective event recognizer180activates event handler190associated with an event when one or more particular sub-events of an event are recognized. In some embodiments, a respective event recognizer180delivers event information associated with the event to event handler190. Activating an event handler190is distinct from sending (and deferred sending) sub-events to a respective hit view. In some embodiments, event recognizer180throws a flag associated with the recognized event, and event handler190associated with the flag catches the flag and performs a predefined process. In some embodiments, event delivery instructions188include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process. In some embodiments, data updater176creates and updates data used in application136-1. For example, data updater176updates the telephone number used in contacts module137, or stores a video file used in video player module. In some embodiments, object updater177creates and updates objects used in application136-1. For example, object updater177creates a new user-interface object or updates the position of a user-interface object. GUI updater178updates the GUI. For example, GUI updater178prepares display information and sends it to graphics module132for display on a touch-sensitive display. In some embodiments, event handler(s)190includes or has access to data updater176, object updater177, and GUI updater178. In some embodiments, data updater176, object updater177, and GUI updater178are included in a single module of a respective application136-1or application view191. In other embodiments, they are included in two or more software modules. 
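As an illustration of how an event recognizer might compare incoming sub-events against the double-tap definition above (touch begin, liftoff, touch begin, liftoff, each within a predetermined interval), here is a minimal state-machine sketch. The state names and the 0.3-second interval are assumptions chosen for the example, not values from this description.

```swift
import Foundation

// Sketch of a recognizer that matches sub-events against a double-tap
// definition and ends in a recognized or failed state.
enum SubEvent { case touchBegin, touchEnd, touchMove, touchCancel }
enum RecognizerState { case possible, recognized, failed }

final class DoubleTapRecognizer {
    private let expected: [SubEvent] = [.touchBegin, .touchEnd, .touchBegin, .touchEnd]
    private let maxInterval: TimeInterval = 0.3 // assumed "predetermined phase" length
    private var matched = 0
    private var lastTimestamp: TimeInterval?
    private(set) var state: RecognizerState = .possible

    func handle(_ subEvent: SubEvent, at timestamp: TimeInterval) {
        guard state == .possible else { return }

        // Too slow between sub-events: enter the failed state and disregard
        // subsequent sub-events of this touch-based gesture.
        if let last = lastTimestamp, timestamp - last > maxInterval {
            state = .failed
            return
        }
        // A sub-event that is not the next one in the definition also fails.
        guard subEvent == expected[matched] else {
            state = .failed
            return
        }

        matched += 1
        lastTimestamp = timestamp
        if matched == expected.count {
            state = .recognized // an associated event handler would be activated here
        }
    }
}

let recognizer = DoubleTapRecognizer()
let sequence: [(SubEvent, TimeInterval)] =
    [(.touchBegin, 0.00), (.touchEnd, 0.08), (.touchBegin, 0.20), (.touchEnd, 0.28)]
for (subEvent, time) in sequence {
    recognizer.handle(subEvent, at: time)
}
print(recognizer.state) // recognized
```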
It shall be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices100with input devices, not all of which are initiated on touch screens. For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc. on touchpads; pen stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized. FIG.2illustrates a portable multifunction device100having a touch screen112in accordance with some embodiments. The touch screen optionally displays one or more graphics within user interface (UI)200. In this embodiment, as well as others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers202(not drawn to scale in the figure) or one or more styluses203(not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward), and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with device100. In some implementations or circumstances, inadvertent contact with a graphic does not select the graphic. For example, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application when the gesture corresponding to selection is a tap. Device100optionally also includes one or more physical buttons, such as "home" or menu button204. As described previously, menu button204is, optionally, used to navigate to any application136in a set of applications that are, optionally, executed on device100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on touch screen112. In some embodiments, device100includes touch screen112, menu button204, push button206for powering the device on/off and locking the device, volume adjustment button(s)208, subscriber identity module (SIM) card slot210, headset jack212, and docking/charging external port124. Push button206is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In an alternative embodiment, device100also accepts verbal input for activation or deactivation of some functions through microphone113. Device100also, optionally, includes one or more contact intensity sensors165for detecting intensity of contacts on touch screen112and/or one or more tactile output generators167for generating tactile outputs for a user of device100. FIG.3is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. Device300need not be portable. 
In some embodiments, device300is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child's learning toy), a gaming system, or a control device (e.g., a home or industrial controller). Device300typically includes one or more processing units (CPUs)310, one or more network or other communications interfaces360, memory370, and one or more communication buses320for interconnecting these components. Communication buses320optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Device300includes input/output (I/O) interface330comprising display340, which is typically a touch screen display. I/O interface330also optionally includes a keyboard and/or mouse (or other pointing device)350and touchpad355, tactile output generator357for generating tactile outputs on device300(e.g., similar to tactile output generator(s)167described above with reference toFIG.1A), sensors359(e.g., optical, acceleration, proximity, touch-sensitive, and/or contact intensity sensors similar to contact intensity sensor(s)165described above with reference toFIG.1A). Memory370includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory370optionally includes one or more storage devices remotely located from CPU(s)310. In some embodiments, memory370stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory102of portable multifunction device100(FIG.1A), or a subset thereof. Furthermore, memory370optionally stores additional programs, modules, and data structures not present in memory102of portable multifunction device100. For example, memory370of device300optionally stores drawing module380, presentation module382, word processing module384, website creation module386, disk authoring module388, and/or spreadsheet module390, while memory102of portable multifunction device100(FIG.1A) optionally does not store these modules. Each of the above-identified elements inFIG.3is, optionally, stored in one or more of the previously mentioned memory devices. Each of the above-identified modules corresponds to a set of instructions for performing a function described above. The above-identified modules or programs (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments. In some embodiments, memory370optionally stores a subset of the modules and data structures identified above. Furthermore, memory370optionally stores additional modules and data structures not described above. Attention is now directed towards embodiments of user interfaces that are, optionally, implemented on, for example, portable multifunction device100. FIG.4Aillustrates an exemplary user interface for a menu of applications on portable multifunction device100in accordance with some embodiments. Similar user interfaces are, optionally, implemented on device300. 
In some embodiments, user interface400includes the following elements, or a subset or superset thereof:
Signal strength indicator(s)402for wireless communication(s), such as cellular and Wi-Fi signals;
Time404;
Bluetooth indicator405;
Battery status indicator406;
Tray408with icons for frequently used applications, such as:
  Icon416for telephone module138, labeled "Phone," which optionally includes an indicator414of the number of missed calls or voicemail messages;
  Icon418for e-mail client module140, labeled "Mail," which optionally includes an indicator410of the number of unread e-mails;
  Icon420for browser module147, labeled "Browser;" and
  Icon422for video and music player module152, also referred to as iPod (trademark of Apple Inc.) module152, labeled "iPod;" and
Icons for other applications, such as:
  Icon424for IM module141, labeled "Messages;"
  Icon426for calendar module148, labeled "Calendar;"
  Icon428for image management module144, labeled "Photos;"
  Icon430for camera module143, labeled "Camera;"
  Icon432for online video module155, labeled "Online Video;"
  Icon434for stocks widget149-2, labeled "Stocks;"
  Icon436for map module154, labeled "Maps;"
  Icon438for weather widget149-1, labeled "Weather;"
  Icon440for alarm clock widget149-4, labeled "Clock;"
  Icon442for workout support module142, labeled "Workout Support;"
  Icon444for notes module153, labeled "Notes;" and
  Icon446for a settings application or module, labeled "Settings," which provides access to settings for device100and its various applications136.
It should be noted that the icon labels illustrated inFIG.4Aare merely exemplary. For example, icon422for video and music player module152is labeled "Music" or "Music Player." Other labels are, optionally, used for various application icons. In some embodiments, a label for a respective application icon includes a name of an application corresponding to the respective application icon. In some embodiments, a label for a particular application icon is distinct from a name of an application corresponding to the particular application icon. FIG.4Billustrates an exemplary user interface on a device (e.g., device300,FIG.3) with a touch-sensitive surface451(e.g., a tablet or touchpad355,FIG.3) that is separate from the display450(e.g., touch screen display112). Device300also, optionally, includes one or more contact intensity sensors (e.g., one or more of sensors359) for detecting intensity of contacts on touch-sensitive surface451and/or one or more tactile output generators357for generating tactile outputs for a user of device300. Although some of the examples that follow will be given with reference to inputs on touch screen display112(where the touch-sensitive surface and the display are combined), in some embodiments, the device detects inputs on a touch-sensitive surface that is separate from the display, as shown inFIG.4B. In some embodiments, the touch-sensitive surface (e.g.,451inFIG.4B) has a primary axis (e.g.,452inFIG.4B) that corresponds to a primary axis (e.g.,453inFIG.4B) on the display (e.g.,450). In accordance with these embodiments, the device detects contacts (e.g.,460and462inFIG.4B) with the touch-sensitive surface451at locations that correspond to respective locations on the display (e.g., inFIG.4B,460corresponds to468and462corresponds to470). 
In this way, user inputs (e.g., contacts460and462, and movements thereof) detected by the device on the touch-sensitive surface (e.g.,451inFIG.4B) are used by the device to manipulate the user interface on the display (e.g.,450inFIG.4B) of the multifunction device when the touch-sensitive surface is separate from the display. It should be understood that similar methods are, optionally, used for other user interfaces described herein. Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures), it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse-based input or stylus input). For example, a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice are, optionally, used simultaneously, or a mouse and finger contacts are, optionally, used simultaneously. FIG.5Aillustrates exemplary personal electronic device500. Device500includes body502. In some embodiments, device500can include some or all of the features described with respect to devices100and300(e.g.,FIGS.1A-4B). In some embodiments, device500has touch-sensitive display screen504, hereafter touch screen504. Alternatively, or in addition to touch screen504, device500has a display and a touch-sensitive surface. As with devices100and300, in some embodiments, touch screen504(or the touch-sensitive surface) optionally includes one or more intensity sensors for detecting intensity of contacts (e.g., touches) being applied. The one or more intensity sensors of touch screen504(or the touch-sensitive surface) can provide output data that represents the intensity of touches. The user interface of device500can respond to touches based on their intensity, meaning that touches of different intensities can invoke different user interface operations on device500. Exemplary techniques for detecting and processing touch intensity are found, for example, in related applications: International Patent Application Serial No. PCT/US2013/040061, titled “Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application,” filed May 8, 2013, published as WIPO Publication No. WO/2013/169849, and International Patent Application Serial No. PCT/US2013/069483, titled “Device, Method, and Graphical User Interface for Transitioning Between Touch Input to Display Output Relationships,” filed Nov. 11, 2013, published as WIPO Publication No. WO/2014/105276, each of which is hereby incorporated by reference in their entirety. In some embodiments, device500has one or more input mechanisms506and508. Input mechanisms506and508, if included, can be physical. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, device500has one or more attachment mechanisms. 
Such attachment mechanisms, if included, can permit attachment of device500with, for example, hats, eyewear, earrings, necklaces, shirts, jackets, bracelets, watch straps, chains, trousers, belts, shoes, purses, backpacks, and so forth. These attachment mechanisms permit device500to be worn by a user. FIG.5Bdepicts exemplary personal electronic device500. In some embodiments, device500can include some or all of the components described with respect toFIGS.1A,1B, and3. Device500has bus512that operatively couples I/O section514with one or more computer processors516and memory518. I/O section514can be connected to display504, which can have touch-sensitive component522and, optionally, intensity sensor524(e.g., contact intensity sensor). In addition, I/O section514can be connected with communication unit530for receiving application and operating system data, using Wi-Fi, Bluetooth, near field communication (NFC), cellular, and/or other wireless communication techniques. Device500can include input mechanisms506and/or508. Input mechanism506is, optionally, a rotatable input device or a depressible and rotatable input device, for example. Input mechanism508is, optionally, a button, in some examples. Input mechanism508is, optionally, a microphone, in some examples. Personal electronic device500optionally includes various sensors, such as GPS sensor532, accelerometer534, directional sensor540(e.g., compass), gyroscope536, motion sensor538, and/or a combination thereof, all of which can be operatively connected to I/O section514. Memory518of personal electronic device500can include one or more non-transitory computer-readable storage mediums, for storing computer-executable instructions, which, when executed by one or more computer processors516, for example, can cause the computer processors to perform the techniques described below, including processes1200-1700(FIGS.12A-17B). A computer-readable storage medium can be any medium that can tangibly contain or store computer-executable instructions for use by or in connection with the instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium can include, but is not limited to, magnetic, optical, and/or semiconductor storages. Examples of such storage include magnetic disks, optical discs based on CD, DVD, or Blu-ray technologies, as well as persistent solid-state memory such as flash, solid-state drives, and the like. Personal electronic device500is not limited to the components and configuration ofFIG.5B, but can include other or additional components in multiple configurations. As used here, the term "affordance" refers to a user-interactive graphical user interface object that is, optionally, displayed on the display screen of devices100,300, and/or500(FIGS.1A,3, and5A-5B). For example, an image (e.g., icon), a button, and text (e.g., hyperlink) each optionally constitute an affordance. As used herein, the term "focus selector" refers to an input element that indicates a current part of a user interface with which a user is interacting. 
In some implementations that include a cursor or other location marker, the cursor acts as a “focus selector” so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad355inFIG.3or touch-sensitive surface451inFIG.4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch screen display (e.g., touch-sensitive display system112inFIG.1Aor touch screen112inFIG.4A) that enables direct interaction with user interface elements on the touch screen display, a detected contact on the touch screen acts as a “focus selector” so that when an input (e.g., a press input by the contact) is detected on the touch screen display at a location of a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, focus is moved from one region of a user interface to another region of the user interface without corresponding movement of a cursor or movement of a contact on a touch screen display (e.g., by using a tab key or arrow keys to move focus from one button to another button); in these implementations, the focus selector moves in accordance with movement of focus between different regions of the user interface. Without regard to the specific form taken by the focus selector, the focus selector is generally the user interface element (or contact on a touch screen display) that is controlled by the user so as to communicate the user's intended interaction with the user interface (e.g., by indicating, to the device, the element of the user interface with which the user is intending to interact). For example, the location of a focus selector (e.g., a cursor, a contact, or a selection box) over a respective button while a press input is detected on the touch-sensitive surface (e.g., a touchpad or touch screen) will indicate that the user is intending to activate the respective button (as opposed to other user interface elements shown on a display of the device). As used in the specification and claims, the term “characteristic intensity” of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is, optionally, based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10 seconds) relative to a predefined event (e.g., after detecting the contact, prior to detecting liftoff of the contact, before or after detecting a start of movement of the contact, prior to detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). 
A characteristic intensity of a contact is, optionally, based on one or more of a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at the half maximum of the intensities of the contact, a value at the 90 percent maximum of the intensities of the contact, or the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by a user. For example, the set of one or more intensity thresholds optionally includes a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold and does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than being used to determine whether to perform a first operation or a second operation. In some embodiments, a portion of a gesture is identified for purposes of determining a characteristic intensity. For example, a touch-sensitive surface optionally receives a continuous swipe contact transitioning from a start location and reaching an end location, at which point the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end location is, optionally, based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some embodiments, a smoothing algorithm is, optionally, applied to the intensities of the swipe contact prior to determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining a characteristic intensity. The intensity of a contact on the touch-sensitive surface is, optionally, characterized relative to one or more intensity thresholds, such as a contact-detection intensity threshold, a light press intensity threshold, a deep press intensity threshold, and/or one or more other intensity thresholds. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a trackpad. 
In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact-detection intensity threshold below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures. An increase of characteristic intensity of the contact from an intensity below the light press intensity threshold to an intensity between the light press intensity threshold and the deep press intensity threshold is sometimes referred to as a “light press” input. An increase of characteristic intensity of the contact from an intensity below the deep press intensity threshold to an intensity above the deep press intensity threshold is sometimes referred to as a “deep press” input. An increase of characteristic intensity of the contact from an intensity below the contact-detection intensity threshold to an intensity between the contact-detection intensity threshold and the light press intensity threshold is sometimes referred to as detecting the contact on the touch-surface. A decrease of characteristic intensity of the contact from an intensity above the contact-detection intensity threshold to an intensity below the contact-detection intensity threshold is sometimes referred to as detecting liftoff of the contact from the touch-surface. In some embodiments, the contact-detection intensity threshold is zero. In some embodiments, the contact-detection intensity threshold is greater than zero. In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting the respective press input performed with a respective contact (or a plurality of contacts), where the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or plurality of contacts) above a press-input intensity threshold. In some embodiments, the respective operation is performed in response to detecting the increase in intensity of the respective contact above the press-input intensity threshold (e.g., a “down stroke” of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press-input threshold (e.g., an “up stroke” of the respective press input). 
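As an illustrative sketch only, and not a description of any particular embodiment, the characteristic intensity computation and threshold comparison described above can be outlined in code. All names, threshold values, and the choice of Swift are assumptions made for illustration; the mean of the smoothed samples is used here as the characteristic intensity, although a maximum or top-10-percentile value could be substituted as noted above.

```swift
import Foundation

// Illustrative sketch (hypothetical names and values, not the patented implementation).
struct IntensityThresholds {
    var contactDetection = 0.05   // below this, the contact is not detected
    var lightPress = 0.40
    var deepPress = 0.80
}

enum PressKind { case noContact, contactOnly, lightPress, deepPress }

// Optional unweighted sliding-average smoothing of the raw intensity samples.
func smoothed(_ samples: [Double], window: Int = 3) -> [Double] {
    guard window > 1, samples.count >= window else { return samples }
    return samples.indices.map { i in
        let start = max(0, i - window + 1)
        let slice = samples[start...i]
        return slice.reduce(0, +) / Double(slice.count)
    }
}

// Characteristic intensity taken here as the mean of the smoothed samples.
func characteristicIntensity(of samples: [Double]) -> Double {
    let values = smoothed(samples)
    guard !values.isEmpty else { return 0 }
    return values.reduce(0, +) / Double(values.count)
}

// Comparison against the set of intensity thresholds described above.
func classify(_ intensity: Double, using thresholds: IntensityThresholds) -> PressKind {
    switch intensity {
    case ..<thresholds.contactDetection: return .noContact
    case ..<thresholds.lightPress:       return .contactOnly
    case ..<thresholds.deepPress:        return .lightPress
    default:                             return .deepPress
    }
}

// Example: classify a window of intensity samples collected for a contact.
let kind = classify(characteristicIntensity(of: [0.1, 0.45, 0.5, 0.42]),
                    using: IntensityThresholds())
print(kind)
```

A hysteresis release threshold of the kind described in the next paragraph could be added by comparing against a lower threshold when deciding whether the press has ended.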
In some embodiments, the device employs intensity hysteresis to avoid accidental inputs sometimes termed “jitter,” where the device defines or selects a hysteresis intensity threshold with a predefined relationship to the press-input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press-input intensity threshold or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press-input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the hysteresis intensity threshold that corresponds to the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., an “up stroke” of the respective press input). Similarly, in some embodiments, the press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press-input intensity threshold and, optionally, a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity, and the respective operation is performed in response to detecting the press input (e.g., the increase in intensity of the contact or the decrease in intensity of the contact, depending on the circumstances). For ease of explanation, the descriptions of operations performed in response to a press input associated with a press-input intensity threshold or in response to a gesture including the press input are, optionally, triggered in response to detecting either: an increase in intensity of a contact above the press-input intensity threshold, an increase in intensity of a contact from an intensity below the hysteresis intensity threshold to an intensity above the press-input intensity threshold, a decrease in intensity of the contact below the press-input intensity threshold, and/or a decrease in intensity of the contact below the hysteresis intensity threshold corresponding to the press-input intensity threshold. Additionally, in examples where an operation is described as being performed in response to detecting a decrease in intensity of a contact below the press-input intensity threshold, the operation is, optionally, performed in response to detecting a decrease in intensity of the contact below a hysteresis intensity threshold corresponding to, and lower than, the press-input intensity threshold. Attention is now directed towards embodiments of user interfaces (“UI”) and associated processes that are implemented on an electronic device, such as portable multifunction device100, device300, or device500. FIGS.6A-6Sillustrate exemplary user interfaces for interacting with an electronic device without touching a display screen or other physical input mechanism, in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes inFIGS.12A-12B. In particular,FIGS.6A-6Sillustrate exemplary user interfaces for responding to an incoming telephone call with an electronic device500. The electronic device500includes a display screen504and a tilt sensor, among other elements which can be found above and/or as discussed in reference toFIG.5A. 
The display screen504can be a touch-sensitive display screen, and the tilt sensor can be an accelerometer534, directional sensor540(e.g., compass), gyroscope536, motion sensor538, and/or a combination thereof. In the present example, device500is a wearable device on a user's wrist, such as a smart watch. As shown inFIG.6A, an incoming call notification602is initially displayed when the incoming telephone call is received. In addition, an answer call affordance604and a decline call affordance606are displayed. Throughout the sequence of interactions shown inFIGS.6A-6G, the answer call affordance604or the decline call affordance606can be touched by the user to perform their respective operations with the electronic device500(e.g., answering the incoming call or declining the incoming telephone call, respectively). As shown inFIG.6A, electronic device500is worn on the user's left wrist and is being held in a position such that display screen504is directly visible to the user's eyes (while being substantially perpendicular to the ground), such as is typical for users when they are checking the time. As shown inFIG.6B, the incoming call notification602is replaced with an incoming call track608. In some embodiments, the incoming call track608is displayed a predetermined time after initially receiving the incoming telephone call and/or in response to a user action. For instance, in some embodiments, the incoming call track608is displayed in response to the user lifting their arm into a raised position where the display screen504is visible to the user. The incoming call track608includes a right track segment610and a left track segment612. The right and left track segments610and612share a center segment of the incoming call track608. A graphical object614is displayed at an initial location on the center segment of the incoming call track608. In some embodiments, the graphical object614is a virtual representation of a physical object (e.g., a ball). From the shared center segment of the incoming call track608, the right track segment610leads to the answer call affordance604, and the left track segment612leads to the decline call affordance606. A first demarcation616is displayed at the end of the right track segment610proximate to the answer call affordance604, and a second demarcation618is displayed at the end of the left track segment612proximate to the decline call affordance606. As shown inFIG.6C, the orientation of the electronic device500is changed as a result of the user rotating their wrist away from their body (e.g., the electronic device500is tilted up such that the bottom of the display screen504is moved upward relative to the top of the display screen504). In response to this change in orientation of the electronic device500, the graphical object614moves along the center segment of the incoming call track608toward the top of the display screen504. In some embodiments, the movement of the graphical object614between the different locations shown inFIGS.6B-6Gis animated. The animation corresponds to a simulated physical movement of the graphical object614rolling along the incoming call track608. For instance, in some embodiments, the acceleration and velocity of the graphical object614as it moves along the incoming call track608is representative of how a physical ball would roll along a physical track being held in the same orientation as the electronic device500. 
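As an illustrative sketch only, the simulated rolling behavior described above can be approximated by integrating a tilt-dependent acceleration along the track on each animation frame. The structure, names, and constants below are assumptions for illustration and not a description of any particular embodiment; in practice the tilt angle would come from the tilt sensor (e.g., accelerometer534and/or gyroscope536).

```swift
import Foundation

// Illustrative sketch (hypothetical names and constants, not the patented implementation).
// Progress along one segment of the track is advanced each frame from the device tilt,
// so the graphical object accelerates the way a ball would roll on a tilted ramp.
struct RollingObject {
    var position: Double = 0        // distance traveled along the current track segment
    var velocity: Double = 0
    let segmentLength: Double
    let gravity: Double = 9.8
    let damping: Double = 0.9       // keeps the simulated ball from oscillating indefinitely

    // `tiltRadians` is the device tilt along the segment's axis (e.g., pitch for the
    // center segment, roll for the left/right segments), as reported by the tilt sensor.
    mutating func step(tiltRadians: Double, dt: Double) {
        let acceleration = gravity * sin(tiltRadians)
        velocity = (velocity + acceleration * dt) * damping
        position += velocity * dt
        // The object waits at either edge of the segment until the orientation changes.
        if position < 0 { position = 0; velocity = 0 }
        if position > segmentLength { position = segmentLength; velocity = 0 }
    }

    var reachedEnd: Bool { position >= segmentLength }
}

// Example: tilting up by about 15 degrees for one second of 60 Hz animation frames.
var ball = RollingObject(segmentLength: 100)
for _ in 0..<60 { ball.step(tiltRadians: .pi / 12, dt: 1.0 / 60.0) }
print(ball.position, ball.reachedEnd)
```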
While the graphical object614is displayed at the location shown inFIG.6C, if the user rotates their wrist back toward their body (e.g., the electronic device500is tilted down such that the top of the display screen504is moved upward relative to the bottom of the display screen504), then the graphical object614moves back toward its initial location on the center segment of the incoming call track608as shown inFIG.6B. Alternatively, in some embodiments, the graphical object614remains at the furthest location it reached in the center segment of the incoming call track608, such as shown inFIG.6C(e.g., the graphical object614does not lose progress along the incoming call track608). Furthermore, while the graphical object614is displayed at the location shown inFIG.6C, if the user maintains the same orientation of the electronic device500or further rotates their wrist away from their body (e.g., the electronic device500is tilted up further such that the bottom of the display screen504is further moved upward relative to the top of the display screen504), then the graphical object614continues to move toward the top of the display screen504. In some embodiments, other orientations of the electronic device500(e.g., tilting to the left or right) do not have an effect on the movement of the graphical object614. As shown inFIG.6D, the orientation of the electronic device500is further changed as a result of the user further rotating their wrist away from their body (e.g., the electronic device500is tilted up further such that the bottom of the display screen504is further moved upward relative to the top of the display screen504). In response to this orientation of the electronic device500, the graphical object614continues to move toward the top of the display screen504until it encounters an upper edge of the incoming call track608. From this upper edge location, the graphical object614cannot continue along the incoming call track608without the orientation of the electronic device500being changed to a different orientation (e.g., tilted to the left, right, or down). While the graphical object614is displayed at the location shown inFIG.6D, if the user rotates their wrist back toward their body (e.g., the electronic device500is tilted down such that the top of the display screen504is moved upward relative to the bottom of the display screen504), then the graphical object614moves back toward its initial location on the center segment of the incoming call track608as shown inFIG.6B. Alternatively, in some embodiments, the graphical object614remains at the upper edge of the incoming call track608, as shown inFIG.6D(e.g., the graphical object614does not lose progress along the incoming call track608). Furthermore, while the graphical object614is displayed at the location shown inFIG.6D, if the user then changes the angle of their arm/hand to tilt the electronic device500to the right (e.g., the left side of the display screen504is moved upward relative to the right side of the display screen504), then the graphical object moves toward the right side of the display screen504along right track segment610. In addition, if the user then changes the angle of their arm/hand to tilt the electronic device500to the left (e.g., the right side of the display screen504is moved upward relative to the left side of the display screen504), then the graphical object moves toward the left side of the display screen504along left track segment612. 
In some embodiments, other orientations of the electronic device500(e.g., tilting further up) do not have an effect on the movement of the graphical object614. As shown inFIG.6E, after the user rotates their wrist away from their body, the electronic device500is tilted to the right (e.g., the left side of the display screen504is moved upward relative to the right side of the display screen504) as a result of the user changing the angle of their arm/hand. In response to this orientation of the electronic device500, the graphical object614moves toward the right side of the display screen504along right track segment610, until the graphical object614encounters a right edge of the right track segment610. From this right edge location, the graphical object614cannot continue along the right track segment610without the orientation of the electronic device500being changed to a different orientation (e.g., tilted to the left or down). While the graphical object614is displayed at the location shown inFIG.6E, if the user changes the angle of their arm/hand to tilt the electronic device500to the left (e.g., the right side of the display screen504is moved upward relative to the left side of the display screen504), then the graphical object614moves back along the upper edge of the incoming call track608as shown inFIG.6D. Alternatively, in some embodiments, the graphical object614remains at the right edge of the right track segment610, as shown inFIG.6E(e.g., the graphical object614does not lose progress along the incoming call track608). Furthermore, while the graphical object614is displayed at the location shown inFIG.6E, if the user then rotates their wrist toward their body (e.g., the electronic device500is tilted down such that the top of the display screen504is moved upward relative to the bottom of the display screen504), then the graphical object614moves down the right track segment610toward the bottom of the display screen504. In some embodiments, other orientations of the electronic device500(e.g., tilting further up or further to the right) do not have an effect on the movement of the graphical object614. As shown inFIG.6F, after tilting the electronic device500to the right, the orientation of the electronic device500is further changed as a result of the user rotating their wrist toward their body (e.g., the electronic device500is tilted down such that the top of the display screen504is moved upward relative to the bottom of the display screen504). In response to this orientation of the electronic device500, the graphical object614moves down the right track segment610toward the bottom of the display screen504. While the graphical object614is displayed at the location shown inFIG.6F, if the user then rotates their wrist away from their body (e.g., the electronic device500is tilted up such that the bottom of the display screen504is moved upward relative to the top of the display screen504), then the graphical object614moves back toward the upper edge of the incoming call track608, as shown inFIG.6E. Alternatively, in some embodiments, the graphical object614remains at the furthest location it reached in the right track segment610, as shown inFIG.6F(e.g., the graphical object614does not lose progress along the incoming call track608).
Furthermore, while the graphical object614is displayed at the location shown inFIG.6F, if the user maintains the same orientation of the electronic device500or further rotates their wrist toward their body (e.g., the electronic device500is tilted down further such that the top of the display screen504is further moved upward relative to the bottom of the display screen504), then the graphical object614continues to move down the right track segment610toward the bottom of the display screen504. In some embodiments, other orientations of the electronic device500(e.g., tilting to the left or right) do not have an effect on the movement of the graphical object614. As shown inFIG.6G, the orientation of the electronic device500is further changed as a result of the user further rotating their wrist toward their body. In response to this orientation of the electronic device500, the graphical object614continues to move toward the bottom of the display screen504until it reaches the first demarcation616at the end of the right track segment610. The graphical object614then stops moving and is displayed at the end of the right track segment610proximate to the answer call affordance604. In some embodiments, the end of the right track segment610intersects the answer call affordance604. In these embodiments, the graphical object614can be displayed adjacent to the answer call affordance604, on top of the answer call affordance604, behind the answer call affordance604, or at other positions proximate to the answer call affordance604. In response to the graphical element614being displayed at the end of the right track segment610proximate to the answer call affordance604, a call answering notification620is initiated, as shown inFIG.6H. The call answering notification620is an animated graphic that starts as a small circle proximate to the end of the right track segment610that then expands toward the center of the display screen504until it reaches a full-size. The full-size call answering notification620is shown inFIG.6I. The call answering notification620indicates to the user that an answer call operation has been initiated by electronic device500. In addition, in some embodiments, once the full-size call answering notification620is displayed, the answer call affordance604changes visual appearance to further indicate to the user that the answer call operation has been initiated. The answer call operation instructs the electronic device500or other associated device to answer the incoming telephone call. A different sequence of interactions, related to those described in reference toFIGS.6E-6I, can be carried out to decline an incoming telephone call with the electronic device500. Instead of tilting the electronic device500to the right as shown inFIG.6E, the electronic device is tilted to the left (e.g., the right side of the display screen504is moved upward relative to the left side of the display screen504). In response to this orientation of the electronic device500, the graphical object614moves toward the left side of the display screen504along left track segment612, until the graphical object614encounters a left edge of the left track segment612. The user then rotates their wrist toward their body (similar to as shown inFIGS.6F-6G) to move the graphical object614down the left track segment612toward the bottom of the display screen504. The graphical object614continues to move toward the bottom of the display screen504until it reaches the second demarcation618at the end of the left track segment612. 
The graphical object614then stops moving and is displayed at the end of the left track segment612proximate to the decline call affordance606. In some embodiments, the end of the left track segment612intersects the decline call affordance606. In these embodiments, the graphical object614can be displayed adjacent to the decline call affordance606, on top of the decline call affordance606, behind the decline call affordance606, or at other positions proximate to the decline call affordance606. In response to the graphical element614being displayed at the end of the left track segment612proximate to the decline call affordance606, a call ending notification is initiated (similar to the call ending notification630shown inFIG.6Q). The call ending notification is an animated graphic that starts as a small circle proximate to the end of the left track segment612that then expands toward the center of the display screen504until it reaches a full-size. The call ending notification indicates to the user that a decline call operation has been initiated by electronic device500. The decline call operation instructs the electronic device500or other associated device to decline the incoming telephone call. Once in an active telephone call, the incoming call track608is replaced with an end call track622on the display screen504, as shown inFIG.6J. In addition, the answer call affordance604is replaced with a mute affordance624. The decline call affordance606remains in the same location on the display screen as inFIGS.6A-6I. Throughout the sequence of interactions shown inFIGS.6J-6P, the decline call affordance606or mute affordance624can be touched by the user to perform their respective operation with the electronic device500(e.g., end the active telephone call or mute the microphone, respectively). The graphical object614is displayed in the same location on the display screen504as inFIGS.6G-6I(e.g., at the end of what was formerly the right track segment610of the incoming call track608). The end call track622leads from this current location of the graphical object614to the decline call affordance606(e.g., from the end of what was formerly the right track segment610to the decline call affordance606). A third demarcation628is displayed at the end of the end call track622proximate to the decline call affordance606. In some embodiments, a call timer626is also displayed in a center region of the display screen504. The time shown in the call timer626increases to indicate how long the telephone call is active, as shown inFIG.6K. As shown inFIG.6L, the orientation of the electronic device500is changed as a result of the user rotating their wrist away from their body. In response to the new orientation of the electronic device500, the graphical object614moves along the end call track622toward the top of the display screen504. In some embodiments, the movement of the graphical object614between the different locations shown inFIGS.6J-6Pis animated in a similar manner as described in reference toFIGS.6B-6G. The animation corresponds to a simulated physical movement of the graphical object614rolling along the end call track622. For instance, in some embodiments, the acceleration and velocity of the graphical object614as it moves along the end call track622is representative of how a physical ball would roll along a physical track being held in the same orientation as the electronic device500.
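As an illustrative sketch only, the relationship between the tracks, their demarcations, and the call operations described in connection with FIGS. 6A-6S can be modeled as a list of segments, with an operation fired when the graphical object reaches the demarcation at a segment's end. The names, segment lengths, and structure below are assumptions for illustration and do not reflect any particular embodiment.

```swift
import Foundation

// Illustrative sketch (hypothetical names and lengths, not the patented implementation).
enum CallOperation { case answerCall, declineCall, endCall, noOperation }

struct TrackSegment {
    let length: Double
    let operationAtEnd: CallOperation   // fired when the demarcation at the end is reached
}

struct CallTrack {
    var segments: [TrackSegment]
    var currentSegment = 0
    var progress: Double = 0            // distance traveled along the current segment

    // Advance (or, with a negative delta, retreat) along the current segment and
    // return the operation to initiate once a demarcation is reached.
    mutating func advance(by delta: Double) -> CallOperation {
        let length = segments[currentSegment].length
        progress = min(max(progress + delta, 0), length)
        guard progress >= length else { return .noOperation }
        let operation = segments[currentSegment].operationAtEnd
        if case .noOperation = operation, currentSegment + 1 < segments.count {
            currentSegment += 1         // roll onto the next segment of the track
            progress = 0
        }
        return operation
    }
}

// Example: the incoming call track as a center segment followed by the right track
// segment leading to the answer call affordance; the left branch to the decline call
// affordance (selected by tilting left) is omitted here for brevity.
var incomingCallTrack = CallTrack(segments: [
    TrackSegment(length: 40, operationAtEnd: .noOperation),
    TrackSegment(length: 60, operationAtEnd: .answerCall),
])
let result = incomingCallTrack.advance(by: 40)  // reaches the end of the center segment
print(result)
```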
While the graphical object614is displayed at the location shown inFIG.6L, if the user rotates their wrist back toward their body (e.g., the electronic device500is tilted down such that the top of the display screen504is moved upward relative to the bottom of the display screen504), then the graphical object614moves back toward its initial location on the end call track622as shown inFIGS.6J-6K. Alternatively, in some embodiments, the graphical object614remains at the furthest location it reached in the end call track622, as shown inFIG.6L(e.g., the graphical object614does not lose progress along the end call track622). Furthermore, while the graphical object614is displayed at the location shown inFIG.6L, if the user maintains the same orientation of the electronic device500or further rotates their wrist away from their body (e.g., the electronic device500is tilted up further such that the bottom of the display screen504is further moved upward relative to the top of the display screen504), then the graphical object614continues to move toward the top of the display screen504. In some embodiments, other orientations of the electronic device500(e.g., tilting to the left or right) do not have an effect on the movement of the graphical object614. As shown inFIG.6M, the orientation of the electronic device500is further changed as a result of the user further rotating their wrist away from their body. In response to this orientation of the electronic device500, the graphical object614continues to move toward the top of the display screen504until it encounters an upper edge of the end call track622. From this upper edge location, the graphical object614cannot continue along the end call track622without the orientation of the electronic device500being changed to a different orientation (e.g., tilted to the left). While the graphical object614is displayed at the location shown inFIG.6M, if the user rotates their wrist back toward their body (e.g., the electronic device500is tilted down such that the top of the display screen504is moved upward relative to the bottom of the display screen504), then the graphical object614moves back toward the bottom of the display screen504. Alternatively, in some embodiments, the graphical object614remains at the upper edge of the end call track622, as shown inFIG.6M(e.g., the graphical object614does not lose progress along the end call track622). Furthermore, while the graphical object614is displayed at the location shown inFIG.6M, if the user then changes the angle of their arm/hand to tilt the electronic device500to the left (e.g., the right side of the display screen504is moved upward relative to the left side of the display screen504), then the graphical object moves toward the left side of the display screen504along end call track622. In some embodiments, other orientations of the electronic device500(e.g., tilting further up or to the right) do not have an effect on the movement of the graphical object614. As shown inFIG.6N, after the user rotates their wrist away from their body, the electronic device500is tilted to the left (e.g., the right side of the display screen504is moved upward relative to the left side of the display screen504) as a result of the user changing the angle of their arm/hand. In response to this orientation of the electronic device500, the graphical object614moves toward the left side of the display screen504along end call track622, until the graphical object614encounters a left edge of the end call track622.
From this left edge location, the graphical object614cannot continue along the end call track622without the orientation of the electronic device500being changed to a different orientation (e.g., tilted to the right or down). While the graphical object614is displayed at the location shown inFIG.6N, if the user changes the angle of their arm/hand to tilt the electronic device500to the right (e.g., the left side of the display screen504is moved upward relative to the right side of the display screen504), then the graphical object614moves back toward the right side of the display screen504. Alternatively, in some embodiments, the graphical object614remains at the left edge of the end call track622, as shown inFIG.6N(e.g., the graphical object614does not lose progress along the end call track622). Furthermore, while the graphical object614is displayed at the location shown inFIG.6N, if the user then rotates their wrist toward their body (e.g., the electronic device500is tilted down such that the top of the display screen504is moved upward relative to the bottom of the display screen504), then the graphical object614moves down the end call track622toward the bottom of the display screen504. In some embodiments, other orientations of the electronic device500(e.g., tilting further up or further to the left) do not have an effect on the movement of the graphical object614. As shown inFIG.6O, after tilting the electronic device500to the left, the orientation of the electronic device500is further changed as a result of the user rotating their wrist toward their body. In response to this orientation of the electronic device500, the graphical object614moves down the left side of the end call track622toward the bottom of the display screen504. While the graphical object614is displayed at the location shown inFIG.6O, if the user then rotates their wrist away from their body (e.g., the electronic device500is tilted up such that the bottom of the display screen504is moved upward relative to the top of the display screen504), then the graphical object614moves back toward the upper edge of the end call track622, as shown inFIG.6N. Alternatively, in some embodiments, the graphical object614remains at the furthest location it reached in the end call track622, as shown inFIG.6O(e.g., the graphical object614does not lose progress along the end call track622). Furthermore, while the graphical object614is displayed at the location shown inFIG.6O, if the user maintains the same orientation of the electronic device500or further rotates their wrist toward their body (e.g., the electronic device500is tilted down further such that the top of the display screen504is further moved upward relative to the bottom of the display screen504), then the graphical object614continues to move down the end call track622toward the bottom of the display screen504. In some embodiments, other orientations of the electronic device500(e.g., tilting to the left or right) do not have an effect on the movement of the graphical object614. As shown inFIG.6P, the orientation of the electronic device500is further changed as a result of the user further rotating their wrist toward their body. In response to this orientation of the electronic device500, the graphical object614continues to move toward the bottom of the display screen504until it reaches the third demarcation628at the end of the end call track622. The graphical object614then stops moving and is displayed at the end of the end call track622proximate to the decline call affordance606.
In some embodiments, the end of the end call track intersects the decline call affordance606. In these embodiments, the graphical object614can be displayed adjacent to the decline call affordance606, on top of the decline call affordance606, behind the decline call affordance606, or at other positions proximate to the decline call affordance606. In response to the graphical element614being displayed at the end of the end call track622proximate to the decline call affordance606, a call ending notification630is initiated, as shown inFIG.6Q. The call ending notification630is an animated graphic that starts as a small circle proximate to the end of the end call track622that then expands toward the center of the display screen504until it reaches a full-size. The full-size call ending notification630is shown inFIG.6R. The call ending notification630indicates to the user that an end call operation has been initiated by electronic device500. Once the full-size call ending notification630is displayed, the other elements on the display screen504can be removed. The end call operation instructs the electronic device500or other associated device to end the active telephone call. Once the active telephone call has ended, a call ended notification632is displayed on the display screen504, as shown inFIG.6S. FIGS.7A-7Qillustrate exemplary user interfaces for interacting with an electronic device without touching a display screen or other physical input mechanism, in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes inFIGS.13A-13B. As shown inFIG.7A, electronic device500is worn on the user's left wrist and is being held in a position such that display screen504is directly visible to the user's eyes (while being substantially perpendicular to the ground), such as is typical for users when they are checking the time. In particular,FIGS.7A-7Qillustrate exemplary user interfaces for responding to an incoming telephone call with an electronic device500. The electronic device500includes a display screen504and a tilt sensor, among other elements which can be found above and/or as discussed in reference toFIG.5A. The display screen504can be a touch-sensitive display screen, and the tilt sensor can be an accelerometer534, directional sensor540(e.g., compass), gyroscope536, motion sensor538, and/or a combination thereof. In the present example, device500is a wearable device on a user's wrist, such as a smart watch. As shown inFIG.7A, an incoming call notification702is initially displayed when the incoming telephone call is received. In addition, an answer call affordance704and a decline call affordance706are displayed. Throughout the sequence of interactions shown inFIGS.7A-7P, the answer call affordance704or the decline call affordance706can be touched by the user to perform their respective operations with the electronic device500(e.g., answering the incoming call or declining the incoming telephone call, respectively). As shown inFIGS.7B-7E, a sequence of movement indicators708a-708d(e.g., musical notes) are displayed in response to the incoming telephone call being received. In some embodiments, the first movement indicator708ais displayed a predetermined time after initially receiving the incoming telephone call and/or in response to a user action.
For instance, in some embodiments, the first movement indicator708ais displayed in response to the user lifting their arm into a raised position where the display screen504is visible to the user. The second, third, and fourth movement indicators708b-708dthen appear in sequence after the first movement indicator708ais displayed, where the second movement indicator708bis displayed a predetermined time after the first movement indicator708ais displayed, and so on. Alternatively, the entire sequence of movement indicators708a-708dcan be displayed approximately simultaneously in response to the incoming telephone call being received. The sequence of movement indicators708a-708dindicates a sequence of movements that can be made with the electronic device500to respond to the incoming telephone call. For instance, the first movement indicator708a(e.g., a “high” musical note) indicates that the first movement the user would make with the electronic device500is to rotate the display screen504away from their body (e.g., the user rotates their wrist to move the bottom of the display screen504upward relative to the top of the display screen504). The second movement indicator708b(e.g., a “low” musical note) indicates that the second movement the user would make with the electronic device500is to rotate the display screen504toward their body (e.g., the user rotates their wrist to move the top of the display screen504upward relative to the bottom of the display screen504). The third and fourth movement indicators708c-708d(e.g., high musical notes) indicate that the user would rotate the display screen504away from their body two more times. In some embodiments, each of the movements indicated by the sequence of movement indicators708a-708dalso includes a rotation of the electronic device500back toward its original orientation within a predetermined time period (e.g., each movement is a “flicking” motion where the display screen504is quickly rotated away/toward the user and then is immediately rotated in the opposite direction). While four movement indicators708a-708dare shown inFIGS.7B-7E, the number of movement indicators can vary. For instance, a sequence of two, three, or five or more movement indicators can be displayed in response to the incoming telephone call being received. The direction of movement indicated by the movement indicators can also vary. For instance, the sequence of movement indicators can indicate two rotations toward the user (e.g., two “low” musical notes) followed by two rotations away from the user (e.g., two “high” notes). The user can then input a sequence of movements corresponding to each of the displayed movement indicators, such as shown inFIGS.7G-7N. After all of the movement indicators708a-708dare displayed (as shown inFIG.7E), the movement indicators708a-708dare removed from the display screen504, as shown inFIG.7F. In some embodiments, the movement indicators708a-708dare removed a predetermined time after the last movement indicator (e.g.,708d) is displayed. Each of the movement indicators708a-708dis then displayed again, in sequence or approximately simultaneously, as shown inFIGS.7B-7E. The display and removal of the movement indicators708a-708das shown inFIGS.7B-7Fcan repeat until a user input is received, a predetermined time period has elapsed, or the incoming telephone call is no longer being received. In some embodiments, the display of each of the movement indicators708a-708dcorresponds to an audio notification of the incoming telephone call (e.g., a “ringtone”).
For instance, the audio notification can include a repeating sequence of four tones (e.g., one high tone, one low tone, followed by two more high tones). In some embodiments, each of the movement indicators708a-708dare displayed at approximately the same time as each of the tones of the audio notification. As shown inFIG.7G, the orientation of the electronic device500is changed as a result of the user rotating their wrist away from their body (e.g., the electronic device500is tilted up such that the bottom of the display screen504is moved upward relative to the top of the display screen504). This orientation corresponds to the movement indicated by the first movement indicator708a. In response to the movement of the electronic device500to this orientation or a similar orientation, an input indicator710is displayed. The input indicator710is semi-transparent and is displayed overlapping the other elements on the display screen504. In some embodiments, the input indicator710is displayed when the movement of the electronic device500meets a minimum velocity or acceleration threshold. If the minimum velocity or acceleration threshold is not met, then the input indicator710is not displayed. In some embodiments, the input indicator710is an animated graphic that moves in the direction of movement of the electronic device504(e.g., the input indicator710moves toward the top of the display screen504in response to the user rotating their wrist away from their body). The input indicator710can also be animated to enlarge in size during the movement of the electronic device504, as shown inFIG.7H. In some embodiments, the first movement indicator708ais displayed in response to the corresponding movement of the electronic device504by the user. As shown inFIG.7H, the orientation of the electronic device500is further changed as a result of the user further rotating their wrist away from their body (e.g., the electronic device500is tilted up further such that the bottom of the display screen504is further moved upward relative to the top of the display screen504). The input indicator710shown inFIG.7His displayed in an upper region of the display screen to indicate to the user the direction of rotation that was detected by the electronic device500. In some embodiments, following the movement of the electronic device500into the orientation shown inFIG.7Hor a similar orientation, the electronic device500is rotated back toward its original orientation (e.g., the orientation shown inFIG.7F) within a predetermined time period (e.g., the user makes a “flicking” motion with the electronic device500, where the display screen504is quickly rotated away from the user and then is immediately rotated back toward the user). As shown inFIG.7I, the orientation of the electronic device500is changed as a result of the user rotating their wrist toward their body (e.g., the electronic device500is tilted down such that the top of the display screen504is moved upward relative to the bottom of the display screen504). This orientation corresponds to the movement indicated by the second movement indicator708b. In response to the movement of the electronic device500to this orientation or a similar orientation, the input indicator710is displayed. The input indicator710is semi-transparent and is displayed overlapping the other elements on the display screen504. In some embodiments, the input indicator710is displayed when the movement of the electronic device500meets a minimum velocity or acceleration threshold. 
If the minimum velocity or acceleration threshold is not met, then the input indicator710is not displayed. In some embodiments, the input indicator710is an animated graphic that moves in the direction of movement of the electronic device504(e.g., the input indicator710moves toward the bottom of the display screen504in response to the user rotating their wrist toward their body). The input indicator710can also be animated to enlarge in size during the movement of the electronic device504, as shown inFIG.7J. In some embodiments, the second movement indicator708bis displayed in response to the corresponding movement of the electronic device504by the user. As shown inFIG.7J, the orientation of the electronic device500is further changed as a result of the user further rotating their wrist toward their body (e.g., the electronic device500is tilted down further such that the top of the display screen504is further moved upward relative to the bottom of the display screen504). The input indicator710shown inFIG.7Jis displayed in a lower region of the display screen to indicate to the user the direction of rotation that was detected by the electronic device500. In some embodiments, following the movement of the electronic device500into the orientation shown inFIG.7Jor a similar orientation, the electronic device500is rotated back toward its original orientation (e.g., the orientation shown inFIG.7F) within a predetermined time period (e.g., the user makes a “flicking” motion with the electronic device500, where the display screen504is quickly rotated toward the user and then is immediately rotated back away from the user). As shown inFIG.7K, the orientation of the electronic device500is changed as a result of the user rotating their wrist away from their body a second time (e.g., the electronic device500is tilted up such that the bottom of the display screen504is moved upward relative to the top of the display screen504). This orientation corresponds to the movement indicated by the third movement indicator708c. In response to the movement of the electronic device500to this orientation or similar orientation, the input indicator710is displayed. The input indicator710is semi-transparent and is displayed overlapping the other elements on the display screen504. In some embodiments, the input indicator710is displayed when the movement of the electronic device500meets a minimum velocity or acceleration threshold. If the minimum velocity or acceleration threshold is not met, then the input indicator710is not displayed. In some embodiments, the input indicator710is an animated graphic that moves in the direction of movement of the electronic device504(e.g., the input indicator710moves toward the top of the display screen504in response to the user rotating their wrist away from their body). The input indicator710can also be animated to enlarge in size during the movement of the electronic device504, as shown inFIG.7L. In some embodiments, the third movement indicator708cis displayed in response to the corresponding movement of the electronic device504by the user. As shown inFIG.7L, the orientation of the electronic device500is further changed as a result of the user further rotating their wrist away from their body the second time (e.g., the electronic device500is tilted up further such that the bottom of the display screen504is further moved upward relative to the top of the display screen504). 
The input indicator710shown inFIG.7Lis displayed in an upper region of the display screen to indicate to the user the direction of rotation that was detected by the electronic device500. In some embodiments, following the movement of the electronic device500into the orientation shown inFIG.7Lor a similar orientation, the electronic device500is rotated back toward its original orientation (e.g., the orientation shown inFIG.7F) within a predetermined time period (e.g., the user makes a “flicking” motion with the electronic device500, where the display screen504is quickly rotated away from the user and then is immediately rotated back toward the user). As shown inFIG.7M, the orientation of the electronic device500is changed as a result of the user rotating their wrist away from their body a third time (e.g., the electronic device500is tilted up such that the bottom of the display screen504is moved upward relative to the top of the display screen504). This orientation corresponds to the movement indicated by the fourth movement indicator708d. In response to the movement of the electronic device500to this orientation or a similar orientation, the input indicator710is displayed. The input indicator710is semi-transparent and is displayed overlapping the other elements on the display screen504. In some embodiments, the input indicator710is displayed when the movement of the electronic device500meets a minimum velocity or acceleration threshold. If the minimum velocity or acceleration threshold is not met, then the input indicator710is not displayed. In some embodiments, the input indicator710is an animated graphic that moves in the direction of movement of the electronic device504(e.g., the input indicator710moves toward the top of the display screen504in response to the user rotating their wrist away from their body). The input indicator710can also be animated to enlarge in size during the movement of the electronic device504, as shown inFIG.7N. In some embodiments, the fourth movement indicator708dis displayed in response to the corresponding movement of the electronic device504by the user. As shown inFIG.7N, the orientation of the electronic device500is further changed as a result of the user further rotating their wrist away from their body the third time (e.g., the electronic device500is tilted up further such that the bottom of the display screen504is further moved upward relative to the top of the display screen504). The input indicator710shown inFIG.7Nis displayed in an upper region of the display screen to indicate to the user the direction of rotation that was detected by the electronic device500. In some embodiments, following the movement of the electronic device500into the orientation shown inFIG.7Nor similar orientation, the electronic device500is rotated back toward its original orientation (e.g., the orientation shown inFIG.7F) within a predetermined time period (e.g., the user makes a “flicking” motion with the electronic device500, where the display screen504is quickly rotated away from the user and then is immediately rotated back toward the user). In some embodiments, after the user inputs the sequence of movements corresponding to the movement indicators708a-708d, a success notification is displayed on the display screen504. As shown inFIGS.7O-7P, the success notification includes an animation of the movement indicators708a-708d. 
The movement indicators708a-708denlarge in size and fade out in appearance to indicate to the user that the sequence of movements inputted by the user satisfy the sequence of movements indicated by the movement indicators708a-708d. In response to the user successfully performing the sequence of movements corresponding to the movement indicators708a-708d, a call connecting notification712is displayed, as shown inFIG.7Q. The call connecting notification712indicates to the user that an answer call operation has been initiated by electronic device500. The answer call operation instructs the electronic device500or other associated device to answer the incoming telephone call. If the user fails to input the sequence of movements corresponding to the movement indicators708a-708dwithin a predetermined time, then the electronic device500forgoes performing the operation associated with the movement indicators708a-708d. For instance, if the user only inputs three out of the four movements indicated by the movement indicators708a-708dwithin the predetermined time, then no operation would be performed by the electronic device500. In some embodiments, the predetermined time corresponds to a number of times the display of the sequence of movement indicators708a-708dis repeated or an amount of time before the incoming telephone call is automatically canceled or forwarded to voicemail. FIGS.8A-8BIillustrate exemplary user interfaces for interacting with an electronic device without touching a display screen or other physical input mechanism, in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes inFIG.14. In particular,FIGS.8A-8Zillustrate exemplary user interfaces for responding to an incoming instant message with an electronic device500. The electronic device500includes a display screen504and a tilt sensor, among other elements which can be found above and/or as discussed in reference toFIG.5A. The display screen504can be a touch-sensitive display screen, and the tilt sensor can be an accelerometer534, directional sensor540(e.g., compass), gyroscope536, motion sensor538, and/or a combination thereof. In the present example, device500is a wearable device on a user's wrist, such as a smart watch. As shown inFIG.8A, electronic device500is worn on the user's left wrist and is being held in a position such that display screen504is directly visible to the user's eyes (while being substantially perpendicular to the ground), such as is typical for users when they are checking the time. As shown inFIG.8A, an incoming instant message802is displayed. In addition, a reply affordance804and dismiss affordance806are displayed. In some embodiments, the incoming instant message802is displayed a predetermined time after the instant message802is received and/or in response to a user action. For instance, in some embodiments, the instant message802is displayed in response to the user lifting their arm into a raised position where the display screen504is visible to the user. Throughout the sequence of interactions shown inFIGS.8A-8E, the reply affordance804or the dismiss affordance806can be touched by the user to perform their respective operations with the electronic device500(e.g., displaying a reply interface or dismissing the incoming instant message802, respectively). As shown inFIG.8B, the instant message802is replaced with an instructional graphic808in a center region810of the display screen504. 
In some embodiments, the instructional graphic808is displayed a predetermined time after initially displaying the instant message802and/or in response to a user action. For instance, in some embodiments, the instructional graphic808is displayed in response to the user lifting their arm into a raised position where the display screen504is visible to the user. In addition, the reply affordance804is moved to an upper region of the display screen504and the dismiss affordance806is moved to a lower region of the display screen504. The instructional graphic808indicates movements a user can make with the electronic device500to perform the operations associated with the reply affordance804and the dismiss affordance806. In some embodiments, the instructional graphic808is animated to demonstrate the movements to the user. The movements include rotating the display screen504away from the user's body (e.g., the user rotates their wrist to move the bottom of the display screen504upward relative to the top of the display screen504) or rotating the display screen504toward the user's body (e.g., the user rotates their wrist to move the top of the display screen504upward relative to the bottom of the display screen504). In some embodiments, each of the movements indicated by the instructional graphic808also include a rotation of the electronic device500back toward its original orientation within a predetermined time period (e.g., each movement is a “flicking” motion where the display screen504is quickly rotated away/toward the user and then is immediately rotated in the opposite direction). The user can then input one of the movements indicated by the instructional graphic808, as shown inFIGS.8C-8EandFIGS.8X-8Z. As shown inFIG.8C, the orientation of the electronic device500is changed as a result of the user rotating their wrist away from their body (e.g., the electronic device500is tilted up such that the bottom of the display screen504is moved upward relative to the top of the display screen504). This movement corresponds to one of the movements indicated by the instructional graphic808. In response to the movement of the electronic device500to this orientation or a similar orientation, an input indicator812is displayed. The input indicator812is semi-transparent and is displayed overlapping the other elements on the display screen504. In some embodiments, the input indicator812is displayed when the movement of the electronic device500meets a minimum velocity or acceleration threshold. If the minimum velocity or acceleration threshold is not met, then the input indicator812is not displayed. In some embodiments, the input indicator812is an animated graphic that moves in the direction of movement of the electronic device504(e.g., the input indicator812moves toward the top of the display screen504in response to the user rotating their wrist away from their body). FIG.8Dillustrates the orientation of the electronic device500being further changed as a result of the user further rotating their wrist away from their body (e.g., the electronic device500is tilted up further such that the bottom of the display screen504is further moved upward relative to the top of the display screen504). The input indicator812is displayed overlapping the reply affordance804in the upper region of the display screen504to indicate to the user that the movement of the electronic device500corresponds to a selection of the reply operation associated with the reply affordance804. 
In some embodiments, following the movement of the electronic device500into the orientation shown inFIG.8Dor a similar orientation, the electronic device500is rotated back toward its original orientation (e.g., the orientation shown inFIG.8B) within a predetermined time period (e.g., the user makes a “flicking” motion with the electronic device500, where the display screen504is quickly rotated away from the user and then is immediately rotated back toward the user). As shown inFIG.8E, following the rotation of the display screen504away from the user's body as shown inFIGS.8C-8D, the reply affordance804and overlapping input indicator812move toward the center region of the display screen504. The movement of the reply affordance804and overlapping input indicator812indicates that the previous movement of the electronic device500has resulted in a selection of the reply operation associated with the reply affordance804. As shown inFIG.8F, in response to the reply affordance804being selected, a list of predefined responses814a-814eare displayed. Each of the predefined responses814a-814ecorresponds to a message the user can send to respond to the instant message802shown inFIG.8A. For instance, the predefined responses814a-814ecan include “Yes,” “No,” “Maybe,” “See you there,” “Thank you,” or other common responses to an instant message. While five predefined responses814a-814eare shown inFIG.8E, the number of predefined responses being displayed can vary. Furthermore, in some embodiments, additional predefined responses are displayed by scrolling the list of predefined responses, as discussed in reference toFIGS.8G-8O. As shown inFIG.8G, the orientation of the electronic device500is changed as a result of the user rotating their wrist toward their body (e.g., the electronic device500is tilted down such that the top of the display screen504is moved upward relative to the bottom of the display screen504). This movement corresponds to one of the movement indicated by the instructional graphic808shown inFIG.8B. In some embodiments, following the movement of the electronic device500into the orientation shown inFIG.8Gor a similar orientation, the electronic device500is rotated back toward its original orientation (e.g., the orientation shown inFIG.8F) within a predetermined time period (e.g., the user makes a “flicking” motion with the electronic device500, where the display screen504is quickly rotated toward the user and then is immediately rotated back away from the user). In response to the movement of the electronic device500to the orientation shown inFIG.8Gor a similar orientation, the input indicator812is displayed. The input indicator812is semi-transparent and is displayed overlapping the other elements on the display screen504. In some embodiments, the input indicator812is displayed when the movement of the electronic device500meets a minimum velocity or acceleration threshold. If the minimum velocity or acceleration threshold is not met, then the input indicator812is not displayed. In some embodiments, the input indicator812is an animated graphic that moves in the direction of movement of the electronic device504(e.g., the input indicator812moves toward the bottom of the display screen504in response to the user rotating their wrist toward their body). 
In response to this movement of the electronic device500, the predefined responses814a-814dare scrolled down such that a predefined response in the upper region of the display screen504(e.g., predefined response814b) is moved toward the center region of the display screen504and the predefined response at the bottom of the display screen504(e.g., predefined response814eshown inFIG.8F) is no longer displayed. As shown inFIG.8H, following the rotation of the display screen504toward the user's body as shown inFIG.8G, the predefined response814band input indicator812are displayed in the center region of the display screen504to indicate that the list of predefined responses814a-814dis no longer being scrolled and the predefined responses814a-814dwill remain in their displayed positions unless additional input is provided by the user. As shown inFIG.8I, after the list of predefined responses is scrolled and the predefined response814bis displayed in the center region of the screen, predefined response814bis highlighted. If no additional input from the user is received within a predetermined time, then predefined response814bwill be selected after the predetermined time. As shown inFIG.8J, before the predetermined time to select predefined response814bas shown inFIG.8Ihas elapsed, the orientation of the electronic device500is changed as a result of the user rotating their wrist away from their body (e.g., the electronic device500is tilted up such that the bottom of the display screen504is moved upward relative to the top of the display screen504). This movement corresponds to one of the movement indicated by the instructional graphic808shown inFIG.8B. In some embodiments, following the movement of the electronic device500into the orientation shown inFIG.8Jor a similar orientation, the electronic device500is rotated back toward its original orientation (e.g., the orientation shown inFIG.8I) within a predetermined time period (e.g., the user makes a “flicking” motion with the electronic device500, where the display screen504is quickly rotated away from the user and then is immediately rotated back toward from the user). In response to the movement of the electronic device500to the orientation shown inFIG.8Jor a similar orientation, the input indicator812is displayed. The input indicator812is semi-transparent and is displayed overlapping the other elements on the display screen504. In some embodiments, the input indicator812is displayed when the movement of the electronic device500meets a minimum velocity or acceleration threshold. If the minimum velocity or acceleration threshold is not met, then the input indicator812is not displayed. In some embodiments, the input indicator812is an animated graphic that moves in the direction of movement of the electronic device504(e.g., the input indicator812moves toward the top of the display screen504in response to the user rotating their wrist away from their body). In response to this movement of the electronic device500, the predefined responses814a-814eare scrolled up such that a predefined response in the lower region of the display screen504(e.g., predefined response814c) is moved toward the center region of the display screen504. As a result of the upward scroll, the predefined response814ethat had been scrolled off the display screen504inFIGS.8G-8Iis displayed again in the bottom region of the display screen. 
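For purposes of illustration only, the following sketch models the scrolling behavior described above for the list of predefined responses, where a rotation toward the body scrolls the list down and a rotation away from the body scrolls it up, and the item in the center region is the one that will be highlighted. The response strings and index bounds are assumptions for this example.

```swift
/// The direction of a single detected flick (same as in the earlier sketches).
enum FlickDirection { case away, toward }

/// A minimal model of the scrolling list: the centered item is the candidate
/// that will be highlighted and, absent further input, selected.
struct ResponseList {
    let responses: [String]
    var centeredIndex: Int

    mutating func scroll(_ direction: FlickDirection) {
        switch direction {
        case .toward:   // list scrolls down; an item from the upper region moves to the center
            centeredIndex = max(centeredIndex - 1, 0)
        case .away:     // list scrolls up; an item from the lower region moves to the center
            centeredIndex = min(centeredIndex + 1, responses.count - 1)
        }
    }

    var highlighted: String { responses[centeredIndex] }
}

// Hypothetical contents standing in for predefined responses 814a-814f.
var list = ResponseList(responses: ["Yes", "No", "Maybe", "See you there", "Thank you", "On my way"],
                        centeredIndex: 2)
list.scroll(.toward)     // as in FIG. 8G: the item above moves toward the center
list.scroll(.away)       // as in FIG. 8J: the item below moves toward the center
list.scroll(.away)       // scrolling up again brings the next item to the center
print(list.highlighted)  // the centered response, which will be highlighted
```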
As shown inFIG.8K, following the rotation of the display screen504away from the user's body as shown inFIG.8J, the predefined response814cand input indicator812are displayed in the center region of the display screen504to indicate that the list of predefined responses814a-814eis no longer being scrolled and the predefined responses814a-814ewill remain in their displayed positions unless additional input is provided by the user. As shown inFIG.8L, after the list of predefined responses is scrolled and the predefined response814cis displayed in the center region of the screen, the predefined response814cis highlighted. If no additional input from the user is received within a predetermined time, then predefined response814cwill be selected after the predetermined time. As shown inFIG.8M, before the predetermined time to select the predefined response814cas shown inFIG.8Lhas elapsed, the orientation of the electronic device500is changed as a result of the user rotating their wrist away from their body again (e.g., the electronic device500is tilted up such that the bottom of the display screen504is moved upward relative to the top of the display screen504). This movement corresponds to one of the movement indicated by the instructional graphic808shown inFIG.8B. In some embodiments, following the movement of the electronic device500into the orientation shown inFIG.8Mor a similar orientation, the electronic device500is rotated back toward its original orientation (e.g., the orientation shown inFIG.8L) within a predetermined time period (e.g., the user makes a “flicking” motion with the electronic device500, where the display screen504is quickly rotated away from the user and then is immediately rotated back toward from the user). In response to the movement of the electronic device500to the orientation shown inFIG.8Mor a similar orientation, the input indicator812is displayed. The input indicator812is semi-transparent and is displayed overlapping the other elements on the display screen504. In some embodiments, the input indicator812is displayed when the movement of the electronic device500meets a minimum velocity or acceleration threshold. If the minimum velocity or acceleration threshold is not met, then the input indicator812is not displayed. In some embodiments, the input indicator812is an animated graphic that moves in the direction of movement of the electronic device504(e.g., the input indicator812moves toward the top of the display screen504in response to the user rotating their wrist away from their body). In response to this movement of the electronic device500, the predefined responses814a-814eare scrolled up such that a predefined response in the lower region of the display screen504(e.g., predefined response814d) is moved toward the center region of the display screen504. As a result of the upward scroll, the predefined response at the top of the display screen504(e.g., predefined response814ashown inFIG.8L) is no longer displayed and a new predefined response814fis displayed at the bottom of the display screen504. As shown inFIG.8N, following the rotation of the display screen504away from the user's body as shown inFIG.8M, the predefined response814dand input indicator812are displayed in the center region of the display screen504to indicate that the list of predefined responses814a-814eis no longer being scrolled and the predefined responses814b-814fwill remain in their displayed positions unless additional input is provided by the user. 
As shown inFIG.8O, after the list of predefined responses is scrolled and the predefined response814dis displayed in the center region of the screen, the predefined response814dis highlighted. If no additional input from the user is received within a predetermined time, then predefined response814dwill be selected after the predetermined time. As shown inFIGS.8P-8U, after highlighting the predefined response814d, the other predefined responses are removed from the display screen504if no additional input is received from the user within a predetermined time. In addition, a progress ring816and a selection notification818are displayed to indicate when the highlighted predefined response814dwill be sent as a response to the instant message802shown inFIG.8A. The progress ring816is a graphical element that forms a circle over a second predetermined period of time. The amount of time that has elapsed in the second predetermined period of time is indicated by the portion of the progress ring816that has been displayed. The selection notification818notifies the user that the highlighted predefined response814dwill be sent once the second predetermined period of time corresponding to the progress ring816has elapsed. The sending of the predefined response814dcan be canceled by performing a variety of inputs with the electronic device500. For instance, the user can rapidly and repeatedly rotate their wrist toward and away from their body to cancel the sending of the predefined response814d. Alternatively or in addition, the user can touch the display screen504or operate a physical input mechanism on the electronic device to cancel the sending of the predefined response814d. If no cancelation input is received before the predetermined period of time has elapsed, then the electronic device500proceeds with sending the predefined response814d, as shown inFIG.8V. FIGS.8W-8Zillustrate the dismiss affordance806being selected after the instant message802shown inFIG.8Ais received. As shown inFIG.8W, the instructional graphic808is displayed in the center region810of the display screen504, the reply affordance804is displayed in an upper region of the display screen504, and the dismiss affordance806is displayed in a lower region of the display screen504(same as shown inFIG.8B). As shown inFIG.8X, the orientation of the electronic device500is changed as a result of the user rotating their wrist toward their body (e.g., the electronic device500is tilted down such that the top of the display screen504is moved upward relative to the bottom of the display screen504). This movement corresponds to one of the movements indicated by the instructional graphic808. In response to the movement of the electronic device500to this orientation or a similar orientation, an input indicator812is displayed. The input indicator812is semi-transparent and is displayed overlapping the other elements on the display screen504. In some embodiments, the input indicator812is displayed when the movement of the electronic device500meets a minimum velocity or acceleration threshold. If the minimum velocity or acceleration threshold is not met, then the input indicator812is not displayed. In some embodiments, the input indicator812is an animated graphic that moves in the direction of movement of the electronic device500(e.g., the input indicator812moves toward the bottom of the display screen504in response to the user rotating their wrist toward their body).
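For purposes of illustration only, the following sketch shows one way the dwell-based confirmation described above for FIGS. 8P-8V might be modeled, in which the progress ring 816 fills over a predetermined period and the highlighted response is sent only if no cancelation input (rapid wrist rotation, a touch, or a physical input mechanism) arrives first. The duration and names are assumptions for this example.

```swift
/// Minimal sketch of a dwell confirmation driven by a progress ring.
struct DwellConfirmation {
    let duration: Double   // the predetermined period represented by the progress ring
    var elapsed: Double = 0
    var canceled = false

    /// Fraction of the progress ring to draw, from 0.0 to 1.0.
    var progress: Double { min(elapsed / duration, 1.0) }
    /// True once the full period has elapsed without a cancelation input.
    var confirmed: Bool { !canceled && elapsed >= duration }

    mutating func tick(_ dt: Double) {
        guard !canceled && !confirmed else { return }
        elapsed += dt
    }

    /// Called for any cancelation input; the device then forgoes sending.
    mutating func cancel() { canceled = true }
}

var send = DwellConfirmation(duration: 3.0)
send.tick(1.0)          // the ring is one third drawn
send.tick(2.5)          // the period elapses with no cancelation input
print(send.confirmed)   // true: the highlighted response is sent
```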
FIG.8Yillustrates the orientation of the electronic device500being further changed as a result of the user further rotating their wrist toward their body (e.g., the electronic device500is tilted down further such that the top of the display screen504is further moved upward relative to the bottom of the display screen504). The input indicator812is displayed overlapping the dismiss affordance806in the lower region of the display screen504to indicate to the user that the movement of the electronic device500corresponds to a selection of the dismiss operation associated with the dismiss affordance806. In some embodiments, following the movement of the electronic device500into the orientation shown inFIG.8Yor a similar orientation, the electronic device500is rotated back toward its original orientation (e.g., the orientation shown inFIG.8W) within a predetermined time period (e.g., the user makes a “flicking” motion with the electronic device500, where the display screen504is quickly rotated toward the user and then is immediately rotated back away from the user). As shown inFIG.8Z, following the rotation of the display screen504toward the user's body as shown inFIGS.8X-8Y, the dismiss affordance806and overlapping input indicator812move toward the center region of the display screen504. The movement of the dismiss affordance806and overlapping input indicator812indicates that the previous movement of the electronic device500has resulted in a selection of the dismiss operation associated with the dismiss affordance806. The electronic device500then carries out the dismiss operation, which replaces the user interfaces ofFIGS.8A-8Zwith a default user interface820(e.g., a time display) as shown inFIG.8AA. FIGS.8AB-8ARillustrate exemplary user interfaces for responding to an incoming telephone call with an electronic device500. The electronic device500includes a display screen504and a tilt sensor, among other elements which can be found above and/or as discussed in reference toFIG.5A. The display screen504can be a touch-sensitive display screen, and the tilt sensor can be an accelerometer534, directional sensor540(e.g., compass), gyroscope536, motion sensor538, and/or a combination thereof. In the present example, device500is a wearable device on a user's wrist, such as a smart watch. As shown inFIG.8AB, electronic device500is worn on the user's left wrist and is being held in a position such that display screen504is directly visible to the user's eyes (while being substantially perpendicular to the ground), such as is typical for users when they are checking the time. As shown inFIG.8AB, an incoming call notification822is displayed in a center region810of the display screen504when the incoming telephone call is received. Alternatively or in addition, an instructional graphic808(as shown inFIG.8B) can be displayed in the center region810of the display screen504. For instance, the center region810can alternately display the incoming call notification822followed by the instructional graphic808while the incoming telephone call is being received. In some embodiments, the incoming call notification822and/or instructional graphic808are displayed a predetermined time after the incoming telephone call is initially received and/or in response to a user action. For instance, in some embodiments, the incoming call notification822and/or instructional graphic808are displayed in response to the user lifting their arm into a raised position where the display screen504is visible to the user.
In addition, an answer call affordance824is displayed in a lower region of the display screen504and a decline call affordance826is displayed in an upper region of the display screen504. When the incoming telephone call is initially received, the answer call affordance824or the decline call affordance826can be touched by the user to perform their respective operations with the electronic device500(e.g., answering the incoming call or declining the incoming telephone call, respectively). In addition, the user can change the orientation of the electronic device500to perform the operations associated with the answer call affordance824and the decline call affordance826. In some embodiments, the instructional graphic808indicates the changes in orientation of the electronic device500the user can make to perform the operations associated with the answer call affordance824and the decline call affordance826. In some embodiments, the instructional graphic808is animated to demonstrate the changes in orientation to the user. The changes in orientation include rotating the display screen504away from the user's body (e.g., the user rotates their wrist to move the bottom of the display screen504upward relative to the top of the display screen504) or rotating the display screen504toward the user's body (e.g., the user rotates their wrist to move the top of the display screen504upward relative to the bottom of the display screen504). As shown inFIG.8AC, the orientation of the electronic device500is changed as a result of the user rotating their wrist toward their body (e.g., the electronic device500is tilted down such that the top of the display screen504is moved upward relative to the bottom of the display screen504). In response to the movement of the electronic device500to this orientation or a similar orientation, the answer call affordance824is enlarged in size and moves toward the center region of the display screen504. The change in visual appearance and location of the answer call affordance824indicates to the user that the answer call operation will be selected as a result of change in orientation of the electronic device500shown inFIG.8AC. In addition, the visual appearance of the decline call affordance826can be changed (e.g., darkened in brightness or partially faded) to indicate that the decline call operation will not be selected as a result of change in orientation of the electronic device500shown inFIG.8AC. As shown inFIGS.8AD-8AH, the electronic device500is held in the same or similar orientation as shown inFIG.8AC. As a result of the electronic device500continuing to be held in this orientation, the answer call affordance824is displayed in the center region of the display screen504and a progress ring828is displayed with the answer call affordance824. The progress ring828indicates a predetermined amount of time the electronic device500should be held in this orientation or a similar orientation in order for the answer call operation to be carried out. As shown inFIGS.8AD-8AH, the progress ring828is a graphical element that forms a circle as the predetermined amount of time elapses. The amount of time that has elapsed is indicated by the amount of the progress ring828being displayed. In addition an answer call notification830is displayed on the display screen504. The answer call notification830notifies the user that the answer call operation will be carried out once the predetermined amount of time corresponding to the progress ring828has elapsed. 
The answer call operation can be canceled by changing the orientation of the electronic device to a substantially different orientation. If the electronic device continues to be held in the orientation ofFIG.8AC-8AHor a similar orientation until the predetermined amount of time has elapsed, then the electronic device500proceeds with the answer call operation, as shown inFIG.8AJ. As shown inFIG.8AI, if the electronic device500is held in the orientation ofFIG.8AC-8AHor a similar orientation until the predetermined amount of time has elapsed, the progress ring828is completed and the answer call affordance824and progress ring828are enlarged in size. The change in visual appearance of the answer call affordance824and progress ring828indicates that the electronic device500was successfully held in the orientation ofFIG.8AC-8AHor a similar orientation for the predetermined amount of time, and that the answer call operation is being initiated. In response to the electronic device500being held in the orientation ofFIG.8AC-8AHor a similar orientation for the predetermined amount of time, a call answering notification832is displayed, as shown inFIG.8AJ. The call answering notification832indicates to the user that an answer call operation has been initiated by the electronic device500. The answer call operation instructs the electronic device500or other associated device to answer the incoming telephone call. In addition, the decline call affordance826and a mute affordance834are displayed while the answer call operation is being carried out and while the telephone call is active. As shown inFIG.8AK, the orientation of the electronic device500is changed as a result of the user rotating their wrist away from their body (e.g., the electronic device500is tilted up such that the bottom of the display screen504is moved upward relative to the top of the display screen504). In response to the movement of the electronic device500to this orientation or a similar orientation, the decline call affordance826is enlarged in size and moves toward the center region of the display screen504. The change in visual appearance and location of the decline call affordance826indicates to the user that the decline call operation will be selected as a result of change in orientation of the electronic device500shown inFIG.8AK. In addition, the visual appearance of the answer call affordance824can be changed (e.g., darkened in brightness or partially faded) to indicate that the answer call operation will not be selected as a result of change in orientation of the electronic device500shown inFIG.8AK. As shown inFIGS.8AL-8AP, the electronic device500is held in the same or similar orientation as shown inFIG.8AK. As a result of the electronic device500continuing to be held in this orientation, the decline call affordance826is displayed in the center region of the display screen504and the progress ring828is displayed with the decline call affordance826. The progress ring828indicates a predetermined amount of time the electronic device500should be held in this orientation or a similar orientation in order for the decline call operation to be carried out. As shown inFIGS.8AL-8AP, the progress ring828is a graphical element that forms a circle as the predetermined amount of time elapses. The amount of time that has elapsed is indicated by the amount of the progress ring828being displayed. In addition, a decline call notification836is displayed on the display screen504.
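For purposes of illustration only, the following sketch shows one way the hold-to-confirm behavior described above might be modeled, in which an operation is carried out only if the device remains in the selecting orientation, or a similar orientation, for the predetermined amount of time indicated by the progress ring 828, and is canceled if the device moves to a substantially different orientation. The target angle, tolerance, and duration are assumptions for this example.

```swift
/// Minimal sketch of holding an orientation until a progress ring completes.
struct OrientationHold {
    let targetPitch: Double   // radians; the orientation that selects the operation
    let tolerance: Double     // how far the device may drift and still be "similar"
    let duration: Double      // predetermined amount of time for the progress ring
    var held: Double = 0
    var canceled = false

    var progress: Double { min(held / duration, 1.0) }
    var completed: Bool { !canceled && held >= duration }

    mutating func update(currentPitch: Double, dt: Double) {
        guard !canceled && !completed else { return }
        if abs(currentPitch - targetPitch) <= tolerance {
            held += dt        // still held in the same or a similar orientation
        } else {
            canceled = true   // substantially different orientation cancels the operation
        }
    }
}

// Example: rotating the wrist away selects the decline call affordance; holding
// that orientation for the full period initiates the decline call operation.
var declineHold = OrientationHold(targetPitch: 0.6, tolerance: 0.3, duration: 2.0)
declineHold.update(currentPitch: 0.55, dt: 1.0)
declineHold.update(currentPitch: 0.6, dt: 1.0)
print(declineHold.completed)   // true: the decline call operation is initiated
```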
The decline call notification836notifies the user that the decline call operation will be carried out once the predetermined amount of time corresponding to the progress ring828has elapsed. The decline call operation can be canceled by changing the orientation of the electronic device to a substantially different orientation. If the electronic device continues to be held in the orientation ofFIG.8AK-8Por a similar orientation until the predetermined amount of time has elapsed, then the electronic device500proceeds with the decline call operation, as shown inFIG.8AR. As shown inFIG.8AQ, if the electronic device500is held in the orientation ofFIG.8AK-8APor a similar orientation until the predetermined amount of time has elapsed, the progress ring828is completed and the decline call affordance826and progress ring828are enlarged in size. The change in visual appearance of the decline call affordance826and progress ring828indicates that the electronic device500was successfully held in the orientation ofFIG.8AK-8APor a similar orientation for the predetermined amount of time, and that the decline call operation is being initiated. In response to the electronic device500being held in the orientation ofFIG.8AK-8APor a similar orientation for the predetermined amount of time, a call ending notification838is displayed, as shown inFIG.8AR. The call ending notification838indicates to the user that a decline call operation has been initiated by the electronic device500. The decline call operation instructs the electronic device500or other associated device to decline the incoming telephone call. FIGS.8AS-8BIillustrate another exemplary user interface for responding to an incoming telephone call with an electronic device500. The electronic device500includes a display screen504and a tilt sensor, among other elements which can be found above and/or as discussed in reference toFIG.5A. The display screen504can be a touch-sensitive display screen, and the tilt sensor can be an accelerometer534, directional sensor540(e.g., compass), gyroscope536, motion sensor538, and/or a combination thereof. In the present example, device500is a wearable device on a user's wrist, such as a smart watch. As shown inFIG.8AS, electronic device500is worn on the user's left wrist and is being held in a position such that display screen504is directly visible to the user's eyes (while being substantially perpendicular to the ground), such as is typical for users when they are checking the time. As shown inFIG.8AS, an incoming call notification822is displayed on the display screen504when the incoming telephone call is received. In some embodiments, the incoming call notification822is displayed a predetermined time after the incoming telephone call is initially received and/or in response to a user action. For instance, in some embodiments, the incoming call notification822is displayed in response to the user lifting their arm into a raised position where the display screen504is visible to the user. In addition, an answer call affordance824is displayed in a right region of the display screen504and a decline call affordance826is displayed in a left region of the display screen504. When the incoming telephone call is initially received, the answer call affordance824or the decline call affordance826can be touched by the user to perform their respective operations with the electronic device500(e.g., answering the incoming call or declining the incoming telephone call, respectively). 
In addition, the user can change the orientation of the electronic device500to perform the operations associated with the answer call affordance824and the decline call affordance826. The changes in orientation include tilting the display screen504to the right (e.g., the left side of the display screen504is moved upward relative to the right side of the display screen504) or tilting the display screen504to the left (e.g., the right side of the display screen504is moved upward relative to the left side of the display screen504). As shown inFIG.8AT, the orientation of the electronic device500is changed as a result of the user tilting the display screen504to the right (e.g., the left side of the display screen504is moved upward relative to the right side of the display screen504). In response to the movement of the electronic device500to this orientation or a similar orientation, the answer call affordance824is enlarged in size and moves toward the center region of the display screen504. The change in visual appearance and location of the answer call affordance824indicates to the user that the answer call operation will be selected as a result of change in orientation of the electronic device500shown inFIG.8AT. In addition, the visual appearance of the decline call affordance826can be changed (e.g., darkened in brightness or partially faded) to indicate that the decline call operation will not be selected as a result of change in orientation of the electronic device500shown inFIG.8AT. As shown inFIGS.8AU-8AY, the electronic device500is held in the same or similar orientation as shown inFIG.8AT. As a result of the electronic device500continuing to be held in this orientation, the answer call affordance824is displayed in the center region of the display screen504and a progress ring828is displayed with the answer call affordance824. The progress ring828indicates a predetermined amount of time the electronic device500should be held in this orientation or a similar orientation in order for the answer call operation to be carried out. As shown inFIGS.8AU-8AY, the progress ring828is a graphical element that forms a circle as the predetermined amount of time elapses. The amount of time that has elapsed is indicated by the amount of the progress ring828being displayed. In addition, an answer call notification830is displayed on the display screen504. The answer call notification830notifies the user that the answer call operation will be carried out once the predetermined amount of time corresponding to the progress ring828has elapsed. The answer call operation can be canceled by changing the orientation of the electronic device to a substantially different orientation. If the electronic device continues to be held in the orientation ofFIG.8AT-8AYor a similar orientation until the predetermined amount of time has elapsed, then the electronic device500proceeds with the answer call operation, as shown inFIG.8BA. As shown inFIG.8AZ, if the electronic device500is held in the orientation ofFIG.8AT-8AYor a similar orientation until the predetermined amount of time has elapsed, the progress ring828is completed and the answer call affordance824and progress ring828are enlarged in size. The change in visual appearance of the answer call affordance824and progress ring828indicates that the electronic device500was successfully held in the orientation ofFIG.8AT-8AYor a similar orientation for the predetermined amount of time, and that the answer call operation is being initiated. 
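For purposes of illustration only, the following sketch shows one way the two tilt styles in this section, rotation of the display away from or toward the user in FIGS. 8AB-8AR and tilting of the display to the left or right in FIGS. 8AS-8BI, might be distinguished from tilt-sensor readings. The sign conventions and the threshold are assumptions for this example.

```swift
/// Minimal sketch of classifying which way the display was tilted.
enum TiltGesture { case up, down, left, right, none }

func classifyTilt(pitch: Double, roll: Double, threshold: Double = 0.4) -> TiltGesture {
    // Use whichever axis moved further, so a single deliberate tilt wins out.
    if abs(pitch) >= abs(roll) {
        if pitch >= threshold { return .up }     // bottom of the screen moves upward
        if pitch <= -threshold { return .down }  // top of the screen moves upward
    } else {
        if roll >= threshold { return .right }   // left side of the screen moves upward
        if roll <= -threshold { return .left }   // right side of the screen moves upward
    }
    return .none
}

// In the variant of FIGS. 8AS-8BI, a right tilt selects the answer call affordance
// and a left tilt selects the decline call affordance; the hold-to-confirm logic
// with the progress ring is otherwise unchanged.
let gesture = classifyTilt(pitch: 0.05, roll: 0.6)   // .right
```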
In response to the electronic device500being held in the orientation ofFIG.8AT-8AYor a similar orientation for the predetermined amount of time, a call answering notification832is displayed, as shown inFIG.8BA. The call answering notification832indicates to the user that an answer call operation has been initiated by the electronic device500. The answer call operation instructs the electronic device500or other associated device to answer the incoming telephone call. In addition, the decline call affordance826and a mute affordance834are displayed while the answer call operation is being carried out and while the telephone call is active. As shown inFIG.8BB, the orientation of the electronic device500is changed as a result of the user tilting the display screen504to the left (e.g., the right side of the display screen504is moved upward relative to the left side of the display screen504). In response to the movement of the electronic device500to this orientation or a similar orientation, the decline call affordance826is enlarged in size and moves toward the center region of the display screen504. The change in visual appearance and location of the decline call affordance826indicates to the user that the decline call operation will be selected as a result of change in orientation of the electronic device500shown inFIG.8BB. In addition, the visual appearance of the answer call affordance824can be changed (e.g., darkened in brightness or partially faded) to indicate that the answer call operation will not be selected as a result of change in orientation of the electronic device500shown inFIG.8BB. As shown inFIGS.8BC-8BG, the electronic device500is held in the same or similar orientation as shown inFIG.8BB. As a result of the electronic device500continuing to be held in this orientation, the decline call affordance826is displayed in the center region of the display screen504and the progress ring828is displayed with the decline call affordance826. The progress ring828indicates a predetermined amount of time the electronic device500should be held in this orientation or a similar orientation in order for the decline call operation to be carried out. As shown inFIGS.8BC-8BG, the progress ring828is a graphical element that forms a circle as the predetermined amount of time elapses. The amount of time that has elapsed is indicated by the amount of the progress ring828being displayed. In addition, a decline call notification836is displayed on the display screen504. The decline call notification836notifies the user that the decline call operation will be carried out once the predetermined amount of time corresponding to the progress ring828has elapsed. The decline call operation can be canceled by changing the orientation of the electronic device to a substantially different orientation. If the electronic device continues to be held in the orientation ofFIG.8BB-8BGor a similar orientation until the predetermined amount of time has elapsed, then the electronic device500proceeds with the decline call operation, as shown inFIG.8BI. As shown inFIG.8BH, if the electronic device500is held in the orientation ofFIG.8BB-8BGor a similar orientation until the predetermined amount of time has elapsed, the progress ring828is completed and the decline call affordance826and progress ring828are enlarged in size.
The change in visual appearance of the decline call affordance826and progress ring828indicates that the electronic device500was successfully held in the orientation ofFIG.8BB-8BGor a similar orientation for the predetermined amount of time, and that the decline call operation is being initiated. In response to the electronic device500being held in the orientation ofFIG.8BB-8BGor a similar orientation for the predetermined amount of time, a call ending notification838is displayed, as shown inFIG.8BI. The call ending notification838indicates to the user that a decline call operation has been initiated by the electronic device500. The decline call operation instructs the electronic device500or other associated device to decline the incoming telephone call. FIG.9Aillustrates an exemplary blood flow pattern associated with a positioning of a user's hand.FIGS.9B-9Hillustrate exemplary user interfaces for interacting with an electronic device based on a blood flow pattern, in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes inFIG.15. As shown inFIG.9A, blood flow950changes in intensity over time based on a positioning of a user's hand, as measured with a blood flow sensor located on the user's wrist. When the user's hand is a relaxed, open position, the blood flow is at a low intensity, as shown at952. When the user clenches their hand (e.g., makes a fist with their hand), the blood flow increases in intensity, as shown at954. As the strength of the clench increases (e.g., the user makes a tighter fist with their hand), the blood flow intensity also increases, as shown at956. These different patterns of blood flow intensity can be measured with an electronic device and used to perform operations with the electronic device. FIGS.9B-9Hillustrate exemplary user interfaces for performing operations with an electronic device500based on a positioning of a user's hand. The electronic device500includes a display screen504and a biological sensor, among other elements which can be found above and/or as discussed in reference toFIG.5A. The display screen504can be a touch-sensitive display screen, and the biological sensor can be an optical sensor positioned in the electronic device to measure blood flow indicative of a clenched hand of the user. In the present example, device500is a wearable device on a user's wrist, such as a smart watch. As shown inFIG.9B, an incoming call notification902is displayed when an incoming telephone call is received. In addition, an answer call affordance904and a decline call affordance906are displayed. In some embodiments, the incoming call notification902, answer call affordance904, and decline call affordance906are displayed a predetermined time after the incoming telephone call is initially received and/or in response to a user action. For instance, in some embodiments, the incoming call notification902, answer call affordance904, and decline call affordance906are displayed in response to the user lifting their arm into a raised position where the display screen504is visible to the user. Throughout the sequence of interactions shown inFIGS.9B-9F, the answer call affordance904or the decline call affordance906can be touched by the user to perform their respective operations with the electronic device500(e.g., answering the incoming call or declining the incoming telephone call, respectively). 
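For purposes of illustration only, the following sketch shows one way a clenched hand might be detected from the blood flow pattern of FIG. 9A, in which intensity is low for a relaxed, open hand and rises when the hand is clenched, rising further as the clench tightens. The threshold, hold time, and sample format are assumptions for this example.

```swift
/// Minimal sketch of recognizing a clenched hand from blood flow intensity.
struct BloodFlowSample {
    let time: Double       // seconds
    let intensity: Double  // arbitrary units from the optical sensor
}

/// Treats a sustained rise above the threshold as the predefined pattern that
/// indicates a clenched hand.
func isClenched(samples: [BloodFlowSample],
                threshold: Double = 0.7,
                holdTime: Double = 0.5) -> Bool {
    var runStart: Double?
    for sample in samples {
        if sample.intensity >= threshold {
            let start = runStart ?? sample.time
            runStart = start
            if sample.time - start >= holdTime { return true }
        } else {
            runStart = nil   // the hand relaxed, so the pattern starts over
        }
    }
    return false
}

// Intensity held above the threshold for half a second reads as a clench.
let clenched = isClenched(samples: [
    BloodFlowSample(time: 0.0, intensity: 0.2),
    BloodFlowSample(time: 0.3, intensity: 0.8),
    BloodFlowSample(time: 0.9, intensity: 0.9),
])   // true
```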
As shown inFIG.9B, when the incoming call is initially received, the user's hand is in a relaxed, open position. As shown inFIG.9C, the user has changed the positioning of their hand from the relaxed, open position to a clenched position (e.g., the user makes a fist with their hand). In response to the clenched positioning of the user's hand, the answer call affordance904is enlarged in size. The change in visual appearance of the answer call affordance indicates to the user that their clenched hand has been detected by the electronic device500, and the answer call operation will be carried out if the user's hand is held in the clenched position for a predetermined time. In some embodiments, the clenched position of the user's hand is determined by the electronic device500by detecting a predefined pattern (e.g., an increase in blood flow intensity as shown inFIG.9A) for a predetermined time with the biological sensor. As shown inFIGS.9D-9F, the user continues to hold their hand in a clenched position as shown inFIG.9C. As a result of the user continuing to hold their hand in the clenched position, a progress ring908is displayed with the answer call affordance904. The progress ring908indicates the predetermined amount of time the user should continue clenching their hand in order for the answer call operation to be carried out. As shown inFIGS.9D-9F, the progress ring908is a graphical element that forms a circle as the predetermined amount of time elapses. The amount of time that has elapsed is indicated by the amount of the progress ring908being displayed. If the user stops holding their hand in a clenched position, then the progress ring908will stop being displayed and the electronic device500will forgo performing the answer call operation unless another input is received from the user before the incoming call stops being received. If the user continues holding their hand in the clenched position until the predetermined amount of time has elapsed, then the electronic device500proceeds with the answer call operation, as shown inFIG.9H. As shown inFIG.9G, if the user continues holding their hand in the clenched position until the predetermined amount of time has elapsed, then the progress ring908is completed and the answer call affordance904and progress ring908are enlarged in size. The change in visual appearance of the answer call affordance904and progress ring908indicates that the user successfully held their hand in the clenched position for the predetermined amount of time, and that the answer call operation is being initiated. In response to the user holding their hand in the clenched position for the predetermined amount of time, a call answering notification910is displayed, as shown inFIG.9H. The call answering notification910indicates to the user that an answer call operation has been initiated by the electronic device500. The answer call operation instructs the electronic device500or other associated device to answer the incoming telephone call. In addition, the decline call affordance906and a mute affordance912are displayed while the answer call operation is being carried out and while the telephone call is active. FIGS.10A-10Pillustrate exemplary user interfaces for interacting with an electronic device without touching a display screen or other physical input mechanism, in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes inFIGS.16A-16B.
In particular,FIGS.10A-10Pillustrate exemplary user interfaces for performing operations with an electronic device500based on a positioning of a user's hand and an orientation of the electronic device. The electronic device500includes a display screen504, a tilt sensor, and a biological sensor, among other elements which can be found above and/or as discussed in reference toFIG.5A. The display screen504can be a touch-sensitive display screen. The biological sensor can be an optical sensor positioned in the electronic device to measure blood flow indicative of a clenched hand of the user. The tilt sensor can be an accelerometer534, directional sensor540(e.g., compass), gyroscope536, motion sensor538, and/or a combination thereof. In the present example, device500is a wearable device on a user's wrist, such as a smart watch. As shown inFIG.10A, an incoming call notification1002is displayed in a center region1008of the display screen504when an incoming telephone call is received. In addition, an answer call affordance1004and a decline call affordance1006are displayed. In some embodiments, the incoming call notification1002, answer call affordance1004, and decline call affordance1006are displayed a predetermined time after the incoming telephone call is initially received and/or in response to a user action. For instance, in some embodiments, the incoming call notification1002, answer call affordance1004, and decline call affordance1006are displayed in response to the user lifting their arm into a raised position where the display screen504is visible to the user. Throughout the sequence of interactions shown inFIGS.10A-10D and10G-10H, the answer call affordance1004or the decline call affordance1006can be touched by the user to perform their respective operations with the electronic device500(e.g., answering the incoming call or declining the incoming telephone call, respectively). As shown inFIG.10A, when the incoming call is initially received, the user's hand is in a relaxed, open position. As shown inFIG.10B, the user has changed the positioning of their hand from the relaxed, open position to a clenched position (e.g., the user makes a fist with their hand). In response to the clenched position of the user's hand, a clenching indicator1010is displayed. In some embodiments, the clenched position of the user's hand is determined by the electronic device500by detecting a predefined pattern (e.g., an increase in blood flow intensity as shown inFIG.9A) for a predetermined time with the biological sensor. The clenching indicator1010is semi-transparent and is displayed overlapping the other elements on the display screen504. In some embodiments, the clenching indicator1010is an animated graphic that enlarges in size based on the strength of the clench of the user's hand (e.g., the user makes a tighter fist with their hand and the blood flow intensity increases as shown inFIG.9A). The clenching indicator1010indicates to the user that their clenched hand has been detected by the electronic device500, and that additional input can be provided to the electronic device500. As shown inFIG.10B, the electronic device500is worn on the user's left wrist and is being held in a position such that display screen504is directly visible to the user's eyes (while being substantially perpendicular to the ground), such as is typical for users when they are checking the time.
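For purposes of illustration only, the following sketch shows one way the clenching indicator 1010 might be enlarged based on the strength of the clench, as described above, where a tighter fist raises the measured blood flow intensity. The calibration values and scale range are assumptions for this example.

```swift
/// Minimal sketch of mapping clench strength onto the indicator's display scale.
func clenchIndicatorScale(intensity: Double,
                          restingIntensity: Double = 0.2,
                          maximumIntensity: Double = 1.0,
                          minScale: Double = 1.0,
                          maxScale: Double = 1.8) -> Double {
    // Normalize the measured intensity between the resting and maximum levels,
    // then map that clench strength onto the indicator's on-screen scale.
    let normalized = (intensity - restingIntensity) / (maximumIntensity - restingIntensity)
    let clamped = min(max(normalized, 0), 1)
    return minScale + clamped * (maxScale - minScale)
}

let relaxedScale = clenchIndicatorScale(intensity: 0.2)   // 1.0: indicator at its base size
let tightScale = clenchIndicatorScale(intensity: 0.9)     // 1.7: indicator enlarged for a tight clench
```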
While the user's hand is clenched as shown inFIG.10B, the user can change the orientation of the electronic device500to perform the operations associated with the answer call affordance1004or the decline call affordance1006. The changes in orientation include rotating the display screen504away from the user's body (e.g., the user rotates their wrist to move the bottom of the display screen504upward relative to the top of the display screen504) or rotating the display screen504toward the user's body (e.g., the user rotates their wrist to move the top of the display screen504upward relative to the bottom of the display screen504). In some embodiments, the operations are performed when the movement of the electronic device500meets a minimum velocity or acceleration threshold. If the minimum velocity or acceleration threshold is not met, then the operations are not performed. As shown inFIG.10C, after the user clenches their hand, the orientation of the electronic device500is changed into a downward orientation as a result of the user rotating their wrist toward their body (e.g., the electronic device500is tilted down such that the top of the display screen504is moved upward relative to the bottom of the display screen504). In response to the movement of the electronic device500to this orientation or a similar orientation, the answer call affordance1004is enlarged in size and moves toward the center region of the display screen504, as shown inFIG.10D) The change in visual appearance and location of the answer call affordance1004indicates to the user that the answer call operation will be selected as a result of this change in orientation of the electronic device500while the user's hand is in the clenched position. In addition, the visual appearance of the decline call affordance1006can be changed (e.g., darkened in brightness or partially faded) to indicate that the decline call operation will not be selected as a result of this change in orientation of the electronic device500while the user's hand is in the clenched position. As shown inFIG.10D, the electronic device500is held in the same or similar downward orientation as shown inFIG.10C. Alternatively, in some embodiments, the orientation of the electronic device500is further changed into a further downward orientation as a result of the user further rotating their wrist toward their body (e.g., the electronic device500is tilted down further such that the top of the display screen504is further moved upward relative to the bottom of the display screen504). As a result of the electronic device500being held in this downward orientation, the answer call affordance1004is displayed in the center region of the display screen504. In addition, an answer call notification1012is displayed on the display screen504. The answer call notification1012notifies the user that the answer call operation will be carried out in response the user clenching their hand and moving the electronic device500into the downward orientation. The answer call operation can be canceled if the user changes the orientation of the electronic device500to a substantially different orientation or if the user stops holding their hand in the clenched position. If the user continues holding their hand in the clenched position and holding the electronic device500in the downward orientation, then the electronic device500proceeds with the answer call operation, as shown inFIG.10F. 
As shown inFIG.10E, the user releases their hand from the clenched position (e.g., the user moves their hand to a relaxed, open position) while continuing to hold the electronic device500in the downward orientation. In response to the user releasing their hand from the clenched position, the answer call affordance1004is further enlarged in size. The change in appearance of the answer call affordance1004indicates that as a result of the user releasing their hand from the clenched position while holding the electronic device500in the downward orientation, the answer call operation is being initiated. Alternatively, in some embodiments, the answer call operation is initiated if the user holds their hand in the clenched position and holds the electronic device500in the downward orientation for a predetermined amount of time. In response to the user releasing their hand from the clenched position while holding the electronic device500in the downward orientation, a call answering notification1014is displayed, as shown inFIG.10F. Alternatively, in some embodiments, the call answering notification1014is displayed in response to the user holding their hand in the clenched position and holding the electronic device500in the downward orientation for a predetermined amount of time. The call answering notification1014indicates to the user that an answer call operation has been initiated by the electronic device500. The answer call operation instructs the electronic device500or other associated device to answer the incoming telephone call. As shown inFIG.10G, after the user clenches their hand, the orientation of the electronic device500is changed into a upward orientation as a result of the user rotating their wrist away from their body (e.g., the electronic device500is tilted up such that the bottom of the display screen504is moved upward relative to the top of the display screen504). In response to the movement of the electronic device500to this orientation or a similar orientation the decline call affordance1006is enlarged in size and moves toward the center region of the display screen504, as shown inFIG.10H. The change in visual appearance and location of the decline call affordance1006indicates to the user that the decline call operation will be selected as a result of this change in orientation of the electronic device500while the user's hand is in the clenched position. In addition, the visual appearance of the answer call affordance1004can be changed (e.g., darkened in brightness or partially faded) to indicate that the answer call operation will not be selected as a result of this change in orientation of the electronic device500while the user's hand is in the clenched position. As shown inFIG.10H, the electronic device500is held in the same or similar upward orientation as shown inFIG.10G. Alternatively, in some embodiments, the orientation of the electronic device500is further changed into a further upward orientation as a result of the user further rotating their wrist away from their body (e.g., the electronic device500is tilted up further such that the bottom of the display screen504is further moved upward relative to the top of the display screen504). As a result of the electronic device500being held in this upward orientation, the decline call affordance1006is displayed in the center region of the display screen504. In addition, a decline call notification1016is displayed on the display screen504. 
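For purposes of illustration only, the following sketch shows one way the clench-and-tilt interaction described above might be modeled as a small state machine, following the release-to-confirm behavior of FIGS. 10E-10F and, by symmetry, the decline selection of FIGS. 10G-10H. The names and thresholds are assumptions for this example.

```swift
/// Minimal sketch: a clench arms the tilt input, tilting down while clenched
/// selects the answer call operation, tilting up selects the decline call
/// operation, returning toward a level orientation while clenched cancels the
/// selection, and releasing the clench while a selection is active carries it out.
enum CallAction { case answer, decline }

struct ClenchTiltController {
    var selected: CallAction? = nil
    var performed: CallAction? = nil

    mutating func update(isClenched: Bool, pitch: Double, threshold: Double = 0.4) {
        guard performed == nil else { return }   // an operation was already initiated
        if isClenched {
            if pitch <= -threshold {
                selected = .answer       // downward orientation (wrist rotated toward the body)
            } else if pitch >= threshold {
                selected = .decline      // upward orientation (wrist rotated away from the body)
            } else if abs(pitch) < threshold / 2 {
                selected = nil           // a substantially different orientation cancels
            }
        } else if let action = selected {
            performed = action           // releasing the clench while held confirms
        }
    }
}

var controller = ClenchTiltController()
controller.update(isClenched: true, pitch: 0.0)    // clench detected; nothing selected yet
controller.update(isClenched: true, pitch: -0.6)   // tilt down: answer call selected
controller.update(isClenched: false, pitch: -0.6)  // release: answer call operation initiated
print(controller.performed == .answer)             // true
```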
The decline call notification1016notifies the user that the decline call operation will be carried out in response to the user clenching their hand and moving the electronic device500into the upward orientation. The decline call operation can be canceled if the user changes the orientation of the electronic device500to a substantially different orientation or if the user stops holding their hand in the clenched position. If the user continues holding their hand in the clenched position and holding the electronic device500in the upward orientation, then the electronic device500proceeds with the decline call operation, as shown inFIG.10J. As shown inFIG.10I, the user releases their hand from the clenched position (e.g., the user moves their hand to a relaxed, open position) while continuing to hold the electronic device500in the upward orientation. In response to the user releasing their hand from the clenched position, the decline call affordance1006is further enlarged in size. The change in appearance of the decline call affordance1006indicates that as a result of the user releasing their hand from the clenched position while holding the electronic device500in the upward orientation, the decline call operation is being initiated. Alternatively, in some embodiments, the decline call operation is initiated if the user holds their hand in the clenched position and holds the electronic device500in the upward orientation for a predetermined amount of time. In response to the user releasing their hand from the clenched position while holding the electronic device500in the upward orientation, a call ending notification1018is displayed, as shown inFIG.10J. Alternatively, in some embodiments, the call ending notification1018is displayed in response to the user holding their hand in the clenched position and holding the electronic device500in the upward orientation for a predetermined amount of time. The call ending notification1018indicates to the user that a decline call operation has been initiated by the electronic device500. The decline call operation instructs the electronic device500or other associated device to decline the incoming telephone call. As shown inFIG.10K, a portion of an electronic document1020is displayed on the display screen504. The electronic document1020can be a portion of any type of graphical content, such as a website, email, book, photograph, or message. As shown inFIG.10K, when the portion of the electronic document1020is initially displayed, the user's hand is in a relaxed, open position. As shown inFIG.10L, the orientation of the electronic device500is changed as a result of the user rotating their wrist away from their body (e.g., the electronic device500is tilted up such that the bottom of the display screen504is moved upward relative to the top of the display screen504). Because the user's hand remains in the relaxed, open position, the portion of the electronic document1020being displayed does not change in response to this change in orientation of the electronic device500. As shown inFIG.10M, the user has changed the positioning of their hand from the relaxed, open position to a clenched position (e.g., the user makes a fist with their hand). In response to the clenched position of the user's hand, a clenching indicator1010is displayed.
In some embodiments, the clenched position of the user's hand is determined by the electronic device500by detecting a predefined pattern (e.g., an increase in blood flow intensity as shown inFIG.9A) for a predetermined time with the biological sensor. The clenching indicator1010is semi-transparent and is displayed overlapping the other elements on the display screen504. In some embodiments, the clenching indicator1010is an animated graphic that enlarges in size based on the strength of the clench of the user's hand (e.g., the user makes a tighter fist with their hand and the blood flow intensity increases as shown inFIG.9A). The clenching indicator1010indicates to the user that their clenched hand has been detected by the electronic device500, and that additional input can be provided to the electronic device500. While the user's hand is clenched, the user can change the orientation of the electronic device500to scroll the portion of the electronic document1020being displayed. The changes in orientation include rotating the display screen504away from the user's body to scroll upward (e.g., the user rotates their wrist to move the bottom of the display screen504upward relative to the top of the display screen504) or rotating the display screen504toward the user's body to scroll downward (e.g., the user rotates their wrist to move the top of the display screen504upward relative to the bottom of the display screen504). As shown inFIG.10N, after the user clenches their hand, the orientation of the electronic device500is changed into a downward orientation as a result of the user rotating their wrist toward their body (e.g., the electronic device500is tilted down such that the top of the display screen504is moved upward relative to the bottom of the display screen504). In response to the movement of the electronic device500to this orientation or a similar orientation, the electronic document1020is scrolled downward so that a different portion of the electronic document1020is displayed on the display screen504, as shown inFIG.10N. The clenching indicator1010continues to be displayed while the user's hand is clenched. Similarly, the electronic document1020can be scrolled upward by the user clenching their hand and rotating their wrist away from their body to change the orientation of the electronic device500into an upward orientation (e.g., the electronic device500is tilted up such that the bottom of the display screen504is moved upward relative to the top of the display screen504). As shown inFIG.10O, the orientation of the electronic device500is further changed into a further downward orientation as a result of the user further rotating their wrist toward their body (e.g., the electronic device500is tilted down further such that the top of the display screen504is further moved upward relative to the bottom of the display screen504). As a result of the electronic device500being held in this downward orientation, the electronic document1020is scrolled further downward so that a different portion of the electronic document1020is displayed on the display screen504, as shown inFIG.10O. The clenching indicator1010continues to be displayed while the user's hand is clenched. In some embodiments, if the user continues holding their hand in the clenched position and holding the electronic device500in the downward orientation, then the electronic device500continues to scroll the electronic document downward. 
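The clench detection described above (a predefined blood-flow pattern held for a predetermined time, with a clenching indicator that grows with the strength of the clench) can be illustrated with a minimal sketch. This is not the device's actual implementation; the sample type, the intensity threshold, the baseline, and the hold duration below are illustrative assumptions rather than values taken from the disclosure.

```swift
import Foundation

// Hypothetical blood-flow reading from the optical (biological) sensor.
struct BloodFlowSample {
    let timestamp: TimeInterval   // seconds
    let intensity: Double         // arbitrary units from the optical sensor
}

struct ClenchDetector {
    // Illustrative thresholds; real values would be calibrated per user and device.
    let baselineIntensity: Double = 1.0
    let clenchThreshold: Double = 1.4          // intensity indicative of a clenched hand
    let requiredHoldDuration: TimeInterval = 0.5

    /// Returns true when the samples in the trailing window stay above the clench
    /// threshold for at least the required hold duration (the "predefined pattern
    /// for a predetermined time").
    func isClenched(_ samples: [BloodFlowSample]) -> Bool {
        guard let last = samples.last else { return false }
        let windowStart = last.timestamp - requiredHoldDuration
        let window = samples.filter { $0.timestamp >= windowStart }
        guard let first = window.first,
              last.timestamp - first.timestamp >= requiredHoldDuration else { return false }
        return window.allSatisfy { $0.intensity >= clenchThreshold }
    }

    /// Scale factor for the clenching indicator, growing with clench strength.
    func indicatorScale(for sample: BloodFlowSample) -> Double {
        let strength = max(0, sample.intensity - baselineIntensity)
        return 1.0 + min(strength, 1.0)   // clamp so the graphic stays on screen
    }
}

// Example: a steady, strong clench sampled every 0.1 s for 0.6 s is detected.
let samples = (0...6).map { BloodFlowSample(timestamp: Double($0) * 0.1, intensity: 1.6) }
let detector = ClenchDetector()
print(detector.isClenched(samples))                  // true
print(detector.indicatorScale(for: samples.last!))   // ~1.6
```

In this sketch the tilt-driven scrolling would simply be gated on `isClenched` returning true, matching the behavior in which the document scrolls only while the hand remains clenched.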
Similarly, in some embodiments, if the user continues holding their hand in the clenched position and holding the electronic device500in the upward orientation (e.g., by rotating their wrist away from their body), then the electronic device500continues to scroll the electronic document upward. As shown inFIG.10P, the user releases their hand from the clenched position and holds their hand in a relaxed, open position. In response to the user moving their hand to the relaxed, open position, the clenching indicator1010is removed from the display. In addition, the electronic document1020stops being scrolled regardless of the orientation of the electronic device500. FIGS.11A-11Dillustrate exemplary user interfaces for interacting with an electronic device without touching a display screen or other physical input mechanism, in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes inFIGS.17A-17B. In particular,FIGS.11A-11Dillustrate exemplary user interfaces for performing operations with an electronic device500based on an orientation of the electronic device. The electronic device500includes a display screen504, a tilt sensor, and a biological sensor, among other elements which can be found above and/or as discussed in reference toFIG.5A. The display screen504can be a touch-sensitive display screen. The biological sensor can be an optical sensor positioned in the electronic device to measure blood flow indicative of a clenched hand of the user. The tilt sensor can be an accelerometer534, directional sensor540(e.g., compass), gyroscope536, motion sensor538, and/or a combination thereof. In the present example, device500is a wearable device on a user's wrist, such as a smart watch. As shown inFIG.11A, the electronic device500is initially in an orientation where the display screen504is not visible to the user and/or the display screen504is in a passive mode. For instance, the user's arm can be to the side of the user's body. Alternatively, in some embodiments, the display screen504of the electronic device500is visible to the user but in the passive mode (e.g., the display screen504is off or displaying passive information, such as a time). As shown inFIG.11B, an incoming call notification1102, answer call affordance1104, and decline call affordance1106are displayed on the display screen504(similar toFIGS.6A,7A,8AS, and9B). When an incoming telephone call is being received, the incoming call notification1102, answer call affordance1104, and decline call affordance1106are displayed in response to the user changing the orientation of the electronic device500such that the display screen504is visible to the user. In some embodiments, the change in orientation is the result of the user lifting their arm and/or rotating their wrist. For example, as shown inFIG.11B, the electronic device500is worn on the user's left wrist and is being held in a position such that the display screen504is directly visible to the user's eyes (while being substantially perpendicular to the ground), such as is typical for users when they are checking the time. Throughout the sequence of interactions described in reference toFIGS.11B-11C, the answer call affordance1104or the decline call affordance1106can be touched by the user to perform their respective operations with the electronic device500(e.g., answering the incoming call or declining the incoming telephone call, respectively).
When the incoming call notification1102, answer call affordance1104, and decline call affordance1106are displayed in response to the user changing the orientation of the electronic device500, the electronic device500enters an active mode where the user can provide additional input to perform an operation (e.g., answering the incoming call or declining the incoming telephone call). In some embodiments, the additional input includes changing the positioning of the user's hand to a clenched position, as described in reference toFIG.9A-9H. Alternatively or in addition, in some embodiments, the additional input includes further changing the orientation of the electronic device500(e.g., tilting the electronic device500to the left or right), as described in reference toFIG.8AS-8BI. Alternatively or in addition, in some embodiments, prior to receiving the additional input from the user, the electronic device500displays additional movement indicators (e.g., the incoming call track608ofFIG.6Bor the movement indicators708a-708dofFIGS.7B-7E). After the additional movement indicators are displayed, additional input is received from the user, which includes further changing the orientation of the electronic device500(e.g., tilting the electronic device500), as described in reference toFIG.6C-6I or7G-7Q. In some embodiments, if the electronic device500does not first enter the active mode before receiving additional input from the user, then the electronic device500forgoes performing the operations when the additional input is received. For instance, if the user does not first change the orientation of the electronic device500such that the display screen504is visible to the user, then the user clenching their hand will not perform an answer call operation. As shown inFIG.11C, the answer call affordance1104is displayed in a lower region of the display screen504and the decline call affordance1106is displayed in an upper region of the display screen504(similar toFIG.8AB). The incoming call notification1102is displayed in a center region1108of the display screen504. Similar to as described in reference toFIG.11B, when an incoming telephone call is being received, the incoming call notification1102, answer call affordance1104, and decline call affordance1106are displayed in response to the user changing the orientation of the electronic device500such that the display screen504is visible to the user, which also results in the electronic device500entering an active mode where the user can provide additional input to perform an operation (e.g., answering the incoming call or declining the incoming telephone call). In some embodiments, the additional input includes further changing the orientation of the electronic device500(e.g., rotating the display screen504away from or toward the user), as described in reference toFIGS.8AB-8AR. In some embodiments, if the electronic device500does not first enter the active mode before receiving additional input from the user, then the electronic device500forgoes performing the operations when the additional input is received. For instance, if the user does not first change the orientation of the electronic device500such that the display screen504is visible to the user, then the user rotating the display screen504will not perform an answer call or decline call operation. As shown inFIG.11D, a reply affordance1114is displayed in an upper region of the display screen504and a dismiss affordance1116is displayed in a lower region of the display screen (similar toFIG.8B).
In addition, an instructional graphic1110can be displayed in the center region1108of the display screen. Similar to as described in reference toFIG.11B, after an instant message is received, the reply affordance1114and dismiss affordance1116are displayed in response to the user changing the orientation of the electronic device500such that the display screen504is visible to the user, which also results in the electronic device500entering an active mode where the user can provide additional input to perform an operation (e.g., replying to the instant message or dismissing the instant message). In some embodiments, the additional input includes further changing the orientation of the electronic device500(e.g., rotating the display screen504away from or toward the user), as described in reference toFIGS.8B-8E and8W-8Z. In some embodiments, if the electronic device500does not first enter the active mode before receiving additional input from the user, then the electronic device500forgoes performing the operations when the additional input is received. For instance, if the user does not first change the orientation of the electronic device500such that the display screen504is visible to the user, then the user rotating the display screen504will not perform a reply or dismiss operation. In some embodiments, if the electronic device500enters the active mode and no additional input is received from the user, then the display screen504is activated and displays a time or a default “home” interface. FIGS.12A-12Bare flow diagrams illustrating a method1200for performing one or more operations with an electronic device, in accordance with some embodiments. Method1200can be performed at a device (e.g.,100,300,500) with a display screen and a tilt sensor. In some examples, the tilt sensor includes an accelerometer, directional sensor (e.g., compass), gyroscope, motion sensor, and/or a combination thereof. Some operations in method1200are, optionally, combined, the order of some operations are, optionally, changed, and some operations are, optionally, omitted. As described below, method1200provides an intuitive way for interacting with the device. In some cases, the device performs an operation in response to the user's hand, arm, and/or wrist movement. Performing an operation in response to the user's hand, arm, and/or wrist movement enhances the operability of the device by enabling the user to interact with the device without touching the display screen or other physical input mechanisms. This also allows operations to be performed more quickly and efficiently with the device. As shown in method1200, in some embodiments, the device (1202) displays a first graphical element at a first location on the display screen (e.g., graphical object614ofFIG.6B). The device also (1204) displays a second graphical element at a second location on the display screen (e.g., answer call affordance604ofFIG.6B). The second graphical element is associated with a first operation (e.g., answering an incoming telephone call). In some examples, the device displays a graphical indication of a path for simulated movement of the first graphical element (e.g., displays incoming call track608ofFIG.6B). As shown in method1200, in some embodiments, the device (1206) receives a tilt sensor input associated with movement of the electronic device (e.g., the orientation of the electronic device500is changed as a result of the user moving their arm/wrist/hand as shown inFIGS.6C-6G).
The device (1220) can optionally, while receiving the tilt sensor input, display the first graphical element at locations on the display screen based on the tilt sensor input (e.g., graphical object614is displayed at intermediate locations along the incoming call track608as shown inFIGS.6C-6F). As shown in method1200, in some embodiments, in accordance with a determination that the tilt sensor input satisfies a first predefined tilt sensor condition (e.g., the tilt sensor input results in the graphical object614moving to the answer call affordance604as shown inFIG.6G), the device (1208) displays the first graphical element proximate to the second location on the display screen (e.g., graphical object614is displayed at the end of the right track segment610as shown inFIG.6G). Furthermore, in accordance with the determination that the tilt sensor input satisfies the first predefined tilt sensor condition, the device also (1210) performs the first operation associated with the second graphical element (e.g., answers an incoming telephone call as shown inFIGS.6I-6K). Performing the first operation in response to the determination that the tilt sensor input satisfies the first predefined tilt sensor condition allows the first operation to be performed with fewer physical inputs from the user (e.g. finger touches on the display screen). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some examples, the first predefined tilt sensor condition includes simulated physical movement of a virtual object (having virtual mass) corresponding to the first graphical element (e.g., the acceleration and velocity of the graphical object614as it moves along the incoming call track608is representative of how a physical ball would roll along a physical track being held in the same orientation as the electronic device500as shown inFIGS.6C-6F). In some examples, the simulated physical movement of the virtual object is based at least in part on a tilt angle of the device over a period of time. In some examples, the predefined tilt sensor condition is satisfied when the simulated physical movement of the virtual object results in the virtual object being moved proximate to the second location on the display screen (e.g., graphical object614is moved to the end of the right track segment610as shown inFIG.6G). In some examples, the device includes a haptic feedback mechanism, and, further in accordance with the determination that the tilt sensor input satisfies the first predefined tilt sensor condition, the device provides a haptic feedback via the haptic feedback mechanism. In some examples, the first operation includes answering an incoming telephone call or declining an incoming telephone call. In some examples, the device (1214) optionally displays a third graphical element at a fourth location on the display screen (e.g., decline call affordance606ofFIG.6B). The third graphical element is associated with a second operation (e.g., declining an incoming telephone call). 
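The simulated physical movement described above for method1200 can be summarized as a one-dimensional "ball on a track" whose acceleration follows the device's tilt angle, with the predefined tilt sensor condition satisfied when the virtual object reaches the end of a track segment. The following sketch is illustrative only; the integration step, the gain, the damping, and the track length are assumptions, not values from the disclosure.

```swift
import Foundation

/// One-dimensional virtual object driven by the device tilt angle, standing in
/// for the simulated physical movement used by the predefined tilt sensor
/// conditions of method 1200.
struct TrackSimulation {
    // Illustrative constants; the disclosure does not specify numeric values.
    let trackHalfLength: Double = 1.0     // +1 = answer end of the track, -1 = decline end
    let gain: Double = 9.8                // acceleration per unit of tilt (like gravity on a ramp)
    let damping: Double = 0.5

    var position: Double = 0.0            // starts at the first (center) location
    var velocity: Double = 0.0

    enum Outcome: Equatable { case pending, reachedAnswerEnd, reachedDeclineEnd }

    /// Advance the simulation by `dt` seconds given the current tilt angle
    /// (radians; positive tilts toward the answer affordance).
    mutating func step(tiltAngle: Double, dt: Double) -> Outcome {
        let acceleration = gain * sin(tiltAngle) - damping * velocity
        velocity += acceleration * dt
        position += velocity * dt
        if position >= trackHalfLength { return .reachedAnswerEnd }
        if position <= -trackHalfLength { return .reachedDeclineEnd }
        return .pending
    }
}

// Example: a steady tilt toward the answer end eventually rolls the virtual
// object to that end, so the first predefined tilt sensor condition is met.
var sim = TrackSimulation()
var outcome = TrackSimulation.Outcome.pending
var steps = 0
while outcome == .pending && steps < 200 {
    outcome = sim.step(tiltAngle: 0.3, dt: 0.02)
    steps += 1
}
print(outcome)   // reachedAnswerEnd
```

A small tilt in the opposite direction would drive the same simulation toward the decline end, and a tilt back toward neutral lets damping return the object toward its initial location, which is the "fails to satisfy" branch of the method.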
In some examples, the second graphical element is an affordance associated with the first operation (e.g., answer call affordance604ofFIG.6B) and the third graphical element is an affordance associated with the second operation (e.g., decline call affordance606ofFIG.6B). In accordance with a determination that the tilt sensor input satisfies a second predefined tilt sensor condition (e.g., the tilt sensor input results in the ball rolling to the decline call affordance), the device (1216) displays the first graphical element proximate to the fourth location on the display screen (e.g., graphical object614is displayed at the end of the left track segment612ofFIG.6G) and (1218) performs the second operation associated with the third graphical element (e.g., declines the incoming telephone call). As shown in method1200, in some embodiments, in accordance with a determination that the tilt sensor input fails to satisfy the first or second predefined tilt sensor conditions (e.g., the tilt sensor input does not result in the graphical object614moving to the answer call affordance604or the decline call affordance606as shown inFIG.6G), the device (1212) displays the first graphical element at a third location on the display screen based on the tilt sensor input (e.g., graphical object614is displayed at its initial location on the incoming call track608as shown inFIG.6Bor at intermediate locations along the incoming call track608as shown inFIGS.6C-6F). In some examples, the third location is the same as the first location (e.g., graphical object614is displayed at its initial location on the incoming call track608as shown inFIG.6B). In some examples, the predefined tilt sensor condition is not satisfied when the simulated physical movement of the virtual object does not result in the virtual object being moved proximate to the second location on the display screen (e.g., graphical object614is moved to its initial location on the incoming call track608as shown inFIG.6Bor moved to intermediate locations along the incoming call track608as shown inFIGS.6C-6F). Note that details of the processes described above with respect to method1200(e.g.,FIGS.12A-12B) are also applicable in an analogous manner to other methods described herein. For example, method1700optionally includes one or more of the characteristics of the various methods described above with reference to method1200. For example, the mode change criteria of method1700can be satisfied prior to receiving the tilt sensor input of method1200(e.g., the display screen is held in view of a user for a predetermined time as a precondition to receiving the tilt sensor input). For brevity, these details are not repeated below. FIGS.13A-13Bare flow diagrams illustrating a method1300for performing one or more operations with an electronic device, in accordance with some embodiments. Method1300is performed at a device (e.g.,100,300,500) with a display screen and a tilt sensor. In some examples, the tilt sensor includes an accelerometer, directional sensor (e.g., compass), gyroscope, motion sensor, and/or a combination thereof. Some operations in method1300are, optionally, combined, the order of some operations are, optionally, changed, and some operations are, optionally, omitted. As described below, method1300provides an intuitive way for interacting with the device. In some cases, the device performs an operation in response to the user's hand, arm, and/or wrist movement.
Performing an operation in response to the user's hand, arm, and/or wrist movement enhances the operability of the device by enabling the user to interact with the device without touching the display screen or other physical input mechanisms. This also allows operations to be performed more quickly and efficiently with the device. As shown in method1300, in some embodiments, the device (1302) displays a first plurality of graphical elements (e.g., movement indicators708a-708dofFIGS.7B-7E) indicating a predefined sequence of movements associated with an operation (e.g., answering an incoming telephone call). The first plurality of graphical elements include a first graphical element indicating a first movement (e.g., a “high” musical note as shown inFIG.7B) and a second graphical element indicating a second movement (e.g., a “low” musical note as shown inFIG.7C). The first movement includes rotation of the device in a first direction around a central axis from a neutral position to a first position and back toward the neutral position within a first predetermined time (e.g., the user makes a “flicking” motion with the device, where the display screen is quickly rotated away from the user and then is immediately rotated back toward the user). The second movement comprises a rotation of the electronic device in a second direction opposite the first direction around the central axis from the neutral position to a second position and back toward the neutral position within a second predetermined time (e.g., the user makes a “flicking” motion with the device, where the display screen is quickly rotated toward the user and then is immediately rotated back away from the user). In some examples, the central axis corresponds to an axis of rotation of a user's wrist. In some examples, the first or second movement includes rotation of the device at a velocity greater than a predetermined minimum velocity. In some examples, the first or second movement includes rotation of the electronic device with an acceleration greater than a predetermined minimum acceleration. As shown in method1300, in some embodiments, the device (1304) receives a plurality of tilt sensor inputs associated with movements of the electronic device (e.g., the orientation of the device is changed as shown inFIGS.7G-7N). In some examples, while receiving the plurality of tilt sensor inputs, the device (1310) optionally displays a second plurality of graphical elements indicating movements of the electronic device (e.g., movement indicators708a-708dofFIGS.7G-7N). In some examples, while receiving the plurality of tilt sensor inputs, the device (1312) optionally displays an indicator that indicates the direction of rotation of the electronic device (e.g., input indicator710ofFIGS.7G-7N). As shown in method1300, in some embodiments, in accordance with a determination that the plurality of tilt sensor inputs corresponds to the predefined sequence of movements indicated by the first plurality of graphical elements (e.g., the tilt sensor input corresponds to the movement indicators708a-708dshown inFIGS.7B-7E), the device (1306) performs the operation associated with the predefined sequence of movements. In some examples, the operation includes answering an incoming telephone call or declining the incoming telephone call. 
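The matching step of method1300 (comparing a plurality of tilt sensor inputs against the predefined sequence of movements) can be sketched as below. The `Flick` representation of a detected rotation-and-return movement, and the minimum velocity and acceleration values, are illustrative assumptions; the disclosure only requires that each movement exceed a predetermined minimum.

```swift
/// Direction of a "flick": a quick rotation away from or toward the user
/// and back toward the neutral position.
enum FlickDirection { case awayFromUser, towardUser }

/// A detected flick derived from the tilt sensor, with its peak angular
/// velocity and acceleration (illustrative units: rad/s and rad/s^2).
struct Flick {
    let direction: FlickDirection
    let peakVelocity: Double
    let peakAcceleration: Double
}

struct SequenceMatcher {
    // Assumed thresholds standing in for the "predetermined minimum" values.
    let minimumVelocity: Double = 2.0
    let minimumAcceleration: Double = 10.0

    /// Returns true when the detected flicks match the predefined sequence
    /// (e.g., the "high"/"low" note pattern of FIGS. 7B-7E) in order and
    /// each flick is fast enough to count as a movement.
    func matches(_ detected: [Flick], against predefined: [FlickDirection]) -> Bool {
        guard detected.count == predefined.count else { return false }
        for (flick, expected) in zip(detected, predefined) {
            guard flick.direction == expected,
                  flick.peakVelocity >= minimumVelocity,
                  flick.peakAcceleration >= minimumAcceleration else { return false }
        }
        return true
    }
}

// Example: a four-movement pattern is matched, so the associated operation
// (e.g., answering the call) would be performed; otherwise it is forgone.
let pattern: [FlickDirection] = [.awayFromUser, .towardUser, .awayFromUser, .towardUser]
let input = pattern.map { Flick(direction: $0, peakVelocity: 3.1, peakAcceleration: 14.0) }
print(SequenceMatcher().matches(input, against: pattern))   // true
```

Any flick below the assumed velocity or acceleration floor makes the sequence fail, which corresponds to the forgoing branches (1314) and (1316) of the method.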
Performing the operation in response to the determination that the plurality of tilt sensor inputs corresponds to the predefined sequence of movements indicated by the first plurality of graphical elements allows the first operation to be performed with fewer physical inputs from the user (e.g. finger touches on the display screen). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. As shown in method1300, in some embodiments, in accordance with a determination that the plurality of tilt sensor inputs does not correspond to the predefined sequence of movements indicated by the first plurality of graphical elements (e.g., the tilt sensor input does not correspond to the movement indicators708a-708dshown inFIGS.7B-7E), the device (1308) forgoes performing the operation associated with the predefined sequence of movements. In some examples, in accordance with a determination that at least one of the plurality of tilt sensor inputs is not greater than a predetermined minimum velocity, the device (1314) optionally forgoes performing the operation associated with the predefined sequence of movements. In some examples, in accordance with a determination that at least one of the plurality of tilt sensor inputs is not greater than a predetermined minimum acceleration, the device (1316) optionally forgoes performing the operation associated with the predefined sequence of movements. Note that details of the processes described above with respect to method1300(e.g.,FIGS.13A-13B) are also applicable in an analogous manner to other methods described herein. For example, method1700optionally includes one or more of the characteristics of the various methods described above with reference to method1300. For example, the mode change criteria of method1700can be satisfied prior to receiving the tilt sensor inputs of method1300(e.g., the display screen is held in view of a user for a predetermined time as a precondition to receiving the tilt sensor inputs). For brevity, these details are not repeated below. FIG.14is a flow diagram illustrating a method1400for performing one or more operations with an electronic device, in accordance with some embodiments. Method1400is performed at a device (e.g.,100,300,500) with a display screen and a tilt sensor. In some examples, the tilt sensor includes an accelerometer, directional sensor (e.g., compass), gyroscope, motion sensor, and/or a combination thereof. Some operations in method1400are, optionally, combined, the order of some operations are, optionally, changed, and some operations are, optionally, omitted. As described below, method1400provides an intuitive way for interacting with the device. In some cases, the device performs an operation in response to the user's hand, arm, and/or wrist movement. Performing an operation in response to the user's hand, arm, and/or wrist movement enhances the operability of the device by enabling the user to interact with the device without touching the display screen or other physical input mechanisms. This also allows operations to be performed more quickly and efficiently with the device. 
As shown in method1400, in some embodiments, the device (1402) displays a first item (e.g., reply affordance804ofFIG.8B, a predefined response814a-814eofFIG.8E, or decline call affordance826ofFIG.8AB or8AS) at a first position on the display screen and a second item (e.g., dismiss affordance806ofFIG.8B, a predefined response814a-814eofFIG.8F, or answer call affordance824ofFIG.8AB or8AS) at a second position on the display screen. The first position and second position correspond to positions along a line substantially perpendicular to an axis of rotation of the electronic device (e.g., items are positioned vertically or horizontally on the display screen so that rotation of the device is toward one item or the other, such as shown inFIG.8B,8F,8AB, or8AS). In some examples, the axis of rotation of the device corresponds to an axis of rotation of a user's wrist. In some examples, the first position is in an upper half of the display screen and the second position is in a lower half of the display screen. In some examples, the first or second item is a reply command for a received text message, a dismiss command for a received text message, a predefined response to a received text message, an answer command for a telephone call, or a decline command for a telephone call. As shown in method1400, in some embodiments, the device (1404) receives a tilt sensor input associated with movement of the electronic device (e.g., the orientation of the device is changed as shown inFIG.8C-8E,8G,8J,8M,8X-8Y,8AC-8AI,8AK-8AQ,8AT-8AZ, or8BB-8BH). In some examples, the device (1412) optionally displays an indicator that indicates the direction of rotation of the electronic device (e.g., input indicator812ofFIG.8C-8D,8G,8J,8M, or8X-8Y). In some examples, the tilt sensor input corresponds to rotation of the device in the first or second direction at a velocity greater than a predetermined minimum velocity. In some examples, the tilt sensor input corresponds to rotation of the device in the first or second direction with an acceleration greater than a predetermined minimum acceleration. As shown in method1400, in some embodiments, in accordance with a determination that the tilt sensor input corresponds to a rotation of the electronic device in a first direction around the axis of rotation from a neutral position to a first position (e.g., the orientation of the device is changed as shown inFIG.8C-8E,8J,8M,8AK-8AQ, or8BB-8BH), the device (1406) moves the first item from the first position on the display screen to a third position along the line substantially perpendicular to the axis of rotation (e.g., reply affordance804, a predefined response814a-814e, or decline call affordance826is moved toward a center region of the display screen, as shown inFIG.8C-8E,8J,8M,8AK-8AQ, or8BB-8BH). In some examples, the tilt sensor input corresponds to a rotation of the electronic device in the first direction around the axis of rotation from the neutral position to the first position and from the first position back toward the neutral position within a first predetermined time (e.g., the user makes a “flicking” motion with the device, where the display screen is quickly rotated away from the user and then is immediately rotated back toward the user). 
As shown in method1400, in some embodiments, in accordance with a determination that the tilt sensor input corresponds to a rotation of the electronic device in a second direction opposite the first direction around the axis of rotation from the neutral position to a second position (e.g., the orientation of the device is changed as shown inFIG.8G,8X-8Y,8AC-8AI, or8AT-8AZ), the device (1408) moves the second item from the second position on the display screen to a fourth position along the line substantially perpendicular to the axis of rotation (e.g., dismiss affordance806, a predefined response814a-814e, or answer call affordance824is moved toward a center region of the display, as shown inFIG.8G,8X-8Y,8AC-8AI, or8AT-8AZ). In some examples, the tilt sensor input corresponds to a rotation of the electronic device in the second direction opposite the first direction around the axis of rotation from the neutral position to the second position and from the second position back toward the neutral position within a second predetermined time (e.g., the user makes a “flicking” motion with the device, where the display screen is quickly rotated toward the user and then is immediately rotated back away from the user). Moving the first or second items to different positions on the display screen in response to the determination that the tilt sensor input corresponds to a rotation of the device allows the device to be operated with fewer physical inputs from the user (e.g., finger touches on the display screen). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some examples, the third and fourth positions are in a center region of the display screen. In some examples, the device includes a haptic feedback mechanism (e.g., vibration mechanism), and, in accordance with the determination that the tilt sensor input corresponds to rotation of the device in the first or second direction, the device provides a haptic feedback, via the haptic feedback mechanism. In some examples, in accordance with a determination that the tilt sensor input is not greater than a predetermined minimum velocity, the device forgoes moving the first or second item. In some examples, in accordance with a determination that the tilt sensor input is not greater than a predetermined minimum acceleration, the device forgoes moving the first or second item. In some examples, after moving the first or second item, and in accordance with a determination that no additional tilt sensor input above a threshold value is received within a predetermined time, the device (1410) optionally selects the moved item (e.g., the reply affordance804is selected as shown inFIG.8E, the predefined response814dis selected as shown inFIG.8U, the dismiss affordance806is selected as shown inFIG.8Z, the answer call affordance824is selected as shown inFIG.8AI or8AZ, or the decline call affordance is selected as shown inFIG.8AQ or8BH). In some examples, the device displays a countdown indicator corresponding to the predetermined time (e.g., progress ring816ofFIG.8P-8Uor progress ring828ofFIG.8AD-8AI,8AL-8AQ,8AU-8AZ, or8BC-8BH).
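The move-then-dwell selection of method1400 can be sketched as follows: an above-threshold rotation moves the item associated with that direction toward the center region and restarts a countdown, and the moved item is selected if no further above-threshold input arrives before the countdown elapses. The tilt threshold and dwell time below are illustrative assumptions, as are the `Item` and `RotationDirection` names.

```swift
import Foundation

enum Item { case reply, dismiss }
enum RotationDirection { case first, second }   // e.g., away from / toward the user

/// Sketch of the method 1400 selection flow.
struct TiltSelector {
    // Illustrative values; the disclosure leaves the numeric thresholds unspecified.
    let tiltThreshold: Double = 0.2          // rotation magnitude that counts as input
    let dwellTime: TimeInterval = 2.0        // countdown shown by the progress ring

    private(set) var movedItem: Item? = nil
    private(set) var movedAt: TimeInterval? = nil

    mutating func handleRotation(_ direction: RotationDirection,
                                 magnitude: Double,
                                 at time: TimeInterval) {
        guard magnitude >= tiltThreshold else { return }   // ignore small wobbles
        movedItem = (direction == .first) ? .reply : .dismiss
        movedAt = time                                     // restart the countdown
    }

    /// Returns the selected item once the dwell time has elapsed with no
    /// further above-threshold rotation, or nil while the countdown runs.
    func selection(at time: TimeInterval) -> Item? {
        guard let item = movedItem, let start = movedAt,
              time - start >= dwellTime else { return nil }
        return item
    }
}

// Example: one rotation in the first direction moves the reply item toward
// the center; after the dwell time with no further input it is selected.
var selector = TiltSelector()
selector.handleRotation(.first, magnitude: 0.35, at: 0.0)
print(selector.selection(at: 1.0) as Any)   // prints nil: countdown still running
print(selector.selection(at: 2.1) as Any)   // prints the reply item: selected after the dwell
```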
In some examples, the device includes a haptic feedback mechanism (e.g., vibration mechanism), and, in accordance with the determination that no additional tilt sensor input above the threshold value is received within the predetermined time, the device provides a haptic feedback, via the haptic feedback mechanism. Note that details of the processes described above with respect to method1400(e.g.,FIG.14) are also applicable in an analogous manner to other methods described herein. For example, method1700optionally includes one or more of the characteristics of the various methods described above with reference to method1400. For example, the mode change criteria of method1700can be satisfied prior to receiving the tilt sensor input of method1400(e.g., the display screen is held in view of a user for a predetermined time as a precondition to receiving the tilt sensor input). For brevity, these details are not repeated below. FIG.15is a flow diagram illustrating a method1500for performing an operation with an electronic device, in accordance with some embodiments. Method1500is performed at a device (e.g.,100,300,500) with a display screen and a biological sensor. In some examples, the biological sensor is an optical sensor positioned in the device to measure blood flow indicative of a clenched hand of the user. Some operations in method1500are, optionally, combined, the order of some operations are, optionally, changed, and some operations are, optionally, omitted. As described below, method1500provides an intuitive way for interacting with the device. In some cases, the device performs an operation in response to a positioning of a user's hand. Performing an operation in response to the positioning of the user's hand enhances the operability of the device by enabling the user to interact with the device without touching the display screen or other physical input mechanisms. This also allows operations to be performed more quickly and efficiently with the device. As shown in method1500, in some embodiments, the device (1502) displays an affordance on the display screen (e.g., answer call affordance904ofFIG.9B). As shown in method1500, in some embodiments, the device (1504) receives biological sensor input associated with a positioning of a user's hand (e.g., the user changes the positioning of their hand to a clenched position as shown inFIG.9C). As shown in method1500, in some embodiments, in accordance with a determination that the biological sensor input corresponds to a predefined pattern for a predetermined time, the predefined pattern being associated with the positioning of the user's hand (e.g., the user continues to hold their hand in the clenched position for a predetermined amount of time as shown inFIGS.9D-9G), the device (1506) displays an indication that the biological sensor input corresponds to the predefined pattern for the predetermined time (e.g., the answer call affordance904is enlarged in size and a progress ring908is displayed, as shown inFIGS.9D-9G). In some examples, the device (1512) optionally modifies a visual appearance of the affordance (e.g., the answer call affordance904is enlarged in size as shown inFIGS.9D-9G). In some examples, the device (1514) optionally displays a graphical element indicating the predetermined time (e.g., progress ring908ofFIGS.9D-9G). 
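The hold-for-a-predetermined-time confirmation of method1500, together with the progress indication, reduces to a small timing check. The required duration below is an assumed value chosen only for illustration.

```swift
import Foundation

/// Sketch of method 1500: the operation associated with the affordance is
/// performed only if the biological sensor input matches the clench pattern
/// for a full predetermined time; otherwise the device forgoes it.
struct HoldToConfirm {
    let requiredDuration: TimeInterval = 1.5   // assumed value for illustration

    /// Fraction of the hold completed, suitable for driving a progress ring.
    func progress(heldFor elapsed: TimeInterval) -> Double {
        return min(max(elapsed / requiredDuration, 0), 1)
    }

    /// Decide whether to perform the operation given how long the clench
    /// pattern was continuously matched.
    func shouldPerformOperation(patternMatchedFor elapsed: TimeInterval) -> Bool {
        return elapsed >= requiredDuration
    }
}

let confirm = HoldToConfirm()
print(confirm.progress(heldFor: 0.75))                        // 0.5
print(confirm.shouldPerformOperation(patternMatchedFor: 0.9)) // false: the operation is forgone
print(confirm.shouldPerformOperation(patternMatchedFor: 1.6)) // true: e.g., the call is answered
```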
In some examples, the device includes a haptic feedback mechanism (e.g., vibration mechanism), and, further in accordance with the determination that the biological sensor input corresponds to the predefined pattern, the device provides a haptic feedback, via the haptic feedback mechanism (e.g., the device vibrates when the user clenches their hand). The device then (1508) performs an operation associated with the affordance. In some examples, the operation includes answering a telephone call. Performing the operation in response to the determination that the biological sensor input corresponds to the predefined pattern for the predetermined time allows the operation to be performed with fewer physical inputs from the user (e.g., finger touches on the display screen). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. As shown in method1500, in some embodiments, in accordance with a determination that the sensor input does not correspond to the predefined pattern for the predetermined time (e.g., the user stops holding their hand in a clenched position), the device (1510) forgoes performing the operation associated with the affordance (e.g., the incoming telephone call is not answered). Note that details of the processes described above with respect to method1500(e.g.,FIG.15) are also applicable in an analogous manner to other methods described herein. For example, method1700optionally includes one or more of the characteristics of the various methods described above with reference to method1500. For example, the mode change criteria of method1700can be satisfied prior to receiving the biological sensor input of method1500(e.g., the display screen is held in view of a user for a predetermined time as a precondition to receiving the biological sensor input). For brevity, these details are not repeated below. FIGS.16A-16Bare flow diagrams illustrating a method1600for performing an operation with an electronic device, in accordance with some embodiments. Method1600is performed at a device (e.g.,100,300,500) with a display screen, a biological sensor, and a tilt sensor. In some examples, the biological sensor is an optical sensor positioned in the device to measure blood flow indicative of a clenched hand of the user. In some examples, the tilt sensor includes an accelerometer, directional sensor (e.g., compass), gyroscope, motion sensor, and/or a combination thereof. Some operations in method1600are, optionally, combined, the order of some operations are, optionally, changed, and some operations are, optionally, omitted. As described below, method1600provides an intuitive way for interacting with the device. In some cases, the device performs an operation in response to a positioning of a user's hand and an orientation of the device. Performing an operation in response to the positioning of the user's hand and the orientation of the device enhances the operability of the device by enabling the user to interact with the device without touching the display screen or other physical input mechanisms. This also allows operations to be performed more quickly and efficiently with the device.
As shown in method1600, in some embodiments, the device (1602) displays a user interface on the display screen (e.g., an incoming telephone call interface as shown inFIG.10Aor an electronic document1020as shown inFIG.10K). The user interface is responsive to at least a first operation and a second operation associated with movement of the device (e.g., an answer call operation, a decline call operation, a scroll up operation, or a scroll down operation). In some examples, the user interface includes a first graphical element associated with the first operation and a second graphical element associated with the second operation. In some examples, the first graphical element is a first affordance associated with an answer call operation (e.g., answer call affordance1004ofFIG.10A), and the second graphical element is a second affordance associated with a decline call operation (e.g., decline call affordance1006ofFIG.10A). In some examples, the user interface includes a portion of an electronic document (e.g., electronic document1020ofFIG.10K). As shown in method1600, in some embodiments, the device (1604) receives biological sensor input associated with positioning of a user's hand (e.g., the user changes the positioning of their hand to a clenched position as shown inFIGS.10B and10M). As shown in method1600, in some embodiments, in accordance with a determination that the biological sensor input corresponds to a predefined pattern for a predetermined time, the predefined pattern being associated with the positioning of the user's hand (e.g., the user holds their hand in the clenched position as shown inFIG.10B-10D,10G-10H, or10M-10O), the device (1606) displays an indication in the user interface that the sensor input corresponds to the predefined pattern (e.g., clenching indicator1010ofFIG.10B-10C,10G, or10M-10O). In some examples, the device includes a haptic feedback mechanism (e.g., vibration mechanism), and, further in accordance with the determination that the biological sensor input corresponds to the predefined pattern, providing a haptic feedback, via the haptic feedback mechanism. As shown in method1600, in some embodiments, while the biological sensor input corresponds to the predefined pattern (e.g., while the user's hand is in the clenched position as shown inFIG.10B-10D,10G-10H, or10M-10O), the device (1608) receives a tilt sensor input associated with movement of the electronic device (e.g., the orientation of the device is changed as shown inFIG.10C-10E,10G-10I, or10N-10O). As shown in method1600, in some embodiments, in accordance with a determination that the tilt sensor input corresponds to movement of a first type (e.g., the orientation of the device is changed as shown inFIG.10C-10E or10N-10O), the device (1610) performs the first operation (e.g., answers the incoming telephone call as shown inFIG.10For scrolls the electronic document1020in a downward direction as shown inFIGS.10N-10O). In some examples, in accordance with the determination that the tilt sensor input corresponds to movement of the first type, the device (1614) optionally modifies a visual appearance of the first graphical element (e.g., enlarges the size of the answer call affordance1004as shown inFIGS.10D-10E). In some examples, the first operation includes scrolling the electronic document in a first direction (e.g., electronic document1020ofFIG.10Kis scrolled downward). 
In some examples, the movement of the first type includes a rotation of the device in a first direction around a central axis from a neutral position to a first position (e.g., the orientation of the device is changed as shown inFIG.10C-10E or10N-10O). In some examples, the movement of the first type further comprises rotation from the first position back toward the neutral position within a first predetermined time (e.g., the user makes a “flicking” motion with the device, where the display screen is quickly rotated toward the user and then is immediately rotated back away from the user). In some examples, after receiving the tilt sensor input and prior to performing the first operation, the device (1624) determines the biological sensor input ceases to correspond to the predefined pattern (e.g., the user releases their hand from the clenched position), and in response, the device (1610) performs the first operation (e.g., releasing the clenched hand initiates the answer call operation as shown inFIG.10E). As shown in method1600, in some embodiments, in accordance with a determination that the tilt sensor input corresponds to movement of a second type (e.g., the orientation of the device is changed as shown inFIGS.10G-10I), the device (1612) performs the second operation (e.g., declines the incoming telephone call as shown inFIG.10Jor scrolls the electronic document1020in an upward direction). In some examples, in accordance with the determination that the tilt sensor input corresponds to movement of the second type, the device (1616) optionally modifies a visual appearance of the second graphical element (e.g., enlarges the size of the decline call affordance1006as shown inFIGS.10G-10I). In some examples, the second operation includes scrolling the electronic document in a second direction (e.g., electronic document1020ofFIG.10Kis scrolled upward). In some examples, the movement of the second type includes a rotation of the electronic device in a second direction around a central axis from a neutral position to a second position (e.g., the orientation of the device is changed as shown inFIGS.10G-10I). In some examples, the movement of the second type further includes rotation from the second position back toward the neutral position within a second predetermined time (e.g., the user makes a “flicking” motion with the device, where the display screen is quickly rotated away from the user and then is immediately rotated back toward the user). In some examples, after receiving the tilt sensor input and prior to performing the second operation, the device (1626) determines the biological sensor input ceases to correspond to the predefined pattern (e.g., the user releases their hand from the clenched position), and in response, the device (1612) performs the second operation (e.g., releasing the clenched hand initiates the decline call operation as shown inFIG.10I). Performing the first or second operations based on tilt sensor input and biological sensor input allows the device to be operated with fewer physical inputs from the user (e.g., finger touches on the display screen).
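The combination of biological and tilt input in method1600 can be summarized as: while the clench pattern is matched, a movement of the first or second type selects the corresponding operation, and the operation is committed when the pattern ceases (the user releases the clench); without a matching clench, tilt input is ignored. The following is a minimal sketch under those assumptions; the operation names, the `TiltType` abstraction, and the minimum velocity are illustrative, not taken from the disclosure.

```swift
import Foundation

enum Operation { case answerCall, declineCall }
enum TiltType { case first, second }   // e.g., rotate toward / away from the body

/// Sketch of method 1600's gating of tilt input on the biological sensor.
struct ClenchTiltController {
    let minimumVelocity: Double = 1.0   // assumed "predetermined minimum velocity"

    private(set) var candidate: Operation? = nil

    mutating func handleTilt(_ type: TiltType, velocity: Double, whileClenched: Bool) {
        guard whileClenched, velocity >= minimumVelocity else { return }
        candidate = (type == .first) ? .answerCall : .declineCall
    }

    /// Called when the biological sensor input ceases to correspond to the
    /// clench pattern; returns the operation to perform, if one was selected.
    mutating func handleClenchReleased() -> Operation? {
        defer { candidate = nil }
        return candidate
    }
}

// Example: clench, tilt with the first type of movement, then release.
var controller = ClenchTiltController()
controller.handleTilt(.first, velocity: 1.8, whileClenched: true)
print(controller.handleClenchReleased() as Any)   // the answer call operation is committed

// A tilt without a matching clench does nothing, so releasing commits no operation.
controller.handleTilt(.second, velocity: 1.8, whileClenched: false)
print(controller.handleClenchReleased() as Any)   // nil: both operations are forgone
```

For the document-scrolling variant, the same structure applies with scroll-up and scroll-down in place of the call operations and with scrolling repeated while the clench and tilt are maintained.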
Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. In some examples, the central axis corresponds to an axis of rotation of a user's wrist. In some examples, the movement of the first or second type corresponds to movement of the electronic device at a velocity greater than a predetermined minimum velocity. In some examples, in accordance with a determination that the tilt sensor input is not greater than a predetermined minimum velocity, the device forgoes performing the first or second operations. In some examples, the movement of the first or second type corresponds to movement of the electronic device with an acceleration greater than a predetermined minimum acceleration. In some examples, in accordance with a determination that the tilt sensor input is not greater than a predetermined minimum acceleration, the device forgoes performing the first or second operations. In some examples, the device (1618) optionally determines that the biological sensor input does not correspond to the predefined pattern for the predetermined time (e.g., the user releases their hand from the clenched position). In some examples, while the biological sensor input does not correspond to the predefined pattern (e.g., while the user's hand is not clenched), the device (1620) receives the tilt sensor input associated with movement of the electronic device and (1622) optionally forgoes performing the first or second operations. Note that details of the processes described above with respect to method1600(e.g.,FIGS.16A-16B) are also applicable in an analogous manner to other methods described herein. For example, method1700optionally includes one or more of the characteristics of the various methods described above with reference to method1600. For example, the mode change criteria of method1700can be satisfied prior to receiving the biological sensor input of method1600(e.g., the display screen is held in view of a user for a predetermined time as a precondition to receiving the biological sensor input). For brevity, these details are not repeated below. FIGS.17A-17Bare flow diagrams illustrating a method1700for performing an operation with an electronic device, in accordance with some embodiments. Method1700is performed at a device (e.g.,100,300,500) with a display screen and a sensor. In some examples, the sensor includes a tilt sensor. In some examples, the tilt sensor is an accelerometer, directional sensor (e.g., compass), gyroscope, motion sensor, and/or a combination thereof. In some examples, the sensor includes a biological sensor. In some examples, the biological sensor is an optical sensor positioned in the device to measure blood flow indicative of a clenched hand of the user. Some operations in method1700are, optionally, combined, the order of some operations are, optionally, changed, and some operations are, optionally, omitted. As described below, method1700provides an intuitive way for interacting with the device. In some cases, the device performs an operation in response to an orientation of the device. 
Performing an operation in response to the orientation of the device enhances the operability of the device by enabling the user to interact with the device without touching the display screen or other physical input mechanisms. This also allows operations to be performed more quickly and efficiently with the device. As shown in method1700, in some embodiments, the device (1702) receives, via the sensor, a sensor input (e.g., the orientation of the device is changed as a result of the user lifting their arm and/or rotating their wrist, as described in reference toFIGS.11A-11B). As shown in method1700, in some embodiments, in response to receiving the sensor input, the device (1704) determines whether the device satisfies a mode change criteria, the mode change criteria including an orientation criterion satisfied based on an orientation of the electronic device (e.g., the orientation of the device is changed such that the display screen is visible to the user, as described in reference toFIGS.11A-11B). In some examples, the mode change criteria further includes a time criterion that is satisfied based on maintaining the orientation of the device for a predetermined time (e.g., device is held for a predetermined time in an orientation where the display screen is visible to the user). In some examples, the mode change criteria further include, prior to satisfying the orientation criterion, a movement criterion that is satisfied when the sensor input corresponds to a predetermined pattern indicative of a particular movement of the device (e.g., upward movement of the user's arm). In some examples, the orientation of the device that satisfies the orientation criterion corresponds to a raised position of the device. As shown in method1700, in some embodiments, in accordance with a determination that the mode change criteria is satisfied, the device (1706) transitions the device to a first mode (e.g., active mode as described in reference toFIGS.11A-11B). The device (1708) also modifies the user interface to indicate that the device is in the first mode (e.g., displays the answer call affordance1104and decline call affordance1106ofFIG.11B or11C, or displays the reply affordance1114and dismiss affordance1116ofFIG.11D). In some examples, the user interface includes a first graphical element associated with a first operation (e.g., answer call affordance1104ofFIG.11B or11Cor dismiss affordance1116ofFIG.11D), and a second graphical element associated with a second operation (e.g., decline call affordance1106ofFIG.11B or11Cor reply affordance1114ofFIG.11D). In some examples, the device includes a haptic feedback mechanism (e.g., vibration mechanism), and, further in accordance with the determination that the mode change criteria is satisfied, the device provides a haptic feedback, via the haptic feedback mechanism. As shown in method1700, in some embodiments, in accordance with a determination that the mode change criteria is not satisfied, the device (1710) forgoes transitioning the device to the first mode (e.g., the orientation of the device is not changed such that the display screen is visible to the user, as described in reference toFIGS.11B-11D).
As shown in method1700, in some embodiments, subsequent to receiving the sensor input, the device (1712) receives a user input (e.g., user input changing the positioning of the user's hand to a clenched position, as described in reference toFIG.9A-9H, or user input further changing the orientation of the device, as described in reference toFIG.6C-6I,7G-7Q,8B-8F,8W-8Z, or8AB-8BI). In some examples, the user input includes receiving a tilt sensor input associated with movement of the electronic device. In some examples, the user input is detected via the sensor. As shown in method1700, in some embodiments, in response to the user input and in accordance with a determination that the device satisfies a first operation criteria (e.g., user input results in graphical object614being displayed at the end of the right track segment610as shown inFIG.6G, user input corresponds to the movement indicators708a-708dshown inFIGS.7B-7E, user input further changes the orientation of the device as described in references to FIG.8C-8E,8G,8J,8M,8X-8Y,8AC-8AI,8AK-8AQ,8AT-8AZ, or8BB-8BH, and/or user input changes the positioning of the user's hand to a clenched position as shown inFIG.9C), the first operation criteria including a mode criterion that is satisfied when the device is in the first mode, the device (1714) performs a first operation (e.g., answers an incoming telephone call, declines the incoming call, displays a reply interface for a received instant message, or dismisses the received instant message). In some examples, the first operation criteria further includes a first tilt criterion that is satisfied when the device is rotated in a first direction around a central axis from a neutral position to a first position (e.g., the display screen is rotated away from or toward the user). In some examples, the first tilt criterion further includes a rotation criterion that is satisfied when the device is rotated from the first position back toward the neutral position within a predetermined time (e.g., the user makes a “flicking” motion with the device, where the display screen is quickly rotated in a first direction and then is immediately rotated back in the opposite direction). In some examples, the central axis corresponds to an axis of rotation of a user's wrist. In some examples, the first operation criteria further includes a hand position criterion that is satisfied when the user input corresponds to a predefined pattern for a predetermined time, the predefined pattern being associated with positioning of the user's hand (e.g., the user holds their hand in a clenched position for a predetermined amount of time such as shown inFIGS.9C-9G). In some examples, in accordance with the determination that the device satisfies the first operation criteria, the device (1720) optionally modifies a visual appearance of the first graphical element (e.g., the first graphical element is enlarged in size, the first graphical element moves toward a center region of the display screen, and/or a progress ring is displayed). In some examples, the first graphical element is a first affordance associated with an answer call operation (e.g., answer call affordance1104ofFIG.11B or11C). 
In some examples, in response to the user input and in accordance with a determination that the device satisfies a second operation criteria, the second operation criteria including a criterion that is satisfied when the device is in the first mode, the device (1718) optionally performs a second operation (e.g., answers an incoming telephone call, declines the incoming call, displays a reply interface for a received instant message, or dismisses the received instant message). The second operation criteria is different than the first operation criteria and the second operation is different than the first operation. In some examples, the second operation criteria includes a second tilt criterion that is satisfied when the electronic device is rotated in a second direction around a central axis from a neutral position to a second position (e.g., the display screen is rotated away from or toward the user). In some examples, the second tilt criterion further includes rotation from the second position back toward the neutral position within a predetermined time (e.g., the user makes a “flicking” motion with the device, where the display screen is quickly rotated in a first direction and then is immediately rotated back in the opposite direction). In some examples, the central axis corresponds to an axis of rotation of a user's wrist. In some examples, in accordance with the determination that the device satisfies the second operation criteria, the device (1722) optionally modifies a visual appearance of the second graphical element (e.g., the second graphical element is enlarged in size, the second graphical element moves toward a center region of the display screen, and/or a progress ring is displayed). In some examples, the second graphical element is a second affordance associated with a decline call operation (e.g., decline call affordance1106ofFIG.11B or11C). Performing the first or second operations in response to receiving a user input after an orientation criterion is satisfied allows the device to be operated with fewer physical inputs from the user (e.g., finger touches on the display screen). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. As shown in method1700, in some embodiments, in accordance with a determination that the device does not satisfy the first operation criteria (e.g., user input does not result in graphical object614being displayed at the end of the right track segment610as shown inFIG.6G, user input does not correspond to the movement indicators708a-708dshown inFIGS.7B-7E, user input does not change the orientation of the device as described in references toFIG.8C-8E,8G,8J,8M,8X-8Y,8AC-8AI,8AK-8AQ,8AT-8AZ, or8BB-8BH, and/or user input does not change the positioning of the user's hand to a clenched position as shown inFIG.9C), the device (1716) forgoes performing the first operation.
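Taken together, the first and second operation criteria describe a symmetric dispatch that is active only in the first mode: a flick in one direction maps to one operation (for example answering a call), a flick in the other direction maps to the other (for example declining it), and anything else forgoes the first operation. A compact, purely illustrative sketch of that branching follows; the direction encoding and return strings are assumptions, not part of the disclosure.

    def dispatch_operation(flick_direction: int, in_first_mode: bool) -> str:
        """flick_direction: +1 = flick in the first direction, -1 = second direction, 0 = neither."""
        if not in_first_mode:
            return "forgo"                 # mode criterion of the operation criteria is not met
        if flick_direction == +1:
            return "first_operation"       # e.g., answer the incoming telephone call (1714)
        if flick_direction == -1:
            return "second_operation"      # e.g., decline the incoming telephone call (1718)
        return "forgo"                     # forgo performing the first operation (1716)

    print(dispatch_operation(+1, True))    # -> first_operation
    print(dispatch_operation(0, True))     # -> forgo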
In some examples, in response to the user input and in accordance with a determination that the device does not satisfy the first operation criteria, performing a third operation, wherein the third operation is different than the first operation and the second operation (e.g., the device displays a time or a default “home” interface). Note that details of the processes described above with respect to method1700(e.g.,FIG.17A-17B) are also applicable in an analogous manner to other methods described herein. For example, method1700optionally includes one or more of the characteristics of the various methods described above with reference to methods1200,1300,1400,1500, or1600. For example, the mode change criteria of method1700can be satisfied prior to receiving the biological sensor input of method1500(e.g., the display screen is held in view of a user for a predetermined time as a precondition to receiving the biological sensor input). For brevity, these details are not repeated above. The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the techniques and their practical applications. Others skilled in the art are thereby enabled to best utilize the techniques and various embodiments with various modifications as are suited to the particular use contemplated. Although the disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the disclosure and examples as defined by the claims. | 261,491 |
11861078 | DETAILED DESCRIPTION Described below with reference to the accompanying drawings are embodiments of an electronic pen central rod according to the present disclosure along with embodiments of an electronic pen according to this disclosure. The electronic pen embodying the invention and explained hereunder constitutes an exemplary electronic pen that supports position pointing based on coupling with a position detection device via electromagnetic induction. Configuration Example of the Electronic Pen1 FIGS.1A to1Care views each depicting a configuration example of an electronic pen used in conjunction with an electronic pen central rod. An electronic pen1of this embodiment houses an electronic pen main body part3inside a hollow part2aof a cylindrical housing2. The electronic pen1has a knock cam mechanical part4with a knock mechanism by which the pen tip side of the electronic pen main body part3is extended and retracted through an opening2bat one end of the housing2in the longitudinal direction. In this embodiment, the electronic pen main body part3has a configuration of a cartridge type, which allows for its attachment to and detachment from the housing2. The electronic pen main body part3includes an electronic pen central rod (simply referred to as the central rod hereunder)7embodying the present disclosure. The central rod7can be attached to and detached from the electronic pen main body part3. In the example ofFIGS.1A and1B, the housing2of the electronic pen1is formed with a transparent synthetic resin, so that the inside of the housing2is visible. The electronic pen1of this embodiment is configured to be interchangeable with a commercially available knock type ballpoint pen. The housing2and the knock cam mechanical part4disposed therein have substantially the same configurations and dimensions as those of their counterparts of a well-known commercially available knock type ballpoint pen. As depicted inFIGS.1A and1B, the knock cam mechanical part4has a well-known configuration that includes a cam body41, a knocking rod42, and a rotator43in combination. In the state ofFIG.1A, pressing an end part42aof the knocking rod42causes the knock cam mechanical part4to lock the electronic pen main body part3into the state ofFIG.1Binside the housing2. In this state, the pen tip side of the electronic pen main body part3projects from the opening2bof the housing2. In the state ofFIG.1B, again pressing the end part42aof the knocking rod42causes the knock cam mechanical part4to unlock the electronic pen main body part3. A return spring5brings the electronic pen main body part3back to its position in the housing2in the state ofFIG.1A. The detailed configurations and operations of the knock cam mechanical part4are well known and will not be discussed further. Configuration Example of the Electronic Pen Main Body Part3 FIG.1Cis a view depicting a configuration example of the electronic pen main body part3.FIGS.2A and2Bare partially enlarged views explaining how the pen tip side of the electronic pen main body part3is configured. In the electronic pen main body part3of this embodiment, as depicted inFIG.1C, a magnetic material core, which is a ferrite core32in this example and about which a coil31is wound, is coupled with a cylindrical part33. 
The central rod7is inserted into a through-hole (not depicted inFIG.1C) of the ferrite core32and attached detachably to a writing pressure detection part6(not depicted inFIGS.1A to1C; seeFIGS.2A and2B) disposed in the cylindrical part33, thereby forming a portion of the electronic pen main body part3. As depicted inFIGS.1C and2B, the central rod7has the end part of the pen tip side projected from the ferrite core32. As depicted inFIG.2A, the ferrite core32of this example is made of, for example, a cylindrical ferrite material which has a through-hole32aformed in the axial direction thereof, the through-hole32ahaving a predetermined diameter r1(e.g., r1=1 mm) and allowing the central rod7to be inserted. The ferrite core32has a tapered part32bformed on the pen tip side, the tapered part32bgradually being tapered toward the pen tip side. The tapered part32bis configured to provide stronger magnetic coupling with a sensor of the position detection device than if there is no tapered part32b. The central rod7is made up of a front-end member71, a connection member72, and a back-end member73, as will be discussed later in detail. In this embodiment, as depicted inFIG.2A, the winding position of the coil31over the ferrite core32is located disproportionately toward the opposite side of the pen tip side and covers approximately half the total length of the ferrite core32. A portion ranging from the end part of the ferrite core32on the pen tip side to one end of the coil wound part constitutes a coil unwound part with no coil wound thereon. Near the portion of the cylindrical part33that couples with the ferrite core32, the writing pressure detection part6is provided. In this example, the writing pressure detection part6is configured by use of a semiconductor element that varies in capacitance according to writing pressure, as disclosed in JP 2013-161307 A. Alternatively, the writing pressure detection part6can also be configured with a variable capacitance capacitor whose capacitance varies according to writing pressure by use of a writing pressure detecting section with a known mechanical configuration disclosed in JP 2011-186803 A. The writing pressure detection part6is configured to be pressed by a pressure transmission member36fitted with the back-end member73of the central rod7. The pressure transmission member36has a fitting recessed part36a. Fitting the back-end member73of the central rod7into the fitting recessed part36aattaches the central rod7to the pressure transmission member36. The pressure transmission member36is disposed in a manner not to be detached or dislodged from the inside of the electronic pen main body part3, but the pressure transmission member36may be pushed in and returned slidably by a predetermined distance in the longitudinal direction of the pressure transmission member36according to the writing pressure applied to the central rod7. In this manner, the central rod7is detachably attached to the electronic pen main body part3via the pressure transmission member36, and the writing pressure applied to the central rod7can be transmitted to the writing pressure detection part6. The cylindrical part33further houses a printed-circuit board34. The printed-circuit board34carries a capacitor35connected in parallel with the coil31to form a resonance circuit. The variable capacitance capacitor constituted by the writing pressure detection part6is connected in parallel with the capacitor35on the printed-circuit board34to form part of the above-mentioned resonance circuit.
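The parallel resonance circuit just described (the coil in parallel with the fixed capacitor and the pressure-dependent variable capacitance) has a resonant frequency f = 1/(2π√(L·C)), so an increase in the writing-pressure capacitance lowers the frequency of the signal returned to the sensor. The short Python sketch below only illustrates that standard relationship; the inductance and capacitance values are arbitrary assumptions and are not taken from the disclosure.

    import math

    def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
        # Standard parallel LC resonance: f = 1 / (2 * pi * sqrt(L * C))
        return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

    L_COIL = 4.7e-3       # assumed inductance of the coil, in henries
    C_FIXED = 220e-12     # assumed capacitance of the fixed capacitor, in farads

    for c_pressure in (0.0, 10e-12, 30e-12):   # assumed writing-pressure-dependent capacitance
        f = resonant_frequency_hz(L_COIL, C_FIXED + c_pressure)
        print(f"pressure capacitance {c_pressure * 1e12:.0f} pF -> resonance {f / 1e3:.1f} kHz")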
The electronic pen1of this embodiment is connected by electromagnetic induction with a loop coil of a position detection sensor of the position detection device by means of the resonance circuit. The electronic pen1exchanges signals interactively with the loop coil. The position detection device detects the position pointed to by the electronic pen1by detecting the position of the signal received on the position detection sensor from the electronic pen1. Also, the position detection device detects the writing pressure applied to the electronic pen1by detecting changes in frequency or phase of the signal received from the electronic pen1. As depicted inFIG.2B, the coil unwound part of the ferrite core32on the opposite side of the pen tip side is fitted into a recessed part33aof the cylindrical part33. The ferrite core32accordingly is coupled with the cylindrical part33. Although not illustrated, when the ferrite core32is coupled with the cylindrical part33, two ends31aand31bof the coil31are electrically connected in parallel with the capacitor35disposed on the printed-circuit board34in the cylindrical part33. First Configuration Example of the Electronic Pen Central Rod7 FIGS.3A to3Fare views each explaining a configuration example of the central rod7attached to the electronic pen main body part3. As depicted in the external view ofFIG.3A, the central rod7is configured with the front-end member71, connection member72, and the back-end member73. The front-end member71and the back-end member73are each made of resin, synthetic rubber, or natural rubber, for example. In this embodiment, these components are made of polyacetal resin (generally called POM). The front-end member71can be formed with a textile material such as a felt (non-woven fabric) so as to soften the writing feel. The connection member72is made of a high-hardness material such as metal or hard resin to increase the strength of the central rod7. In this embodiment, the connection member72is made of stainless steel (generally called SUS). As depicted inFIG.3B, the connection member72is a hollow (through-hole) cylindrical (pipe-like) member with an inner diameter of R1and an outer diameter of R2. The through-hole32aof the ferrite core32through which the central rod7is inserted as explained above with reference toFIG.2Ahas a diameter r1of 1 mm, for example. It follows that the outer diameter R2is 1 mm or less to permit insertion into the through-hole32a. In the case of this example, a portion of the connection member72toward the side of the front-end member71(left end side inFIG.3B) forms a front-end hole part FH to which the front-end member71is fitted; a portion of the connection member72toward the side of the back-end member73(right side inFIG.3B) constitutes a back-end hole part BH to which the back-end member73is fitted. Furthermore, the connection member72has ring-shaped recessed parts721aand721bwhich are pressed down along the outer circumference toward the side of the front-end member71(left end side inFIG.3B). Consequently, ring-shaped protruding parts721cand721dare formed to protrude internally from those positions on the inner wall surface (inner side surface) which correspond to the ring-shaped recessed parts721aand721bof the connection member72. These ring-shaped protruding parts721cand721dfunction as a front-end holding part. 
Through-holes722aand722bare formed in a direction intersecting with an axial center direction on the side surface (side wall) of the connection member72toward the side of the back-end member73(right side inFIG.3B). Further, although not depicted inFIGS.3A and3B, through-holes722cand722dare formed in a direction intersecting with the axial center direction on the side surface (side wall) of the connection member72at the positions opposite from the through-holes722aand722bacross a hollow space inside. That is, on the side surface of the connection member72, the through-hole722cis formed in a position opposite to the through-hole722aacross the internal hollow space, and the through-hole722dis disposed in a position opposite to the through-hole722bacross the internal hollow space. There are thus formed four through-holes722a,722b,722c, and722d. These four through-holes722a,722b,722c, and722d, together with protruding parts formed on the back-end member73to be discussed later, function as a back-end holding part. As depicted inFIG.3C, the front-end member71is a rod-like body that includes a pen tip part711with its appearance shaped like a dome, and includes a back-end extension part712extending from a back-end face711aof the pen tip part711in an opposite direction from the pen tip part711. The back-end face711aof the pen tip part711has a diameter slightly larger than the outer diameter R2of the connection member72. In this embodiment, the back-end extension part712of the front-end member71has a cylindrical shape with the diameter R1. As depicted in the cross-sectional view of the central rod7inFIG.3E, the front-end member71shaped as described above is press-inserted into the front-end hole part FH of the connection member72, from the back-end side of the back-end extension part712through the opening on the left end side of the connection member72. In this manner, the front-end member71is attached to the connection member72. In this case, the back-end face of the pen tip part711of the front-end member71is butted against the front-end face (left end face) of the connection member72. This prevents the front-end member71from further entering into the connection member72. Furthermore, the ring-shaped protruding parts721cand721dformed on the inner wall surface of the connection member72function as a front-end holding part. This brings about a state in which the side surface of the back-end extension part712of the front-end member71is held down. The front-end member71can thus be attached to the connection member72with a constant level of holding strength. The back-end extension part712can have a diameter close to the length R1as long as the back-end extension part712can be inserted through the opening on the left end side of the connection member72. However, it is not desirable to make the diameter of the back-end extension part712unnecessarily smaller than the length R1because this causes the back-end extension part712to be easily detached or dislodged. As depicted inFIG.3D, the back-end member73is a rod-like body that includes an attachment part731to be detachably attached to the electronic pen main body part3, an engagement part732, and a front-end extension part733extending from a front-end face732aof the engagement part732in an opposite direction from the attachment part731. The attachment part731and the front-end extension part733of the back-end member73are both cylindrically shaped and disposed on both sides of the engagement part732. 
The engagement part732, shaped as a circular plate with a predetermined thickness, separates the attachment part731and the front-end extension part733from each other. The attachment part731of the back-end member73has fitting protrusions731aand731bas depicted inFIG.3D. Meanwhile, as depicted inFIG.3D, the front-end extension part733has protruding parts733aand733bpositioned opposite from protruding parts733cand733drespectively, with a main body portion of the front-end extension part733interposed therebetween. The protruding parts733aand733bof the front-end extension part733are positioned corresponding to the through-holes722aand722bof the connection member72, respectively. The protruding parts733cand733dare positioned corresponding to the through-holes722cand722dof the connection member72, respectively. The main body portion of the front-end extension part733has the diameter R1as depicted inFIG.3D. As depicted in the cross-sectional view of the central rod7inFIG.3E, the back-end member73is inserted into the back-end hole part BH of the connection member72through the opening on the right end side of the connection member72from the front-end side of the front-end extension part733. In this manner, the back-end member73is attached to the connection member72. In this case, the members are aligned in such a manner that the protruding parts733aand733bof the front-end extension part733are fitted into the through-holes722aand722bof the connection member72, respectively, and that the protruding parts733cand733dof the front-end extension part733are fitted into the through-holes722cand722dof the connection member72, respectively. This is how the back-end member73is securely attached to the connection member72. With the front-end face of the engagement part732butted against the back-end face of the connection member72, the back-end member73is prevented from further entering into the connection member72. For the purpose of simplification, the protruding parts733athrough733dare explained specifically to correspond to the through-holes722athrough722d, respectively. However, given that the protruding parts733aand733bare located opposite from the protruding parts733cand733dand that the through-holes722aand722bare also located opposite from the through-holes722cand722d, it is possible to fit the protruding parts733aand733binto the through-holes722cand722dand the protruding parts733cand733dinto the through-holes722aand722bwith the same effect in attaching the members. Obviously, it is possible to detach the back-end member73from the connection member72. In this case, the back-end member73is axially rotated relative to the connection member72or forcibly extracted therefrom in a manner to detach or dislodge the protruding parts733a,733b,733c, and733dfrom the through-holes722a,722b,722c, and722dof the connection member72. The front-end extension part733can have a diameter close to the length R1as long as the front-end extension part733can be inserted through the opening on the right end side of the connection member72. However, it is not desirable to make the diameter of the front-end extension part733unnecessarily smaller than the length R1because this causes the front-end extension part733to be easily detached or dislodged. Meanwhile, as explained above with reference toFIGS.2A and2B, the attachment part731of the back-end member73is fitted into the fitting recessed part36aof the pressure transmission member36inside the electronic pen main body part3.
The fitting protrusions731aand731bcome into strong contact with the inner wall surface of the fitting recessed part36a. That is, as depicted in the cross-sectional view of the back-end member73and the pressure transmission member36inFIG.3F, the attachment part731excluding the fitting protrusions731aand731bhas a length R3, which is the same as the diameter of the fitting recessed part36aof the pressure transmission member36. Thus, the attachment part731of the back-end member73has its front end press-inserted and fitted into the fitting recessed part36aof the pressure transmission member36. In this case, the fitting protrusions731aand731bcome into strong contact with the inner wall surface of the fitting recessed part36a. The fitting protrusions731aand731band the inner wall surface of the fitting recessed part36aare engaged with one another with a predetermined level of holding strength, allowing the central rod7to be attached to the pressure transmission member36. With the central rod7thus attached to the pressure transmission member36, the writing pressure applied to the central rod7pushes up the pressure transmission member36. In turn, a pressing part36bof the pressure transmission member36presses the writing pressure detection part6. When the writing pressure to the central rod7is released, the pressure transmission member36and the central rod7are pushed back and return to their initial positions. In this embodiment, the connecting portion between the connection member72and the front-end extension part733of the back-end member73has the highest holding strength when the through-holes722aet al. and the protruding parts733aet al. are fitted to one another. This ensures hard-to-detach connection between the connection member72and the back-end member73. The connecting portion between the attachment part731of the back-end member73and the pressure transmission member36has the second-highest holding strength with a wide area of contact formed between the fitting protrusions731aand731bon one hand, and the inner wall surface of the fitting recessed part36aof the pressure transmission member36on the other hand. The connecting portion between the connection member72and the back-end extension part712of the front-end member71has the third-highest holding strength because of a relatively small portion of contact between the ring-shaped protruding parts721cand721don the inner wall surface of the connection member72on one hand, and the back-end extension part712on the other hand. There are three types of holding strength: the holding strength with which the connection member72holds the back-end extension part712of the front-end member71; the holding strength with which the connection member72holds the front-end extension part733of the back-end member73; and the holding strength with which the pressure transmission member36holds the attachment part731of the back-end member73. Specifically, the holding strength means a strength high enough for members to maintain their connected state by means of frictional force of the connecting portion between the members, or by use of engaging force of (force to engage with) through-holes or recesses and projections formed over the connecting portion between the members. 
For this embodiment, it is assumed that a value A stands for the holding strength of the connecting portion between the connection member72and the front-end extension part733of the back-end member73, that a value B denotes the holding strength of the connecting portion between the attachment part731of the back-end member73and the pressure transmission member36, and that a value C represents the holding strength of the connecting portion between the connection member72and the back-end extension part712of the front-end member71. In a case where the central rod7depicted inFIGS.3A to3Fis used, the relations between these types of holding strength are A>B>C. That is, the connecting portion between the connection member72and the front-end extension part733of the back-end member73has the highest holding strength; the connecting portion between the attachment part731of the back-end member73and the pressure transmission member36has the second-highest holding strength; and the connecting portion between the connection member72and the back-end extension part712of the front-end member71has the third-highest holding strength. Thus, when the central rod7is thus attached to the electronic pen main body part3, the central rod7can be pulled, by a user pinching the pen tip part711of the front-end member71with nails, for example, and pulling the central rod7with a force that is lower than the holding strength B but greater than the holding strength C. In such case, while the attachment part731of the back-end member73of the central rod7remains attached to the pressure transmission member36, the front-end member71can be extracted (separated) from the connection member72. That is, with the central rod7attached to the electronic pen main body part3, the front-end member71alone can be replaced. Because the back-end extension part712of the front-end member71is relatively long, the front-end member71can be extracted from the connection member72, for example, by repeatedly pulling the central rod7, each time for a short period of time with a force that is greater than the holding strength C. With the central rod7attached to the electronic pen main body part3, the central rod7can be pulled, by a user holding the pen tip part711of the front-end member71with nails, for example, and pulling the central rod7with a force that is lower than the holding strength A and higher than the holding strength B. In this case, because the back-end extension part712of the front-end member71is long, the entire central rod7can be extracted from the electronic pen main body part3before the front-end member71is extracted out of the connection member72. That is, the central rod7as a whole can be replaced, as needed. As depicted inFIG.3E, a void space74is provided between the back-end face of the back-end extension part712of the front-end member71and the front-end face of the front-end extension part733of the back-end member73. However, the presence of the void space74is not always required. Either of or both of the back-end extension part712of the front-end member71and the front-end extension part733of the back-end member73may be elongated to make the void space74as narrow as possible, which will enhance the strength of the central rod7. Second Configuration Example of the Electronic Pen Central Rod7 FIGS.4A to4Fare views each explaining a central rod7A as another configuration example of the central rod7to be attached to the electronic pen main body part3. 
Of the components of the central rod7A inFIGS.4A to4F, those configured substantially the same as their counterparts of the central rod7inFIGS.3A to3Fare designated by the same reference signs. As depicted in the external view ofFIG.4A, the central rod7A of this example is configured with a front-end member71, a connection member72A, and a back-end member73A. That is, the members making up the central rod7A are similar to the three types of members in the case of the central rod7explained above with reference toFIGS.3A to3F, except that the connection member72A and the back-end member73A are configured differently from the connection member72and the back-end member73of the central rod7inFIGS.3A to3F. With the central rod7A of the present example, the front-end member71, the connection member72A, and the back-end member73are made of the same materials as those of the corresponding members of the central rod7explained above with reference toFIGS.3A to3F. With the central rod7A of this example, as depicted inFIG.4B, the connection member72A is also a pipe-like (cylindrical) member with a hollow interior. The connection member72A has inner diameter R1and outer diameter R2, which are the same as those of the connection member72inFIG.3B. In this example as well, a portion of the connection member72A on the side toward the front-end member71(on the left end side inFIG.4B) forms a front-end hole part FH to which the front-end member71is fitted; and a portion of the connection member72A toward the side of the back-end member73A (on the right end side inFIG.4B) constitutes a back-end hole part BH to which the back-end member73A is fitted. Also, the connection member72A inFIG.4Bhas ring-shaped recessed parts721aand721bwhich are pressed down along the outer circumference toward the side of the front-end member71(on the left end side inFIG.4B). Consequently, ring-shaped protruding parts721cand721dare formed to protrude internally from those positions of the inner wall surface (inner side surface) which correspond to the ring-shaped recessed parts721aand721bof the connection member72A. These ring-shaped protruding parts721cand721dfunction as a front-end holding part. Further, the connection member72A of this example has ring-shaped recessed parts723aand723bwhich are pressed down along the outer circumference toward the side of the back-end member73A (right end side inFIG.4B). Consequently, ring-shaped protruding parts723cand723dare formed to protrude internally from those positions of the inner wall surface (inner side surface) which correspond to the ring-shaped recessed parts723aand723bof the connection member72A. The ring-shaped protruding parts723cand723dfunction as a back-end holding part. As depicted inFIG.4B, the ring-shaped recessed parts723aand723btoward the side of the back-end member73A are wider in the longitudinal direction than the ring-shaped recessed parts721aand721bon the side of the front-end member71. As depicted inFIG.4C, the front-end member71is configured similarly to the front-end member71explained above with reference toFIG.3Cand thus will not be discussed further in detail to avoid duplication. As depicted in the cross-sectional view of the central rod7A inFIG.4E, the front-end member71is press-inserted into the front-end hole part FH of the connection member72A through the opening on the left end side of the connection member72A, from the back-end side of the back-end extension part712. 
This allows the front-end member71to be attached to the connection member72A in a manner similar to that of the central rod7explained above with reference toFIGS.3A to3F. That is, the ring-shaped protruding parts721cand721dformed on the inner wall surface of the connection member72A function as a front-end holding part. This brings about a state in which the side surface of the back-end extension part712of the front-end member71is held down. The front-end member71can thus be attached to the connection member72A with a constant level of holding strength. In this case, the back-end face of the pen tip part711of the front-end member71and the front-end face (left end side) of the connection member72A are butted against each other. This prevents the front-end member71from further entering into the connection member72. As in the case of the central rod7inFIGS.3A to3F, the back-end extension part712of the front-end member71can have a diameter close to the length R1as long as the back-end extension part712can be inserted through the opening on the left end side of the connection member72A. However, it is not desirable to make the diameter of the back-end extension part712unnecessarily smaller than the length R1because this may cause the back-end extension part712to be easily detached or dislodged. As depicted inFIG.4D, the back-end member73A is a rod-like body that includes an attachment part731A to be detachably attached to the electronic pen main body part3, an engagement part732, and a front-end extension part733A extending from the front-end face732aof the engagement part732in the opposite direction from the attachment part731A. The attachment part731A and the front-end extension part733A of the back-end member73A are both cylindrically shaped and disposed on both sides of the engagement part732. The engagement part732, shaped as a circular plate with a predetermined thickness, separates the attachment part731A and the front-end extension part733A from each other. The attachment part731A of the back-end member73A has fitting protrusions731cand731bas depicted inFIG.4D. The fitting protrusion731cis shaped differently from the fitting protrusion731aof the attachment part731of the back-end member73depicted inFIGS.3A to3F. Meanwhile, the front-end extension part733A as depicted inFIG.4Dhas a simple cylindrical shape with no protrusions thereon, differently from the front-end extension part733of the back-end member73of the central rod7inFIG.3D. In this example also, the front-end extension part733A of the back-end member73A has the diameter R1as depicted inFIG.4D. As illustrated in the cross-sectional view of the central rod7A inFIG.4E, the back-end member73A is attached to the connection member72A when press-inserted into the back-end hole part BH of the connection member72A, from the front-end side of the front-end extension part733A, through the opening on the right end side of the connection member72A. In this case, the front-end face of the engagement part732of the back-end member73A and the back-end face (right end face) of the connection member72A are butted against each other. This prevents the back-end member73A from further entering into the connection member72A. Further, the ring-shaped protruding parts723cand723dformed on the inner wall surface of the connection member72A function as a back-end holding part that holds down the side surface of the front-end extension part733A of the back-end member73A. 
The back-end member73A can thus be attached to the connection member72A with a constant level of holding strength. In this case, as can be seen inFIG.4E, the ring-shaped protruding parts723cand723dof the connection member72A toward the side of the back-end member73A are wider in the longitudinal direction than the ring-shaped protruding parts721cand721dof the connection member72A toward the side of the front-end member71. Consequently, the connection member72A can hold the back-end member73A with higher holding strength than the front-end member71. In this example as well, the front-end face of the engagement part732is butted against the back-end face of the connection member72A. This prevents the back-end member73A from further entering into the connection member72A. Meanwhile, the attachment part731A of the back-end member73A is fitted to a fitting recessed part36cof a pressure transmission member36A inside the electronic pen main body part3. The attachment part731A has fitting protrusions731cand731bformed thereon. The fitting recessed part36cof the pressure transmission member36A in this example has an engagement part36cxthat engages with the fitting protrusion731cof the attachment part731A of the back-end member73A as depicted inFIG.4F, differently from the fitting recessed part36aof the pressure transmission member36explained above with reference toFIGS.3A to3F. That is, as depicted in the cross-sectional view of the back-end member73A and the pressure transmission member36A inFIG.4F, the attachment part731A excluding the fitting protrusions731aand731bhas the diameter R3, which is the same as that of the fitting recessed part36cof the pressure transmission member36A. Thus, the attachment part731A of the back-end member73A has its front end press-inserted and fitted into the fitting recessed part36cof the pressure transmission member36A. In this example, the fitting protrusion731cof the attachment part731A of the back-end member73A is engaged with the engagement part36cxof the fitting recessed part36cof the pressure transmission member36A. The engagement causes the fitting protrusion731bof the attachment part731A to come into strong contact with the inner wall surface of the fitting recessed part36c. The contact allows the pressure transmission member36A to hold the attachment part731A with a predetermined level of holding strength. This in turn allows the central rod7A to be attached to the pressure transmission member36A with a predetermined level of holding strength. In this example as well, in the connecting portion between the connection member72A and the front-end extension part733A of the back-end member73A, the ring-shaped protruding parts723cand723dof the connection member72A strongly hold down the circumference of the front-end extension part733A. This ensures hard-to-detach connection between the connection member72A and the back-end member73A. In the connecting portion between the attachment part731A of the back-end member73A and the pressure transmission member36A, the fitting protrusion731cand the engagement part36cxare fitted to each other, with a wide area of contact made between the fitting protrusion731band the inner wall surface of the fitting recessed part36cof the pressure transmission member36A to ensure the hold. 
Whereas the connecting portion between the connection member72A and the back-end extension part712of the front-end member71provides a relatively small area of contact between the ring-shaped protruding parts721cand721don the inner wall surface of the connection member72A and the back-end extension part712, the components are held in place with a predetermined level of holding strength. As described above, the holding strength signifies a strength (force) high enough for members to maintain their connected state by means of frictional force of the connecting portion between the members, or by use of engaging force of (force to engage with) through-holes or recesses and projections formed over the connecting portion between the members. In this example as well, it is assumed that the value A stands for the holding strength of the connecting portion between the connection member72A and the front-end extension part733A of the back-end member73A, that the value B denotes the holding strength of the connecting portion between the attachment part731A of the back-end member73A and the pressure transmission member36A, and that the value C represents the holding strength of the connecting portion between the connection member72A and the back-end extension part712of the front-end member71. In a case where the central rod7A depicted inFIGS.4A to4Fis adopted, the relations between these types of holding strength are also A>B>C. That is, the connecting portion between the connection member72A and the front-end extension part733A of the back-end member73A has the highest holding strength; the connecting portion between the attachment part731A of the back-end member73A and the pressure transmission member36A has the second-highest holding strength; and the connecting portion between the connection member72A and the back-end extension part712of the front-end member71has the third-highest holding strength. Thus, when the central rod7A is thus attached to the electronic pen main body part3, the central rod7A can be pulled by a user pinching the pen tip part711of the front-end member71with nails to pull the central rod7A with a force that is lower than the holding strength B but greater than the holding strength C. In this case, while the attachment part731A of the back-end member73A of the central rod7A remains attached to the pressure transmission member36A, the front-end member71can be extracted (separated) from the connection member72A. That is, with the central rod7A attached to the electronic pen main body part3, the front-end member71alone can be replaced. Because the back-end extension part712of the front-end member71is relatively long, the front-end member71can be extracted from the connection member72A, for example, by repeatedly pulling the central rod7A, each time for a short period of time with a force that is greater than the holding strength C. Likewise, suppose that the central rod7A of this example is attached to the electronic pen main body part3. In this case, the user may pinch the pen tip part711of the front-end member71with nails, for example, to pull the central rod7A with a force smaller than the holding strength A and higher than the holding strength B. This allows the central rod7A as a whole to be extracted from the electronic pen main body part3. That is, the entire central rod7A attached to the electronic pen main body part3can be replaced. 
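For both configurations, the replacement behavior follows from the ordering A > B > C of the three holding strengths. As a purely illustrative aid, the sketch below assigns arbitrary numeric values (they are assumptions, not figures from the disclosure) and reports which outcome the embodiments associate with a given pulling force applied to the pen tip.

    HOLD_A = 10.0   # connection member holding the back-end member (strongest)
    HOLD_B = 6.0    # pressure transmission member holding the attachment part
    HOLD_C = 3.0    # connection member holding the front-end member (weakest)

    def replacement_outcome(pull_force: float) -> str:
        if pull_force <= HOLD_C:
            return "nothing separates"
        if pull_force < HOLD_B:
            return "only the front-end member is extracted (pen tip replacement)"
        if pull_force < HOLD_A:
            return "the entire central rod is extracted from the pressure transmission member"
        return "the back-end member could separate from the connection member"

    for force in (2.0, 4.5, 8.0):
        print(force, "->", replacement_outcome(force))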
The strength of the central rod7A can further be increased when either or both of the back-end extension part712of the front-end member71and the front-end extension part733A of the back-end member73A is elongated to make the void space74inFIG.4Eas narrow as possible. Advantageous Effects of the Embodiment The central rod7or7A of this embodiment is made as a high-strength central rod, using the stainless connection member72or72A. The central rod is securely attached to the connection member72by press-inserting the front-end member71and the back-end member73into the connection member72, or is firmly attached to the connection member72A by press-inserting the front-end member71and the back-end member73A into the connection member72A. The attachment can thus withstand rough usages such as writing for a long period of time or writing with high writing pressure. Neither of the front-end member71or the back-end member73is adhered to the connection member72, and neither of the front-end member71or the back-end member73A is adhered to the connection member72A. This simplifies the manufacturing process, and provides the central rod7or7A for which the front-end member71serving as the pen tip can be easily replaced. In the case of the central rod7, the holding strength is the highest for the connecting portion between the connection member72and the back-end member73, the second-highest for the connecting portion between the back-end member73and the pressure transmission member36, and the third-highest for the connecting portion between the connection member72and the front-end member71. Likewise, in the case of the central rod7A, the holding strength is the highest for the connecting portion between the connection member72A and the back-end member73A, the second-highest for the connecting portion between the back-end member73A and the pressure transmission member36A, and the third-highest for the connecting portion between the connection member72A and the front-end member71. Thus, with the central rod7or7A attached to the electronic pen main body part3via the pressure transmission member36or36A, only the front-end member71held by the connection member72or72A can be pulled out for replacement. It is therefore possible to replace the front-end member71without extracting the central rod7or7A from the pressure transmission member36or36A for every replacement. The replacement in this fashion prevents the fitting recessed part36aor36cof the pressure transmission member36or36A from being subjected to heavy load (wear and tear). This makes it possible to achieve a central rod having a high affinity with the electronic pen main body part3, to which the central rod7or7A can be detachably attached via the pressure transmission member36or36A. Alternative Examples In the case of the central rod7explained above with reference toFIGS.3A to3F, the ring-shaped protruding parts721cand721dare formed on the inner wall surface of the connection member72toward the side of the front-end member71. The ring-shaped protruding parts721cand721dcan have an appropriate width large enough to ensure necessary holding strength. Whereas two ring-shaped protruding parts721cand721dare provided, there may be a single ring-shaped protruding part or three or more ring-shaped protruding parts, as long as these parts ensure necessary holding strength. 
That is, the width and the number of ring-shaped protruding parts to be formed on the inner wall surface of the connection member72on the front-end member side may be varied such that these parts ensure necessary holding strength. Likewise, in the case of the central rod7A explained above with reference toFIGS.4A to4F, the ring-shaped protruding parts formed on the inner wall surface of the connection member72A (i.e., ring-shaped protruding parts721c,721d,723c, and723dinFIG.4E) can be varied in width and number as long as these parts provide necessary holding strength. The protruding parts formed on the inner wall surface of the connection member72or72A are not limited to the ring-shaped type.FIG.5is a diagram explaining an alternative example of the central rod7or7A.FIG.5depicts a connection member72B, which is cut in half along its longitudinal direction with its frontal half removed. As depicted inFIG.5, protruding parts of suitable shapes and sizes may be formed on the inner wall surface of the connection member72B in place of the ring-shaped protruding parts721c,721d,723c, and723d. That is, the protruding parts may be provided in diverse shapes, sizes, and locations. In such cases, there is no need to change the configuration of the front-end member71and back-end member73A. Recessed parts can be formed on the side surface of the front-end member71and back-end member73A in a manner corresponding to the protruding parts provided on the inner wall surface of the connection member72or72A such that the protruding parts and the recessed parts are fitted to each other. This technique may be effective when the front-end member71and the back-end member73A each has a diameter large enough so as to maintain its structural strength. In other words, the members that constitute the central rod7or7A can be adjusted in diameter and length according to the size of the electronic pen main body part for which these members are used. In the case of the central rod7explained above with reference toFIGS.3A to3F, the through-holes722a,722b,722c, and722dare formed on the side surface of the connection member72toward the side of the back-end member73in a direction intersecting with the axial center direction. In correspondence to these through-holes, the protruding parts733a,733b,733c, and733dare formed on the front-end extension part733of the back-end member73. However, the present disclosure is not limited to these specific examples. Alternatively, through-holes of suitable sizes and numbers can be formed in appropriate locations on the connection member72toward the side of the back-end member73. In correspondence to these through-holes on the connection member72toward the side of the back-end member73, there may be provided fitting protrusions of the corresponding sizes and numbers on the front-end extension part733of the back-end member73. With the central rod7depicted inFIGS.3A to3F, the ring-shaped protruding parts721cand721dare formed on the connection member72toward the side of the front-end member71. With the central rod7A depicted inFIGS.4A to4F, the ring-shaped protruding parts723cand723dare provided on the connection member72A toward the side of the back-end member73A. However, the present disclosure is not limited to these specific examples. Alternatively, these members on which the protruding parts are formed may be reversed in position.FIGS.6A to6Care diagrams each explaining another alternative example of the central rod7or7A.
As depicted inFIG.6A, a connection member72C is assumed to be a cylindrical (pipe-like) member with no protruding parts on its inner wall surface. In contrast, ring-shaped protruding parts712aand712bare formed on a back-end extension part712A of the front-end member71as depicted inFIG.6B. Ring-shaped protruding parts733eand733fare formed on a front-end extension part733B of a back-end member73B as illustrated inFIG.6C. Given these protruding parts, when the front-end member71A and the back-end member73B are press-inserted into the connection member72C, the connection member72C holds the front-end member71A and the back-end member73B with a predetermined level of holding strength. In this case as well, there can be provided ring-shaped protruding parts of suitable sizes and numbers on the front-end member71A and back-end member73B in order to attain a predetermined level of holding strength. Furthermore, the ring-shaped protruding parts are not limited to be formed on the front-end member71A and the back-end member73B. In a manner similar to the case explained above with reference toFIG.5, there can also be provided protruding parts of desired shapes, sizes, and numbers on the side surface of the back-end extension part712A of the front-end member71A and on the side surface of the front-end extension part733B of the back-end member73B. In the case of the alternative example depicted inFIGS.6A to6C, there may be provided recessed parts on the inner wall surface of the connection member72C in a manner corresponding to the protruding parts formed on the side surface of the back-end extension part712A of the front-end member71A and on the side surface of the front-end extension part733B of the back-end member73B. Also, the relations between the back-end member and the pressure transmission part in the cases depicted inFIGS.3A to3F and4A to4Fcan be reversed.FIG.7is a diagram explaining another alternative example of the central rod7or7A. In the case of this example, an attachment part731B of a back-end member73C is configured not to have protrusions (protruding parts) or grooves (recessed parts) formed thereon. In contrast, a ring-shaped protruding part36dxis formed on the inner wall surface of a fitting recessed part36dof the pressure transmission member36. This enables the back-end member73C to be attached to a pressure transmission member36B with a predetermined level of holding strength. The width and height of the ring-shaped protruding part36dxcan be varied in a manner so as to achieve the desired level of holding strength. It is not always necessary to detachably attach the connection member to the back-end member, e.g., the connection member72to the back-end member73, the connection member72A to the back-end member73A, or the connection member72to the back-end member73B.FIG.8is a diagram explaining another alternative example of the connection member. A connection member72D depicted inFIG.8has the connection member and the back-end member that are integrally formed. As depicted inFIG.8, the connection member72D is made of a front-end pipe part72FT and a back-end attachment part72BK integrally formed. In this case, the back-end attachment part72BK performs the function of the back-end member73or73A in the above-described embodiment. The front-end member71explained above with reference toFIGS.3A to3F and4A to4Fcan be attached to the connection member72D. The front-end side of the connection member72D may be configured to have such protrusions as those depicted inFIG.5. 
It is also possible to configure the front-end pipe part72FT of the connection member72D to have neither protrusions nor recessed parts on the inner wall surface, as explained above with reference toFIG.6A. In this case, the front-end member71A depicted inFIG.6Bis to be attached. Also, the back-end attachment part72BK can be configured as depicted inFIGS.4A to4For inFIG.7. In the above-described embodiments, the central rod7or7A is configured to apply pressure to the writing pressure detection part6via the pressure transmission member36or36A. However, the present disclosure is not limited to these specific examples. In a case where the writing pressure detection part6is configured to have a mechanism to hold a central rod, the central rod7or7A may be attached directly to the writing pressure detection part6such that the attachment part731or731A may serve as a member (i.e., a pressing element) for pushing the writing pressure detection part6. Application to the Electronic Pen Operating Based on the Capacitance Method The above embodiments have been explained in connection with examples of the electronic pen operating based on the electromagnetic induction method. Alternatively, the electronic pen central rod of this disclosure can also be applied to an electronic pen operating based on the capacitance method. In this case, the front-end member, the connection member, and the back-end member are all configured to be electrically conductive, so that the central rod as a whole may become electrically conductive. The electrical conductivity is achieved by using metal or resin mixed with metal powder as the raw materials for forming various members. It is to be understood that while the invention has been described in conjunction with specific embodiments with reference to the accompanying drawings, it is evident that many alternatives, modifications, and variations will become apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended that the present invention embrace all such alternatives, modifications, and variations as fall within the spirit and scope of the appended claims. | 50,146 |
11861079 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT Please refer toFIGS.2,3and4.FIG.2is a schematic perspective view illustrating the appearance of a luminous touch pad module according to a first embodiment of the present invention.FIG.3is a schematic exploded view illustrating the luminous touch pad module as shown inFIG.2.FIG.4is a schematic cross-sectional view illustrating the stack structure of the luminous touch pad module as shown inFIG.3. As shown inFIGS.2,3and4, the luminous touch pad module2is a multifunctional touch pad module that is a combination of a touch panel and a backlight module. The luminous touch pad module2is installed in a fixing frame of a bottom housing of a notebook computer. The luminous touch pad module2is electrically connected with a processor of the notebook computer. At least a portion of the luminous touch pad module2is exposed outside so as to be touched by the user's finger. Consequently, the notebook computer can be operated by the user. In a general usage mode, the luminous touch pad module2is not illuminated. Under this circumstance, the operations of the luminous touch pad module2are similar to those of the general touch panel. When the user's finger is placed on the luminous touch pad module2and slid on the luminous touch pad module2, a mouse cursor is correspondingly moved. Moreover, in case that the luminous touch pad module2is pressed down by the user's finger, the notebook computer executes a specific function. In a luminous usage mode, specific patterns, characters or symbols are displayed on the luminous touch pad module2. For example, a virtual numeric keypad is displayed on the luminous touch pad module2. According to the touch signal from the luminous touch pad module2, the processor of the notebook computer performs the operation corresponding to the pressed number or symbol. The other structures of the luminous touch pad module2will be described in more details as follows. Please refer toFIGS.2,3and4again. In an embodiment, the luminous touch pad module2comprises a touch member20, a touch sensing circuit board21, a first light guide plate22, a second light guide plate23, a first light-guiding element24, at least one first light-emitting element25and at least one second light-emitting element26. The touch sensing circuit board21is located under the touch member20. The first light guide plate22is arranged between the touch member20and the touch sensing circuit board21. The second light guide plate23is arranged between the touch member20and the first light guide plate22. The first light guide plate22is arranged between the second light guide plate23and the touch sensing circuit board21. The at least one first light-emitting element25is installed on a top side of the touch sensing circuit board21. In addition, the first light-emitting element25is aligned with the first light guide plate22. The at least one second light-emitting element26is located under the touch sensing circuit board21. In addition, the at least one second light-emitting element26is aligned with the first light-guiding element24. The at least one first light-guiding element24is arranged around the touch sensing circuit board21, the first light guide plate22, the second light guide plate23and the at least one first light-emitting element25. Preferably but not exclusively, the touch member20is made of glass or any other appropriate material. The material of the touch member may be varied according to the practical requirements. 
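The two usage modes described above amount to routing the same touch coordinates to different handlers: relative cursor control in the general usage mode, and a lookup into the displayed virtual numeric keypad in the luminous usage mode. The sketch below is a minimal illustration under invented assumptions (the pad dimensions and a 3-by-3 key layout are hypothetical and not specified in the disclosure).

    PAD_W_MM, PAD_H_MM = 100.0, 60.0          # assumed touch-sensitive area
    KEYPAD = [["7", "8", "9"],
              ["4", "5", "6"],
              ["1", "2", "3"]]                # assumed virtual numeric keypad layout

    def handle_touch(x_mm: float, y_mm: float, luminous_mode: bool):
        if not luminous_mode:
            # general usage mode: report the normalized touch position for cursor control
            return ("cursor", x_mm / PAD_W_MM, y_mm / PAD_H_MM)
        # luminous usage mode: map the touch position to the displayed virtual key
        col = min(int(x_mm / (PAD_W_MM / 3)), 2)
        row = min(int(y_mm / (PAD_H_MM / 3)), 2)
        return ("key", KEYPAD[row][col])

    print(handle_touch(80.0, 10.0, luminous_mode=False))   # -> ('cursor', 0.8, 0.166...)
    print(handle_touch(80.0, 10.0, luminous_mode=True))    # -> ('key', '9')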
In this embodiment, a first adhesive layer S1is arranged between the touch member20and the second light guide plate23. The touch member20and the second light guide plate23are combined together through the first adhesive layer S1. A second adhesive layer S2is arranged between the second light guide plate23and the first light guide plate22. The second light guide plate23and the first light guide plate22are combined together through the second adhesive layer S2. A third adhesive layer S3is arranged between the first light guide plate22and the touch sensing circuit board21. The first light guide plate22and the touch sensing circuit board21are combined together through the third adhesive layer S3. Preferably but not exclusively, each of the first adhesive layer S1, the second adhesive layer S2and the third adhesive layer S3is a pressure sensitive adhesive (PSA). It is noted that the examples of the first adhesive layer S1, the second adhesive layer S2and the third adhesive layer S3are not restricted. Please refer toFIGS.2,3and4again. In an embodiment, the first light-guiding element24comprises a lateral wall241and a bottom wall242. The lateral wall241of the first light-guiding element24is arranged around the touch sensing circuit board21, the first light guide plate22, the second light guide plate23and the at least one first light-emitting element25. The bottom wall242is located under the touch sensing circuit board21. In addition, the bottom wall242comprises a first hollow portion2420. The lateral wall241is extended from the outer edge of the bottom wall242and extended in the direction toward the touch member20. A first accommodation space C1is defined by the lateral wall241and the bottom wall242of the first light-guiding element24collaboratively. In addition, the first accommodation space C1is in communication with the first hollow portion2420. The touch sensing circuit board21, the first light guide plate22, the second light guide plate23and the at least one first light-emitting element25are disposed within the first accommodation space C1. In addition, the at least one second light-emitting element26is disposed within the first hollow portion2420and aligned with the bottom wall242of the first light-guiding element24. In this embodiment, the cross section of the lateral wall241and the bottom wall242of the first light-guiding element24is L-shaped. It is noted that the shape of the cross section of the lateral wall241and the bottom wall242of the first light-guiding element24is not restricted. In this embodiment, the lateral wall241of the first light-guiding element24is arranged around a portion of the touch sensing circuit board21, a portion of the first light guide plate22, a portion of the second light guide plate23and a portion of the first light-emitting element25. When the touch sensing circuit board21, the first light guide plate22, the second light guide plate23and the at least one first light-emitting element25are stacked on each other and formed as a stack structure, the lateral wall241of the first light-guiding element24is arranged around two opposite lateral sides of the stack structure. It is noted that numerous modifications and alterations may be made while retaining the teachings of the invention. For example, in another embodiment, the lateral wall241of the first light-guiding element24is arranged around all lateral sides of the stack structure. Please refer toFIGS.2,3and4again.
In an embodiment, the luminous touch pad module2further comprises a second light-guiding element27and at least one third light-emitting element28. The second light-guiding element27is arranged around the touch member20and the first light-guiding element24. The at least one third light-emitting element28is located under the touch sensing circuit board21. In addition, the at least one third light-emitting element28is aligned with the second light-guiding element27. The light beam emitted by the at least one third light-emitting element28is transmitted to a position near the outer edge of the touch member20through the second light-guiding element27. Please refer toFIGS.2,3and4again. In this embodiment, the second light-guiding element27comprises a surrounding wall271and a base plate272. The surrounding wall271of the second light-guiding element27is arranged around the touch member20and the first light-guiding element24. The base plate272has a second hollow portion2720. The surrounding wall271of the second light-guiding element27is extended from the outer edge of the base plate272and extended in a direction toward a position near the touch member20. A second accommodation space C2is defined by the surrounding wall271and the base plate272collaboratively. In addition, the second accommodation space C2is in communication with the second hollow portion2720. The touch member20, the touch sensing circuit board21, the first light guide plate22, the second light guide plate23, the first light-guiding element24, the at least one first light-emitting element25and the at least one second light-emitting element26are disposed within the second accommodation space C2, which is defined by the surrounding wall271and the base plate272collaboratively. The at least one third light-emitting element28is disposed within the second hollow portion2720and aligned with the base plate272. In this embodiment, the cross section of the surrounding wall271and the base plate272of the second light-guiding element27is L-shaped. It is noted that the cross section of the surrounding wall271and the base plate272of the second light-guiding element27is not restricted. In this embodiment, the light beam emitted by the at least one first light-emitting element25is transmitted to the touch member20through the first light guide plate22. After the touch member20is illuminated by the light beam from the at least one first light-emitting element25, a virtual numeric keypad with specific patterns, words or symbols can be displayed on the luminous touch pad module2. In this embodiment, the light beam from the at least one second light-emitting element26is transmitted to the touch member20through the bottom wall242of the first light-guiding element24, the lateral wall241of the first light-guiding element24and the second light guide plate23sequentially. In this embodiment, the light beam from the at least one third light-emitting element28is transmitted to the position near the outer edge of the touch member20through the base plate272of the second light-guiding element27and the surrounding wall271of the second light-guiding element27sequentially. Consequently, a bright light ring can be created at the periphery region of the luminous touch pad module2. In this embodiment, the luminous touch pad module2further comprises a light-shading layer S4. The light-shading layer S4is located over the at least one first light-emitting element25.
Due to the arrangement of the light-shading layer S4, the light beam from the at least one first light-emitting element25is blocked by the light-shading layer S4. Since the light beam from the at least one first light-emitting element25is not directly irradiated on the second light guide plate23, any local region of the second light guide plate23will not be too bright. Please refer toFIGS.2,3and4again. In this embodiment, the touch sensing circuit board21comprises a first surface211and a second surface212. The first surface211and the second surface212are opposed to each other. The first surface211of the touch sensing circuit board21is arranged between the first light guide plate22and the second surface212of the touch sensing circuit board21. In this embodiment, the at least one first light-emitting element25is installed on the first surface211of the touch sensing circuit board21. In addition, the at least one first light-emitting element25is electrically connected with the touch sensing circuit board21. The at least one second light-emitting element26is installed on the second surface212of the touch sensing circuit board21. In addition, the at least one second light-emitting element26is electrically connected with the touch sensing circuit board21. The at least one third light-emitting element28is installed on the second surface212of the touch sensing circuit board21. In addition, the at least one third light-emitting element28is electrically connected with the touch sensing circuit board21. Moreover, the at least one third light-emitting element28is located beside a side of the at least one second light-emitting element26. In this embodiment, the at least one first light-emitting element25includes plural first light-emitting elements25. In addition, the plural first light-emitting elements25are installed on two opposite sides of the first surface211of the touch sensing circuit board21. Similarly, the at least one second light-emitting element26includes plural second light-emitting element26. In addition, the plural second light-emitting elements26are installed on two opposite sides of the second surface212of the touch sensing circuit board21. Similarly, the at least one third light-emitting element28includes plural third light-emitting elements28. In addition, the plural third light-emitting elements28are installed on the second surface212of the touch sensing circuit board21. Especially, the plural third light-emitting elements28are installed on the second surface212of the touch sensing circuit board21and aligned with the second hollow portion2720of the base plate272of the second light-guiding element27. It is noted that the numbers and the locations of the at least one first light-emitting element25, the at least one second light-emitting element26and the at least one third light-emitting element28are not restricted. That is, the numbers and the locations of the at least one first light-emitting element25, the at least one second light-emitting element26and the at least one third light-emitting element28may be varied according to the practical requirements. Preferably but not exclusively, the first light-emitting elements25, the second light-emitting elements26and the third light-emitting element28are polychromatic light emitting diodes or monochromatic light emitting diodes. 
FIG.5schematically illustrates a usage scenario of the luminous touch pad module according to the first embodiment of the present invention.FIG.6schematically illustrates another usage scenario of the luminous touch pad module according to the first embodiment of the present invention. Please refer toFIG.5. The user may operate the notebook computer to drive the at least one first light-emitting element25and the at least one third light-emitting element28to emit light beams. In this embodiment, the light beam emitted by the at least one first light-emitting element25is transmitted to the touch member20through the first light guide plate22. Consequently, a virtual numeric keypad with specific patterns, words or symbols can be displayed on the luminous touch pad module2. The light beam from the at least one third light-emitting element28is transmitted to the position near the outer edge of the touch member20through the base plate272of the second light-guiding element27and the surrounding wall271of the second light-guiding element27sequentially. Consequently, as shown inFIG.5, a bright light ring surrounding the periphery region of the virtual numeric keypad can be displayed on the luminous touch pad module2. Please refer toFIG.6. The user may operate the notebook computer to drive the at least one second light-emitting element26and the at least one third light-emitting element28to emit light beams. The light beam from the at least one second light-emitting element26is transmitted to the touch member20through the bottom wall242of the first light-guiding element24, the lateral wall241of the first light-guiding element24and the second light guide plate23sequentially. Similarly, the light beam from the at least one third light-emitting element28is transmitted to the position near the outer edge of the touch member20through the base plate272of the second light-guiding element27and the surrounding wall271of the second light-guiding element27sequentially. Consequently, as shown inFIG.6, the luminous touch pad module2can provide a multilayered luminous effect. In this embodiment, due to the cooperation of the first light guide plate22and the at least one first light-emitting element25, the cooperation of the first light-guiding element24, the second light guide plate23and the at least one second light-emitting element26and the cooperation of the second light-guiding element27and the at least one third light-emitting element28, a bright light ring surrounding the periphery region of the virtual numeric keypad can be displayed on the luminous touch pad module2(i.e., in the situation ofFIG.5), or a multilayered luminous effect can be provided (i.e., in the situation ofFIG.6). It is noted that numerous modifications and alterations may be made while retaining the teachings of the invention. For example, in some other embodiments, the second light-guiding element27and the at least one third light-emitting element28are omitted. Due to the cooperation of the first light guide plate22and the at least one first light-emitting element25and the cooperation of the first light-guiding element24, the second light guide plate23and the at least one second light-emitting element26, a bright light ring surrounding the periphery region of the virtual numeric keypad can be displayed on the luminous touch pad module2(i.e., in the situation ofFIG.5), or a multilayered luminous effect can be provided (i.e., in the situation ofFIG.6). 
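The two usage scenarios above amount to a simple mapping from a display mode to the groups of light-emitting elements that are driven. The following Python sketch is purely illustrative of that mapping; the scenario names, the LedGroup labels, and the drive_leds() helper are assumptions introduced for clarity and do not correspond to any interface disclosed in this document.

```python
# Hypothetical sketch: which LED groups of the luminous touch pad module are
# driven in the usage scenarios described above (FIG. 5 and FIG. 6). All names
# here are illustrative assumptions, not part of the disclosure.

from enum import Enum

class LedGroup(Enum):
    FIRST = "first"    # elements 25: light the virtual numeric keypad via light guide plate 22
    SECOND = "second"  # elements 26: light the touch member via light-guiding element 24 and plate 23
    THIRD = "third"    # elements 28: light the outer edge (bright light ring) via element 27

SCENARIOS = {
    "keypad_with_ring": {LedGroup.FIRST, LedGroup.THIRD},   # FIG. 5
    "multilayered":     {LedGroup.SECOND, LedGroup.THIRD},  # FIG. 6
    "general":          set(),                              # general usage mode: not illuminated
}

def drive_leds(scenario: str) -> None:
    """Enable only the LED groups that belong to the requested scenario."""
    enabled = SCENARIOS[scenario]
    for group in LedGroup:
        state = "on" if group in enabled else "off"
        print(f"{group.value} light-emitting elements: {state}")

if __name__ == "__main__":
    drive_leds("keypad_with_ring")
```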
FIG.7is a schematic exploded view illustrating the luminous touch pad module according to a second embodiment of the present invention.FIG.8is a schematic cross-sectional view illustrating the stack structure of the luminous touch pad module as shown inFIG.7. The structures of the luminous touch pad module2aof this embodiment are similar to the structures of the luminous touch pad module2as shown inFIGS.2,3and4. In comparison with the luminous touch pad module2, the luminous touch pad module2aof this embodiment further comprises a light source circuit board29. The at least one second light-emitting element26and the at least one third light-emitting element28are supported by the light source circuit board29. The light source circuit board29is arranged between the first light-guiding element24and the second light-guiding element27. That is, the light source circuit board29is arranged between the bottom wall242of the first light-guiding element24and the base plate272of the second light-guiding element27. The light source circuit board29comprises a top surface291and a bottom surface292. The top surface291and the bottom surface292are opposed to each other. The top surface291faces the at least one first light-guiding element24. The bottom surface292faces the second light-guiding element27. The at least one first light-emitting element25is installed on the first surface211of the touch sensing circuit board21. The at least one second light-emitting element26is installed on the top surface291of the light source circuit board29. The at least one third light-emitting element28is installed on the bottom surface292of the light source circuit board29. As mentioned above, the touch sensing circuit board21has to process touch sensing signals. However, if too many light-emitting elements are installed on the touch sensing circuit board21, the circuit of the touch sensing circuit board21is possibly abnormal, or the touch sensitivity of the touch sensing circuit board21is possibly deteriorated. In this embodiment, the luminous touch pad module2ais additionally equipped with the light source circuit board29. Some of the light-emitting elements are installed on the light source circuit board29. Since the number of light-emitting elements installed on the touch sensing circuit board21is reduced, the circuit of the touch sensing circuit board21is normal, and the touch sensitivity of the touch sensing circuit board21is enhanced. FIG.9is a schematic exploded view illustrating the luminous touch pad module according to a third embodiment of the present invention.FIG.10is a schematic cross-sectional view illustrating the stack structure of the luminous touch pad module as shown inFIG.9. The structures of the luminous touch pad module2bof this embodiment are similar to the structures of the luminous touch pad module2as shown inFIGS.2,3and4. In comparison with the luminous touch pad module2, the luminous touch pad module2bof this embodiment further comprises a flexible touch sensing circuit board30and a printed circuit board31to replace the touch sensing circuit board21of the luminous touch pad module2. In this embodiment, the printed circuit board31is located under the touch member20. The first light guide plate22is arranged between the touch member20and the printed circuit board31. The second light guide plate23is arranged between the touch member20and the first light guide plate22. The flexible touch sensing circuit board30is arranged between the touch member20and the second light guide plate23. 
The second light guide plate23is arranged between the flexible touch sensing circuit board30and the first light guide plate22. In addition, the flexible touch sensing circuit board30is electrically connected with the printed circuit board31. In this embodiment, the flexible touch sensing circuit board is made of a light-transmissible material. It is noted that the material of the flexible touch sensing circuit board30is not restricted. The at least one first light-emitting element25is installed on the printed circuit board31. In addition, the at least one first light-emitting element25is aligned with the first light guide plate22. The at least one second light-emitting element26is located under the printed circuit board31. In addition, the second light-emitting element26is aligned with the first light-guiding element24. The first light-guiding element24is arranged around the printed circuit board31, the first light guide plate22, the second light guide plate23and the at least one first light-emitting element25. The second light-guiding element27is arranged around the touch member20, the first light-guiding element24and the flexible touch sensing circuit board30. The at least one third light-emitting element28is located under the printed circuit board31. In addition, the at least one third light-emitting element28is aligned with the second light-guiding element27. A first accommodation space C1is defined by the lateral wall241and the bottom wall242of the first light-guiding element24collaboratively. The printed circuit board31, the first light guide plate22and the second light guide plate23are disposed within the first accommodation space C1. The at least one second light-emitting element26is disposed within the first hollow portion2420of the bottom wall242of the first light-guiding element24. In addition, the at least one second light-emitting element26is aligned with the bottom wall242of the first light-guiding element24. A second accommodation space C2is defined by the surrounding wall271and the base plate272of the second light-guiding element27collaboratively. The touch member20, the printed circuit board31, the first light guide plate22, the second light guide plate23, the first light-guiding element24, the flexible circuit board30, the at least one first light-emitting element25and the at least one second light-emitting element26are disposed within the second accommodation space C2. The at least one third light-emitting element28is disposed within the second opening2720of the base plate272. In addition, the at least one third light-emitting element28is aligned with the base plate272. Please refer toFIGS.9and10again. In this embodiment, the printed circuit board31comprises a top surface311and a bottom surface312. The top surface311and the bottom surface312are opposed to each other. The top surface311of the printed circuit board31is arranged between the first light guide plate22and the bottom surface312of the printed circuit board31. The at least one first light-emitting element25is installed on the top surface311of the printed circuit board31. In addition, the at least one first light-emitting element25is electrically connected with the printed circuit board31. The at least one second light-emitting element26is installed on the bottom surface312of the printed circuit board31. In addition, the at least one second light-emitting element26is electrically connected with the printed circuit board31. 
The at least one third light-emitting element28is installed on the bottom surface312of the printed circuit board31. In addition, the at least one third light-emitting element28is electrically connected with the printed circuit board31. The at least one third light-emitting element28is located beside a side of the at least one second light-emitting element26. As shown inFIG.10, the luminous touch pad module2bfurther comprises an electrical connector32. The electrical connector32is located under the printed circuit board31. That is, the electrical connector32is installed on the bottom surface312of the printed circuit board31. The flexible touch sensing circuit board30is electrically connected with the printed circuit board31through the electrical connector32. In this embodiment, the luminous touch pad module2bis equipped with the flexible touch sensing circuit board30and the printed circuit board31. Moreover, the flexible touch sensing circuit board30is extended to the region between the touch member20and the second light guide plate23. Due to this structural design, the sensitivity of performing the touch control operation on the touch member20is effectively enhanced. Please refer toFIGS.11and12.FIG.11is a schematic exploded view illustrating the luminous touch pad module according to a fourth embodiment of the present invention.FIG.12is a schematic cross-sectional view illustrating the stack structure of the luminous touch pad module as shown inFIG.11. The structures of the luminous touch pad module2cof this embodiment are similar to those of the luminous touch pad module2bas shown inFIGS.9and10. In comparison with the luminous touch pad module2b, the luminous touch pad module2cfurther comprises a light source circuit board33. The at least one second light-emitting element26and the at least one third light-emitting element28are supported by the light source circuit board33. The light source circuit board33is arranged between the first light-guiding element24and the second light-guiding element27. The at least one first light-emitting element25is installed on the top surface311of the printed circuit board31. The at least one second light-emitting element26is installed on the top surface331of the light source circuit board33. The at least one third light-emitting element28is installed on the bottom surface332of the light source circuit board33. In this embodiment, the printed circuit board31has to process touch sensing signals from the flexible touch sensing circuit board30. However, if too many light-emitting elements are installed on the printed circuit board31, the circuit of the flexible touch sensing circuit board30is possibly abnormal, or the touch sensitivity of the flexible touch sensing circuit board30is possibly deteriorated. In this embodiment, the luminous touch pad module2cis additionally equipped with the light source circuit board33. Some of the light-emitting elements are installed on the light source circuit board33. Since the number of light-emitting elements installed on the printed circuit board31is reduced, the circuit of the flexible touch sensing circuit board30is normal, and the touch sensitivity of the flexible touch sensing circuit board30is enhanced. FIG.13is a schematic exploded view illustrating a luminous touch pad module according to a fifth embodiment of the present invention. The structures of the luminous touch pad module2dof this embodiment are similar to the structures of the luminous touch pad module2as shown inFIGS.2,3and4. 
In comparison with the luminous touch pad module as shown inFIGS.2,3and4, the luminous touch pad module2dof this embodiment further comprises a light blocking structure34. The light blocking structure34is installed on the first light guide plate22. The first light guide plate22is divided into a first light-guiding part P1and a second light-guiding part P2by the light blocking structure34. The first light-guiding part P1and the second light-guiding part P2represent different luminous regions. The at least one first light-emitting element25on the first side of the first surface211of the touch sensing circuit board21is aligned with the first light-guiding part P1of the first light guide plate22. The at least one first light-emitting element25on the second side of the first surface211of the touch sensing circuit board21is aligned with the second light-guiding part P2of the first light guide plate22. The user may operate the notebook computer to selectively drive the first light-emitting element25in the first light-guiding part P1of the first light guide plate22or the first light-emitting element25in the second light-guiding part P2of the first light guide plate22to emit the light beam. Consequently, the partition illumination efficacy of the luminous touch pad module2dcan be achieved. In the above embodiment, the first light guide plate22is divided into the first light-guiding part P1and the second light-guiding part P2by the light blocking structure34. It is noted that numerous modifications and alterations may be made while retaining the teachings of the invention. For example, in some other embodiments, the first light guide plate22is divided into at least three luminous regions by the light blocking structure34. In addition, the shapes of the luminous regions that are formed in the first light guide plate22and defined by the light blocking structure34are not restricted. From the above descriptions, the present invention provides the luminous touch pad module. Due to the cooperation of the first light guide plate and the at least one first light-emitting element, the cooperation of the first light-guiding element, the second light guide plate and the second light-emitting element and the cooperation of the second light-guiding element and the at least one third light-emitting element, the problem of resulting in a non-functional area of the luminous touch pad module at the region corresponding to the installation position of the light-emitting element will be overcome. Consequently, a bright light ring surrounding the periphery region of the virtual numeric keypad can be displayed on the luminous touch pad module, and a multilayered luminous effect can be provided. While the invention has been described in terms of what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention needs not be limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and similar arrangements included within the spirit and scope of the appended claims which are to be accorded with the broadest interpretation so as to encompass all such modifications and similar structures. | 30,964 |
11861080 | DETAILED DESCRIPTION To explain in detail the technical content, the structural features, the achieved purposes and the achieved effects of the present application, the descriptions are given in conjunction with embodiments and drawings. Referring toFIG.1, in embodiment one, the present application provides an unlocking method100by using a knob. The method is applied to an electronic device provided with an operation knob. The method100includes the following steps: detecting (11) an action of the operation knob in a screen locked state in real time; and starting (121) an unlocking program in response to rotation of the operation knob: calculating (122) a rotational circumferential distance of the operation knob, determining (123) whether the rotational circumferential distance of the operation knob reaches a preset circumferential distance, controlling (124) the electronic device to be unlocked in response to the rotational circumferential distance of the operation knob reaching the preset circumferential distance, and determining that unlocking fails and returning to step (11) in response to the rotational circumferential distance of the operation knob not reaching the preset circumferential distance. In an embodiment, a rotational direction of the operation knob and a rotational angular velocity of the operation knob are acquired by using a photosensor, and the rotational circumferential distance of the operation knob is calculated according to the rotational direction, a rotational duration and the rotational angular velocity. In an embodiment, the photosensor includes a laser light source unit and an image detection unit. The laser light source unit is configured to emit weak laser light on a surface of the operation knob, and the image detection unit is configured to determine the rotational direction of the operation knob and the rotational angular velocity of the operation knob according to light reflected by the surface of the operation knob. In an embodiment, referring toFIGS.7ato7c, starting an unlocking program also includes the following steps: playing (125) an unlock picture21on a screen of the electronic device, where the unlock picture21includes an arced progress bar211corresponding to the preset circumferential distance; synchronously filling the progress bar according to the rotational direction of the operation knob and the rotational angular velocity of the operation knob; and canceling the unlock picture in response to an unlocking failure. In an embodiment, an unlock pattern212is also displayed at an end of the arced progress bar. When the rotational circumferential distance reaches the preset circumferential distance, and the progress bar211is full (as shown inFIG.7b), an unlock icon is displayed in the unlock pattern212, and the electronic device is unlocked. When the progress bar211is not full (as shown inFIG.7a), a lock icon is displayed in the unlock pattern212, and the electronic device remains in a locked state. The filling direction of the progress bar211is determined by the current rotational direction of the operation knob. For example, inFIGS.7aand7b, the operation knob is rotated clockwise for unlocking, and inFIG.7c, the operation knob is rotated anticlockwise for unlocking. In an embodiment, after the electronic device is unlocked (124), a rotation signal of the operation knob detected within a preset time after the electronic device is unlocked is omitted, or a preset number of initial rotation signals of the operation knob are omitted.
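The omission of the first few post-unlock rotation signals can be pictured as a small software filter in front of the knob handler. The following Python sketch is purely illustrative; the class name, the IGNORE_COUNT value and the handle() helper are assumptions, and the disclosure also allows a time-based variant that omits signals for a preset time after unlocking.

```python
# Illustrative sketch only: drop a preset number of knob rotation signals that
# arrive right after the electronic device is unlocked, so that a misoperation
# does not immediately trigger an interface action. IGNORE_COUNT is an assumed
# value; a time-based variant would compare timestamps instead of a counter.

IGNORE_COUNT = 2   # e.g., omit the first and second rotation signals, act on the third

class PostUnlockFilter:
    def __init__(self, ignored: int = IGNORE_COUNT):
        self.remaining = ignored

    def handle(self, rotation_signal) -> bool:
        """Return True if the signal should be acted on, False if it is omitted."""
        if self.remaining > 0:
            self.remaining -= 1
            return False
        return True
```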
A rotation instruction of the operation knob received after the electronic device is unlocked is abandoned under the control of software, so that operations such as turning on an interface, which could be caused by a misoperation while the electronic device is in the unlocked state, are prevented. In this embodiment, the system abandons (omits) the first and second turn (rotation) instructions of the operation knob and responds to the third turn (rotation) instruction. Referring toFIG.2, based on the embodiment one, in embodiment two, step (11) specifically includes: acquiring the rotational direction of the operation knob and the rotational angular velocity of the operation knob in the screen locked state in real time. In step (121′), when the rotational direction of the operation knob and/or the rotational angular velocity of the operation knob reaches a preset range, the unlocking program is started (M). In step (121′), a valid rotational direction is configured, or a preset valid rotational direction is acquired, and the current rotational direction is recorded. Starting (M) the unlocking program includes the following steps: calculating (122) a rotational circumferential distance of the operation knob continuously rotational along the valid rotational direction and regarding the rotational circumferential distance of the operation knob continuously rotational along the valid rotational direction as the valid rotational circumferential distance, determining (123) whether the valid rotational circumferential distance reaches the preset circumferential distance, controlling (124) the electronic device to be unlocked in response to the valid rotational circumferential distance reaching the preset circumferential distance, and returning to step (11) in response to the valid rotational circumferential distance not reaching the preset circumferential distance. That is, whether the rotational circumferential distance (the rotational circumferential distance of one rotation of the operation knob) of the operation knob rotational along the current rotational direction reaches the preset circumferential distance is calculated, the electronic device is controlled to unlock in response to the rotational circumferential distance of the operation knob rotational along the current rotational direction reaching the preset circumferential distance, or the method returns to step (11) in response to the rotational circumferential distance of the operation knob rotational along the current rotational direction not reaching the preset circumferential distance.
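Steps (122) to (124) amount to accumulating an arc length and comparing it with a threshold: for a knob of radius r turned at angular velocity ω for a duration t, the circumferential distance of that segment is r·ω·t, so an assumed 10 mm radius knob turned at 2π rad/s for 0.5 s covers roughly 31 mm. The sketch below shows this accumulation along the current valid direction; the constants, the signed (omega, dt) sample format and the function name are illustrative assumptions rather than values taken from this disclosure.

```python
# Minimal sketch of steps 122-124: accumulate the circumferential distance of
# rotation along the current valid direction and unlock once it reaches the
# preset circumferential distance. Samples are assumed to be (omega, dt) pairs,
# with the sign of omega encoding the rotational direction.

KNOB_RADIUS_M = 0.010        # assumed knob radius: 10 mm
PRESET_DISTANCE_M = 0.050    # assumed preset circumferential distance: 50 mm

def try_unlock(samples) -> bool:
    valid_sign = None            # direction recorded when the unlocking program starts
    distance = 0.0
    for omega, dt in samples:
        sign = (omega > 0) - (omega < 0)
        if sign == 0:
            continue                              # a halt adds no distance; embodiments differ on whether it ends the attempt
        if valid_sign is None:
            valid_sign = sign                     # record the current rotational direction
        if sign != valid_sign:
            return False                          # rotation no longer continuous: back to step 11
        distance += KNOB_RADIUS_M * abs(omega) * dt      # step 122
        progress = min(distance / PRESET_DISTANCE_M, 1.0)  # could drive the arced progress bar 211
        if distance >= PRESET_DISTANCE_M:
            return True                           # step 124: unlock
    return False                                  # preset circumferential distance never reached

# Example: three half-second turns at 2*pi rad/s (~31 mm each) accumulate ~94 mm.
if __name__ == "__main__":
    import math
    print(try_unlock([(2 * math.pi, 0.5)] * 3))   # True
```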
In this embodiment, to prevent the unlocking program from being started by an accidental touch, a large enough preset velocity needs to be selected. If the rotational direction of the operation knob suddenly changes, and the rotational angular velocity of the operation knob exceeds the preset velocity, the current rotational direction is renewed as the valid rotational direction, and the unlocking program is restarted according to the valid rotational direction. Starting the unlocking program includes the following steps: calculating (122) a rotational circumferential distance of the operation knob continuously rotational along the current valid rotational direction and regarding the rotational circumferential distance of the operation knob continuously rotational along the current valid rotational direction as the valid rotational circumferential distance, determining (123) whether the valid rotational circumferential distance reaches the preset circumferential distance, controlling (124) the electronic device to unlock in response to the valid rotational circumferential distance reaching the preset circumferential distance, or returning to step (11) in response to the valid rotational circumferential distance not reaching the preset circumferential distance. That is, whether the rotational circumferential distance (the rotational circumferential distance of one rotation of the operation knob) of the operation knob rotational along the current rotational direction reaches the preset circumferential distance is calculated, the electronic device is controlled to unlock in response to the rotational circumferential distance of the operation knob rotational along the current rotational direction reaching the preset circumferential distance, or the method returns to step (11) in response to the rotational circumferential distance of the operation knob rotational along the current rotational direction not reaching the preset circumferential distance. In an embodiment, the valid rotational circumferential distance is determined by one of: calculating a rotational circumferential distance of continuous rotation at a velocity greater than the preset velocity along the valid rotational direction and regarding the rotational circumferential distance of the continuous rotation at the velocity greater than the preset velocity along the valid rotational direction as the valid rotational circumferential distance. In an embodiment, calculating the valid rotational circumferential distance includes the following steps: in response to the rotational direction being valid, and the rotational angular velocity exceeding the preset velocity, calculating the rotational circumferential distance of the operation knob according to a rotational duration and the rotational angular velocity; in response to the rotational angular velocity not exceeding the preset velocity, stopping the calculation of the valid rotational circumferential distance, and returning to step (11); in response to changing the rotational direction, and the rotational angular velocity not exceeding the preset velocity, stopping the calculation of the valid rotational circumferential distance, and returning to step (11); or in response to changing the rotational direction, and the rotational angular velocity exceeding the preset velocity, regarding the changed rotational direction as the valid rotational direction, and restarting the unlocking program to recalculate the valid rotational circumferential distance. 
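Under one reading of embodiment three, the program starts only when the knob is turned faster than the preset velocity, the direction at that moment becomes the valid direction, a fast reversal restarts the program with the new direction, and slowing down ends it. The sketch below encodes that reading; the thresholds, the signed (omega, dt) sample format and the function name are assumptions, and the knob radius is folded into the distance threshold for brevity.

```python
# Hedged sketch of embodiment three: velocity-gated start, accumulation only while
# rotating fast along the valid direction, restart on a fast reversal, abort when
# the knob slows below the preset velocity. Thresholds are illustrative values.

PRESET_VELOCITY = 5.0    # rad/s, assumed preset angular velocity
PRESET_DISTANCE = 4.0    # assumed accumulated-angle threshold (knob radius folded in)

def try_unlock_velocity_gated(samples) -> bool:
    valid_sign, distance = None, 0.0
    for omega, dt in samples:
        sign = (omega > 0) - (omega < 0)
        fast = abs(omega) > PRESET_VELOCITY
        if valid_sign is None:
            if fast:                                   # start the unlocking program
                valid_sign, distance = sign, 0.0
            continue
        if fast and sign == valid_sign:
            distance += abs(omega) * dt                # valid rotational circumferential distance
            if distance >= PRESET_DISTANCE:
                return True                            # unlock
        elif fast and sign != valid_sign:
            valid_sign, distance = sign, 0.0           # fast reversal: new valid direction, restart
        else:
            valid_sign, distance = None, 0.0           # too slow: stop calculating, back to step 11
    return False
```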
Based on the embodiment three, in embodiment four, the valid rotational circumferential distance is determined in the following manner: regarding a rotational circumferential distance of rotation along the valid rotational direction within a preset time after the unlocking program is started as the valid rotational circumferential distance, where the rotational angular velocity exceeds the preset velocity. Specifically, calculating the valid rotational circumferential distance includes the following steps: performing the following operations within the preset time after the unlocking program is started, and timing begins: in response to the rotational direction being valid, and the rotational angular velocity exceeding the preset velocity, calculating the rotational circumferential distance of the operation knob according to the rotational duration and the rotational angular velocity; regarding the rotational circumferential distance accumulated within the preset time as the valid rotational circumferential distance; determining whether the valid rotational circumferential distance exceeds the preset circumferential distance in real time; unlocking the electronic device immediately in response to the valid rotational circumferential distance exceeding the preset circumferential distance; and returning to step (11) in response to the valid rotational circumferential distance not exceeding the preset circumferential distance when the preset time is over. When the rotational angular velocity does not exceed the preset velocity, the calculation of the current rotational circumferential distance is stopped temporarily until the rotational angular velocity is recovered. When the rotational direction changes, and the rotational angular velocity exceeds the preset velocity, the changed rotational direction is regarded as the valid rotational direction, and the unlocking program is restarted to recalculate the valid rotational circumferential distance. Based on the embodiment three, in embodiment five, the valid rotational circumferential distance is determined in the following manner: regarding a rotational circumferential distance of rotation along the valid rotational direction after the unlocking program is started as the valid rotational circumferential distance, where the rotational angular velocity exceeds the preset velocity, and a halt time of the rotation does not exceed the preset halt time. In an embodiment, calculating the valid rotational circumferential distance includes the following steps: in response to the rotational direction being valid, and the rotational angular velocity exceeding the preset velocity, calculating the rotational circumferential distance of the operation knob according to the rotational duration and the rotational angular velocity; in response to the rotational angular velocity not exceeding the preset velocity (for example, the rotational angular velocity is 0, and the operation knob halts) and beginning timing, ending the unlocking program and returning to step (11) in response to the operation knob not recovering the condition that the rotational angular velocity exceeds the preset velocity and that the rotational direction is the valid rotational direction within the preset halt time, and continuously accumulating and calculating the rotational circumferential distance of the operation knob until the rotational circumferential distance reaches the preset circumferential distance in response to recovering the condition, and completing the unlocking work.
If the rotational direction suddenly changes, and the rotational angular velocity exceeds the preset rotational angular velocity, the changed rotational direction is regarded as the valid rotational direction, and the unlocking program is restarted to recalculate the valid rotational circumferential distance. In this embodiment, the halt time refers to the time during which the rotational angular velocity of the operation knob is lower than the preset velocity. This embodiment supports unlocking by multiple consecutive rotations, and if a reverse low-velocity rotation is caused by a misoperation, the unlocking program is still running, thereby having lower requirements on operation concentration and action continuity of an operator, and allowing a user to give several consecutive turns with a few halts to unlock the electronic device, thus helping patients having visual impairment or finger nerve ending impairment to use the electronic device more easily. Certainly, the halt time may also include the time during which the rotational angular velocity of the operation knob is 0 and lower than the preset velocity, and the rotational direction of the operation knob is the current valid rotational direction. In this case, if the rotational direction changes, whatever the rotational angular velocity is, a halt failure is indicated, and the method returns to step (11). Based on the embodiment three, in embodiment six, the valid rotational circumferential distance is determined in the following manner: regarding a rotational circumferential distance of rotation along the valid rotational direction after the unlocking program is started as the valid rotational circumferential distance, where the halt time of the rotation does not exceed the preset halt time. In an embodiment, when the rotational direction is valid, and the rotational angular velocity is greater than 0, the rotational circumferential distance of the operation knob is calculated according to the rotational duration and the rotational angular velocity. When the rotational direction changes, or the rotational angular velocity is 0 (the operation knob halts), and timing begins, if the operation knob does not recover the condition that the rotational angular velocity does not exceed the preset velocity and that the rotational direction is the valid rotational direction within the preset halt time, the unlocking program is ended, and the method returns to step (11), and if the condition is recovered, the rotational circumferential distance of the operation knob is continuously accumulated and calculated until the rotational circumferential distance reaches the preset circumferential distance, and the unlocking work is completed. If the rotational direction suddenly changes, and the rotational angular velocity exceeds the preset rotational angular velocity, the changed rotational direction is regarded as the valid rotational direction, and the unlocking program is restarted to recalculate the valid rotational circumferential distance. In this embodiment, the halt time refers to the time during which the rotational angular velocity of the operation knob is 0. 
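Embodiments five and six both let the accumulation survive brief pauses, differing mainly in how a pause is defined (slower than the preset velocity versus fully stopped). The sketch below follows the embodiment-five reading; the constants, the signed (omega, dt) sample format and the function name are assumptions, and the knob radius is again folded into the distance threshold.

```python
# Hedged sketch of the halt-tolerant accumulation (embodiment-five reading):
# accumulation pauses while the knob is slower than the preset velocity, and the
# attempt is abandoned only if fast rotation in the valid direction does not
# resume within the preset halt time. A fast reversal restarts with a new valid
# direction. All constants are illustrative assumptions.

PRESET_VELOCITY = 5.0      # rad/s
PRESET_DISTANCE = 4.0      # accumulated-angle threshold (knob radius folded in)
PRESET_HALT_TIME = 1.0     # s, longest tolerated pause

def try_unlock_with_halts(samples) -> bool:
    valid_sign, distance, halt = None, 0.0, 0.0
    for omega, dt in samples:
        sign = (omega > 0) - (omega < 0)
        fast = abs(omega) > PRESET_VELOCITY
        if valid_sign is None:
            if fast:
                valid_sign, distance, halt = sign, 0.0, 0.0
            continue
        if fast and sign == valid_sign:
            halt = 0.0
            distance += abs(omega) * dt
            if distance >= PRESET_DISTANCE:
                return True
        elif fast and sign != valid_sign:
            valid_sign, distance, halt = sign, 0.0, 0.0    # fast reversal: restart with new direction
        else:
            halt += dt                                     # pause: halt timer runs
            if halt > PRESET_HALT_TIME:
                valid_sign = None                          # pause too long: back to step 11
    return False
```

This tolerance is what allows a user to unlock with several consecutive turns separated by short pauses, as described above.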
This embodiment supports unlocking by multiple consecutive rotations, thereby having lower requirements on operation concentration and action continuity of an operator, and allowing a user to give several consecutive turns with a few halts to unlock the electronic device, thus helping patients having visual impairment or finger nerve ending impairment to use the electronic device more easily. Based on the embodiment three, in embodiment seven, the valid rotational circumferential distance is determined in the following manner: regarding a rotational circumferential distance of rotation along the valid rotational direction within a preset time after the unlocking program is started as the valid rotational circumferential distance. In an embodiment, the rotational direction and the rotational angular velocity are detected in real time, and timing begins, and the rotational circumferential distance of rotation along the valid rotational direction is calculated according to the rotational direction, the rotational duration and the rotational angular velocity; if the rotational circumferential distance reaches the preset circumferential distance within the preset time, the unlocking work is performed, and if the rotational circumferential distance does not reach the preset circumferential distance when the preset time is over, the method returns to step (11). In the process, if the rotational direction suddenly changes, and the rotational angular velocity exceeds the preset rotational angular velocity, the changed rotational direction is regarded as the valid rotational direction, and the unlocking program is restarted to recalculate the valid rotational circumferential distance. During the rotation of the operation knob, one or any combination of the following situations is called discontinuous rotation of the operation knob: the rotational direction changes, the rotation suddenly halts, the rotational angular velocity is lower than the preset velocity during rotation, the rotation halt time exceeds the preset halt time or the time during which the rotational angular velocity is lower than the preset velocity exceeds the preset halt time, the rotational direction changes, and the rotational angular velocity exceeds the preset velocity. When one or any combination of these situations occurs, the current unlocking program is ended; and then depending on these situations, the method returns to step (11); or depending on these situations, the method returns to step (11), and the unlocking program is restarted. Referring toFIG.4, based on the embodiment two, in embodiment eight, a condition for starting the unlocking program includes: starting (121b) the unlocking program in response to the rotational direction of the operation knob being a preset valid rotational direction. In this embodiment, the valid rotational direction is the preset valid rotational direction. In an embodiment, clockwise or anticlockwise may be configured as the valid rotational direction, or clockwise and anticlockwise may be configured as the valid rotational directions simultaneously, the rotational angular velocity of the operation knob and the rotational direction of the operation knob are detected in real time, and when the operation knob rotates in the preset valid rotational directions, the unlocking program is started. 
For example, when clockwise or anticlockwise is configured as the valid rotational direction, whether the rotational angular velocity is greater than 0 is detected, and whether the rotational direction is clockwise or anticlockwise is detected, and if the rotational angular velocity is greater than 0, and the rotational direction is clockwise or anticlockwise, the unlocking program is started. When clockwise and anticlockwise are configured as the valid rotational directions simultaneously, whether the rotational angular velocity is greater than 0 is detected, and if the rotational angular velocity is greater than 0, the unlocking program is started, and the current rotational direction is recorded. If the rotational direction of the operation knob suddenly changes, and the changed rotational direction is the valid rotational direction, the valid rotational direction is rerecorded, the current unlocking program is ended, and the unlocking program is restarted. If the changed rotational direction is not the preset valid rotational direction, the current unlocking program is ended, and the method returns to step (11). Starting (M) the unlocking program includes the following steps: calculating (122) a rotational circumferential distance of the operation knob continuously rotational along the current valid rotational direction and regarding the rotational circumferential distance of the operation knob continuously rotational along the current valid rotational direction as the valid rotational circumferential distance, determining (123) whether the valid rotational circumferential distance reaches the preset circumferential distance, controlling (124) the electronic device to unlock in response to the valid rotational circumferential distance reaching the preset circumferential distance, and returning to step (11) in response to the valid rotational circumferential distance not reaching the preset circumferential distance. That is, whether the rotational circumferential distance of the operation knob rotational along the current rotational direction (the rotational circumferential distance of one rotation of the operation knob) reaches the preset circumferential distance is calculated, and the electronic device is controlled to unlock in response to the rotational circumferential distance of the operation knob rotational along the current rotational direction reaching the preset circumferential distance, and the method returns to step (11) in response to the rotational circumferential distance of the operation knob rotational along the current rotational direction not reaching the preset circumferential distance. In an embodiment, the valid rotational circumferential distance is determined in the following manner: calculating a rotational circumferential distance of continuous rotation along the current valid rotational direction and regarding the rotational circumferential distance of the continuous rotation along the current valid rotational direction as the valid rotational circumferential distance. 
In an embodiment, calculating the valid rotational circumferential distance includes the following steps: in response to the rotational direction being valid, and the rotational angular velocity being greater than 0, calculating the rotational circumferential distance of the operation knob according to the rotational duration and the rotational angular velocity; in response to the rotational angular velocity being 0, stopping the calculation of the valid rotational circumferential distance, and returning to step (11); in response to changing the rotational direction, stopping the calculation of the valid rotational circumferential distance, and returning to step (11), and starting the unlocking program according to the changed rotational direction, or not starting unlocking program according to the changed rotational direction. That is, when the rotational angular velocity is greater than 0, the rotational circumferential distance is continuously accumulated and calculated, and when the rotational angular velocity is less than or equal to 0 (the operation knob halts or changes the rotational direction), the current unlocking program is ended, and the method returns to step (11). Based on the embodiment eight, in embodiment nine, the valid rotational circumferential distance is determined in the following manner: regarding a rotational circumferential distance of continuous rotation along the current valid rotational direction within the preset time after the unlocking program is started as the valid rotational circumferential distance. In an embodiment, calculating the valid rotational circumferential distance includes the following steps: performing the following operations within the preset time after the unlocking program is started, and timing begins: calculating the rotational circumferential distance of the operation knob rotational along the current rotational direction according to the rotational direction, the rotational duration and the rotational angular velocity; regarding the rotational circumferential distance accumulated within the preset time as the valid rotational circumferential distance; determining whether the valid rotational circumferential distance exceeds the preset circumferential distance in real time; unlocking the electronic device immediately in response to the valid rotational circumferential distance exceeding the preset circumferential distance; and returning to step (11) in response to the valid rotational circumferential distance not exceeding the preset circumferential distance when the preset time is over. If the rotational angular velocity becomes 0 within the preset time, the unlocking program continues without interruption until the preset time is over. When the rotational direction changes, and the rotational angular velocity exceeds the preset velocity, the current unlocking program is ended, the method returns to step (11), and the unlocking program is restarted according to the changed rotational direction, or the unlocking program is not restarted according to the changed rotational direction. That is, when the rotational angular velocity is greater than or equal to 0, the rotational circumferential distance is continuously accumulated and calculated within the preset time, and when the rotational angular velocity is less than 0 (the operation knob changes the rotational direction), the current unlocking program is ended, and the method returns to step (11). 
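Embodiment nine combines a preconfigured valid direction with a time window: momentary halts inside the window do not interrupt the program, while a reversal ends it. The sketch below is one possible reading; the configured direction set, the thresholds and the sample format are assumptions introduced for illustration.

```python
# Hedged sketch of embodiment nine: the valid direction(s) are configured in
# advance, and the valid circumferential distance is whatever accumulates along
# that direction within a preset time window. Halts inside the window simply add
# no distance; a reversal ends the current program (and may restart it if the new
# direction is also configured as valid). All values are illustrative.

VALID_DIRECTIONS = {"clockwise"}     # "anticlockwise" or both may be configured instead
PRESET_DISTANCE = 4.0                # accumulated-angle threshold (knob radius folded in)
PRESET_TIME = 3.0                    # s, window measured from the start of the program

def try_unlock_windowed(samples) -> bool:
    current, distance, elapsed = None, 0.0, 0.0
    for omega, dt in samples:
        direction = "clockwise" if omega > 0 else "anticlockwise" if omega < 0 else None
        if current is None:
            if direction in VALID_DIRECTIONS:            # start the program, begin timing
                current, distance, elapsed = direction, 0.0, 0.0
            continue
        elapsed += dt
        if direction is not None and direction != current:
            # Reversal: end the current program; restart only if the new direction
            # is also a preset valid direction.
            current = direction if direction in VALID_DIRECTIONS else None
            distance, elapsed = 0.0, 0.0
            continue
        if direction == current:
            distance += abs(omega) * dt
            if distance >= PRESET_DISTANCE:
                return True                              # unlock immediately
        if elapsed >= PRESET_TIME:
            current = None                               # window over: back to step 11
    return False
```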
Based on the embodiment eight, in embodiment ten, the valid rotational circumferential distance is determined in the following manner: regarding a rotational circumferential distance of rotation along the current valid rotational direction after the unlocking program is started as the valid rotational circumferential distance, where the halt time of the operation does not exceed the preset halt time. In an embodiment, calculating the valid rotational circumferential distance includes the following steps: in response to the rotational direction being the current valid rotational direction, calculating the rotational circumferential distance of the operation knob according to the rotational duration and the rotational angular velocity; in response to the rotational angular velocity being 0 and beginning timing, ending the unlocking program and returning to step (11) in response to the operation knob not resuming rotation within the preset halt time; continuously accumulating and calculating the rotational circumferential distance of the operation knob until the rotational circumferential distance reaches the preset circumferential distance in response to recovering the condition, and completing the unlocking work. If the rotational direction changes, the calculation of the valid circumferential distance is stopped, the method returns to step (11), and the unlocking program is restarted according to the changed rotational direction, or the unlocking program is not restarted according to the changed rotational direction. That is, when the rotational angular velocity is greater than 0, the rotational circumferential distance is continuously accumulated and calculated; and when the rotational angular velocity is 0, and timing begins, if the original rotational direction is recovered within the preset halt time, the rotational circumferential distance is continuously accumulated and calculated, and if the halt time exceeds the preset halt time, the current unlocking program is ended, and the method returns to step (11); and when the rotational angular velocity is less than 0, the current unlocking program is ended, and the method returns to step (11). In this embodiment, the halt time refers to the time during which the rotational angular velocity of the operation knob is 0. This embodiment supports unlocking by multiple consecutive rotations, thereby having lower requirements on operation concentration and action continuity of an operator, and allowing a user to give several consecutive turns with a few halts to unlock the electronic device, thus helping patients having visual impairment or finger nerve ending impairment to use the electronic device more easily. During the rotation of the operation knob, one or any combination of the following situations is called discontinuous rotation of the operation knob: the rotational direction changes, the rotation suddenly halts, the rotational halt time exceeds the preset halt time. When one or any combination of these situations occurs, the current unlocking program is ended; and then depending on these situations, the method returns to step (11); or depending on these situations, the method returns to step (11), and the unlocking program is restarted. 
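The "discontinuous rotation" situations listed above (and in the earlier, fuller list) can be summarized as a single predicate that, when true, ends the current unlocking program. The sketch below is only an illustration; the RotationState fields and the threshold values are assumptions, and individual embodiments apply only a subset of these checks.

```python
# Hedged sketch of a "discontinuous rotation" check combining the situations
# listed in the text. Field names and thresholds are illustrative assumptions.

from dataclasses import dataclass

PRESET_VELOCITY = 5.0     # rad/s
PRESET_HALT_TIME = 1.0    # s

@dataclass
class RotationState:
    direction_changed: bool   # rotational direction differs from the valid direction
    omega: float              # current angular velocity magnitude (rad/s)
    slow_time: float          # time spent halted or below the preset velocity (s)

def is_discontinuous(s: RotationState) -> bool:
    return (
        s.direction_changed                      # the rotational direction changes
        or s.omega == 0.0                        # the rotation suddenly halts
        or s.omega < PRESET_VELOCITY             # slower than the preset velocity during rotation
        or s.slow_time > PRESET_HALT_TIME        # the halt (or slow) time exceeds the preset halt time
    )
```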
Referring toFIG.5, based on the embodiment two, in embodiment eleven, starting the unlocking program includes: starting (121c) the unlocking program in response to the rotational direction of the operation knob being a preset valid rotational direction, and the rotational angular velocity exceeding the preset velocity. In this embodiment, the valid rotational direction is the preset valid rotational direction. In an embodiment, clockwise or anticlockwise may be configured as the valid rotational direction, or clockwise and anticlockwise may be configured as the valid rotational directions simultaneously, the rotational angular velocity of the operation knob and the rotational direction of the operation knob are detected in real time, and when the operation knob rotates in the preset valid rotational direction, and the rotational angular velocity exceeds the preset velocity, the unlocking program is started. For example, when clockwise or anticlockwise is configured as the valid rotational direction, whether the rotational angular velocity is greater than the preset velocity is detected, and whether the rotational direction is clockwise or anticlockwise is detected, and if the rotational angular velocity is greater than the preset velocity, and the rotational direction is clockwise or anticlockwise, the unlocking program is started. When clockwise and anticlockwise are configured as the valid rotational directions simultaneously, whether the rotational angular velocity is greater than the preset velocity is detected, and if the rotational angular velocity is greater than the preset velocity, the unlocking program is started, and the current rotational direction is recorded. Starting (M) the unlocking program includes the following steps: calculating (122) a rotational circumferential distance of the operation knob continuously rotational along the current valid rotational direction and regarding the rotational circumferential distance of the operation knob continuously rotational along the current valid rotational direction as the valid rotational circumferential distance, determining (123) whether the valid rotational circumferential distance reaches the preset circumferential distance, controlling (124) the electronic device to be unlocked in response to the valid rotational circumferential distance reaching the preset circumferential distance, and returning to step (11) in response to the valid rotational circumferential distance not reaching the preset circumferential distance. That is, whether the rotational circumferential distance of the operation knob rotational along the current rotational direction (the rotational circumferential distance of one rotation of the operation knob) reaches the preset circumferential distance is calculated, and the electronic device is controlled to unlock in response to the rotational circumferential distance of the operation knob rotational along the current rotational direction reaching the preset circumferential distance, and the method returns to step (11) in response to the rotational circumferential distance of the operation knob rotational along the current rotational direction not reaching the preset circumferential distance. 
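As a small illustration of the start condition in step (121c), the check below assumes directions are encoded as +1 (clockwise) and -1 (anticlockwise); the helper name and encoding are illustrative, not part of the patent.

```python
def should_start_unlock(direction, omega, valid_directions, preset_velocity):
    # The unlocking program starts only when the knob turns in a preset valid
    # direction and faster than the preset angular velocity.
    return direction in valid_directions and omega > preset_velocity

# Clockwise only, or both directions configured as valid:
# should_start_unlock(+1, omega, {+1}, preset_velocity)
# should_start_unlock(-1, omega, {+1, -1}, preset_velocity)
```

Configuring clockwise only, anticlockwise only, or both directions as valid then reduces to changing the contents of the valid_directions set.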
In an embodiment, the valid rotational circumferential distance is determined in the following manner: calculating a rotational circumferential distance of continuous rotation at a velocity greater than the preset velocity along the current valid rotational direction and regarding the rotational circumferential distance of the continuous rotation at the velocity greater than the preset velocity along the current valid rotational direction as the valid rotational circumferential distance. In an embodiment, calculating the valid rotational circumferential distance includes the following steps: in response to the rotational direction being the current valid rotational direction, and the rotational angular velocity exceeding the preset velocity, calculating the rotational circumferential distance of the operation knob according to the rotational duration and the rotational angular velocity; in response to the rotational angular velocity not exceeding the preset velocity, stopping the calculation of the valid rotational circumferential distance, and returning to step (11); in response to changing the rotational direction, and the rotational angular velocity not exceeding the preset velocity, stopping the calculation of the valid rotational distance, and returning to step (11); in response to changing the rotational direction, and the rotational angular velocity exceeding the preset velocity, stopping the calculation of the valid rotational distance, and returning to step (11); and determining whether the changed rotational direction is the valid rotational direction, and restarting the unlocking program to recalculate the valid rotational circumferential distance in response to the changed rotational direction being the valid rotational direction. Based on the embodiment eleven, in embodiment twelve, the valid rotational circumferential distance is determined in the following manner: regarding a rotational circumferential distance of rotation along the current valid rotational direction within a preset time after the unlocking program is started as the valid rotational circumferential distance, where the rotational angular velocity exceeds the preset velocity. In an embodiment, calculating the valid rotational circumferential distance includes the following steps: performing the following operations within the preset time after the unlocking program is started, and timing begins: in response to the rotational direction being the current valid rotational direction, and the rotational angular velocity exceeding the preset velocity, calculating the rotational circumferential distance of the operation knob according to the rotational duration and the rotational angular velocity; regarding the rotational circumferential distance accumulated within the preset time as the valid rotational circumferential distance; determining whether the valid rotational circumferential distance exceeds the preset circumferential distance in real time; unlocking the electronic device immediately in response to the valid rotational circumferential distance exceeding the preset circumferential distance; and returning to step (11) in response to the valid rotational circumferential distance not exceeding the preset circumferential distance when the preset time is over. When the rotational angular velocity does not exceed the preset velocity, the calculation of the current rotational circumferential distance is stopped temporarily until the rotational angular velocity is recovered. 
When the rotational direction changes, and the rotational angular velocity exceeds the preset velocity, the calculation of the rotational circumferential distance is stopped, and the method returns to step (11); whether the changed rotational direction is the valid rotational direction is determined, and if the changed rotational direction is the valid rotational direction, the unlocking program is restarted to recalculate the valid rotational circumferential distance. Based on the embodiment eleven, in embodiment thirteen, the valid rotational circumferential distance is determined in the following manner: regarding a rotational circumferential distance of rotation along the current valid rotational direction after the unlocking program is started as the valid rotational circumferential distance, where the rotational angular velocity exceeds the preset velocity, and the halt time of the rotation does not exceed the preset time. In an embodiment, calculating the valid rotational circumferential distance includes the following steps: in response to the rotational direction being valid, and the rotational angular velocity exceeding the preset velocity, calculating the rotational circumferential distance of the operation knob according to the rotational duration and the rotational angular velocity; in response to the rotational angular velocity not exceeding the preset velocity (for example, the rotational angular velocity is 0, and the operation knob halts), and beginning timing, ending the unlocking program, and returning to step (11) in response to the operation knob not recovering the condition that the rotational angular velocity exceeds the preset velocity and that the rotational direction is the valid rotational direction within the preset halt time; and continuously accumulating and calculating the rotational circumferential distance of the operation knob until the rotational circumferential distance reaches the preset circumferential distance in response to recovering the condition, and completing the unlocking work. If the rotational direction suddenly changes, and the rotational angular velocity exceeds the preset rotational angular velocity, the calculation of the valid rotational circumferential distance is stopped, and the method returns to step (11); whether the changed rotational direction is the valid rotational direction is determined, and if the changed rotational direction is the valid rotational direction, the unlocking program is restarted to recalculate the valid rotational circumferential distance. In this embodiment, the halt time refers to the time during which the rotational angular velocity of the operation knob is lower than the preset velocity. This embodiment supports unlocking by multiple consecutive rotations, and if a reverse low-velocity rotation is caused by a misoperation, the unlocking program is still running, thereby having lower requirements on operation concentration and action continuity of an operator, and allowing a user to give several consecutive turns with a few halts to unlock the electronic device, thus helping patients having visual impairment or finger nerve ending impairment to use the electronic device more easily. Certainly, the halt time may also include the time during which the rotational angular velocity of the operation knob is 0 and lower than the preset velocity, and the rotational direction of the operation knob is the current valid rotational direction. 
In this case, if the rotational direction changes, whatever the rotational angular velocity is, a halt failure is indicated, and the method returns to step (11). Based on the embodiment eleven, in embodiment fourteen, the valid rotational circumferential distance is determined in the following manner: regarding a rotational circumferential distance of rotation along the current valid rotational direction after the unlocking program is started as the valid rotational circumferential distance, where the halt time of the rotation does not exceed the preset halt time. In an embodiment, when the rotational direction is valid, and the rotational angular velocity is greater than 0, the rotational circumferential distance of the operation knob is calculated according to the rotational duration and the rotational angular velocity; when the rotational direction changes, or the rotational angular velocity is 0 (the operation knob halts), and timing begins, if the operation knob does not recover the condition that the rotational angular velocity does not exceed the preset velocity and that the rotational direction is the valid rotational direction within the preset halt time, the unlocking program is ended, and the method returns to step (11); and if the condition is recovered, the rotational circumferential distance of the operation knob is continuously accumulated and calculated until the rotational circumferential distance reaches the preset circumferential distance, and the unlocking work is completed. If the rotational direction suddenly changes, and the rotational angular velocity exceeds the preset rotational angular velocity, the method returns to step (11). In this embodiment, the halt time refers to the time during which the rotational angular velocity of the operation knob is 0. This embodiment supports unlocking by multiple consecutive rotations, thereby having lower requirements on operation concentration and action continuity of an operator, and allowing a user to give several consecutive turns with a few halts to unlock the electronic device, thus helping patients having visual impairment or finger nerve ending impairment to use the electronic device more easily. Based on the embodiment eleven, in embodiment fifteen, the valid rotational circumferential distance is determined in the following manner: regarding a rotational circumferential distance of rotation along the current valid rotational direction within the preset time after the unlocking program is started as the valid rotational circumferential distance. In an embodiment, the rotational direction and the rotational angular velocity are detected in real time, and timing begins, and the rotational circumferential distance of the rotation along the current valid rotational direction is calculated according to the rotational direction, the rotational duration and the rotational angular velocity; if the rotational circumferential distance reaches the preset circumferential distance within the preset time, the unlocking work is performed; and if the rotational circumferential distance does not reach the preset circumferential distance when the preset time is over, the method returns to step (11). In the process, if the rotational direction suddenly changes, and the rotational angular velocity exceeds the preset rotational angular velocity, the method returns to step (11). 
During the rotation of the operation knob, one or any combination of the following situations is called discontinuous rotation of the operation knob: the rotational direction changes, the rotation suddenly halts, the rotational angular velocity is lower than the preset velocity during rotation, the rotation halt time exceeds the preset halt time or the time during which the rotational angular velocity is lower than the preset velocity exceeds the preset halt time, the rotational direction changes and the rotational angular velocity exceeds the preset velocity. When one or any combination of these situations occurs, the current unlocking program is ended; and then depending on these situations, the method returns to step (11); or depending on these situations, the method returns to step (11), and the unlocking program is restarted. In the preceding embodiment, the preset time is 1 second, the preset halt time is 0.5 seconds, and the preset circumferential distance is a circumferential distance of 60 degrees. Certainly, other values may also be selected for the preset halt time, the preset time and the preset circumferential distance respectively. Referring toFIG.6, the present application further provides an electronic device200capable of being unlocked by using a knob. The electronic device200includes a host31, an operation knob32rotatably mounted on the host31, one or more processors34, a memory35, and one or more programs36. The one or more programs36are stored in the memory and configured to be executed by the one or more processors. The one or more programs include instructions for executing the preceding unlocking method100by using a knob. The electronic device200further includes a photosensor33. In the one or more programs, the rotational direction of the operation knob32and the rotational angular velocity of the operation knob32are acquired by using the photosensor33, and the rotational circumferential distance of the operation knob32is calculated according to the rotational direction, the rotational duration and the rotational angular velocity. In an embodiment, the photosensor33includes a laser light source unit and an image detection unit. The laser light source unit is configured to emit weak laser light onto a surface of the operation knob32, and the image detection unit is configured to determine the rotational direction of the operation knob32and the rotational angular velocity of the operation knob32by using light reflected by the surface of the operation knob32. Referring toFIGS.7ato7d, the electronic device may be a watch (as shown inFIGS.7ato7d). The watch may be an ordinary watch or an electronic watch. The operation knob may be the crown32aof the watch (as shown inFIG.7a) or the bezel32cdisposed around the dial of the watch (as shown inFIG.7d). Referring toFIG.8, the electronic device may be a camera, and the operation knob may be the dial32con the camera. Referring toFIG.9, the electronic device may be a recording device, and the operation knob may be a volume knob32d. Certainly, the electronic device may also be a walkman (not shown) or another electronic device whose interface needs to be locked to prevent a misoperation. The preceding embodiments disclosed are only the preferred embodiments of the present application and certainly cannot be used for limiting the scope of claims of the present application. Therefore, the equivalent changes made according to the patent scope of the present application still fall within the scope of the present application.
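As a closing illustration of the photosensor-based acquisition described above, the sketch below turns successive photosensor readings into a rotational direction, angular velocity and circumferential distance. The read_displacement() call and the class interface are assumptions for illustration; the patent only states that the photosensor33determines direction and angular velocity from light reflected by the knob surface.

```python
import time

class KnobTracker:
    # Sketch only: read_displacement() is a hypothetical driver call returning the
    # tangential surface displacement (mm, signed by direction) measured by the
    # photosensor between two consecutive image frames.
    def __init__(self, read_displacement, knob_radius_mm):
        self.read_displacement = read_displacement
        self.radius = knob_radius_mm
        self.last_time = time.monotonic()

    def sample(self):
        now = time.monotonic()
        dt = now - self.last_time
        self.last_time = now
        ds = self.read_displacement()                  # circumferential distance this frame
        direction = 1 if ds > 0 else (-1 if ds < 0 else 0)
        omega = (abs(ds) / self.radius) / dt if dt > 0 else 0.0   # rad/s
        return direction, omega, abs(ds)
```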
11861081 | DETAILED DESCRIPTION FIG.1is a block diagram illustrating an electronic device101in a network environment100according to various embodiments. Referring toFIG.1, the electronic device101in the network environment100may communicate with an electronic device102via a first network198(e.g., a short-range wireless communication network), or an electronic device104or a server108via a second network199(e.g., a long-range wireless communication network). According to an embodiment, the electronic device101may communicate with the electronic device104via the server108. According to an embodiment, the electronic device101may include a processor120, memory130, an input device150, a sound output device155, a display device160, an audio module170, a sensor module176, an interface177, a haptic module179, a camera module180, a power management module188, a battery189, a communication module190, a subscriber identification module (SIM)196, or an antenna module197. In some embodiments, at least one (e.g., the display device160or the camera module180) of the components may be omitted from the electronic device101, or one or more other components may be added in the electronic device101. In some embodiments, some of the components may be implemented as single integrated circuitry. For example, the sensor module176(e.g., a fingerprint sensor, an iris sensor, or an illuminance sensor) may be implemented as embedded in the display device160(e.g., a display). The processor120may execute, for example, software (e.g., a program140) to control at least one other component (e.g., a hardware or software component) of the electronic device101coupled with the processor120, and may perform various data processing or computation. According to one embodiment, as at least part of the data processing or computation, the processor120may load a command or data received from another component (e.g., the sensor module176or the communication module190) in volatile memory132, process the command or the data stored in the volatile memory132, and store resulting data in non-volatile memory134. According to an embodiment, the processor120may include a main processor121(e.g., a central processing unit (CPU) or an application processor (AP)), and an auxiliary processor123(e.g., a graphics processing unit (GPU), an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that is operable independently from, or in conjunction with, the main processor121. Additionally or alternatively, the auxiliary processor123may be adapted to consume less power than the main processor121, or to be specific to a specified function. The auxiliary processor123may be implemented as separate from, or as part of the main processor121. The auxiliary processor123may control at least some of functions or states related to at least one component (e.g., the display device160, the sensor module176, or the communication module190) among the components of the electronic device101, instead of the main processor121while the main processor121is in an inactive (e.g., sleep) state, or together with the main processor121while the main processor121is in an active state (e.g., executing an application). According to an embodiment, the auxiliary processor123(e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., the camera module180or the communication module190) functionally related to the auxiliary processor123. 
The memory130may store various data used by at least one component (e.g., the processor120or the sensor module176) of the electronic device101. The various data may include, for example, software (e.g., the program140) and input data or output data for a command related thereto. The memory130may include the volatile memory132or the non-volatile memory134. The program140may be stored in the memory130as software, and may include, for example, an operating system (OS)142, middleware144, or an application146. The input device150may receive a command or data to be used by another component (e.g., the processor120) of the electronic device101, from the outside (e.g., a user) of the electronic device101. The input device150may include, for example, a microphone, a mouse, a keyboard, or a digital pen (e.g., a stylus pen). The sound output device155may output sound signals to the outside of the electronic device101. The sound output device155may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or playing a recording, and the receiver may be used for incoming calls. According to an embodiment, the receiver may be implemented as separate from, or as part of, the speaker. The display device160may visually provide information to the outside (e.g., a user) of the electronic device101. The display device160may include, for example, a display, a hologram device, or a projector and control circuitry to control a corresponding one of the display, hologram device, and projector. According to an embodiment, the display device160may include touch circuitry adapted to detect a touch, or sensor circuitry (e.g., a pressure sensor) adapted to measure the intensity of force incurred by the touch. The audio module170may convert a sound into an electrical signal and vice versa. According to an embodiment, the audio module170may obtain the sound via the input device150, or output the sound via the sound output device155or a headphone of an external electronic device (e.g., an electronic device102) directly (e.g., wiredly) or wirelessly coupled with the electronic device101. The sensor module176may detect an operational state (e.g., power or temperature) of the electronic device101or an environmental state (e.g., a state of a user) external to the electronic device101, and then generate an electrical signal or data value corresponding to the detected state. According to an embodiment, the sensor module176may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an accelerometer, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor. The interface177may support one or more specified protocols to be used for the electronic device101to be coupled with the external electronic device (e.g., the electronic device102) directly (e.g., wiredly) or wirelessly. According to an embodiment, the interface177may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface. A connecting terminal178may include a connector via which the electronic device101may be physically connected with the external electronic device (e.g., the electronic device102).
According to an embodiment, the connecting terminal178may include, for example, a HDMI connector, a USB connector, a SD card connector, or an audio connector (e.g., a headphone connector). The haptic module179may convert an electrical signal into a mechanical stimulus (e.g., a vibration or a movement) or electrical stimulus which may be recognized by a user via his tactile sensation or kinesthetic sensation. According to an embodiment, the haptic module179may include, for example, a motor, a piezoelectric element, or an electric stimulator. The camera module180may capture a still image or moving images. According to an embodiment, the camera module180may include one or more lenses, image sensors, image signal processors, or flashes. The power management module188may manage power supplied to the electronic device101. According to one embodiment, the power management module188may be implemented as at least part of, for example, a power management integrated circuit (PMIC). The battery189may supply power to at least one component of the electronic device101. According to an embodiment, the battery189may include, for example, a primary cell which is not rechargeable, a secondary cell which is rechargeable, or a fuel cell. The communication module190may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device101and the external electronic device (e.g., the electronic device102, the electronic device104, or the server108) and performing communication via the established communication channel. The communication module190may include one or more communication processors that are operable independently from the processor120(e.g., the application processor (AP)) and supports a direct (e.g., wired) communication or a wireless communication. According to an embodiment, the communication module190may include a wireless communication module192(e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module194(e.g., a local area network (LAN) communication module or a power line communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device via the first network198(e.g., a short-range communication network, such as Bluetooth™, wireless-fidelity (Wi-Fi) direct, or infrared data association (IrDA)) or the second network199(e.g., a long-range communication network, such as a cellular network, the Internet, or a computer network (e.g., LAN or wide area network (WAN)). These various types of communication modules may be implemented as a single component (e.g., a single chip), or may be implemented as multi components (e.g., multi chips) separate from each other. The wireless communication module192may identify and authenticate the electronic device101in a communication network, such as the first network198or the second network199, using subscriber information (e.g., international mobile subscriber identity (IMSI)) stored in the subscriber identification module196. The antenna module197may transmit or receive a signal or power to or from the outside (e.g., the external electronic device) of the electronic device101. According to an embodiment, the antenna module197may include an antenna including a radiating element composed of a conductive material or a conductive pattern formed in or on a substrate (e.g., PCB). 
According to an embodiment, the antenna module197may include a plurality of antennas. In such a case, at least one antenna appropriate for a communication scheme used in the communication network, such as the first network198or the second network199, may be selected, for example, by the communication module190(e.g., the wireless communication module192) from the plurality of antennas. The signal or the power may then be transmitted or received between the communication module190and the external electronic device via the selected at least one antenna. According to an embodiment, another component (e.g., a radio frequency integrated circuit (RFIC)) other than the radiating element may be additionally formed as part of the antenna module197. At least some of the above-described components may be coupled mutually and communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)). According to an embodiment, commands or data may be transmitted or received between the electronic device101and the external electronic device104via the server108coupled with the second network199. Each of the electronic devices102and104may be a device of a same type as, or a different type, from the electronic device101. According to an embodiment, all or some of operations to be executed at the electronic device101may be executed at one or more of the external electronic devices102,104, or108. For example, if the electronic device101should perform a function or a service automatically, or in response to a request from a user or another device, the electronic device101, instead of, or in addition to, executing the function or the service, may request the one or more external electronic devices to perform at least part of the function or the service. The one or more external electronic devices receiving the request may perform the at least part of the function or the service requested, or an additional function or an additional service related to the request, and transfer an outcome of the performing to the electronic device101. The electronic device101may provide the outcome, with or without further processing of the outcome, as at least part of a reply to the request. To that end, a cloud computing, distributed computing, or client-server computing technology may be used, for example. FIG.2is a perspective view illustrating the electronic device101including a stylus pen201(e.g., the electronic device102ofFIG.1) according to various embodiments. According to various embodiments, the stylus pen201in this specification may correspond to the input device150ofFIG.1instead of the electronic device102ofFIG.1. Referring toFIG.2, the electronic device101according to various embodiments may include the configuration illustrated inFIG.1, and may include a structure into which the stylus pen201may be inserted. The electronic device101may include a housing210and a hole211in a portion of the housing210, for example, a portion of a side surface210aof the housing210. The electronic device101may include a first internal space212that is a garage connected to the hole211, and the stylus pen201may be inserted into the first internal space212. 
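The offloading behaviour described above can be summarized with a short sketch: the device either performs the function itself or asks external devices to perform at least part of it and then uses the transferred outcome, with or without further processing. Every name below is an illustrative placeholder rather than an API of the patent.

```python
def perform_function(request, can_run_locally, run_locally, external_devices,
                     postprocess=lambda outcome: outcome):
    # Run the function locally when possible; otherwise request external devices
    # (e.g. devices 102/104 or server 108) to perform at least part of it.
    if can_run_locally(request):
        return run_locally(request)
    for device in external_devices:
        outcome = device.request(request)   # hypothetical transport call
        if outcome is not None:
            return postprocess(outcome)     # reply with or without further processing
    return None                             # no device could handle the request
```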
According to the illustrated embodiment, the stylus pen201may include a first button201aon one end thereof, which may be pressed so that the stylus pen201is easily taken out of the first internal space212of the electronic device101. When the first button201ais pressed, a repulsion mechanism configured in association with the first button201a(e.g., a repulsion mechanism by at least one elastic member (e.g., a spring)) may operate, so that the stylus pen201may be removed from the first internal space212. FIG.3Ais a block diagram illustrating a stylus pen (e.g., the stylus pen201ofFIG.2) according to various embodiments. Referring toFIG.3A, the stylus pen201according to an embodiment may include a processor220, a memory230, a resonant circuit287, a charging circuit288, a battery289, and a communication circuit290, an antenna297, a trigger circuit298, and/or a sensor299. In some embodiments, the processor220of the stylus pen201, at least a part of the resonant circuit287, and/or at least a part of the communication circuit290may be configured on a printed circuit board or in the form of a chip. The processor220, the resonant circuit287, and/or the communication circuit290may be electrically coupled to the memory230, the charging circuit288, the battery289, the antenna297, the trigger circuit298, and/or the sensor299. The processor220according to various embodiments may include a customized hardware module or a generic processor configured to execute software (e.g., an application program). The processor220may include a component (function) or a software element (program) including at least one of various sensors, a data measurement module, an input/output interface, a module for managing the state or environment of the stylus pen201, or a communication module, which is provided in the stylus pen201. The processor220may include, for example, one or a combination of two or more of hardware, software, and firmware. According to an embodiment, the processor220may be configured to transmit information indicating a pressed state of a button (e.g., a button337), sensing information obtained by the sensor299, and/or information calculated based on the sensing information (e.g., information related to the position of the stylus pen201) to the electronic device101through the communication circuit290. The resonant circuit287according to various embodiments may resonate based on an electromagnetic field signal generated from a digitizer (e.g., the display device160) of the electronic device101, and radiate an electromagnetic resonance (EMR) input signal (or magnetic field) by resonance. The electronic device101may identify the position of the stylus pen201on the electronic device101by using the EMR input signal. For example, the electronic device101may identify the position of the stylus pen201based on the magnitude of an induced electromotive force (e.g., output current) generated by an EMR signal in each of a plurality of channels (e.g., a plurality of loop coils). While the electronic device101and the stylus pen201have been described as operating by EMR, this is merely exemplary, and the electronic device101may generate a signal based on an electric field by electrically coupled resonance (ECR). The resonant circuit of the stylus pen201may resonate by an electric field. The electronic device101may identify a potential in a plurality of channels (e.g., electrodes) by the resonance in the stylus pen201and identify the position of the stylus pen201based on the potential. 
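To make the per-channel position identification concrete, one simple way (an illustration, not necessarily the patent's algorithm) is an amplitude-weighted centroid over the loop-coil positions, using the induced-EMF magnitude measured on each channel while the pen resonates.

```python
def estimate_pen_position(channel_amplitudes, channel_positions):
    # Weighted centroid of the loop-coil positions, weighted by the induced-EMF
    # magnitude (or output current) measured on each channel.
    total = sum(channel_amplitudes)
    if total == 0:
        return None   # no resonance detected on any channel
    return sum(a * x for a, x in zip(channel_amplitudes, channel_positions)) / total

# Example: five loop coils spaced 10 mm apart along one axis
amps = [0.02, 0.10, 0.45, 0.30, 0.05]
coil_x = [0, 10, 20, 30, 40]
x_mm = estimate_pen_position(amps, coil_x)   # ~23 mm, nearest the strongest coils
```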
A person skilled in the art will understand that the stylus pen201may be implemented in an active electrostatic (AES) method, and the type of implementation is not limited. In addition, the electronic device101may detect the stylus pen201based on a change in capacitance (self-capacitance or mutual capacitance) associated with at least one electrode of a touch panel. In this case, the stylus pen201may not include the resonant circuit. In the present disclosure, "panel" or "sensing panel" may be used as a term encompassing a digitizer and a touch screen panel (TSP). According to various embodiments, a signal having a pattern may be received through the resonant circuit287. The processor220may analyze the pattern of the signal received through the resonant circuit287and perform an operation based on the analysis result. The stylus pen201according to various embodiments may perform first communication through the resonant circuit287and second communication through the communication circuit290. For example, when the stylus pen201is inserted into the electronic device101, the stylus pen201may receive information from the electronic device101through the resonant circuit287. For example, the stylus pen201may receive a communication signal through the communication circuit290when it is detached from the electronic device101, and may also receive a communication signal through the communication circuit290even when it is inserted. The two different communications described above will be described later with reference toFIG.8A. The memory230according to various embodiments may store information related to the operation of the stylus pen201. For example, the information may include information for communication with the electronic device101and frequency information related to an input operation of the stylus pen201. In addition, the memory230may store a program (or application, algorithm, or processing loop) for calculating information about the position of the stylus pen201from sensing data of the sensor299. The memory230may store a communication stack of the communication circuit290. Depending on the implementation, the communication circuit290and/or the processor220may include a dedicated memory. The resonant circuit287according to various embodiments may include a coil (or inductor) and/or a capacitor. The resonant circuit287may resonate based on an input electric field and/or magnetic field (e.g., an electric field and/or magnetic field generated by a digitizer of the electronic device101). When the stylus pen201transmits a signal by EMR, the stylus pen201may generate a signal including a resonance frequency based on an electromagnetic field generated from an inductive panel of the electronic device101. When the stylus pen201transmits a signal by AES, the stylus pen201may generate a signal through capacitive coupling with the electronic device101. When the stylus pen201transmits a signal by ECR, the stylus pen201may generate a signal including a resonance frequency based on an electric field generated from a capacitive device of the electronic device. According to an embodiment, the resonant circuit287may be used to change the strength or frequency of the electromagnetic field according to a user's manipulation state. For example, the resonant circuit287may provide various frequencies for recognizing a hovering input, a drawing input, a button input, or an erasing input.
For example, the resonant circuit287may provide various resonance frequencies according to a connection combination of a plurality of capacitors, or may provide various resonance frequencies based on a variable inductor and/or a variable capacitor. When the charging circuit288according to various embodiments is connected to the resonant circuit287based on a switching circuit, the charging circuit288may rectify a resonance signal generated in the resonant circuit287into a direct current (DC) signal and provide the DC signal to the battery289. According to an embodiment, the stylus pen201may identify whether the stylus pen201is inserted into the electronic device101by using the voltage level of the DC signal detected by the charging circuit288. Alternatively, the stylus pen201may identify whether the stylus pen201is inserted by identifying a pattern corresponding to the signal identified by the charging circuit288. The battery289according to various embodiments may be configured to store power required for the operation of the stylus pen201. The battery289may include, for example, a lithium-ion battery or a capacitor, and may be rechargeable or replaceable. According to an embodiment, the battery289may be charged with power (e.g., a DC signal (DC power)) supplied from the charging circuit288. The communication circuit290according to various embodiments may be configured to perform a wireless communication function between the stylus pen201and the communication module190of the electronic device101. According to an embodiment, the communication circuit290may transmit state information, input information, and/or information related to the position of the stylus pen201to the electronic device101by short-range communication. For example, the communication circuit290may transmit direction information (e.g., motion sensor data) about the stylus pen201, obtained through the trigger circuit298, voice information input through a microphone, or information about the remaining amount of the battery289to the electronic device101. For example, the communication circuit290may transmit sensing data obtained from the sensor299and/or information related to the position of the stylus pen201identified based on the sensing data to the electronic device101. For example, the communication circuit290may transmit information about a state of a button (e.g., the button337) included in the stylus pen201to the electronic device101. For example, the short-range communication scheme may include, but not limited to, at least one of Bluetooth, Bluetooth low energy (BLE), NFC, or Wi-Fi direct. The antenna297according to various embodiments may be used to transmit or receive a signal or power to or from the outside (e.g., the electronic device101). According to an embodiment, the stylus pen201may include a plurality of antennas297and select at least one of the antennas297, suitable for a communication scheme. The communication circuit290may exchange signals or power with an external electronic device through the selected at least one antenna297. The trigger circuit298according to various embodiments may include at least one button or a sensor circuit. According to an embodiment, the processor220may identify an input scheme (e.g., touch or press) or type (e.g., EMR button or BLE button) of the button in the stylus pen201. According to an embodiment, the trigger circuit298may transmit a trigger signal to the electronic device101by using an input signal of a button or a signal through the sensor299. 
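The frequency behaviour described above follows from the standard LC resonance relation f = 1 / (2π√(LC)): switching an additional capacitor into the resonant circuit lowers the resonance frequency. The component values below are illustrative only and are not taken from the patent.

```python
import math

def resonance_frequency(inductance_h, capacitance_f):
    # f = 1 / (2 * pi * sqrt(L * C)) for a simple LC resonant circuit.
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values: a 1 mH coil with 100 pF resonates near 500 kHz;
# switching a second capacitor in parallel lowers the frequency.
L = 1e-3
c1, c2 = 100e-12, 22e-12
f_switch_off = resonance_frequency(L, c1)        # ~503 kHz
f_switch_on = resonance_frequency(L, c1 + c2)    # ~456 kHz with C2 switched in
```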
The sensor299according to various embodiments may include an accelerometer, a gyro sensor, and/or a geomagnetic sensor. The accelerometer may sense information about a linear motion of the stylus pen201and/or a 3-axis acceleration of the stylus pen201. The gyro sensor may sense information related to rotation of the stylus pen201. The geomagnetic sensor may sense information about a direction in which the stylus pen201is directed in an absolute coordinate system. According to an embodiment, the sensor299may include a sensor for measuring movement, and a sensor for generating an electrical signal or data value corresponding to an internal operating state or external environmental state of the stylus pen201, for example, at least one of a remaining battery level sensor, a pressure sensor, an optical sensor, a temperature sensor, or a biometric sensor. According to various embodiments, the processor220may transmit the information obtained from the sensor299to the electronic device101through the communication circuit290. Alternatively, the processor220may transmit information related to the position of the stylus pen201(e.g., coordinates of the stylus pen201and/or displacement of the stylus pen201) based on the information obtained from the sensor299to the electronic device101through the communication circuit290. FIG.3Bis an exploded perspective view illustrating a stylus pen (e.g., the stylus pen201ofFIG.2), according to various embodiments. Referring toFIG.3B, the stylus pen201may include a pen housing300forming the exterior of the stylus pen201and an inner assembly inside the pen housing300. In the illustrated embodiment, the inner assembly with several components of the stylus pen201coupled together therein may be inserted into the pen housing300by one assembly action. The pen housing300may be elongated between a first end300aand a second end300band include a second internal space301therein. The pen housing300may have an elliptical cross section with a major axis and a minor axis, and may be shaped into an elliptical cylinder as a whole. The first internal space212of the electronic device101described before with reference toFIG.2may also have an elliptical cross-section corresponding to the shape of the pen housing300. According to various embodiments, at least a portion of the pen housing300may include a synthetic resin (e.g., plastic) and/or a metallic material (e.g., aluminum). According to an embodiment, the first end300aof the pen housing300may be formed of a synthetic resin. Various other embodiments may be available for the material of the pen housing300. The inner assembly may be elongated in correspondence with the shape of the pen housing300. The inner assembly may be largely divided into three parts along the longitudinal direction. For example, the inner assembly may include a coil unit310disposed at a position corresponding to the first end300aof the pen housing300, an ejection member320disposed at a position corresponding to the second end300bof the pen housing300, and a circuit board unit330disposed at a position corresponding to a body of the pen housing300. The coil unit310may include a pen tip311exposed to the outside of the first end300a, when the inner assembly is completely inserted into the pen housing300, a packing ring312, and a coil313wound a plurality of times, and/or a pen pressure sensing unit314for obtaining a change in pressure applied by the pressing of the pen tip311. The packing ring312may include epoxy, rubber, urethane, or silicone. 
The packing ring312may be provided for the purpose of waterproofing and dustproofing and may protect the coil unit310and the circuit board unit330from water or dust. According to an embodiment, the coil313may form a resonance frequency in a set frequency band (e.g., 500 kHz) and adjust the resonance frequency formed by the coil313in a certain range, in combination with at least one element (e.g., a capacitor). The ejection member320may include a configuration for withdrawing the stylus pen201from the first internal space212of the electronic device (e.g.,101ofFIG.2). According to an embodiment, the ejection member320may include a shaft321, an ejection body322disposed around the shaft321and forming the whole exterior of the ejection member320, and a button portion323(e.g., the first button201aofFIG.2). When the inner assembly is completely inserted into the pen housing300, a part including the shaft321and the ejection body322may be surrounded by the second end300bof the pen housing300, and at least a portion of the button portion323may be exposed outward from the second end300b. A plurality of components which are not shown, for example, cam members or elastic members, may be disposed in the ejection body322to form a push-pull structure. In an embodiment, the button portion323may be substantially coupled with the shaft321to make a linear reciprocating motion with respect to the ejection body322. According to various embodiments, the button portion323may include a button having a locking structure so that a user may take out the stylus pen201by using a fingernail. According to an embodiment, the stylus pen201may provide another input method by including a sensor for detecting a linear reciprocating motion of the shaft321. The circuit board unit330may include a printed circuit board332, a base331surrounding at least one surface of the printed circuit board332, and an antenna. According to an embodiment, a board mounting portion333on which the printed circuit board332is disposed may be formed on the top surface of the base331, and the printed circuit board332may be fixedly mounted on the board mounting portion333. According to an embodiment, the printed circuit board332may include a first surface and a second surface, and a variable capacitor or switch334connected to the coil313may be disposed on the first surface, and a charging circuit, a battery336, or a communication circuit may be disposed on the second surface. The first surface and the second surface of the printed circuit board332may refer to different stacked surfaces in a top/down stack structure according to an embodiment, and refer to different portions of the printed circuit board332disposed along the longitudinal direction according to another embodiment. The battery336may include an electric double layered capacitor (EDLC). The charging circuit may be located between the coil313and the battery and include voltage detector circuitry and a rectifier. The battery336may not necessarily be disposed on the second surface of the printed circuit board332. The position of the battery336may vary according to various mounting structures of the circuit board unit330, and the battery336may be disposed at a position different from that shown in the drawing. The antenna may include an antenna embedded in an antenna structure339and/or the printed circuit board332, as in the example illustrated inFIG.3B. According to various embodiments, a switch334may be provided on the printed circuit board332.
The second button337provided in the stylus pen201may be used to press the switch334and may be exposed outward through a side opening302of the pen housing300. While supporting the second button337, a support member338may restore or maintain the second button337to or at a certain position by providing an elastic restoring force in the absence of an external force applied to the second button337. The second button337may be implemented as a physical key, a touch key, a motion key, or a pressure key, or implemented in a keyless manner. The implementation type of the button is not limited. The circuit board unit330may include, for example, a packing ring such as an O-ring. According to an embodiment, O-rings formed of an elastic material may be disposed at both ends of the base331to form a sealing structure between the base331and the pen housing300. In some embodiments, the support member338may partially adhere to the inner wall of the pen housing300around the side opening302to form a sealing structure. For example, at least one portion of the circuit board unit330may include a waterproof and dustproof structure similar to the packing ring312of the coil unit310. The stylus pen201may include a battery mounting portion333ain which the battery336is disposed, on the top surface of the base331. The battery336that may be mounted on the battery mounting portion333amay include, for example, a cylinder-type battery. The stylus pen201may include a microphone (not shown) and/or a speaker. The microphone and/or the speaker may be directly coupled to the printed circuit board332or coupled to a separate flexible printed circuit board (FPCB) (not shown) coupled to the printed circuit board332. According to various embodiments, the microphone and/or the speaker may be disposed at a position parallel to the second button337in the longitudinal direction of the stylus pen201.
The pen controller410may include a control circuit (e.g., the control circuit independent of the processor120), an inverter, and/or an amplifier, in addition to the at least one coil411and412. As described above, the pen controller410may not include the control circuit. In this case, the pen controller410may provide a signal for charging to the at least one coil411and412under the control of the processor120. According to various embodiments, the pen controller410may provide a signal having a pattern through the at least one coil411and412. The pattern may be pre-shared with the stylus pen201for controlling the stylus pen201, and may include, for example, but is not limited to, a charging start instruction pattern and a charging termination instruction pattern. While two coils411and412are shown as providing a charging signal or a signal having a pattern for control, this is merely exemplary, and the number of the coils is not limited. Table 1 illustrates information about associations among binary codes configured to be transmitted from the pen controller410to the stylus pen201, patterns, and configured control operations according to various embodiments.

TABLE 1
Binary code    Pattern          Control operation
0000 0001      First pattern    BLE communication module reset
0000 0010      Second pattern   Charging start
0000 0011      Third pattern    Indication of garage-in
0000 0100      Fourth pattern   Sensor reset

For example, the electronic device101may determine to command the reset of a BLE communication module of the stylus pen201and identify a binary code "0000 0001" corresponding to the command. The electronic device101may apply a signal of the first pattern corresponding to the binary code "0000 0001" to a garage coil (e.g., the coils411and412). An electromagnetic induction signal corresponding to the signal of the first pattern may be output from the coil421of the stylus pen201through inter-coil electromagnetic induction. The stylus pen201may identify the binary code of "0000 0001", for example, based on a rectified voltage (e.g., VM). The stylus pen201may reset the BLE communication module corresponding to the binary code of "0000 0001". The electronic device101may apply various patterns of signals corresponding to charging start, indication of garage-in, and sensor reset to the garage coil (e.g., the coils411and412). The stylus pen201may perform a corresponding operation based on the electromagnetic induction signal (or a signal obtained by rectifying the electromagnetic induction signal). Various patterns corresponding to control operations will be described later. The binary codes, patterns, and control operations in Table 1 are merely exemplary. According to various embodiments, the stylus pen201may generate a signal of a sixth pattern as illustrated in Table 2.

TABLE 2
Binary code    Pattern          Control operation
1000 0011      Sixth pattern    Indication of garage-in

The electronic device101may identify an induced electromotive force signal by a signal of the sixth pattern and identify a binary code of "1000 0011" based on the induced electromotive force signal. The electronic device101may identify that the stylus pen201is located in the garage based on the binary code of "1000 0011". The electronic device101may operate in an insert mode based on the insertion of the stylus pen201. For example, the electronic device101may release a pen input sensing operation of the sensing panel to reduce current consumption.
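The Table 1 codes sent from the pen controller410to the stylus pen201can be read as a small command dispatch. The codes below are the ones listed in Table 1; the handler method names are illustrative placeholders for the pen-side behaviour described in this passage, not an API defined by the patent.

```python
PATTERN_COMMANDS = {
    "0000 0001": "ble_reset",        # first pattern
    "0000 0010": "charging_start",   # second pattern
    "0000 0011": "garage_in",        # third pattern
    "0000 0100": "sensor_reset",     # fourth pattern
}

def handle_pattern(binary_code, pen):
    # Pen-side dispatch: the code recovered from the rectified voltage VM selects
    # the configured control operation.
    command = PATTERN_COMMANDS.get(binary_code)
    if command == "ble_reset":
        pen.reset_ble()
    elif command == "charging_start":
        pen.start_charging()
    elif command == "garage_in":
        pen.notify_garage_in()
    elif command == "sensor_reset":
        pen.reset_sensors()
    # unknown codes are ignored
```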
In various embodiments, the electronic device101may detect the insertion of the stylus pen201by receiving the signal of the sixth pattern from the stylus pen201(e.g., an active electrostatic pen). The electronic device101may transmit a communication signal for controlling the stylus pen201to operate in the insert mode to the stylus pen201, for example, by BLE communication. According to various embodiments, a resonant circuit420(e.g., the resonant circuit287ofFIG.3A) of the stylus pen201may include the coil421, at least one capacitor422and423, and/or a switch424. When the switch424is in an off state, the coil421and the capacitor422may form the resonant circuit, and when the switch424is in an on state, the coil421and the capacitors422and423may form the resonant circuit. Accordingly, the resonance frequency of the resonant circuit420may be changed according to the on/off state of the switch424. For example, the electronic device101may identify the on/off state of the switch424based on the frequency of a signal from the stylus pen201. For example, when the button337of the stylus pen201is pressed/released, the switch424may be turned on/off, and the electronic device101may identify whether the button337of the stylus pen201has been pressed, based on the frequency of the received signal, identified through the digitizer. According to various embodiments, at least one rectifier431and435may rectify and output an alternating current (AC) waveform signal VPEN output from the resonant circuit420. A charging switch controller (SWchgctrl)432may receive the rectified signal VM output from the rectifier431. Based on the rectified signal VM, the charging switch controller432may identify whether a signal generated from the resonant circuit420is a signal for charging or a signal for position detection. For example, the charging switch controller432may identify whether the signal generated from the resonant circuit420is a signal for charging or a signal for position detection based on, for example, the magnitude of the voltage of the rectified signal VM. Alternatively, the charging switch controller432may identify whether a signal having the charging start pattern is received based on the waveform of the rectified signal VM. According to various embodiments, when the signal is identified as for charging, the charging switch controller432may control a charging switch (SWchg)436to the on state. Alternatively, when a signal having the charging start pattern is detected, the charging switch controller432may control the charging switch (SWchg)436to be turned on. The charging switch controller432may transmit a charging start signal chg_on to the charging switch436. In this case, a rectified signal VIN may be transmitted to a battery437(e.g., the battery289ofFIG.3a) through the charging switch436. The battery437may be charged by using the received rectified signal VIN. An over-voltage protection circuit (OVP)433may identify a battery voltage VBAT and control the charging switch436to be turned off when the battery voltage exceeds an over-voltage threshold. The charging switch (SWchg)436may operate like a low dropout (LDO) regulator that adjusts the gate voltage of the charging switch (SWchg)436so that the battery voltage VBAT may be controlled to a constant voltage. According to various embodiments, when a charging stop pattern is identified, the charging switch controller432may control the charging switch436to the off state. 
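A minimal sketch of the charging-switch control just described: SWchg closes when the rectified voltage VM indicates a charging signal (or a charging start pattern), and opens when the over-voltage protection trips or a charging stop pattern is identified. The threshold values and the boolean interface are assumptions made for illustration.

```python
def update_charging_switch(vm_rectified, vbat, stop_pattern_seen,
                           charging_threshold_v, ovp_threshold_v, switch_on):
    # Returns the next state of SWchg (True = closed / charging).
    if vbat > ovp_threshold_v or stop_pattern_seen:
        return False          # OVP or charging-stop pattern opens SWchg
    if not switch_on and vm_rectified >= charging_threshold_v:
        return True           # VM large enough to be a charging signal: close SWchg
    return switch_on          # otherwise keep the current state
```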
According to various embodiments, when a reset pattern is identified, the charging switch controller432may transmit a reset signal to a BLE communication circuit and controller (BLE+controller)439(e.g., the communication circuit290and the processor220ofFIG.3a). According to various embodiments, when a pattern indicating a position in the garage is identified, the charging switch controller432may transmit corresponding information dck to the BLE communication circuit and controller (BLE+controller)439(e.g., the communication circuit290and the processor220ofFIG.3a). According to various embodiments, a load switch controller (SWLctrl)434may control a load switch (SWL)438to the on state, when the battery voltage is identified as exceeding an operating voltage threshold. When the load switch438is turned on, power from the battery437may be transferred to the BLE communication circuit and controller (BLE+controller)439(e.g., the communication circuit290and processor220ofFIG.3a). The BLE communication circuit and controller439may operate by using the received power. When the distance between the stylus pen201and the electronic device101is greater than a threshold distance, a button control circuit (button control)440may transmit information about an input of a button (e.g., the button337) to the BLE communication circuit and controller439. The BLE communication circuit and controller439may transmit the received information about the button input to the electronic device101through an antenna441(e.g., the antenna297ofFIG.3A). A sensor450(e.g., the sensor299ofFIG.3a) may include a gyro sensor451and/or an accelerometer452. Sensing data obtained by the gyro sensor451and/or the accelerometer452may be transmitted to the BLE communication circuit and controller439. The BLE communication circuit and controller439may transmit a communication signal including the received sensing data to the electronic device101through the antenna441. Alternatively, the BLE communication circuit and controller439may identify information related to the position of the stylus pen201(e.g., the coordinates and/or displacement of the stylus pen201) identified based on the received sensing data. The BLE communication circuit and controller439may transmit the identified information related to the position of the stylus pen201to the electronic device101through the antenna441. According to various embodiments, when the stylus pen201is withdrawn from the electronic device101, the BLE communication circuit and controller439may activate the accelerometer452. When the button (e.g., button337) is pressed, the BLE communication circuit and controller439may activate the gyro sensor451. The activation timings are merely an example, and there is no limitation on the activation timing of each sensor. In addition, the sensor450may further include a geomagnetic sensor. When only the accelerometer452is activated, the stylus pen201may provide acceleration information measured by the accelerometer452to the electronic device101, and the electronic device101may operate based on both of the position and acceleration information of the stylus pen201, which have been identified based on a pen signal. According to various embodiments, the electronic device101may control the stylus pen201to enter a charge mode through a first communication method (e.g., unidirectional/EMR communication). 
The stylus pen201may transmit charging information (e.g., a battery charge percentage) to the electronic device101in a second communication method (e.g., bidirectional/BLE communication). The first communication method may include at least one of ECR, AES, or unidirectional communication, in addition to EMR. The second communication method may include at least one of Bluetooth communication, NFC communication, or Wi-Fi direct, in addition to BLE communication. When the charging information received through BLE communication indicates fully charged (e.g., 100% or full-charge voltage), the electronic device101may discontinue the operation of charging the stylus pen201and perform auxiliary charging. When the battery charge level of the stylus pen201falls to or below a certain level (e.g., 95% or a specific voltage) after the charging of the stylus pen201is stopped, the electronic device101may control the stylus pen201to enter the charge mode through the first communication method, thereby performing charging. FIG.5is a diagram illustrating the configuration of an electronic device according to various embodiments. According to various embodiments, the electronic device101(e.g., the electronic device101ofFIG.1) may include a sensing panel controller511, a processor512(e.g., the processor120), a Bluetooth controller513(e.g., the communication module190), and/or an antenna514. The electronic device101may include a sensing panel503, a display assembly502disposed on the sensing panel503, and/or a window501disposed on the display assembly502. Depending on implementation, when the sensing panel503is implemented as a digitizer, a touch screen panel for sensing a user's touch may be further disposed on or under the sensing panel503. The touch screen panel may be located on the display assembly502depending on implementation. As described before, the sensing panel503may be implemented as a digitizer and include a plurality of loop coils. According to various embodiments, when implemented as a digitizer, the sensing panel503may include a component (e.g., an amplifier) for applying an electrical signal (e.g., a transmission signal) to the loop coils. The sensing panel503may include a component (e.g., an amplifier, a capacitor, or an ADC) for processing a signal (e.g., an input signal) output from the loop coils. The sensing panel503may identify the position of the stylus pen201based on the magnitudes of signals received from the loop coils (e.g., a converted digital value converted for each channel), and output information about the position to the processor120. Alternatively, depending on implementation, the processor120may identify the position of the stylus pen201based on the magnitudes of the signals received from the loop coils (e.g., the converted digital value for each channel). For example, the sensing panel503may apply a current to at least one of the loop coils, and the at least one coil may form a magnetic field. The stylus pen201may resonate by a magnetic field formed around it, and a magnetic field may be formed from the stylus pen201by the resonance. A current may be output from each of the loop coils by the magnetic field formed from the stylus pen201. The electronic device101may identify the position of the stylus pen201based on the magnitude of the current (e.g., converted digital value) for each of the channels of the loop coils. 
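The full-charge/resume behavior described above (stop charging at full charge, resume when the reported level falls to 95% or below) amounts to a simple hysteresis loop on the battery level reported over the second communication method (e.g., BLE). The sketch below is illustrative only; send_charge_pattern() is a hypothetical helper standing in for the first-communication-method (garage-coil pattern) step.

```c
#include <stdbool.h>

/* Thresholds taken from the example in the text: stop at full charge,
 * resume when the reported level falls to 95% or below. */
#define CHARGE_FULL_PCT    100
#define CHARGE_RESUME_PCT   95

typedef struct {
    bool charging;   /* whether the electronic device is currently driving the garage coil */
} host_charge_ctrl_t;

/*
 * Called whenever the stylus pen reports its battery level over BLE.
 * Charge-mode entry/exit is commanded over the garage coil via
 * send_charge_pattern(), a hypothetical stand-in for that step.
 */
static void on_battery_report(host_charge_ctrl_t *c, int battery_pct,
                              void (*send_charge_pattern)(bool start))
{
    if (c->charging && battery_pct >= CHARGE_FULL_PCT) {
        c->charging = false;
        send_charge_pattern(false);   /* charging termination pattern */
    } else if (!c->charging && battery_pct <= CHARGE_RESUME_PCT) {
        c->charging = true;
        send_charge_pattern(true);    /* charging start pattern */
    }
}
```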
To determine the position of the stylus pen201, the loop coils may include coils extending in one axis (e.g., x-axis) direction and coils extending in another axis (e.g., y-axis) direction. However, the arrangement of the coils is not limited. The sensing panel controller511may apply a transmission signal Tx to at least some of the plurality of loop coils of the sensing panel503, and the loop coil receiving the transmission signal Tx may form a magnetic field. The sensing panel controller511may receive reception signals Rx from at least some of the plurality of loop coils in time division. The sensing panel controller511may identify the position of the stylus pen201(e.g., the stylus pen201ofFIG.2) based on the received signals Rx and transmit information about the position of the stylus pen201to the processor512. For example, the strengths of the reception signals Rx may be different for the plurality of respective loop coils (e.g., the respective channels), and the position of the stylus pen201may be identified based on the strengths of the reception signals. In addition, the electronic device101may identify whether the button (e.g., the button337) of the stylus pen201has been pressed based on the frequency of a received signal. For example, when the frequency of the received signal is a first frequency, the electronic device101may identify that the button of the stylus pen201has been pressed, and when the frequency of the received signal is a second frequency, the electronic device101may identify that the button of the stylus pen201is in a released state. Alternatively, when the sensing panel is implemented as a touch screen panel (TSP), the sensing panel503may identify the position of the stylus pen201based on an output signal of an electrode. The touch screen panel may be located on the display assembly502. The touch screen panel may be implemented in an in-cell structure in which a sensor electrode is located inside the display assembly502. Alternatively, the touch screen panel may be implemented in an on-cell structure in which a sensor electrode is located on the display assembly502. Alternatively, the electronic device101may detect the pen based on a change in the capacitance (mutual capacitance and/or self-capacitance) of a touch panel electrode. Hardware for sensing a pen signal from the stylus pen on the digitizer or the touch screen panel may be referred to as the sensing panel503. When the position of the stylus pen201is identified through the touch screen panel, the electronic device101may identify whether the button has been pressed based on a received communication signal. The sensing panel controller511may identify whether the stylus pen201has been inserted into (or coupled with) the electronic device101based on a received signal, and notify the processor512of the identification. Depending on implementation, the sensing panel controller511may be integrated with the sensing panel503. In various embodiments, the pen controller410ofFIG.4and the sensing panel controller511may be configured into one IC. The processor512may transmit a signal for wireless charging based on whether the stylus pen201has been inserted. The processor512may control the Bluetooth controller513based on whether the stylus pen201has been inserted. When a wireless communication connection has not been established, the processor512may control the Bluetooth controller513to establish a wireless communication connection with the stylus pen201.
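One simple way to turn the per-channel reception strengths described above into a pen coordinate is a weighted centroid across the loop-coil channels, with the button state read off the resonance frequency. The document does not specify the exact interpolation, so the sketch below is an assumption; the function names and the frequency tolerance are illustrative.

```c
#include <stddef.h>

/*
 * One axis of loop-coil channels: ch_magnitude[i] is the digitized reception
 * strength on channel i and ch_position[i] is that coil's location along the
 * axis. A weighted centroid interpolates the pen position between coils.
 */
static double estimate_pen_position(const double *ch_magnitude,
                                    const double *ch_position,
                                    size_t n_channels)
{
    double weight_sum = 0.0, weighted_pos = 0.0;

    for (size_t i = 0; i < n_channels; i++) {
        weight_sum   += ch_magnitude[i];
        weighted_pos += ch_magnitude[i] * ch_position[i];
    }
    return (weight_sum > 0.0) ? weighted_pos / weight_sum : 0.0;
}

/* Button state inferred from the resonance frequency of the received signal:
 * a first frequency means pressed, a second frequency means released.
 * Returns 1 = pressed, 0 = released, -1 = unknown. The tolerance is illustrative. */
static int button_state_from_frequency(double freq_hz,
                                       double pressed_freq_hz,
                                       double released_freq_hz)
{
    const double tol = 5e3;   /* 5 kHz tolerance, assumed for illustration */

    if (freq_hz > pressed_freq_hz - tol && freq_hz < pressed_freq_hz + tol)
        return 1;
    if (freq_hz > released_freq_hz - tol && freq_hz < released_freq_hz + tol)
        return 0;
    return -1;
}
```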
In addition, when the stylus pen201is inserted, charging capacity information may be transmitted to the electronic device101, and when the stylus pen201is removed, information about button press and sensor data may be transmitted to the electronic device101. In addition, the processor512may control transmission of a charging signal and a control signal to the sensing panel controller511, based on data received from the stylus pen201. The processor512may identify a gesture of the stylus pen201based on data received from the stylus pen201and perform an operation corresponding to the gesture. The processor512may indicate, to an application, a function mapped to the gesture. The Bluetooth controller513may transmit information to and receive information from the stylus pen201through the antenna514. The display assembly502may include a component for displaying a screen. The window501may be formed of a transparent material so that at least a portion of the display assembly502may be visually exposed. FIG.6Ais a flowchart illustrating operations of a stylus pen and an electronic device, when the stylus pen is inserted into the electronic device according to various embodiments. According to various embodiments, in operation601, the stylus pen201(e.g., the stylus pen201ofFIG.2) may be inserted into the garage of the electronic device101(e.g., the electronic device101ofFIG.1). For example, the user may insert the stylus pen201into the garage of the electronic device101, and the operation is marked with a dotted line because it is not an active operation of the stylus pen201. Regarding the embodiment ofFIG.6A, a case in which the stylus pen201without a communication connection to the electronic device101is inserted into the electronic device101is described. In the present disclosure, when the electronic device101or the stylus pen201performs a specific operation, this may imply that the processor120included in the electronic device101or the processor220included in the stylus pen201performs the specific operation. When the electronic device101or the stylus pen201performs a specific operation, this may imply that the processor120included in the electronic device101or the processor220included in the stylus pen201controls other hardware to perform the specific operation. Alternatively, when the electronic device101or the stylus pen201performs a specific operation, this may imply that an instruction stored in a memory, which causes the processor120included in the electronic device101or the processor220included in the stylus pen201to perform the specific operation, is executed or the instruction is stored. According to various embodiments, in operation603, the electronic device101may detect the insertion of the stylus pen201. For example, the electronic device101may detect the insertion of the stylus pen201based on a reception signal received from the stylus pen201in response to a transmission signal transmitted through the garage coil (e.g., the coils411and412). However, those skilled in the art will understand that the method of detecting the insertion is not limited. In operation605, the electronic device101may perform an initialization operation, for example, transmit a reset command to the stylus pen201. When the electronic device101identifies insertion of the stylus pen201having no connection established, in an idle state, in a stuck state, or having no connection history, the electronic device101may transmit the reset command.
According to various embodiments, in operation607, the stylus pen201may perform a reset operation. For example, the stylus pen201may release an existing BLE connection and initialize the BLE communication module. In operation609, the stylus pen201may perform an advertising operation. For example, the stylus pen201may broadcast an advertisement signal. In operation611, the electronic device101may identify the inserted stylus pen201. The electronic device101may identify the inserted stylus pen201based on the received advertisement signal. In operation613, the electronic device101may request a communication connection. For example, the electronic device101may transmit a connection request signal corresponding to the advertisement signal. The stylus pen201may establish a communication connection with the electronic device101in operation615. FIG.6Bis a flowchart illustrating a detailed operation, when a stylus pen is inserted into an electronic device according to various embodiments. In operation621, the stylus pen201may be inserted into the garage of the electronic device101. According to various embodiments, when identifying the insertion in operation623, the electronic device101may start charging in operation625. The electronic device101may transmit, for example, a signal of a pattern indicating the start of charging through the garage coils411and412, or transmit a communication signal indicating the start of charging to the stylus pen201through the communication module. The stylus pen201may identify information indicating the start of charging, and perform charging start chg_on in operation627. For example, the stylus pen201may control the charging switch436to connect the rectifier435to the battery437. The stylus pen201may detect garage-in in operation629. In operation631, the electronic device101may transmit a reset start command to the stylus pen201. The stylus pen201may be reset in operation633. For example, the stylus pen201may initialize the BLE module. According to various embodiments, in operation635, the stylus pen201may perform an advertising procedure. The electronic device101may start scanning for the stylus pen in operation637and continue scanning in operation639. For example, the electronic device101may perform scanning during a timeout period (e.g., 40 seconds). The electronic device101may start to search for the inserted stylus pen in operation641. Operations637,639, and641may be performed as one operation depending on implementation. In operation643, the electronic device101and the stylus pen201may perform a search procedure. For example, after transmitting a charging start signal, the electronic device101may identify whether the stylus pen201transmitting the advertisement signal exists. Without the charging start signal, the electronic device101may detect the advertisement signal transmitted from the stylus pen201. According to various embodiments, the stylus pen201may be configured to transmit an advertisement signal, when receiving a charging start signal. Accordingly, the electronic device101may identify the stylus pen201inserted into the electronic device101by identifying the advertisement signal received after transmitting the charging start signal. In operation645, the electronic device101may detect the inserted stylus pen201based on the above-described process. The electronic device101may transmit a connect request to the stylus pen201in operation647, and the stylus pen201may receive the connection request in operation649. 
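The host-side portion of the insertion sequence of FIG. 6B (apply the charging-start pattern, apply the reset pattern, scan for the pen's advertisement, then request a connection) can be summarized as a small state machine. The sketch below uses assumed state and event names; only the 40-second scan timeout comes from the example in the text.

```c
/* Illustrative host-side state machine for the insertion sequence of FIG. 6B. */
#define SCAN_TIMEOUT_S 40        /* example timeout from the text */

typedef enum {
    HOST_IDLE,               /* no pen inserted, or search abandoned                       */
    HOST_SCANNING,           /* charging-start and reset patterns applied, scanning for adv */
    HOST_CONNECT_REQUESTED,  /* advertisement seen, connection request transmitted          */
    HOST_CONNECTED           /* BLE connection established                                  */
} host_state_t;

typedef enum {
    EV_PEN_INSERTED,         /* garage coil (or other sensor) detects the pen          */
    EV_ADV_RECEIVED,         /* advertisement signal received from the pen             */
    EV_CONNECTED,            /* connection procedure completed                          */
    EV_SCAN_TIMEOUT          /* no advertisement within SCAN_TIMEOUT_S                  */
} host_event_t;

static host_state_t host_step(host_state_t s, host_event_t ev)
{
    switch (s) {
    case HOST_IDLE:
        /* On insertion: apply the charging-start pattern, apply the reset
         * pattern, then start scanning for the pen's advertisement. */
        if (ev == EV_PEN_INSERTED) return HOST_SCANNING;
        break;
    case HOST_SCANNING:
        if (ev == EV_ADV_RECEIVED) return HOST_CONNECT_REQUESTED;  /* send connect request */
        if (ev == EV_SCAN_TIMEOUT) return HOST_IDLE;
        break;
    case HOST_CONNECT_REQUESTED:
        if (ev == EV_CONNECTED)    return HOST_CONNECTED;
        break;
    case HOST_CONNECTED:
        break;
    }
    return s;
}
```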
In operation651, the electronic device101and the stylus pen201may be connected. In operation653, the stylus pen201may set a descriptor and transmit information about the descriptor to the electronic device101. The electronic device101may identify the descriptor. The descriptor may be, for example, a setting for an activated function (e.g., a button event, and device information including battery information), and the type thereof is not limited. In operation655, the stylus pen201may transmit information about the descriptor to the electronic device101. In various embodiments, the electronic device101may identify that there is no need to perform a reset/communication connection with the stylus pen201, and in this case, the reset start process of operation631to the connection process of operation651may be omitted. FIG.6Cis a flowchart illustrating operations of an electronic device and a stylus pen, when the stylus pen is inserted into the electronic device according to various embodiments. According to various embodiments, in operation661, the stylus pen201may be inserted into the garage of the electronic device101. For example, after the stylus pen201is initially inserted into the garage and then removed from the garage, the stylus pen201may be reinserted. In operation663, the electronic device101may detect the insertion of the stylus pen201. In operation665, the electronic device101may command the stylus pen201to activate charging. The electronic device101may command charging activation based on, for example, transmission of a signal having a pattern through the garage coils or transmission of a communication signal through the communication module. In operation667, the electronic device101may start the charge mode. In operation669, the stylus pen201may detect the insertion of the stylus pen201. The stylus pen201may identify whether the stylus pen201has been inserted based on information received from the electronic device101or the magnitude of a voltage applied to the resonant circuit (or the output terminal of the rectifier) of the stylus pen201. In operation671, the stylus pen201may deactivate a sensor. The stylus pen201may deactivate some sensors or may be configured to skip the sensor deactivation. In operation673, the electronic device101and the stylus pen201may perform a charging operation. In various embodiments, the charging operation673may be performed immediately after initiation of the charge mode in operation667, and the time of the charging operation is not limited. FIG.7is a flowchart illustrating operations of a stylus pen and an electronic device, when the stylus pen is removed from the electronic device according to various embodiments. According to various embodiments, in operation701, the stylus pen201(e.g., the stylus pen201ofFIG.2) may be removed from the garage of the electronic device101(e.g., the electronic device101ofFIG.1). For example, the user may take out the stylus pen201from the garage of the electronic device101. In operation703, the electronic device101may detect the removal of the stylus pen201. For example, the electronic device101may detect the removal of the stylus pen201based on no reception of a response signal to a detection signal from the garage coils411and412. However, the method of detecting the removal is not limited. The electronic device101may be configured to identify insertion/removal of the stylus pen201based on sensing data from a detection sensor such as a hall sensor. 
In operation705, the stylus pen201may detect the removal of the stylus pen201. For example, the stylus pen201may detect the removal of the stylus pen201based on no reception of a signal from the electronic device101according to the voltage VM of the output terminal of the rectifier431. However, the removal detection method is not limited. Upon detection of the removal, the stylus pen201may exchange parameters (e.g., a connection interval and/or a slave latency) with the electronic device101. According to various embodiments, the stylus pen201may activate the accelerometer based on the detection of the removal in operation707. The stylus pen201may sense acceleration information about the stylus pen201through the activated accelerometer in operation709. While not shown, the stylus pen201may transmit the sensed acceleration information to the electronic device101. In various embodiments, the electronic device101may perform an operation based on the received acceleration information. In various embodiments, the stylus pen201may be configured to activate the accelerometer and maintain the gyro sensor consuming relatively high power in an inactive state. According to various embodiments, the stylus pen201may identify an input of a button (e.g., the button337) in operation711. When identifying the button input, the stylus pen201may activate the gyro sensor in operation713. The stylus pen201may sense rotation information through the activated gyro sensor in operation715. In operation717, the stylus pen201may transmit information based on the sensing result. For example, the stylus pen201may transmit sensing information obtained through the accelerometer and/or the gyro sensor to the electronic device101. Alternatively, the stylus pen201may identify the coordinates (e.g., two-dimensional coordinates or three-dimensional coordinates) of the stylus pen201based on the sensing information obtained through the accelerometer and the gyro sensor, and transmit the identified coordinates to the electronic device101. Alternatively, the stylus pen201may identify displacement information about the coordinates (e.g., two-dimensional coordinates or three-dimensional coordinates) of the stylus pen201based on the sensing information obtained through the accelerometer and the gyro sensor, and transmit the identified displacement information to the electronic device101. The stylus pen201may estimate an initial orientation of the stylus pen201based on information measured by the accelerometer and use the estimated initial orientation to correct the position information. According to various embodiments, in operation719, the electronic device101may perform an operation based on the received information. When receiving the sensing information, the electronic device101may identify position information about the stylus pen201based on the sensing information, identify a gesture corresponding to the position information, and perform an operation corresponding to the gesture. When receiving the position information about the stylus pen201, the electronic device101may identify the gesture corresponding to the position information and perform the operation corresponding to the gesture. For example, the stylus pen201may transmit information to the electronic device101until the input of the pen button is released. The electronic device101may identify the gesture based on the identified position information about the stylus pen201until detecting the release of the button input. 
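The sensor policy just described (accelerometer enabled from the moment of removal, the higher-power gyro sensor enabled only while the button input is held) can be summarized as follows. The structure and function names are illustrative assumptions, not the pen's actual firmware interface.

```c
#include <stdbool.h>

/* Illustrative pen-side sensor policy. */
typedef struct {
    bool accel_on;
    bool gyro_on;
} pen_sensors_t;

static void on_removed_from_garage(pen_sensors_t *s)
{
    s->accel_on = true;    /* acceleration is sensed and reported after removal          */
    s->gyro_on  = false;   /* gyro stays off to save power until the button is pressed   */
}

static void on_button_changed(pen_sensors_t *s, bool pressed)
{
    /* Rotation is sensed, and sensing/coordinate data reported over BLE,
     * only while the button input is held. */
    s->gyro_on = pressed;
}
```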
When the release of the button input is detected, the stylus pen201may deactivate the gyro sensor again. In various embodiments, the stylus pen201may activate both the gyro sensor and the accelerometer from a time of detecting removal. In this case, the position information about the stylus pen201before the button input may be used to correct the direction of the gesture, and gesture recognition accuracy may be improved. For example, the electronic device101may identify the initial orientation information about the stylus pen201and recognize a gesture by using a displacement based on the initial orientation information. FIGS.8A and8Bare diagrams referred to for describing an interface between an electronic device and a stylus pen according to various embodiments. Referring toFIGS.8A and8B, the electronic device101(e.g., the electronic device101ofFIG.1) and the stylus pen201(e.g., the stylus pen201ofFIG.2) may interact with each other in three methods. According to various embodiments, when the stylus pen201is inserted into the electronic device101, the electronic device101may transmit a signal813through the garage coils (e.g., the coils411and412ofFIG.4). An induced electromotive force corresponding to the signal813may be generated in a coil (e.g., the coil421ofFIG.4) of the stylus pen201by induction of a magnetic field. The induced electromotive force may be rectified by the rectifier (e.g., the rectifier431ofFIG.4). A charging switch controller804(e.g., the charging switch controller432ofFIG.4) of the stylus pen201may analyze the waveform of the voltage VM at the output terminal of the rectifier. The waveform of the voltage VM may correspond to the signal813. The stylus pen201may perform an operation identified based on the result of the waveform analysis of the voltage VM. For example, the stylus pen201may perform start charging, initialization, garage-in identification, and charging discontinuation, and these operations are not limited. While not shown, the stylus pen201may apply a signal having a pattern (e.g., a signal of the sixth pattern in Table 2) to a resonant circuit802. The electronic device101may analyze an induced electromotive force signal by the signal having the pattern, and identify insertion of the stylus pen201based on the analysis result. According to various embodiments, when the stylus pen201is within a recognizable range of the electronic device101(e.g., a recognizable range of the digitizer), a digitizer801(e.g., the sensing panel503) of the electronic device101and the resonant circuit802of the stylus pen201may interact with each other. As illustrated inFIGS.8aand8b, the stylus pen201may resonate by a transmission signal811generated from at least one loop coil of the digitizer801, and a reception signal812may be generated by the resonance. The digitizer801may identify the position of the stylus pen201based on the magnitude of the induced electromotive force generated by the reception signal812in each of the plurality of loop coils. In addition, the digitizer801may identify whether a button (e.g., the button337) of the stylus pen201has been pressed based on the frequency of the induced electromotive force. Depending on implementation, it may be identified whether the button has been pressed, based on information included in a communication signal815. 
In various embodiments, when identifying a gesture based on information included in the communication signal815within the recognizable range, the electronic device101may perform an operation different from the operation performed when the gesture is identified based on the information included in the communication signal815outside the recognizable range. For example, when an upward swipe gesture is identified by the communication signal815from the stylus pen201within the recognizable range, the electronic device101may perform up-scrolling. When an upward swipe gesture is identified by the communication signal815from the stylus pen201outside the recognizable range, the electronic device101may perform an enlargement operation. However, this is an example, and the electronic device101according to various embodiments may be configured to ignore a gesture based on the communication signal815from the stylus pen201within the recognizable range. According to various embodiments, when the stylus pen201is outside the recognizable range of the electronic device101(e.g., the recognizable range of the digitizer), the communication module805(e.g., the communication module190) of the electronic device101and the communication module806(e.g., the BLE communication circuit and controller439) of the stylus pen201may transmit and receive communication signals814and815. The stylus pen201may transmit information about the position of the stylus pen201(e.g., the coordinates of the stylus pen201and/or a displacement of the stylus pen201within the coordinate system) identified based on a built-in sensor (e.g., the accelerometer, the gyro sensor, and the geomagnetic sensor) in the communication signal815to the electronic device101. The stylus pen201may transmit information indicating whether the button has been pressed in the communication signal815to the electronic device101. The stylus pen201may transmit state information in the communication signal815to the electronic device101. The electronic device101may transmit state information and/or control information in the communication signal814. The electronic device101may move a cursor, identify a gesture and perform a corresponding operation, and/or perform an operation corresponding to button pressing in an air mouse mode, based on the received information about the position of the stylus pen201and/or the information indicating whether the button has been pressed. FIG.9is a block diagram illustrating a charging switch controller according to various embodiments. The charging switch controller432(e.g., the charging switch controller432ofFIG.4) according to various embodiments may include at least one comparator901, at least one edge detector902, at least one pulse detector903, at least one oscillator904, at least one counter905, and/or at least one digital logic circuit906. According to various embodiments, the at least one comparator901may compare an input voltage with a reference voltage and output a high signal or a low signal based on the comparison result. The at least one comparator901may apply the reference voltage with hysteresis. For example, the reference voltage may be set to a high reference voltage (e.g., 3.5V) in the rising period of the rectifier output voltage VM and to a low reference voltage (e.g., 1.5V) in the falling period of the signal output from the resonant circuit.
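The hysteresis of the comparator901can be sketched as a two-threshold latch on the rectified voltage VM, using the 3.5 V / 1.5 V example values from the text; the code structure and names are illustrative.

```c
#include <stdbool.h>

/* Example reference voltages from the text. */
#define VM_HIGH_REF_V 3.5
#define VM_LOW_REF_V  1.5

typedef struct {
    bool out;   /* current comparator output */
} hyst_comparator_t;

/* Two-threshold (hysteresis) comparator on the rectified voltage VM. */
static bool hyst_compare(hyst_comparator_t *c, double vm)
{
    if (!c->out && vm > VM_HIGH_REF_V)
        c->out = true;     /* rising edge of the comparator output  */
    else if (c->out && vm <= VM_LOW_REF_V)
        c->out = false;    /* falling edge of the comparator output */
    return c->out;
}
```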
According to various embodiments, the at least one edge detector902may detect whether an edge is generated in an input signal and output an output signal, upon detection of an edge. The at least one pulse detector903may detect whether a pulse is generated in the input signal and output an output signal upon detection of a pulse. The at least one oscillator904may output, for example, a reference clock for determining time. The at least one counter905may count the number of input pulses and output a counting result. The at least one digital logic circuit906may identify information indicated by the pattern of a signal transmitted by the electronic device101based on the counting result obtained from the at least one counter905. The at least one digital logic circuit906may output a control signal corresponding to the identified information. For example, the at least one digital logic circuit906may transmit a charging start signal to the charging switch436. For example, the at least one digital logic circuit906may transmit a communication reset signal to the BLE communication circuit and controller439. For example, when insertion into the garage is completed, the at least one digital logic circuit906may transmit a signal indicating garage-in to the BLE communication circuit and controller439. FIG.10is a flowchart illustrating a method of operating an electronic device and a stylus pen according to various embodiments. In operation1001, the stylus pen201(e.g., the stylus pen201ofFIG.2) may be inserted into the garage of the electronic device101(e.g., the electronic device101ofFIG.1). For example, the user may insert the stylus pen201into the garage of the electronic device101, and the operation is marked with a dotted line based on the fact that the operation is not an active operation of the stylus pen201. According to various embodiments, in operation1003, the electronic device101may detect the insertion of the stylus pen201. The electronic device101may apply, for example, a detection signal to the garage coils (e.g., the coils411and412), and detect whether the stylus pen201has been inserted, based on whether a response signal is received from the stylus pen201. Alternatively, the electronic device101may detect whether the stylus pen201has been inserted based on a sensing result of a separate sensor (e.g., a hall sensor) for detecting the insertion of the stylus pen201. The method of detecting whether the stylus pen201is inserted by the electronic device101is not limited. In operation1005, the electronic device101may apply a signal having a pattern corresponding to the start of charging to the garage coils, based on the detection of the insertion of the stylus pen201. According to various embodiments, the stylus pen201may analyze a rectifier output voltage (e.g., the output voltage VM of the rectifier431ofFIG.4) in operation1007. In operation1009, the stylus pen201may identify the start of charging based on the voltage analysis result. The stylus pen201may control the charging switch (e.g., the charging switch436ofFIG.4) to be turned on based on the identification of the start of charging in operation1011. As the charging switch is controlled to the on state, a charging signal may be transmitted to the battery. In various embodiments, the stylus pen201may analyze voltages at various points (e.g., the output terminal of the coil421) other than the output terminal of the rectifier to identify information indicated by the electronic device101. 
In various embodiments, the stylus pen201may identify the information indicated by the electronic device101based on a current, power, or impedance in addition to a voltage. FIG.11illustrates waveforms referred to for describing a charging initiation process according to various embodiments. According to various embodiments, upon detection of insertion (garage-in) of the stylus pen201(e.g., the stylus pen201ofFIG.2), the electronic device101(e.g., the electronic device101ofFIG.1) may apply a signal indicating the start of charging to the garage coils (e.g., the coils411and412). When a signal is generated, an induced electromotive force Vpen1100may be generated in the coil421of the stylus pen201by electromagnetic induction. The induced electromotive force1100may have substantially the same waveform as the signal (or a waveform with an inverted phase). The induced electromotive force1100may include a first part1101of a square wave, a second part1102of an off period, and a third part1103of the square wave. The induced electromotive force1100may be rectified by the rectifier431, and a voltage VM1110of the output terminal of the rectifier431may be identified. The voltage VM1110of the output terminal may include a first part1111that is a high period, a second part1112that is a low period, and a third part1113that is a high period. The at least one comparator901may generate an output signal based on a voltage exceeding a high reference voltage (e.g., 3.5V) at time tr1, for example. The at least one comparator901may discontinue generating the output signal based on a voltage being less than or equal to a low reference voltage (e.g., 1.5V) at time tf1. The at least one comparator901may generate an output signal based on a voltage exceeding the high reference voltage (e.g., 3.5V) at time tr2, for example. The at least one edge detector902may detect edges at time tr1, time tf1, and time tr2. The at least one counter905may count the number of pulses output from the oscillator904. The counter905may, for example, count the number of pulses between edge detection time points, which may correspond to a time period between edge detection time points. The digital logic circuit906may identify whether the time between time tr1and time tf1exceeds T1and is less than T3as a first condition. The digital logic circuit906may identify whether the time between time tr2and time tf1is less than T2as a second condition. The digital logic circuit906may identify whether a high signal holding time after time tr2is equal to or greater than T4as a third condition after the first condition and the second condition are satisfied. For example, when it is identified that the first condition, the second condition, and the third condition are satisfied, the digital logic circuit906may transition a charging start signal chg_on1120from a low signal1121to a high signal1122. The charging switch436may be controlled to be turned on by the high signal1122. FIG.12is a flowchart illustrating a method of operating an electronic device and a stylus pen according to various embodiments. In operation1201, the stylus pen201(e.g., the stylus pen201ofFIG.2) may be inserted into the garage of the electronic device101(e.g., the electronic device101ofFIG.1). In operation1203, the electronic device101may detect the insertion of the stylus pen201. In operation1205, the electronic device101may apply a signal having a pattern corresponding to the start of charging to the garage coils, based on the detection of the insertion of the stylus pen201.
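The three timing conditions on the comparator edges (tr1, tf1, tr2) described for FIG.11 reduce to a few comparisons once the counter values are converted to times. In the sketch below, the bounds T1..T4 are left as parameters because the document does not give numeric values for them; the structure and names are illustrative.

```c
#include <stdbool.h>

/* Timing bounds for the charging-start pattern; values are not specified in the text. */
typedef struct {
    double t1, t2, t3, t4;
} chg_pattern_bounds_t;

/*
 * Charging-start pattern check corresponding to FIG. 11:
 *  1) T1 < (tf1 - tr1) < T3   -- length of the first high period
 *  2) (tr2 - tf1) < T2        -- short off period between the high periods
 *  3) high signal after tr2 held for at least T4
 * When all three hold, chg_on would transition from low to high.
 */
static bool charging_start_detected(double tr1, double tf1, double tr2,
                                    double high_hold_after_tr2,
                                    const chg_pattern_bounds_t *b)
{
    bool cond1 = (tf1 - tr1) > b->t1 && (tf1 - tr1) < b->t3;
    bool cond2 = (tr2 - tf1) < b->t2;
    bool cond3 = high_hold_after_tr2 >= b->t4;
    return cond1 && cond2 && cond3;
}
```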
In operation1207, the stylus pen201may control the charging switch to the on state based on a pattern analysis result and start charging. In operation1208, the electronic device101may determine whether the stylus pen201needs to be reset. For example, when an idle state, a communication stuck state, or the initial insertion of the stylus pen201is identified, the electronic device101may determine that the stylus pen201needs to be reset. When the reset is not requested (1208—No), the electronic device101may maintain charging, while monitoring periodically or aperiodically whether the reset is requested. When it is identified that reset is required (1208—Yes), the electronic device101may apply a signal having a pattern corresponding to reset to the garage coils in operation1209. According to various embodiments, the stylus pen201may analyze a rectifier output voltage (e.g., the output voltage VM of the rectifier431ofFIG.4) in operation1211. In operation1213, the stylus pen201may identify a charging start or reset instruction based on a voltage analysis result. The stylus pen201may control the charging switch (e.g., the charging switch436ofFIG.4) to the on state based on the identification of the start of charging, or initialize the BLE module based on the identification of the reset instruction, in operation1215. FIG.13illustrates waveforms referred to for describing a reset process according to various embodiments. According to various embodiments, when identifying that a reset is required, the electronic device101(e.g., the electronic device101ofFIG.1) may apply a signal indicating a reset instruction to the garage coils (e.g., the coils411and412). When the signal is generated, an induced electromotive force Vpen1310may be generated in the coil421of the stylus pen201(e.g., the stylus pen201ofFIG.2) by electromagnetic induction. The induced electromotive force1310may have a waveform substantially identical to that of the signal (or a waveform with an inverted phase). The induced electromotive force1310may include a first part1311, a third part1313, a fifth part1315, a seventh part1317, and a ninth part1319of a square wave, and a second part1312, a fourth part1314, a sixth part1316, and an eighth part1318of an off period. The induced electromotive force1310may be rectified by the rectifier431, and a voltage VM1320of the output terminal of the rectifier431may be identified. The voltage VM1320of the output terminal may include a first part1321, a third part1323, a fifth part1325, a seventh part1327, and a ninth part1329which are high, and a second part1322, a fourth part1324, a sixth part1326, and an eighth part1328which are low. The at least one comparator901may generate an output signal based on a voltage exceeding a high reference voltage (e.g., 3.5V) at time t1, time t2, time t3, and time t4. The at least one edge detector902may detect rising edges at time t1, time t2, time t3, and time t4. The at least one counter905may count the number of pulses output from the oscillator904. The counter905may, for example, count the number of pulses between edge detection time points, which may correspond to a time period between the rising edge detection time points. When a specified number of (e.g., 4) rising edges is detected, and the time from time t1to time t4over which the specified number of rising edges is detected is within a specified threshold period Trst, the digital logic circuit906may detect the reset instruction.
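The reset decision described for FIG.13 reduces to counting rising edges and checking that the specified number of them (four in the example) falls within the threshold period Trst. The sketch below is illustrative; the edge timestamps are assumed to be available from the edge detector and counter.

```c
#include <stdbool.h>
#include <stddef.h>

#define RESET_EDGE_COUNT 4   /* example number of rising edges from the text */

/*
 * Reset-pattern check corresponding to FIG. 13: returns true when the most
 * recent RESET_EDGE_COUNT rising edges all fall within the threshold period
 * Trst. edge_times[] holds rising-edge timestamps in ascending order.
 */
static bool reset_pattern_detected(const double *edge_times, size_t n_edges,
                                   double t_rst)
{
    if (n_edges < RESET_EDGE_COUNT)
        return false;

    double first = edge_times[n_edges - RESET_EDGE_COUNT];
    double last  = edge_times[n_edges - 1];
    return (last - first) <= t_rst;   /* e.g., t1..t4 within Trst */
}
```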
Upon detection of the reset instruction, the digital logic circuit906may output a reset signal1331to the BLE module, and the BLE module may perform a reset based on the reset signal. FIG.14Ais a flowchart illustrating a method of operating an electronic device according to various embodiments. In operation1401, the stylus pen201(e.g., the stylus pen201ofFIG.2) may be inserted into the garage of the electronic device101(e.g., the electronic device101ofFIG.1). In operation1403, the electronic device101may detect the insertion of the stylus pen201. In operation1405, the electronic device101may apply a signal having a pattern indicating garage-in to the garage coils, based on the detection of the insertion of the stylus pen201. In operation1407, the stylus pen201may analyze a rectifier output voltage. In operation1409, the stylus pen201may identify that the stylus pen201is located in the garage based on a voltage analysis result. When identifying garage-in, the stylus pen201may adjust a communication period with the electronic device101. For example, when identifying that the stylus pen201is located in the garage, the stylus pen201may set the communication period with the electronic device101to be relatively long, thereby reducing power consumption. When it is identified that the stylus pen201is located in the garage, the sensor and/or the microphone may be turned off or may be controlled to operate in an inactive mode. Alternatively, when the stylus pen201is an active pen, a pen tip transmitter may be deactivated. FIG.14Bis a flowchart illustrating a method of operating a stylus pen according to various embodiments. According to various embodiments, in operation1411, the stylus pen201may analyze a rectifier output voltage. In operation1413, the stylus pen201may identify whether a pattern indicating the start of charging or a pattern indicating garage-in has been detected, as a result of the output voltage analysis. For example, a condition for maintaining a pen garage-in signal may be that three or more pulses are maintained per second, and a condition for disabling the pen garage-in signal may be that pulses are detected less than three times per second, which will be described with reference toFIG.15. Upon detection of the pattern indicating the start of charging or the pattern indicating garage-in (1413—Yes), the stylus pen201may identify that the stylus pen201is located in the garage in operation1415. When the pattern indicating the start of charging or the pattern indicating garage-in is not detected (1413—No), the stylus pen201may identify that the stylus pen201is located outside the garage in operation1417. FIG.15is a diagram illustrating waveforms referred to for describing a process of indicating garage-in according to various embodiments. According to various embodiments, the electronic device101(e.g., the electronic device101ofFIG.1) may discontinue charging during application of a charging signal to the garage coils (e.g., the coils411and412). After the charging is stopped, the electronic device101may apply a signal identifying garage-in every specified period. When a signal is generated, an induced electromotive force Vpen1500may be generated in the coil421of the stylus pen201(e.g., the stylus pen201ofFIG.2) by electromagnetic induction.
The induced electromotive force Vpen1500may include a part1501corresponding to a charging signal, off periods1502,1504,1506,1508, and1510each corresponding to the specified period, and parts1503,1505,1507,1509, and1511corresponding to signals. The induced electromotive force1500may be rectified by the rectifier431, and a rectified voltage1520may be measured. The rectified voltage1520may include a part1521corresponding to the charging signal, off periods1522,1524,1526,1528, and1530, and spike periods1523,1525,1527,1529, and1531. The at least one comparator901may output a comparison result Vmp1550according to a result of comparison between the rectified voltage1520and a reference voltage, and the comparison result1550may include a part1551corresponding to the charging signal, off periods1552,1554,1556,1558, and1560, and pulses1553,1555,1557,1559, and1561. The reference voltage may be configured, for example, with hysteresis and may have a different magnitude from the reference voltage used in another signal identification process. The pulse detector903may detect the pulses1553,1555,1557,1559, and1561of the comparison result1550, and the counter905may count the number of pulses generated by the oscillator904during a pulse period. A pulse width may be, for example, tp. The digital logic circuit906may identify the time between occurrences of the pulses1553,1555,1557,1559, and1561by checking the counting result. For example, when the digital logic circuit906detects the pulses1553,1555,1557,1559, and1561more than a specified threshold number (e.g., three), and identifies that the specified threshold number of pulses have been detected within a threshold time (e.g., tp3), the digital logic circuit906may identify that the stylus pen201is located in the garage of the electronic device101. The digital logic circuit906may output a garage-in indication signal dck1570. The stylus pen201may identify whether the stylus pen201has been inserted/removed based on whether the garage-in indication signal dck has been detected. The waveform analysis methods ofFIGS.11,13, and15are merely exemplary and should not be construed as limiting. In addition, each pattern is not restrictively mapped to a specific command. For example, those skilled in the art will understand that the waveform analysis method ofFIG.11may also be used to instruct reset or indicate garage-in. FIG.16is a flowchart illustrating a method of operating a stylus pen according to various embodiments. The embodiment ofFIG.16will be described with reference toFIG.17.FIG.17illustrates waveforms when charging is terminated according to various embodiments. According to various embodiments, in operation1601, the stylus pen201(e.g., the stylus pen201ofFIG.2) may receive a charging signal to perform charging. The electronic device101(e.g., the electronic device101ofFIG.1) may apply the charging signal to the garage coils. An induced electromotive force1711corresponding to the charging signal may be generated at the output terminal of the resonant circuit of the stylus pen201. The induced electromotive force1711may be rectified by the rectifier431into a rectified voltage (e.g., the high signal period1721), and the battery437may be charged. In operation1603, the stylus pen201may detect a charging signal interruption. For example, when identifying that the stylus pen201has been fully charged, the electronic device101may discontinue providing the charging signal, and an off period1712may be identified at the resonant circuit output terminal.
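Returning to the garage-in indication of FIG.15: the pen keeps the dck signal asserted while enough of the short periodic pulses are observed within a window (three or more per second in the earlier example, or a threshold number within tp3). The sketch below is an illustrative check over recorded pulse timestamps; the names and the exact windowing are assumptions.

```c
#include <stdbool.h>
#include <stddef.h>

#define GARAGE_PULSE_THRESHOLD 3   /* example threshold number of pulses from the text */

/*
 * Garage-in indication corresponding to FIG. 15: asserted while at least
 * GARAGE_PULSE_THRESHOLD pulses have been detected within the window
 * (e.g., roughly one second, or the threshold time tp3).
 */
static bool garage_in_asserted(const double *pulse_times, size_t n_pulses,
                               double now, double window_s)
{
    size_t recent = 0;

    for (size_t i = 0; i < n_pulses; i++) {
        if (now - pulse_times[i] <= window_s)
            recent++;
    }
    return recent >= GARAGE_PULSE_THRESHOLD;   /* drives the dck signal */
}
```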
An off period1722in which the voltage of the output terminal of the rectifier431is also substantially 0V may be detected. The stylus pen201may identify that the voltage of the output terminal of the rectifier431is in the high signal period1721and then in the off period1722. When the duration of the off period1722exceeds a threshold duration (e.g., 200 ms), the stylus pen201may detect the charging signal interruption. In operation1605, the stylus pen201may turn off the charging switch. The stylus pen201may stop outputting the charging start signal chg_on1731that has been applied. The stylus pen201may maintain the output of a signal1740indicating garage-in. FIG.18is a flowchart illustrating operations of an electronic device and a stylus pen according to various embodiments. The embodiment ofFIG.18will be described with reference toFIG.19.FIG.19illustrates an exemplary scan signal according to various embodiments. According to various embodiments, the electronic device101(e.g., the electronic device101ofFIG.1) may apply a scan signal to the garage coils (e.g., the coils411and412) in operation1801. After a predetermined period has elapsed, the electronic device101may apply a scan signal in operation1803. For example, the electronic device101may apply scan signals1901,1903, and1905(e.g., detection signals) ofFIG.19to the garage coils. In operation1805, when the stylus pen201(e.g., the stylus pen201ofFIG.2) is placed in the garage, the stylus pen201may detect the applied scan signal in operation1807. In operation1809, the stylus pen201may generate a response signal. In operation1811, the electronic device101may identify insertion of the stylus pen201. After identifying the insertion of the stylus pen201, the electronic device101may apply a signal corresponding to one of the above-described charging start, reset instruction, and garage-in indication to the garage coils. In various embodiments, when the stylus pen201is an active pen, the stylus pen201may periodically transmit a signal, and upon detection of the transmitted signal, the electronic device101may identify the insertion of the stylus pen201. Alternatively, the electronic device101may identify whether the stylus pen201has been inserted based on an additional sensor (e.g., a hall sensor) for detecting the insertion, as described above. According to various embodiments, an electronic device (e.g., the electronic device101) may include a panel (e.g., the sensing panel503) configured to identify a position of a stylus pen (e.g., the stylus pen201), a communication module (e.g., the communication module190) configured to transmit and receive communication signals to and from the stylus pen (e.g., the stylus pen201), at least one garage coil (e.g., the coils411and412) disposed at a position corresponding to a position of a garage in which the stylus pen (e.g., the stylus pen201) is accommodatable, and at least one processor (e.g., the processor120).
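The charging-interruption detection of FIGS.16 and17 described above (an off period at the rectifier output lasting longer than the 200 ms example threshold while charging) can be sketched as a simple timeout on sampled VM values. The sampling approach, the "substantially 0 V" level, and the names below are illustrative assumptions.

```c
#include <stdbool.h>

#define OFF_PERIOD_THRESHOLD_MS 200   /* example threshold duration from the text */

typedef struct {
    bool   charging;        /* charging switch state (chg_on)   */
    double off_elapsed_ms;  /* how long VM has been near 0 V    */
} chg_monitor_t;

/* Called for each new rectified-voltage sample; dt_ms is the sample interval. */
static void chg_monitor_sample(chg_monitor_t *m, double vm, double dt_ms)
{
    const double off_level_v = 0.2;   /* "substantially 0 V"; value is illustrative */

    if (vm < off_level_v)
        m->off_elapsed_ms += dt_ms;
    else
        m->off_elapsed_ms = 0.0;

    if (m->charging && m->off_elapsed_ms > OFF_PERIOD_THRESHOLD_MS)
        m->charging = false;          /* charging signal interrupted: turn the charging switch off */
}
```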
The at least one processor (e.g., the processor120) may be configured to, based on the stylus pen (e.g., the stylus pen201) being identified as inserted into the garage, apply, based on a first communication method, a signal having a pattern for controlling the stylus pen (e.g., the stylus pen201) to the garage coil (e.g., the coils411and412), and based on the stylus pen (e.g., the stylus pen201) being identified as removed from the garage, control the communication module to transmit, based on a second communication method, a communication signal including information for controlling the stylus pen (e.g., the stylus pen201) to the stylus pen (e.g., the stylus pen201). According to various embodiments, the at least one processor (e.g., the processor120) may be configured to apply, to the garage coil (e.g., the coils411and412), a signal having a first pattern instructing reset of a communication module (e.g., the communication circuit290) of the stylus pen (e.g., the stylus pen201), based on identifying that reset of the stylus pen (e.g., the stylus pen201) is required. According to various embodiments, the at least one processor (e.g., the processor120) may be configured to apply, to the garage coil (e.g., the coils411and412), a signal having a second pattern instructing initiation of charging the stylus pen (e.g., the stylus pen201), based on the stylus pen (e.g., the stylus pen201) being identified as inserted into the garage. According to various embodiments, the at least one processor (e.g., the processor120) may be configured to apply, to the garage coil (e.g., the coils411and412), a charging signal for charging the stylus pen (e.g., the stylus pen201), after the signal having the second pattern is applied to the garage coil (e.g., the coils411and412). According to various embodiments, the at least one processor (e.g., the processor120) may be configured to discontinue applying the charging signal to the garage coil (e.g., the coils411and412), based on the stylus pen (e.g., the stylus pen201) being identified as fully charged. According to various embodiments, the at least one processor (e.g., the processor120) may be configured to control the communication module (e.g., the communication module190) to receive a communication signal including charging information about a battery of the stylus pen (e.g., the stylus pen201). According to various embodiments, the at least one processor may be configured to apply, to the garage coil (e.g., the coils411and412), a signal having a third pattern indicating that the stylus pen (e.g., the stylus pen201) is located in the garage, based on the stylus pen (e.g., the stylus pen201) being identified as inserted into the garage. According to various embodiments, the at least one processor (e.g., the processor120) may be configured to perform an operation based on the position of the stylus pen (e.g., the stylus pen201) identified by the panel (e.g., the sensing panel503), based on the stylus pen (e.g., the stylus pen201) being identified as removed from the garage and located within a recognizable range of the panel (e.g., the sensing panel503).
According to various embodiments, the at least one processor (e.g., the processor120) may be configured to control the communication module (e.g., the communication module190) to transmit, to the stylus pen (e.g., the stylus pen201), the communication signal including the information for controlling the stylus pen (e.g., the stylus pen201) or to receive another communication signal from the stylus pen (e.g., the stylus pen201), based on the stylus pen (e.g., the stylus pen201) being identified as removed from the garage and located outside a recognizable range of the panel (e.g., the sensing panel503). According to various embodiments, the at least one processor (e.g., the processor120) may be configured to periodically apply a scan signal to the garage coil (e.g., the coils411and412), and based on a response signal corresponding to the scan signal being detected, identify that the stylus pen (e.g., the stylus pen201) is inserted into the garage. According to various embodiments, a stylus pen (e.g., the stylus pen201) may include a resonant circuit (e.g., the resonant circuit287) including a coil and at least one capacitor, a communication module (e.g., the communication circuit290), and at least one control circuit (e.g., the processor220), and the at least one control circuit (e.g., the processor220) may be configured to perform a first operation corresponding to a result of an analysis of a signal having a pattern output through the resonant circuit, while the stylus pen (e.g., the stylus pen201) is located in a garage of an electronic device (e.g., the electronic device101), and perform a second operation corresponding to information included in a communication signal received through the communication module (e.g., the communication circuit290), while the stylus pen (e.g., the stylus pen201) is located outside the garage of the electronic device (e.g., the electronic device101). According to various embodiments, the stylus pen (e.g., the stylus pen201) may further include a battery (e.g., the battery289) and at least one rectifier (e.g., the rectifiers431and435) that rectifies power output from the resonant circuit and transmits the rectified power to at least a part of the battery (e.g., the battery289) or the at least one control circuit (e.g., the processor220). According to various embodiments, the at least one control circuit (e.g., the processor220) may be configured to perform the first operation corresponding to a result of an analysis of the waveform of a voltage of power output from the resonant circuit (e.g., the resonant circuit287). According to various embodiments, the stylus pen (e.g., the stylus pen201) may further include a switch that selectively connects between at least a part of the at least one rectifier and the battery, and the at least one control circuit (e.g., the processor220) may be configured to control the switch to connect the at least part of the at least one rectifier to the battery, based on the result of the analysis of the signal being identified as commanding initiation of charging the stylus pen (e.g., the stylus pen201). According to various embodiments, the at least one control circuit (e.g., the processor220) may be configured to control the switch not to connect the at least part of the at least one rectifier to the battery, based on the result of the analysis of the signal being identified as commanding termination of charging the stylus pen (e.g., the stylus pen201).
According to various embodiments, the at least one control circuit (e.g., the processor220) may be configured to transmit a signal instructing reset of the communication module (e.g., the communication module290) to the communication module (e.g., the communication module290), based on the result of the analysis of the signal being identified as commanding reset of the communication module (e.g., the communication module290). According to various embodiments, the at least one control circuit (e.g., the processor220) may be configured to identify that the stylus pen (e.g., the stylus pen201) is located in the garage of the electronic device (e.g., the electronic device101) based on the result of the analysis of the signal. According to various embodiments, the at least one control circuit (e.g., the processor220) may be configured to generate a response signal corresponding to a detection signal based on the electronic device (e.g., the electronic device101) being identified to correspond to the detection signal as a result of the analysis of the signal. According to various embodiments, a method of operating an electronic device (e.g., the electronic device101) including a panel (e.g., the sensing panel503) configured to identify a position of a stylus pen (e.g., the stylus pen201), a communication module (e.g., the communication module190) configured to transmit and receive communication signals to and from the stylus pen (e.g., the stylus pen201), and at least one garage coil (e.g., the coils411and412) disposed at a position corresponding to a position of a garage in which the stylus pen (e.g., the stylus pen201) is accommodatable may include, based on the stylus pen (e.g., the stylus pen201) being identified as inserted into the garage, applying, based on a first communication method, a signal having a pattern for controlling the stylus pen (e.g., the stylus pen201) to the garage coil (e.g., the coils411and412), and based on the stylus pen (e.g., the stylus pen201) being identified as removed from the garage, controlling the communication module (e.g., the communication module190) to transmit, based on a second communication method, a communication signal including information for controlling the stylus pen (e.g., the stylus pen201) to the stylus pen (e.g., the stylus pen201). According to various embodiments, the applying of a signal having a pattern for controlling the stylus pen (e.g., the stylus pen201) to the garage coil (e.g., the coils411and412) may include applying, to the garage coil (e.g., the coils411and412), a signal instructing initiation of charging the stylus pen (e.g., the stylus pen201), based on the stylus pen (e.g., the stylus pen201) being identified as inserted into the garage. The electronic device according to various embodiments may be one of various types of electronic devices. The electronic devices may include, for example, a computer device, a portable communication device (e.g., a smartphone), a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. According to an embodiment of the disclosure, the electronic devices are not limited to those described above. It should be appreciated that various embodiments of the present disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments and include various changes, equivalents, or replacements for a corresponding embodiment. 
With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. It is to be understood that a singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as “A or B”, “at least one of A and B”, “at least one of A or B,” “A, B, or C”, “at least one of A, B, and C,” and “at least one of A, B, or C” may include any one of, or all possible combinations of the items enumerated together in a corresponding one of the phrases. As used herein, such terms as “1st” and “2nd,” or “first” and “second” may be used to simply distinguish a corresponding component from another, and do not limit the components in other aspects (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term “operatively” or “communicatively”, as “coupled with” or “connected to” another element (e.g., a second element), it means that the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element. As used herein, the term “module” may include a unit implemented in hardware, software, or firmware, and may interchangeably be used with other terms, for example, logic, logic block, part, or circuitry. A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to an embodiment, the module may be implemented in a form of an application-specific integrated circuit (ASIC). Various embodiments as set forth herein may be implemented as software (e.g., the program) including one or more instructions that are stored in a storage medium (e.g., internal memory or external memory) that is readable by a machine (e.g., a master device or task performing device). For example, a processor of the machine (e.g., the master device or task performing device) may invoke at least one of the one or more instructions stored in the storage medium, and execute it, with or without using one or more other components under the control of the processor. This allows the machine to be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include a code generated by a compiler or a code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Here, the term ‘non-transitory’ simply means that the storage medium is a tangible device, and does not include a signal (e.g., an electromagnetic wave), but this term does not differentiate between where data is semi-permanently stored in the storage medium and where the data is temporarily stored in the storage medium. According to an embodiment, a method according to various embodiments of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., PlayStore™), or between two user devices (e.g., smart phones) directly. 
If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store, or a relay server. According to various embodiments, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities. According to various embodiments, one or more of the above-described components may be omitted, or one or more other components may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component. In such a case, according to various embodiments, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. According to various embodiments, operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added. | 107,875 |
11861082 | DETAILED DESCRIPTION OF THE INVENTION FIG.1is a schematic block diagram of an embodiment of a communication system10that includes a plurality of computing devices12-1through12-x, one or more servers22, one or more databases24, one or more networks26, a plurality of drive-sense circuits28, a plurality of sensors30, and a plurality of actuators32. Computing devices14include a touch screen16with sensors and drive-sense circuits and computing devices18include a touch & tactile screen20that includes sensors, actuators, and drive-sense circuits. A sensor30functions to convert a physical input into an electrical output and/or an optical output. The physical input of a sensor may be one of a variety of physical input conditions. For example, the physical condition includes one or more of, but is not limited to, acoustic waves (e.g., amplitude, phase, polarization, spectrum, and/or wave velocity); a biological and/or chemical condition (e.g., fluid concentration, level, composition, etc.); an electric condition (e.g., charge, voltage, current, conductivity, permittivity, electric field, which includes amplitude, phase, and/or polarization); a magnetic condition (e.g., flux, permeability, magnetic field, which includes amplitude, phase, and/or polarization); an optical condition (e.g., refractive index, reflectivity, absorption, etc.); a thermal condition (e.g., temperature, flux, specific heat, thermal conductivity, etc.); and a mechanical condition (e.g., position, velocity, acceleration, force, strain, stress, pressure, torque, etc.). For example, a piezoelectric sensor converts force or pressure into an electric signal. As another example, a microphone converts audible acoustic waves into electrical signals. There are a variety of types of sensors to sense the various types of physical conditions. Sensor types include, but are not limited to, capacitor sensors, inductive sensors, accelerometers, piezoelectric sensors, light sensors, magnetic field sensors, ultrasonic sensors, temperature sensors, infrared (IR) sensors, touch sensors, proximity sensors, pressure sensors, level sensors, smoke sensors, and gas sensors. In many ways, sensors function as the interface between the physical world and the digital world by converting real world conditions into digital signals that are then processed by computing devices for a vast number of applications including, but not limited to, medical applications, production automation applications, home environment control, public safety, and so on. The various types of sensors have a variety of sensor characteristics that are factors in providing power to the sensors, receiving signals from the sensors, and/or interpreting the signals from the sensors. The sensor characteristics include resistance, reactance, power requirements, sensitivity, range, stability, repeatability, linearity, error, response time, and/or frequency response. For example, the resistance, reactance, and/or power requirements are factors in determining drive circuit requirements. As another example, sensitivity, stability, and/or linearity are factors for interpreting the measure of the physical condition based on the received electrical and/or optical signal (e.g., measure of temperature, pressure, etc.). An actuator32converts an electrical input into a physical output. The physical output of an actuator may be one of a variety of physical output conditions. 
For example, the physical output condition includes one or more of, but is not limited to, acoustic waves (e.g., amplitude, phase, polarization, spectrum, and/or wave velocity); a magnetic condition (e.g., flux, permeability, magnetic field, which includes amplitude, phase, and/or polarization); a thermal condition (e.g., temperature, flux, specific heat, thermal conductivity, etc.); and a mechanical condition (e.g., position, velocity, acceleration, force, strain, stress, pressure, torque, etc.). As an example, a piezoelectric actuator converts voltage into force or pressure. As another example, a speaker converts electrical signals into audible acoustic waves. An actuator32may be one of a variety of actuators. For example, an actuator32is one of a comb drive, a digital micro-mirror device, an electric motor, an electroactive polymer, a hydraulic cylinder, a piezoelectric actuator, a pneumatic actuator, a screw jack, a servomechanism, a solenoid, a stepper motor, a shape-memory alloy, a thermal bimorph, and a hydraulic actuator. The various types of actuators have a variety of actuator characteristics that are factors in providing power to the actuator and sending signals to the actuators for desired performance. The actuator characteristics include resistance, reactance, power requirements, sensitivity, range, stability, repeatability, linearity, error, response time, and/or frequency response. For example, the resistance, reactance, and power requirements are factors in determining drive circuit requirements. As another example, sensitivity, stability, and/or linearity are factors for generating the signaling to send to the actuator to obtain the desired physical output condition. The computing devices12,14, and18may each be a portable computing device and/or a fixed computing device. A portable computing device may be a social networking device, a gaming device, a cell phone, a smart phone, a digital assistant, a digital music player, a digital video player, a laptop computer, a handheld computer, a tablet, a video game controller, and/or any other portable device that includes a computing core. A fixed computing device may be a computer (PC), a computer server, a cable set-top box, a satellite receiver, a television set, a printer, a fax machine, home entertainment equipment, a video game console, and/or any type of home or office computing equipment. The computing devices12,14, and18will be discussed in greater detail with reference to one or more ofFIGS.2-4. A server22is a special type of computing device that is optimized for processing large amounts of data requests in parallel. A server22includes similar components to that of the computing devices12,14, and/or18with more robust processing modules, more main memory, and/or more hard drive memory (e.g., solid state, hard drives, etc.). Further, a server22is typically accessed remotely; as such it does not generally include user input devices and/or user output devices. In addition, a server may be a standalone separate computing device and/or may be a cloud computing device. A database24is a special type of computing device that is optimized for large scale data storage and retrieval. A database24includes similar components to that of the computing devices12,14, and/or18with more hard drive memory (e.g., solid state, hard drives, etc.) and potentially with more processing modules and/or main memory. Further, a database24is typically accessed remotely; as such it does not generally include user input devices and/or user output devices. 
In addition, a database24may be a standalone separate computing device and/or may be a cloud computing device. The network26includes one or more local area networks (LANs) and/or one or more wide area networks (WANs), which may be a public network and/or a private network. A LAN may be a wireless-LAN (e.g., Wi-Fi access point, Bluetooth, ZigBee, etc.) and/or a wired network (e.g., Firewire, Ethernet, etc.). A WAN may be a wired and/or wireless WAN. For example, a LAN may be a personal home or business's wireless network and a WAN is the Internet, cellular telephone infrastructure, and/or satellite communication infrastructure. In an example of operation, computing device12-1communicates with a plurality of drive-sense circuits28, which, in turn, communicate with a plurality of sensors30. The sensors30and/or the drive-sense circuits28are within the computing device12-1and/or external to it. For example, the sensors30may be external to the computing device12-1and the drive-sense circuits are within the computing device12-1. As another example, both the sensors30and the drive-sense circuits28are external to the computing device12-1. When the drive-sense circuits28are external to the computing device, they are coupled to the computing device12-1via wired and/or wireless communication links as will be discussed in greater detail with reference to one or more ofFIGS.5A-5C. The computing device12-1communicates with the drive-sense circuits28to: (a) turn them on, (b) obtain data from the sensors (individually and/or collectively), (c) instruct the drive-sense circuit on how to communicate the sensed data to the computing device12-1, (d) provide signaling attributes (e.g., DC level, AC level, frequency, power level, regulated current signal, regulated voltage signal, regulation of an impedance, frequency patterns for various sensors, different frequencies for different sensing applications, etc.) to use with the sensors, and/or (e) provide other commands and/or instructions. As a specific example, the sensors30are distributed along a pipeline to measure flow rate and/or pressure within a section of the pipeline. The drive-sense circuits28have their own power source (e.g., battery, power supply, etc.) and are proximally located to their respective sensors30. At desired time intervals (milliseconds, seconds, minutes, hours, etc.), the drive-sense circuits28provide a regulated source signal or a power signal to the sensors30. An electrical characteristic of the sensor30affects the regulated source signal or power signal, which is reflective of the condition (e.g., the flow rate and/or the pressure) that the sensor is sensing. The drive-sense circuits28detect the effects on the regulated source signal or power signals as a result of the electrical characteristics of the sensors. The drive-sense circuits28then generate signals representative of change to the regulated source signal or power signal based on the detected effects on the power signals. The changes to the regulated source signals or power signals are representative of the conditions being sensed by the sensors30. The drive-sense circuits28provide the representative signals of the conditions to the computing device12-1. A representative signal may be an analog signal or a digital signal. In either case, the computing device12-1interprets the representative signals to determine the pressure and/or flow rate at each sensor location along the pipeline. 
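The pipeline example above can be restated as a brief sketch. The conversion factor and impedance values in the following Python fragment are placeholders chosen for illustration; only the overall flow (regulate, detect the effect, interpret a representative signal) reflects the description:

```python
# Hypothetical sketch of the pipeline-monitoring example: each drive-sense circuit
# powers its sensor, observes how the sensor's electrical characteristic changes the
# regulated signal, and reports a representative signal that the computing device
# interprets as a pressure reading. All numbers below are illustrative placeholders.

def drive_and_sense(sensor_impedance_ohms, regulated_voltage=1.5):
    # The effect on the regulated signal is reflected in the current drawn by the sensor.
    current_ma = regulated_voltage / sensor_impedance_ohms * 1000.0
    return {"voltage": regulated_voltage, "current_ma": current_ma}

def interpret_pressure(representative_signal, psi_per_ma=40.0):
    # The computing device maps the representative signal to a physical value (placeholder scale).
    return representative_signal["current_ma"] * psi_per_ma

sensors_along_pipeline = {"station_1": 1500.0, "station_2": 1200.0}  # ohms (illustrative)
for name, impedance in sensors_along_pipeline.items():
    rep = drive_and_sense(impedance)
    print(name, round(interpret_pressure(rep), 1), "PSI")
```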
The computing device may then provide this information to the server22, the database24, and/or to another computing device for storing and/or further processing. As another example of operation, computing device12-2is coupled to a drive-sense circuit28, which is, in turn, coupled to a sensor30. The sensor30and/or the drive-sense circuit28may be internal and/or external to the computing device12-2. In this example, the sensor is sensing a condition that is particular to the computing device12-2. For example, the sensor30may be a temperature sensor, an ambient light sensor, an ambient noise sensor, etc. As described above, when instructed by the computing device12-2(which may be a default setting for continuous sensing or at regular intervals), the drive-sense circuit28provides the regulated source signal or power signal to the sensor30and detects an effect on the regulated source signal or power signal based on an electrical characteristic of the sensor. The drive-sense circuit generates a representative signal of the effect and sends it to the computing device12-2. In another example of operation, computing device12-3is coupled to a plurality of drive-sense circuits28that are coupled to a plurality of sensors30and is coupled to a plurality of drive-sense circuits28that are coupled to a plurality of actuators32. The general functionality of the drive-sense circuits28coupled to the sensors30is in accordance with the above description. Since an actuator32is essentially an inverse of a sensor in that an actuator converts an electrical signal into a physical condition, while a sensor converts a physical condition into an electrical signal, the drive-sense circuits28can be used to power actuators32. Thus, in this example, the computing device12-3provides actuation signals to the drive-sense circuits28for the actuators32. The drive-sense circuits modulate the actuation signals onto power signals or regulated control signals, which are provided to the actuators32. The actuators32are powered from the power signals or regulated control signals and produce the desired physical condition from the modulated actuation signals. As another example of operation, computing device12-xis coupled to a drive-sense circuit28that is coupled to a sensor30and is coupled to a drive-sense circuit28that is coupled to an actuator32. In this example, the sensor30and the actuator32are for use by the computing device12-x. For example, the sensor30may be a piezoelectric microphone and the actuator32may be a piezoelectric speaker. FIG.2is a schematic block diagram of an embodiment of a computing device12(e.g., any one of12-1through12-x). The computing device12includes a core control module40, one or more processing modules42, one or more main memories44, cache memory46, a video graphics processing module48, a display50, an Input-Output (I/O) peripheral control module52, one or more input interface modules56, one or more output interface modules58, one or more network interface modules60, and one or more memory interface modules62. A processing module42is described in greater detail at the end of the detailed description of the invention section and, in an alternative embodiment, has a direct connection to the main memory44. In an alternate embodiment, the core control module40and the I/O and/or peripheral control module52are one module, such as a chipset, a quick path interconnect (QPI), and/or an ultra-path interconnect (UPI). 
Each of the main memories44includes one or more Random Access Memory (RAM) integrated circuits, or chips. For example, a main memory44includes four DDR4 (4th generation of double data rate) RAM chips, each running at a rate of 2,400 MHz. In general, the main memory44stores data and operational instructions most relevant for the processing module42. For example, the core control module40coordinates the transfer of data and/or operational instructions from the main memory44and the memory64-66. The data and/or operational instructions retrieved from the memory64-66are the data and/or operational instructions requested by the processing module or that will most likely be needed by the processing module. When the processing module is done with the data and/or operational instructions in main memory, the core control module40coordinates sending updated data to the memory64-66for storage. The memory64-66includes one or more hard drives, one or more solid state memory chips, and/or one or more other large capacity storage devices that, in comparison to cache memory and main memory devices, is/are relatively inexpensive with respect to cost per amount of data stored. The memory64-66is coupled to the core control module40via the I/O and/or peripheral control module52and via one or more memory interface modules62. In an embodiment, the I/O and/or peripheral control module52includes one or more Peripheral Component Interface (PCI) buses to which peripheral components connect to the core control module40. A memory interface module62includes a software driver and a hardware connector for coupling a memory device to the I/O and/or peripheral control module52. For example, a memory interface62is in accordance with a Serial Advanced Technology Attachment (SATA) port. The core control module40coordinates data communications between the processing module(s)42and the network(s)26via the I/O and/or peripheral control module52, the network interface module(s)60, and a network card68or70. A network card68or70includes a wireless communication unit or a wired communication unit. A wireless communication unit includes a wireless local area network (WLAN) communication device, a cellular communication device, a Bluetooth device, and/or a ZigBee communication device. A wired communication unit includes a Gigabit LAN connection, a Firewire connection, and/or a proprietary computer wired connection. A network interface module60includes a software driver and a hardware connector for coupling the network card to the I/O and/or peripheral control module52. For example, the network interface module60is in accordance with one or more versions of IEEE 802.11, cellular telephone protocols, 10/100/1000 Gigabit LAN protocols, etc. The core control module40coordinates data communications between the processing module(s)42and input device(s)72via the input interface module(s)56and the I/O and/or peripheral control module52. An input device72includes a keypad, a keyboard, control switches, a touchpad, a microphone, a camera, etc. An input interface module56includes a software driver and a hardware connector for coupling an input device to the I/O and/or peripheral control module52. In an embodiment, an input interface module56is in accordance with one or more Universal Serial Bus (USB) protocols. The core control module40coordinates data communications between the processing module(s)42and output device(s)74via the output interface module(s)58and the I/O and/or peripheral control module52. An output device74includes a speaker, etc. 
An output interface module58includes a software driver and a hardware connector for coupling an output device to the I/O and/or peripheral control module52. In an embodiment, an output interface module58is in accordance with one or more audio codec protocols. The processing module42communicates directly with a video graphics processing module48to display data on the display50. The display50includes an LED (light emitting diode) display, an LCD (liquid crystal display), and/or other type of display technology. The display has a resolution, an aspect ratio, and other features that affect the quality of the display. The video graphics processing module48receives data from the processing module42, processes the data to produce rendered data in accordance with the characteristics of the display, and provides the rendered data to the display50. FIG.2further illustrates sensors30and actuators32coupled to drive-sense circuits28, which are coupled to the input interface module56(e.g., USB port). Alternatively, one or more of the drive-sense circuits28is coupled to the computing device via a wireless network card (e.g., WLAN) or a wired network card (e.g., Gigabit LAN). While not shown, the computing device12further includes a BIOS (Basic Input Output System) memory coupled to the core control module40. FIG.3is a schematic block diagram of another embodiment of a computing device14that includes a core control module40, one or more processing modules42, one or more main memories44, cache memory46, a video graphics processing module48, a touch screen16, an Input-Output (I/O) peripheral control module52, one or more input interface modules56, one or more output interface modules58, one or more network interface modules60, and one or more memory interface modules62. The touch screen16includes a touch screen display80, a plurality of sensors30, a plurality of drive-sense circuits (DSC), and a touch screen processing module82. Computing device14operates similarly to computing device12ofFIG.2with the addition of a touch screen as an input device. The touch screen includes a plurality of sensors (e.g., electrodes, capacitor sensing cells, capacitor sensors, inductive sensor, etc.) to detect a proximal touch of the screen. For example, when one or more fingers touch the screen, the capacitance of sensors proximal to the touch(es) is affected (e.g., impedance changes). The drive-sense circuits (DSC) coupled to the affected sensors detect the change and provide a representation of the change to the touch screen processing module82, which may be a separate processing module or integrated into the processing module42. The touch screen processing module82processes the representative signals from the drive-sense circuits (DSC) to determine the location of the touch(es). This information is inputted to the processing module42for processing as an input. For example, a touch represents a selection of a button on the screen, a scroll function, a zoom in-out function, etc. FIG.4is a schematic block diagram of another embodiment of a computing device18that includes a core control module40, one or more processing modules42, one or more main memories44, cache memory46, a video graphics processing module48, a touch and tactile screen20, an Input-Output (I/O) peripheral control module52, one or more input interface modules56, one or more output interface modules58, one or more network interface modules60, and one or more memory interface modules62. 
The touch and tactile screen20includes a touch and tactile screen display90, a plurality of sensors30, a plurality of actuators32, a plurality of drive-sense circuits (DSC), a touch screen processing module82, and a tactile screen processing module92. Computing device18operates similarly to computing device14ofFIG.3with the addition of a tactile aspect to the screen20as an output device. The tactile portion of the screen20includes the plurality of actuators (e.g., piezoelectric transducers to create vibrations, solenoids to create movement, etc.) to provide a tactile feel to the screen20. To do so, the processing module creates tactile data, which is provided to the appropriate drive-sense circuits (DSC) via the tactile screen processing module92, which may be a stand-alone processing module or integrated into processing module42. The drive-sense circuits (DSC) convert the tactile data into drive-actuate signals and provide them to the appropriate actuators to create the desired tactile feel on the screen20. FIG.5Ais a schematic block diagram of a computing subsystem25that includes a sensed data processing module65, a plurality of communication modules61A-x, a plurality of processing modules42A-x, a plurality of drive sense circuits28, and a plurality of sensors1-x, which may be sensors30ofFIG.1. The sensed data processing module65is one or more processing modules within one or more servers22and/or one or more processing modules in one or more computing devices that are different than the computing devices in which processing modules42A-x reside. A drive-sense circuit28(or multiple drive-sense circuits), a processing module (e.g.,42A), and a communication module (e.g.,61A) are within a common computing device. Each grouping of a drive-sense circuit(s), processing module, and communication module is in a separate computing device. A communication module61A-x is constructed in accordance with one or more wired communication protocols and/or one or more wireless communication protocols that is/are in accordance with one or more of the Open System Interconnection (OSI) model, the Transmission Control Protocol/Internet Protocol (TCP/IP) model, and/or another communication protocol model. In an example of operation, a processing module (e.g.,42A) provides a control signal to its corresponding drive-sense circuit28. The processing module42A may generate the control signal, receive it from the sensed data processing module65, or receive an indication from the sensed data processing module65to generate the control signal. The control signal enables the drive-sense circuit28to provide a drive signal to its corresponding sensor. The control signal may further include a reference signal having one or more frequency components to facilitate creation of the drive signal and/or interpreting a sensed signal received from the sensor. Based on the control signal, the drive-sense circuit28provides the drive signal to its corresponding sensor (e.g., 1) on a drive & sense line. While receiving the drive signal (e.g., a power signal, a regulated source signal, etc.), the sensor senses a physical condition1-x(e.g., acoustic waves, a biological condition, a chemical condition, an electric condition, a magnetic condition, an optical condition, a thermal condition, and/or a mechanical condition). As a result of the physical condition, an electrical characteristic (e.g., impedance, voltage, current, capacitance, inductance, resistance, reactance, etc.) of the sensor changes, which affects the drive signal. 
Note that if the sensor is an optical sensor, it converts a sensed optical condition into an electrical characteristic. The drive-sense circuit28detects the effect on the drive signal via the drive & sense line and processes the effect to produce a signal representative of power change, which may be an analog or digital signal. The processing module42A receives the signal representative of power change, interprets it, and generates a value representing the sensed physical condition. For example, if the sensor is sensing pressure, the value representing the sensed physical condition is a measure of pressure (e.g., x PSI (pounds per square inch)). In accordance with a sensed data process function (e.g., algorithm, application, etc.), the sensed data processing module65gathers the values representing the sensed physical conditions from the processing modules. Since the sensors1-xmay be the same type of sensor (e.g., a pressure sensor), may each be different sensors, or a combination thereof, the sensed physical conditions may be the same, may each be different, or a combination thereof. The sensed data processing module65processes the gathered values to produce one or more desired results. For example, if the computing subsystem25is monitoring pressure along a pipeline, the processing of the gathered values indicates that the pressures are all within normal limits or that one or more of the sensed pressures is not within normal limits. As another example, if the computing subsystem25is used in a manufacturing facility, the sensors are sensing a variety of physical conditions, such as acoustic waves (e.g., for sound proofing, sound generation, ultrasound monitoring, etc.), a biological condition (e.g., a bacterial contamination, etc.), a chemical condition (e.g., composition, gas concentration, etc.), an electric condition (e.g., current levels, voltage levels, electro-magnetic interference, etc.), a magnetic condition (e.g., induced current, magnetic field strength, magnetic field orientation, etc.), an optical condition (e.g., ambient light, infrared, etc.), a thermal condition (e.g., temperature, etc.), and/or a mechanical condition (e.g., physical position, force, pressure, acceleration, etc.). The computing subsystem25may further include one or more actuators in place of one or more of the sensors and/or in addition to the sensors. When the computing subsystem25includes an actuator, the corresponding processing module provides an actuation control signal to the corresponding drive-sense circuit28. The actuation control signal enables the drive-sense circuit28to provide a drive signal to the actuator via a drive & actuate line (e.g., similar to the drive & sense line, but for the actuator). The drive signal includes one or more frequency components and/or amplitude components to facilitate a desired actuation of the actuator. In addition, the computing subsystem25may include an actuator and sensor working in concert. For example, the sensor is sensing the physical condition of the actuator. In this example, a drive-sense circuit provides a drive signal to the actuator and another drive-sense circuit provides the same drive signal, or a scaled version of it, to the sensor. This allows the sensor to provide near immediate and continuous sensing of the actuator's physical condition. This further allows for the sensor to operate at a first frequency and the actuator to operate at a second frequency. 
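The FIG.5A flow just described, in which each processing module converts a representative signal into a value and the sensed data processing module65gathers and evaluates the values, might be sketched as follows; the scale factor and limits are illustrative assumptions, not values from the disclosure:

```python
# Hypothetical sketch of the FIG. 5A flow: each processing module turns its drive-sense
# circuit's representative signal into a value, and the sensed data processing module
# gathers the values and checks them against limits. Scale factors and limits are
# illustrative placeholders only.

def to_physical_value(representative_power_change, scale=100.0, offset=0.0):
    # e.g., convert a normalized power-change reading into a pressure value (PSI)
    return offset + scale * representative_power_change

def gather_and_check(readings, low=20.0, high=80.0):
    values = {sensor: to_physical_value(r) for sensor, r in readings.items()}
    out_of_range = {s: v for s, v in values.items() if not (low <= v <= high)}
    return values, out_of_range

values, alarms = gather_and_check({"sensor_1": 0.45, "sensor_2": 0.91})
print(values)  # all sensed values
print(alarms)  # sensors not within normal limits
```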
In an embodiment, the computing subsystem is a stand-alone system for a wide variety of applications (e.g., manufacturing, pipelines, testing, monitoring, security, etc.). In another embodiment, the computing subsystem25is one subsystem of a plurality of subsystems forming a larger system. For example, different subsystems are employed based on geographic location. As a specific example, the computing subsystem25is deployed in one section of a factory and another computing subsystem is deployed in another part of the factory. As another example, different subsystems are employed based on the function of the subsystems. As a specific example, one subsystem monitors a city's traffic light operation and another subsystem monitors the city's sewage treatment plants. Regardless of the use and/or deployment of the computing system, the physical conditions it is sensing, and/or the physical conditions it is actuating, each sensor and each actuator (if included) is driven and sensed by a single line as opposed to separate drive and sense lines. This provides many advantages including, but not limited to, lower power requirements, better ability to drive high impedance sensors, lower line-to-line interference, and/or concurrent sensing functions. FIG.5Bis a schematic block diagram of another embodiment of a computing subsystem25that includes a sensed data processing module65, a communication module61, a plurality of processing modules42A-x, a plurality of drive sense circuits28, and a plurality of sensors1-x, which may be sensors30ofFIG.1. The sensed data processing module65is one or more processing modules within one or more servers22and/or one or more processing modules in one or more computing devices that are different than the computing device or devices in which the processing modules42A-x reside. In an embodiment, the drive-sense circuits28, the processing modules, and the communication module are within a common computing device. For example, the computing device includes a central processing unit that includes a plurality of processing modules. The functionality and operation of the sensed data processing module65, the communication module61, the processing modules42A-x, the drive sense circuits28, and the sensors1-xare as discussed with reference toFIG.5A. FIG.5Cis a schematic block diagram of another embodiment of a computing subsystem25that includes a sensed data processing module65, a communication module61, a processing module42, a plurality of drive sense circuits28, and a plurality of sensors1-x, which may be sensors30ofFIG.1. The sensed data processing module65is one or more processing modules within one or more servers22and/or one or more processing modules in one or more computing devices that are different than the computing device in which the processing module42resides. In an embodiment, the drive-sense circuits28, the processing module, and the communication module are within a common computing device. The functionality and operation of the sensed data processing module65, the communication module61, the processing module42, the drive sense circuits28, and the sensors1-xare as discussed with reference toFIG.5A. FIG.5Dis a schematic block diagram of another embodiment of a computing subsystem25that includes a processing module42, a reference signal circuit100, a plurality of drive sense circuits28, and a plurality of sensors30. The processing module42includes a drive-sense processing block104, a drive-sense control block102, and a reference control block106. 
Each block102-106of the processing module42may be implemented via separate modules of the processing module, may be a combination of software and hardware within the processing module, and/or may be field programmable modules within the processing module42. In an example of operation, the drive-sense control block102generates one or more control signals to activate one or more of the drive-sense circuits28. For example, the drive-sense control block102generates a control signal that enables one or more of the drive-sense circuits28for a given period of time (e.g., 1 second, 1 minute, etc.). As another example, the drive-sense control block102generates control signals to sequentially enable the drive-sense circuits28. As yet another example, the drive-sense control block102generates a series of control signals to periodically enable the drive-sense circuits28(e.g., enabled once every second, every minute, every hour, etc.). Continuing with the example of operation, the reference control block106generates a reference control signal that it provides to the reference signal circuit100. The reference signal circuit100generates, in accordance with the control signal, one or more reference signals for the drive-sense circuits28. For example, the control signal is an enable signal; in response, the reference signal circuit100generates a pre-programmed reference signal that it provides to the drive-sense circuits28. In another example, the reference signal circuit100generates a unique reference signal for each of the drive-sense circuits28. In yet another example, the reference signal circuit100generates a first unique reference signal for each of the drive-sense circuits28in a first group and generates a second unique reference signal for each of the drive-sense circuits28in a second group. The reference signal circuit100may be implemented in a variety of ways. For example, the reference signal circuit100includes a DC (direct current) voltage generator, an AC voltage generator, and a voltage combining circuit. The DC voltage generator generates a DC voltage at a first level and the AC voltage generator generates an AC voltage at a second level, which is less than or equal to the first level. The voltage combining circuit combines the DC and AC voltages to produce the reference signal. As examples, the reference signal circuit100generates a reference signal similar to the signals shown inFIG.7, which will be subsequently discussed. As another example, the reference signal circuit100includes a DC current generator, an AC current generator, and a current combining circuit. The DC current generator generates a DC current at a first current level and the AC current generator generates an AC current at a second current level, which is less than or equal to the first current level. The current combining circuit combines the DC and AC currents to produce the reference signal. Returning to the example of operation, the reference signal circuit100provides the reference signal, or signals, to the drive-sense circuits28. When a drive-sense circuit28is enabled via a control signal from the drive sense control block102, it provides a drive signal to its corresponding sensor30. As a result of a physical condition, an electrical characteristic of the sensor is changed, which affects the drive signal. Based on the detected effect on the drive signal and the reference signal, the drive-sense circuit28generates a signal representative of the effect on the drive signal. 
The drive-sense circuit provides the signal representative of the effect on the drive signal to the drive-sense processing block104. The drive-sense processing block104processes the representative signal to produce a sensed value97of the physical condition (e.g., a digital value that represents a specific temperature, a specific pressure level, etc.). The processing module42provides the sensed value97to another application running on the computing device, to another computing device, and/or to a server22. FIG.5Eis a schematic block diagram of another embodiment of a computing subsystem25that includes a processing module42, a plurality of drive sense circuits28, and a plurality of sensors30. This embodiment is similar to the embodiment ofFIG.5Dwith the functionality of the drive-sense processing block104, a drive-sense control block102, and a reference control block106shown in greater detail. For instance, the drive-sense control block102includes individual enable/disable blocks102-1through102-y. An enable/disable block functions to enable or disable a corresponding drive-sense circuit in a manner as discussed above with reference toFIG.5D. The drive-sense processing block104includes variance determining modules104-1athroughyand variance interpreting modules104-2athroughy. For example, variance determining module104-1areceives, from the corresponding drive-sense circuit28, a signal representative of a physical condition sensed by a sensor. The variance determining module104-1afunctions to determine a difference from the signal representing the sensed physical condition with a signal representing a known, or reference, physical condition. The variance interpreting module104-2ainterprets the difference to determine a specific value for the sensed physical condition. As a specific example, the variance determining module104-1areceives a digital signal of 1001 0110 (150 in decimal) that is representative of a sensed physical condition (e.g., temperature) sensed by a sensor from the corresponding drive-sense circuit28. With 8 bits, there are 2^8 (256) possible signals representing the sensed physical condition. Assume that the unit for temperature is Celsius and a digital value of 0100 0000 (64 in decimal) represents the known value for 25 degrees Celsius. The variance determining module104-1adetermines the difference between the digital signal representing the sensed value (e.g., 1001 0110, 150 in decimal) and the known signal value (e.g., 0100 0000, 64 in decimal), which is 0101 0110 (86 in decimal). The variance interpreting module104-2athen determines the sensed value based on the difference and the known value. In this example, the sensed value equals 25+86*(100/256)=25+33.6=58.6 degrees Celsius. FIG.6is a schematic block diagram of a drive-sense circuit28-acoupled to a sensor30. The drive-sense circuit28-aincludes a power source circuit110and a power signal change detection circuit112. The sensor30includes one or more transducers that have varying electrical characteristics (e.g., capacitance, inductance, impedance, current, voltage, etc.) based on varying physical conditions114(e.g., pressure, temperature, biological, chemical, etc.), or vice versa (e.g., an actuator). The power source circuit110is operably coupled to the sensor30and, when enabled (e.g., from a control signal from the processing module42, power is applied, a switch is closed, a reference signal is received, etc.) provides a power signal116to the sensor30. 
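Before continuing with FIG.6, the 8-bit variance example of FIG.5E above can be verified with a few lines of arithmetic; the snippet below only restates the numbers already given (a reading of 150, a reference of 64 representing 25 degrees Celsius, and the stated 100-degree span over 256 codes):

```python
# Numeric check of the variance example above (FIG. 5E): an 8-bit reading of 150 is
# compared against a known reference code of 64 that represents 25 degrees Celsius,
# and the difference is scaled by the assumed 100-degree span over 256 codes.

sensed_code = 0b10010110     # 150 in decimal, from the drive-sense circuit
reference_code = 0b01000000  # 64 in decimal, known to represent 25 degrees Celsius

difference = sensed_code - reference_code     # 86 (binary 0101 0110)
sensed_value = 25 + difference * (100 / 256)  # 25 + 33.6 = 58.6 degrees Celsius
print(difference, round(sensed_value, 1))
```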
The power source circuit110may be a voltage supply circuit (e.g., a battery, a linear regulator, an unregulated DC-to-DC converter, etc.) to produce a voltage-based power signal, a current supply circuit (e.g., a current source circuit, a current mirror circuit, etc.) to produce a current-based power signal, or a circuit that provides a desired power level to the sensor and substantially matches impedance of the sensor. The power source circuit110generates the power signal116to include a DC (direct current) component and/or an oscillating component. When receiving the power signal116and when exposed to a condition114, an electrical characteristic of the sensor affects118the power signal. When the power signal change detection circuit112is enabled, it detects the effect118on the power signal as a result of the electrical characteristic of the sensor. For example, the power signal is a 1.5 volt signal and, under a first condition, the sensor draws 1 milliamp of current, which corresponds to an impedance of 1.5 K Ohms. Under a second condition, the power signal remains at 1.5 volts and the current increases to 1.5 milliamps. As such, from condition1to condition2, the impedance of the sensor changed from 1.5 K Ohms to 1 K Ohms. The power signal change detection circuit112determines this change and generates a representative signal120of the change to the power signal. As another example, the power signal is a 1.5 volt signal and, under a first condition, the sensor draws 1 milliamp of current, which corresponds to an impedance of 1.5 K Ohms. Under a second condition, the power signal drops to 1.3 volts and the current increases to 1.3 milliamps. As such, from condition1to condition2, the impedance of the sensor changed from 1.5 K Ohms to 1 K Ohms. The power signal change detection circuit112determines this change and generates a representative signal120of the change to the power signal. The power signal116includes a DC component122and/or an oscillating component124as shown inFIG.7. The oscillating component124includes a sinusoidal signal, a square wave signal, a triangular wave signal, a multiple level signal (e.g., has varying magnitude over time with respect to the DC component), and/or a polygonal signal (e.g., has a symmetrical or asymmetrical polygonal shape with respect to the DC component). Note that the power signal is shown without effect from the sensor as the result of a condition or changing condition. In an embodiment, the power source circuit110varies the frequency of the oscillating component124of the power signal116so that it can be tuned to the impedance of the sensor and/or to be off-set in frequency from other power signals in a system. For example, a capacitance sensor's impedance decreases with frequency. As such, if the frequency of the oscillating component is too high with respect to the capacitance, the capacitor looks like a short and variances in capacitances will be missed. Similarly, if the frequency of the oscillating component is too low with respect to the capacitance, the capacitor looks like an open and variances in capacitances will be missed. In an embodiment, the power source circuit110varies the magnitude of the DC component122and/or the oscillating component124to improve resolution of sensing and/or to adjust power consumption of sensing. In addition, the power source circuit110generates the power signal116such that the magnitude of the oscillating component124is less than the magnitude of the DC component122. 
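The power-signal description above, a DC component plus a smaller oscillating component whose frequency is tuned to the sensor, together with the voltage/current examples, can be made concrete with a short sketch; the amplitude and frequency values below are placeholders, while the impedance arithmetic simply restates the examples in the text:

```python
import math

# Illustrative sketch of a power signal with a DC component and a smaller oscillating
# component (magnitudes and frequency are placeholder values, not from the disclosure).
def power_signal(t, dc=1.5, ac_amplitude=0.1, freq_hz=100e3):
    assert ac_amplitude < dc  # oscillating component kept smaller than the DC component
    return dc + ac_amplitude * math.sin(2 * math.pi * freq_hz * t)

# Impedance inferred from the voltage/current examples given in the text.
def impedance_ohms(voltage_v, current_ma):
    return voltage_v / (current_ma / 1000.0)

print(impedance_ohms(1.5, 1.0))  # condition 1: 1500 ohms (1.5 K Ohms)
print(impedance_ohms(1.5, 1.5))  # condition 2: 1000 ohms (1 K Ohms)
print(round(power_signal(0.0), 3))
```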
FIG.6Ais a schematic block diagram of a drive-sense circuit28-a1coupled to a sensor30. The drive-sense circuit28-a1includes a signal source circuit111, a signal change detection circuit113, and a power source115. The power source115(e.g., a battery, a power supply, a current source, etc.) generates a voltage and/or current that is combined with a signal117, which is produced by the signal source circuit111. The combined signal is supplied to the sensor30. The signal source circuit111may be a voltage supply circuit (e.g., a battery, a linear regulator, an unregulated DC-to-DC converter, etc.) to produce a voltage-based signal117, a current supply circuit (e.g., a current source circuit, a current mirror circuit, etc.) to produce a current-based signal117, or a circuit that provides a desired power level to the sensor and substantially matches impedance of the sensor. The signal source circuit111generates the signal117to include a DC (direct current) component and/or an oscillating component. When receiving the combined signal (e.g., signal117and power from the power source) and when exposed to a condition114, an electrical characteristic of the sensor affects119the signal. When the signal change detection circuit113is enabled, it detects the effect119on the signal as a result of the electrical characteristic of the sensor. FIG.8is an example of a sensor graph that plots an electrical characteristic versus a condition. The sensor has a substantially linear region in which an incremental change in a condition produces a corresponding incremental change in the electrical characteristic. The graph shows two types of electrical characteristics: one that increases as the condition increases and the other that decreases as the condition increases. As an example of the first type, the impedance of a temperature sensor increases as the temperature increases. As an example of the second type, a capacitance touch sensor decreases in capacitance as a touch is sensed. FIG.9is a schematic block diagram of another example of a power signal graph in which the electrical characteristic or change in electrical characteristic of the sensor is affecting the power signal. In this example, the effect of the electrical characteristic or change in electrical characteristic of the sensor reduced the DC component but had little to no effect on the oscillating component. For example, the electrical characteristic is resistance. In this example, the resistance or change in resistance of the sensor decreased the power signal, inferring an increase in resistance for a relatively constant current. FIG.10is a schematic block diagram of another example of a power signal graph in which the electrical characteristic or change in electrical characteristic of the sensor is affecting the power signal. In this example, the effect of the electrical characteristic or change in electrical characteristic of the sensor reduced magnitude of the oscillating component but had little to no effect on the DC component. For example, the electrical characteristic is impedance of a capacitor and/or an inductor. In this example, the impedance or change in impedance of the sensor decreased the magnitude of the oscillating signal component, inferring an increase in impedance for a relatively constant current. FIG.11is a schematic block diagram of another example of a power signal graph in which the electrical characteristic or change in electrical characteristic of the sensor is affecting the power signal. 
In this example, the effect of the electrical characteristic or change in electrical characteristic of the sensor shifted frequency of the oscillating component but had little to no effect on the DC component. For example, the electrical characteristic is reactance of a capacitor and/or an inductor. In this example, the reactance or change in reactance of the sensor shifted frequency of the oscillating signal component, inferring an increase in reactance (e.g., sensor is functioning as an integrator or phase shift circuit). FIG.11Ais a schematic block diagram of another example of a power signal graph in which the electrical characteristic or change in electrical characteristic of the sensor is affecting the power signal. In this example, the effect of the electrical characteristic or change in electrical characteristic of the sensor changed the frequency of the oscillating component but had little to no effect on the DC component. For example, the sensor includes two transducers that oscillate at different frequencies. The first transducer receives the power signal at a frequency of f1and converts it into a first physical condition. The second transducer is stimulated by the first physical condition to create an electrical signal at a different frequency f2. In this example, the first and second transducers of the sensor change the frequency of the oscillating signal component, which allows for more granular sensing and/or a broader range of sensing. FIG.12is a schematic block diagram of an embodiment of a power signal change detection circuit112receiving the affected power signal118and the power signal116as generated to produce, therefrom, the signal representative120of the power signal change. The effect118on the power signal is the result of an electrical characteristic and/or change in the electrical characteristic of a sensor; a few examples of the effects are shown inFIGS.8-11A. In an embodiment, the power signal change detection circuit112detects a change in the DC component122and/or the oscillating component124of the power signal116. The power signal change detection circuit112then generates the signal representative120of the change to the power signal based on the change to the power signal. For example, the change to the power signal results from the impedance of the sensor and/or a change in impedance of the sensor. The representative signal120is reflective of the change in the power signal and/or in the change in the sensor's impedance. In an embodiment, the power signal change detection circuit112is operable to detect a change to the oscillating component at a frequency, which may be a phase shift, frequency change, and/or change in magnitude of the oscillating component. The power signal change detection circuit112is also operable to generate the signal representative of the change to the power signal based on the change to the oscillating component at the frequency. The power signal change detection circuit112is further operable to provide feedback to the power source circuit110regarding the oscillating component. The feedback allows the power source circuit110to regulate the oscillating component at the desired frequency, phase, and/or magnitude. FIG.13is a schematic block diagram of another embodiment of a drive-sense circuit28-b, which includes a change detection circuit150, a regulation circuit152, and a power source circuit154. 
FIG.13is a schematic block diagram of another embodiment of a drive-sense circuit28-b, which includes a change detection circuit150, a regulation circuit152, and a power source circuit154. The drive-sense circuit28-bis coupled to the sensor30, which includes a transducer that has varying electrical characteristics (e.g., capacitance, inductance, impedance, current, voltage, etc.) based on varying physical conditions114(e.g., pressure, temperature, biological, chemical, etc.). The power source circuit154is operably coupled to the sensor30and, when enabled (e.g., from a control signal from the processing module42, power is applied, a switch is closed, a reference signal is received, etc.), provides a power signal158to the sensor30. The power source circuit154may be a voltage supply circuit (e.g., a battery, a linear regulator, an unregulated DC-to-DC converter, etc.) to produce a voltage-based power signal or a current supply circuit (e.g., a current source circuit, a current mirror circuit, etc.) to produce a current-based power signal. The power source circuit154generates the power signal158to include a DC (direct current) component and an oscillating component. When receiving the power signal158and when exposed to a condition114, an electrical characteristic of the sensor affects160the power signal. When the change detection circuit150is enabled, it detects the effect160on the power signal as a result of the electrical characteristic of the sensor30. The change detection circuit150is further operable to generate a signal120that is representative of the change to the power signal based on the detected effect on the power signal. The regulation circuit152, when it is enabled, generates a regulation signal156to regulate the DC component to a desired DC level and/or regulate the oscillating component to a desired oscillating level (e.g., magnitude, phase, and/or frequency) based on the signal120that is representative of the change to the power signal. The power source circuit154utilizes the regulation signal156to keep the power signal158at a desired setting regardless of the electrical characteristic of the sensor. In this manner, the amount of regulation is indicative of the effect the electrical characteristic had on the power signal. In an example, the power source circuit154is a DC-DC converter operable to provide a regulated power signal having DC and AC components. The change detection circuit150is a comparator and the regulation circuit152is a pulse width modulator to produce the regulation signal156. The comparator compares the power signal158, which is affected by the sensor, with a reference signal that includes DC and AC components. When the electrical characteristic is at a first level (e.g., a first impedance), the power signal is regulated to provide a voltage and current such that the power signal substantially resembles the reference signal. When the electrical characteristic changes to a second level (e.g., a second impedance), the change detection circuit150detects a change in the DC and/or AC component of the power signal158and generates the representative signal120, which indicates the changes. The regulation circuit152detects the change in the representative signal120and creates the regulation signal to substantially remove the effect on the power signal. The regulation of the power signal158may be done by regulating the magnitude of the DC and/or AC components, by adjusting the frequency of the AC component, and/or by adjusting the phase of the AC component. 
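As an illustrative, non-limiting example of the regulation idea, the following simplified discrete-time sketch regulates a modeled power signal back toward a reference so that the regulation value, rather than the power signal itself, reveals the sensor's electrical characteristic. The names, values, and simple integral controller are hypothetical; the comparator and pulse width modulator of an actual drive-sense circuit are not modeled.

    # Hypothetical sketch: the regulated power signal looks the same in both cases, but the
    # amount of regulation differs, and that difference is indicative of the sensor's impedance.
    def regulate(sensor_resistance_ohms, v_reference=10.0, steps=200, gain=0.05):
        """Adjust the drive current until the sensed voltage matches the reference;
        return the final regulation (drive current) value."""
        i_drive = 0.0
        for _ in range(steps):
            v_sensed = i_drive * sensor_resistance_ohms            # affected power signal (DC part only)
            error = v_reference - v_sensed                         # comparator output
            i_drive += gain * error / sensor_resistance_ohms       # regulation update
        return i_drive

    print(regulate(10_000))   # ~1.0e-3 A for a 10 kOhm sensor
    print(regulate(8_000))    # ~1.25e-3 A for an 8 kOhm sensor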
With respect to the operation of various drive-sense circuits as described herein and/or their equivalents, note that such a drive-sense circuit is operable simultaneously to drive and sense a signal via a single line. In comparison to switched, time-divided, time-multiplexed, etc. operation in which there is switching between driving and sensing (e.g., driving at a first time, sensing at a second time, etc.) of different respective signals at separate and distinct times, the drive-sense circuit is operable simultaneously to perform both driving and sensing of a signal. In some examples, such simultaneous driving and sensing is performed via a single line using a drive-sense circuit. In addition, other alternative implementations of various drive-sense circuits are described in U.S. Utility patent application Ser. No. 16/113,379, entitled “DRIVE SENSE CIRCUIT WITH DRIVE-SENSE LINE,” filed 08-27-2018, pending. Any instantiation of a drive-sense circuit as described herein may be implemented using any of the various implementations of various drive-sense circuits described in U.S. Utility patent application Ser. No. 16/113,379. In addition, note that the one or more signals provided from a drive-sense circuit (DSC) may be of any of a variety of types. For example, such a signal may be based on encoding of one or more bits to generate one or more coded bits used to generate modulation data (or generally, data). For example, a device is configured to perform forward error correction (FEC) and/or error checking and correction (ECC) coding of one or more bits to generate one or more coded bits. Examples of FEC and/or ECC may include turbo code, convolutional code, turbo trellis coded modulation (TTCM), low density parity check (LDPC) code, Reed-Solomon (RS) code, BCH (Bose and Ray-Chaudhuri, and Hocquenghem) code, binary convolutional code (BCC), Cyclic Redundancy Check (CRC), and/or any other type of ECC and/or FEC code and/or combination thereof, etc. Note that more than one type of ECC and/or FEC code may be used in any of various implementations including concatenation (e.g., first ECC and/or FEC code followed by second ECC and/or FEC code, etc. such as based on an inner code/outer code architecture, etc.), parallel architecture (e.g., such that first ECC and/or FEC code operates on first bits while second ECC and/or FEC code operates on second bits, etc.), and/or any combination thereof. Also, the one or more coded bits may then undergo modulation or symbol mapping to generate modulation symbols (e.g., the modulation symbols may include data intended for one or more recipient devices, components, elements, etc.). Note that such modulation symbols may be generated using any of various types of modulation coding techniques. Examples of such modulation coding techniques may include binary phase shift keying (BPSK), quadrature phase shift keying (QPSK), 8-phase shift keying (PSK), 16 quadrature amplitude modulation (QAM), 32 amplitude and phase shift keying (APSK), etc., uncoded modulation, and/or any other desired types of modulation including higher ordered modulations that may include an even greater number of constellation points (e.g., 1024 QAM, etc.). In addition, note that a signal provided from a DSC may be of a unique frequency that is different from signals provided from other DSCs. Also, a signal provided from a DSC may include multiple frequencies independently or simultaneously. The frequency of the signal can be hopped according to a pre-arranged pattern. 
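As an illustrative, non-limiting example, the following sketch generates oscillating drive components that are unique per DSC and hopped according to a pre-arranged pattern. The names, frequencies, and hop patterns are hypothetical; modulation and FEC/ECC as discussed above could be layered on top but are omitted for brevity.

    # Hypothetical sketch of per-DSC, frequency-hopped drive components.
    import numpy as np

    def hopped_drive_signal(hop_pattern_hz, hop_duration_s, fs_hz, dc_offset=1.0, amplitude=0.25):
        """Concatenate constant-frequency segments following a pre-arranged hop pattern."""
        segments = []
        for f in hop_pattern_hz:
            t = np.arange(int(round(hop_duration_s * fs_hz))) / fs_hz
            segments.append(dc_offset + amplitude * np.sin(2 * np.pi * f * t))
        return np.concatenate(segments)

    fs = 1_000_000  # 1 MS/s
    # Each DSC follows its own pre-arranged pattern, keeping the signals distinguishable.
    sig_dsc1 = hopped_drive_signal([100e3, 140e3, 120e3], 1e-3, fs)
    sig_dsc2 = hopped_drive_signal([110e3, 150e3, 130e3], 1e-3, fs)
    print(sig_dsc1.shape, sig_dsc2.shape)  # (3000,) each: three 1 ms hops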
In some examples, a handshake is established between one or more DSCs and one or more processing modules (e.g., one or more controllers) such that the one or more DSCs is/are directed by the one or more processing modules regarding which frequency or frequencies and/or which other one or more characteristics of the one or more signals to use at one or more respective times and/or in one or more particular situations. FIG.14is a schematic block diagram of an embodiment1400of a computing device operative with an e-pen (an electronic or electrical pen with electrical and/or electronic functionality) in accordance with the present invention. Within this diagram as well as any other diagram described herein, or their equivalents, the one or more touch sensors1410(e.g., touch sensor electrodes) may be of any of a variety of one or more types including any one or more of a touchscreen, a button, an electrode, an external controller, rows of electrodes, columns of electrodes, a matrix of buttons, an array of buttons, a film that includes any desired implementation of components to facilitate touch sensor operation, and/or any other configuration by which interaction with the touch sensor may be performed. Note that the one or more touch sensors1410may be implemented within any of a variety of devices including any one or more of touchscreen, pad device, laptop, cell phone, smartphone, whiteboard, interactive display, navigation system display, in vehicle display, etc., and/or any other device in which one or more touch sensors1410may be implemented. Note that such interaction of a user with a touch sensor may correspond to the user touching the touch sensor, the user being in proximate distance to the touch sensor (e.g., within a sufficient proximity to the touch sensor that coupling from the user to the touch sensor may be performed via capacitive coupling (CC), etc.), and/or generally any manner of interacting with the touch sensor that is detectable based on processing of signals transmitted to and/or sensed from the touch sensor. With respect to the various embodiments, implementations, etc. of various respective touch sensors as described herein, note that they may also be of any such variety of one or more types. For example, touch sensors may be implemented as or include any one or more of touch sensor electrodes, capacitive buttons, capacitive sensors, row and column implementations of touch sensor electrodes such as in a touchscreen, etc. One example of such user interaction with the one or more touch sensors1410is via capacitive coupling to a touch sensor. Such capacitive coupling may be achieved from a user, via a stylus, an active element such as an electronic pen (e-pen), and/or any other element implemented to perform capacitive coupling to the touch sensor. In some examples, note that the one or more touch sensors1410are also implemented to detect user interaction based on user touch (e.g., via capacitive coupling (CC) from a user, such as a user's finger, to the one or more touch sensors1410). At the top of the diagram, a user interacts with one or more touch sensors1410using one or more electronic pens (e-pens). An e-pen1402is configured to transmit one or more signals that is/are detected by the one or more touch sensors1410. When different respective signals are transmitted from the different respective sensor electrodes of an e-pen, the one or more touch sensors1410is implemented to detect the signals and distinguish among them. 
For example, the one or more touch sensors1410is configured to detect, process, and identify the different respective signals provided from the different respective sensor electrodes of the e-pen1402. At the bottom of the diagram, one or more processing modules1430is coupled to drive-sense circuits (DSCs)28. Note that the one or more processing modules1430may include integrated memory and/or be coupled to other memory. At least some of the memory stores operational instructions to be executed by the one or more processing modules1430. In some examples, the one or more processing modules1430includes a first subset of the one or more processing modules1430that are in communication and operative with a first subset of the DSCs28(e.g., those in communication with the e-pen sensor electrodes) and a second subset of the one or more processing modules1430that are in communication and operative with a second subset of the DSCs28(e.g., those in communication with the one or more touch sensors1410). In some examples, these two different subsets of the one or more processing modules1430are also in communication with one another (e.g., via communication effectuated via the e-pen sensor electrodes and the one or more touch sensors1410themselves, via one or more alternative communication means such as a backplane, a bus, a wireless communication path, etc., and/or other means). In some particular examples, these two different subsets of the one or more processing modules1430are not in communication with one another directly other than via the signal coupling between the e-pen sensor electrodes and the one or more touch sensors1410themselves. In addition, in certain examples, note that the detection and sensing capability of a DSC as described herein is such that detection of signals being coupled from the e-pen sensor electrodes to the one or more touch sensors1410, and vice versa, may be effectuated without the e-pen1402(e.g., a writing and/or erasing tip of the e-pen1402) being in contact with a touchscreen associated with the one or more touch sensors1410. For example, as the e-pen1402is above (e.g., hovering over) or within sufficient proximity for signal coupling between the e-pen sensor electrodes to the one or more touch sensors1410, and vice versa, then detection and sensing of such signals may be made. A first group of one or more DSCs28is/are implemented simultaneously to drive and to sense respective one or more signals provided to the one or more touch sensors1410. In addition, a second group of one or more DSCs28is/are implemented simultaneously to drive and to sense respective one or more other signals provided to the respective sensor electrodes of the e-pen1402. For example, a first DSC28is implemented simultaneously to drive and to sense a first signal via a first sensor electrode (e.g., a primary sensor electrode) of the e-pen1402. A second DSC28is implemented simultaneously to drive and to sense a second signal via a second sensor electrode (e.g., a first secondary sensor electrode) of the e-pen1402. Note that any number of additional DSCs implemented simultaneously to drive and to sense additional signals to additional sensor electrodes of the e-pen1402as may be appropriate in certain embodiments. Note also that the respective DSCs28may be implemented in a variety of ways. 
For example, they may be implemented within a device that includes the one or more touch sensors1410, they may be implemented within the e-pen1402, they may be distributed among the device that includes the one or more touch sensors1410and the e-pen1402, etc. In an example of operation and implementation, the one or more processing modules1430is configured to generate a first signal. A first DSC28is configured simultaneously to drive and to sense the first signal via a first sensor electrode. In some examples, the one or more processing modules1430is configured to generate a number of other signals such as a second signal, third signal, fourth signal, etc. In general, the one or more processing modules1430is configured to generate any one or more signals to be provided via one or more DSCs28. In this example of operation and implementation, the one or more processing modules1430is also configured to generate a second signal. A second DSC28is configured simultaneously to drive and to sense the second signal via a second sensor electrode. As may be appropriate in certain embodiments, the one or more processing modules1430is also configured to generate additional signals up to an nth signal (e.g., where n is a positive integer greater than or equal to 3). An nth DSC28is configured simultaneously to drive and to sense the nth signal via an nth sensor electrode. Note that the different respective signals provided via the different DSCs28are differentiated in frequency. For example, a first signal has a first frequency, and a second signal has a second frequency that is different than the first frequency. When implemented, a third signal has a third frequency that is different than the first frequency and the second frequency. In general, any number of different respective signals generated by the one or more processing modules1430are differentiated in frequency. In addition, note that different respective signals having different respective frequencies are provided from the DSCs28that are associated with the one or more touch sensors1410(e.g., on the lower right-hand portion of the diagram). Each of those respective DSCs28is also configured simultaneously to drive and to sense its respective signal. Note that the signals that are provided via the different respective sensor electrodes are coupled into one or more of the touch sensors1410when the e-pen1402is interacting with the device that includes the one or more touch sensors1410. For example, when the e-pen1402is within sufficient proximity to the one or more touch sensors1410, the signals that are provided from the different respective sensor electrodes of the e-pen1402will be coupled into one or more of the touch sensors1410and detected by the respective DSCs28that are associated with the one or more touch sensors1410and that are configured simultaneously to drive and to sense their respective signals. The one or more processing modules1430is configured to process signals provided from the various DSCs28to determine various information regarding the e-pen1402. 
Such information includes the location of the e-pen1402with respect to the one or more touch sensors1410, the orientation of the e-pen1402with respect to the one or more touch sensors1410, which one or more signals provided from the one or more sensor electrodes of the e-pen1402are being coupled into the one or more touch sensors1410, etc. In some examples, note that the converse operation is also performed. Those signals that are driven and simultaneously sensed by the DSCs28via the one or more touch sensors1410may also be detected, processed, and identified by the DSCs28that simultaneously drive and sense their respective signals via the e-pen sensor electrodes1-n. For example, a signal that is driven by a DSC28via one of the touch sensors1410may also be coupled into and detected by one or more of the DSCs28that simultaneously drive and sense their respective signals via the e-pen sensor electrodes1-n. The coupling of signals between the various e-pen sensor electrodes and the one or more touch sensors1410is performed bidirectionally in some implementations. Note that detection, processing, identification, etc. may be performed by the one or more processing modules1430based only on signals associated with the DSCs28that are coupled to the e-pen sensor electrodes, based only on signals associated with the DSCs28that are coupled to the one or more touch sensors1410, and/or based on both signals associated with the DSCs28that are coupled to the e-pen sensor electrodes and also to the one or more touch sensors1410. In addition, note that certain examples, embodiments, etc. are implemented such that a DSC is operative to perform both drive and sense of a signal (e.g., transmit and detect) simultaneously. However, as may be desired in certain applications, a DSC may be implemented only to perform drive (e.g., transmit) of a signal. In such an example of operation and implementation, no digital signal that is representative of an electrical characteristic of an element (e.g., sensor electrode, sensor, transducer, etc.) is generated. In certain examples that include more than one DSC, a first DSC is implemented to perform both drive and sense of a first signal (e.g., transmit and detect) simultaneously, and a second DSC is implemented to perform only drive (e.g., transmit) of a second signal. Any desired combination of DSCs may be implemented such that one or more DSCs are configured to perform both drive and sense of signals (e.g., transmit and detect) simultaneously while one or more other DSCs are configured to perform only drive (e.g., transmit) of other signals. With respect to any signal that is driven and simultaneously detected by a DSC28, note that any additional signal that is coupled into the sensor electrode or touch sensor associated with that DSC28is also detectable. For example, a DSC28that is associated with a touch sensor will detect any signal from one or more of the e-pen sensor electrodes that gets coupled into that touch sensor. Similarly, a DSC28that is associated with an e-pen sensor electrode will detect any signal from one or more of the touch sensors1410that gets coupled into that e-pen sensor electrode. Note that the different respective signals that are driven and simultaneously sensed via the respective e-pen sensor electrodes and the one or more touch sensors1410are differentiated from one another. Appropriate filtering and processing can identify the various signals given their differentiation, orthogonality to one another, difference in frequency, etc. 
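As an illustrative, non-limiting example of such filtering and processing, the following sketch evaluates a single-bin correlation at each known signal frequency to identify which frequency-differentiated signals are present on a single electrode. The names, frequencies, and detection threshold are hypothetical.

    # Hypothetical sketch: per-frequency magnitude estimation to identify coupled signals.
    import numpy as np

    def tone_magnitude(samples, freq_hz, fs_hz):
        """Oscillating magnitude of the samples at freq_hz (single-bin correlation)."""
        n = len(samples)
        t = np.arange(n) / fs_hz
        x = samples - samples.mean()
        return 2.0 * np.hypot(np.dot(x, np.sin(2 * np.pi * freq_hz * t)),
                              np.dot(x, np.cos(2 * np.pi * freq_hz * t))) / n

    def identify_sources(samples, known_freqs_hz, fs_hz, threshold=0.05):
        """Map each known source frequency to (detected_flag, magnitude)."""
        result = {}
        for f in known_freqs_hz:
            mag = tone_magnitude(samples, f, fs_hz)
            result[f] = (mag > threshold, round(float(mag), 3))
        return result

    fs = 1_000_000
    t = np.arange(4000) / fs
    # The electrode sees its own drive signal at 100 kHz plus a coupled e-pen signal at 137 kHz.
    electrode = 1.0 * np.sin(2 * np.pi * 100e3 * t) + 0.2 * np.sin(2 * np.pi * 137e3 * t)
    print(identify_sources(electrode, [100e3, 137e3, 151e3], fs))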
Other examples described herein and their equivalents operate using any of a number of different characteristics other than or in addition to frequency. In an example of operation and implementation, the e-pen1402includes a plurality of e-pen sensor electrodes including a first e-pen sensor electrode and a second e-pen sensor electrode and a plurality of drive-sense circuits (DSCs), including a first DSC and a second DSC, operably coupled to the plurality of e-pen sensor electrodes. The first DSC, when enabled, is configured to drive a first e-pen signal having a first frequency via a first single line coupling to the first e-pen sensor electrode and simultaneously sense, via the first single line, the first e-pen signal, wherein based on interaction of the e-pen with a touch sensor device, the first e-pen signal is coupled into at least one touch sensor electrode of the touch sensor device. Also, the first DSC, when enabled, is configured to process the first e-pen signal to generate a first digital signal that is representative of a first electrical characteristic of the first e-pen sensor electrode. The second DSC, when enabled, is configured to drive a second e-pen signal having a second frequency that is different than the first frequency via a second single line coupling to the second e-pen sensor electrode and simultaneously sense, via the second single line, the second e-pen signal, wherein based on the interaction of the e-pen with the touch sensor device, the second e-pen signal is coupled into the at least one touch sensor electrode. Also, the second DSC, when enabled, is configured to process the second e-pen signal to generate a second digital signal that is representative of a second electrical characteristic of the second e-pen sensor electrode. In some examples, the e-pen1402also includes memory that stores operational instructions, and a processing module operably coupled to the first DSC and the second DSC and to the memory. The processing module when enabled, is configured to execute the operational instructions to process at least one of the first digital signal or the second digital signal to detect the interaction of the e-pen with the touch sensor device. In other examples, the touch sensor device also includes a third DSC operably coupled to a first touch sensor electrode of the at least one touch sensor electrode. The third DSC, when enabled, is configured to drive a touch sensor signal having a third frequency via a third single line coupling to the first touch sensor electrode and simultaneously sense, via the third single line, the touch sensor signal, wherein based on the interaction of the e-pen with the touch sensor device, sensing the touch sensor signal includes sensing at least one of the first e-pen signal that is coupled from the first e-pen sensor electrode into the first touch sensor electrode or the second e-pen signal that is coupled from the second e-pen sensor electrode into the first touch sensor electrode. Also, the third DSC, when enabled, is configured to process the touch sensor signal to generate a third digital signal that is representative of a third electrical characteristic of the first touch sensor electrode. In addition, in certain examples, the touch sensor device also includes memory that stores operational instructions, and a processing module operably coupled to the third DSC and to the memory. 
The processing module, when enabled, is configured to execute the operational instructions to process the third digital signal to determine location of at least one of the first e-pen sensor electrode or the second e-pen sensor electrode based on the interaction of the e-pen with the touch sensor device. In yet other examples, the touch sensor device also includes another plurality of DSCs, including a third DSC and a fourth DSC, operably coupled to a plurality of touch sensor electrodes, including a first touch sensor electrode and a second touch sensor electrode, including the at least one touch sensor electrode. The third DSC, when enabled, is configured to drive a first touch sensor signal having a third frequency via a third single line coupling to the first touch sensor electrode and simultaneously sense, via the third single line, the first touch sensor signal, wherein based on the interaction of the e-pen with the touch sensor device, sensing the first touch sensor signal includes sensing at least one of the first e-pen signal that is coupled from the first e-pen sensor electrode into the first touch sensor electrode or the second e-pen signal that is coupled from the second e-pen sensor electrode into the second touch sensor electrode. Also, the third DSC, when enabled, is configured to process the first touch sensor signal to generate a third digital signal that is representative of a third electrical characteristic of the first touch sensor electrode. The fourth DSC, when enabled, is configured to drive a second touch sensor signal having a fourth frequency that is different than the first frequency via a fourth single line coupling to the second touch sensor electrode and simultaneously sense, via the fourth single line, the second touch sensor signal, wherein based on the interaction of the e-pen with the touch sensor device, sensing the second touch sensor signal includes sensing at least one of the first e-pen signal that is coupled from the first e-pen sensor electrode into the first touch sensor electrode or the second e-pen signal that is coupled from the second e-pen sensor electrode into the second touch sensor electrode. Also, the fourth DSC, when enabled, is configured to process the second touch sensor signal to generate a fourth digital signal that is representative of a fourth electrical characteristic of the second touch sensor electrode. In even other examples, the touch sensor device also includes memory that stores operational instructions, and a processing module operably coupled to the third DSC, the fourth DSC, and to the memory. The processing module, when enabled, is configured to execute the operational instructions to process the third digital signal and the fourth digital signal to determine location of at least one of the first e-pen sensor electrode or the second e-pen sensor electrode based on the interaction of the e-pen with the touch sensor device and also based on a two-dimensional mapping of a touchscreen of the touch sensor device that uniquely identifies an intersection of the first touch sensor electrode and the second touch sensor electrode. Also, in some particular examples, the first DSC also includes a power source circuit operably coupled to the first e-pen sensor electrode via the first single line. 
When enabled, the power source circuit is configured to provide the first e-pen signal that includes an analog signal via the first single line coupling to the first e-pen sensor electrode, and wherein the analog signal includes at least one of a DC (direct current) component or an oscillating component. Also, the first DSC includes a power source change detection circuit operably coupled to the power source circuit. When enabled, the power source change detection circuit is configured to detect an effect on the analog signal that is based on the first electrical characteristic of the first e-pen sensor electrode, and to generate the first digital signal that is representative of the first electrical characteristic of the first e-pen sensor electrode. In certain additional examples, the power source circuit includes a power source to source at least one of a voltage or a current to the first e-pen sensor electrode via the first single line. Also, the power source change detection circuit includes a power source reference circuit configured to provide at least one of a voltage reference or a current reference, and a comparator configured to compare the at least one of the voltage and the current provided to the first e-pen sensor electrode to the at least one of the voltage reference and the current reference to produce the analog signal. In various examples, embodiments, etc., note that one or more processing modules are in communication with one or more of DSCs, touch sensor electrodes, e-pen sensor electrodes, etc. and are configured to perform processing of the various signals associated with them for various purposes. FIG.15is a schematic block diagram of another embodiment1500of a computing device operative with an e-pen in accordance with the present invention. This diagram has some similarities to the previous diagram with at least one difference being that the respective e-pen and touch sensor signals are differentiated by one or more characteristics that may include any one or more of frequency, amplitude, DC offset, modulation, modulation & coding set/rate (MCS), forward error correction (FEC) and/or error checking and correction (ECC), type, etc. Within this diagram as well as any other diagram described herein, or their equivalents, the one or more touch sensors1510may be of any of a variety of one or more types including any one or more of a touchscreen, a button, an electrode, an external controller, rows of electrodes, columns of electrodes, a matrix of buttons, an array of buttons, a film that includes any desired implementation of components to facilitate touch sensor operation, and/or any other configuration by which interaction with the touch sensor may be performed. Note that the one or more touch sensors1510may be implemented within any of a variety of devices including any one or more of touchscreen, pad device, laptop, cell phone, smartphone, whiteboard, interactive display, navigation system display, in vehicle display, etc., and/or any other device in which one or more touch sensors1510may be implemented. Note that such interaction of a user with a touch sensor may correspond to the user touching the touch sensor, the user being in proximate distance to the touch sensor (e.g., within a sufficient proximity to the touch sensor that coupling from the user to the touch sensor may be performed via capacitively coupling (CC), etc. 
and/or generally any manner of interacting with the touch sensor that is detectable based on processing of signals transmitted to and/or sensed from the touch sensor). With respect to the various embodiments, implementations, etc. of various respective touch sensors as described herein, note that they may also be of any such variety of one or more types. One example of such user interaction with the one or more touch sensors1510is via capacitive coupling to a touch sensor. Such capacitive coupling may be achieved from a user, via a stylus, an active element such as an electronic pen (e-pen), and/or any other element implemented to perform capacitive coupling to the touch sensor. In some examples, note that the one or more touch sensors1510are also implemented to detect user interaction based on user touch (e.g., via capacitive coupling (CC) from a user, such as a user's finger, to the one or more touch sensors1510). At the top of the diagram, a user interacts with one or more touch sensors1510using one or more electronic pens (e-pens). An e-pen1502is configured to transmit one or more signals that is/are detected by the one or more touch sensors1510. When different respective signals are transmitted from the different respective sensor electrodes of an e-pen1502, the one or more touch sensors1510is implemented to detect the signals and distinguish among them. For example, the one or more touch sensors1510is configured to detect, process, and identify the different respective signals provided from the different respective sensor electrodes of the e-pen1502. At the bottom of the diagram, one or more processing modules1530is coupled to drive-sense circuits (DSCs)28. Note that the one or more processing modules1530may include integrated memory and/or be coupled to other memory. At least some of the memory stores operational instructions to be executed by the one or more processing modules1530. Note that the different respective signals that are driven and simultaneously sensed by the various DSCs28may be differentiated based on any one or more characteristics such as frequency, amplitude, modulation, modulation & coding set/rate (MCS), forward error correction (FEC) and/or error checking and correction (ECC), type, etc. By appropriate processing, the one or more processing modules1530is configured to detect, process, and identify which signal is being detected based on these one or more characteristics. Differentiation between the signals based on frequency corresponds to a first signal having a first frequency and a second signal having a second frequency different than the first frequency. Differentiation between the signals based on amplitude corresponds to a first signal having a first amplitude and a second signal having a second amplitude different than the first amplitude. Note that the amplitude may be a fixed amplitude for a DC signal or the oscillating amplitude component for a signal having both a DC offset and an oscillating component. Differentiation between the signals based on DC offset corresponds to a first signal having a first DC offset and a second signal having a second DC offset different than the first DC offset. Differentiation between the signals based on modulation and/or modulation & coding set/rate (MCS) corresponds to a first signal having a first modulation and/or MCS and a second signal having a second modulation and/or MCS different than the first modulation and/or MCS. 
Examples of modulation and/or MCS may include binary phase shift keying (BPSK), quadrature phase shift keying (QPSK) or quadrature amplitude modulation (QAM), 8-phase shift keying (PSK), 16 quadrature amplitude modulation (QAM), 32 amplitude and phase shift keying (APSK), 64-QAM, etc., uncoded modulation, and/or any other desired types of modulation including higher ordered modulations that may include an even greater number of constellation points (e.g., 1024 QAM, etc.). For example, a first signal may be of a QAM modulation, and the second signal may be of a 32 APSK modulation. In an alternative example, a first signal may be of a first QAM modulation such that the constellation points therein have a first labeling/mapping, and the second signal may be of a second QAM modulation such that the constellation points therein have a second labeling/mapping. Differentiation between the signals based on FEC/ECC corresponds to a first signal being generated, coded, and/or based on a first FEC/ECC and a second signal being generated, coded, and/or based on a second FEC/ECC that is different than the first FEC/ECC. Examples of FEC and/or ECC may include turbo code, convolutional code, turbo trellis coded modulation (TTCM), low density parity check (LDPC) code, Reed-Solomon (RS) code, BCH (Bose and Ray-Chaudhuri, and Hocquenghem) code, binary convolutional code (BCC), Cyclic Redundancy Check (CRC), and/or any other type of ECC and/or FEC code and/or combination thereof, etc. Note that more than one type of ECC and/or FEC code may be used in any of various implementations including concatenation (e.g., first ECC and/or FEC code followed by second ECC and/or FEC code, etc. such as based on an inner code/outer code architecture, etc.), parallel architecture (e.g., such that first ECC and/or FEC code operates on first bits while second ECC and/or FEC code operates on second bits, etc.), and/or any combination thereof. For example, a first signal may be generated, coded, and/or based on a first LDPC code, and the second signal may be generated, coded, and/or based on a second LDPC code. In an alternative example, a first signal may be generated, coded, and/or based on a BCH code, and the second signal may be generated, coded, and/or based on a turbo code. Differentiation between the different respective signals may be made based on a similar type of FEC/ECC, using different characteristics of the FEC/ECC (e.g., codeword length, redundancy, matrix size, etc. as may be appropriate with respect to the particular type of FEC/ECC). Alternatively, differentiation between the different respective signals may be made based on using different types of FEC/ECC for the different respective signals. Differentiation between the signals based on type corresponds to a first signal being of a first type and a second signal being of a second type that is different than the first type. Examples of different types of signals include a sinusoidal signal, a square wave signal, a triangular wave signal, a multiple level signal, a polygonal signal, a DC signal, etc. For example, a first signal may be of a sinusoidal signal type, and the second signal may be of a DC signal type. 
In an alternative example, a first signal may be of a first sinusoidal signal type having first sinusoidal characteristics (e.g., first frequency, first amplitude, first DC offset, first phase, etc.), and the second signal may be of a second sinusoidal signal type having second sinusoidal characteristics (e.g., second frequency, second amplitude, second DC offset, second phase, etc.) that is different than the first sinusoidal signal type. Note that any implementation that differentiates the signals based on one or more characteristics may be used in this and other embodiments, examples, and their equivalents. In an example of operation and implementation, the one or more processing modules1530is configured to generate a first signal. A first DSC28is configured simultaneously to drive and to sense the first signal via a first sensor electrode. In some examples, the one or more processing modules1530is configured to generate a number of other signals such as a second signal, third signal, fourth signal, etc. In general, the one or more processing modules1530is configured to generate any one or more signals to be provided via one or more DSCs28. In this example of operation and implementation, the one or more processing modules1530is also configured to generate a second signal. A second DSC28is configured simultaneously to drive and to sense the second signal via a second sensor electrode. As may be appropriate in certain embodiments, the one or more processing modules1530is also configured to generate additional signals up to an nth signal (e.g., where n is a positive integer greater than or equal to 3). An nth DSC28is configured simultaneously to drive and to sense the nth signal via an nth sensor electrode. Note that the different respective signals provided via the different DSCs28are differentiated in frequency. For example, a first signal has a first frequency, and a second signal has a second frequency that is different than the first frequency. When implemented, a third signal has a third frequency that is different than the first frequency and the second frequency. In general, any number of different respective signals generated by the one or more processing modules1530are differentiated based on one or more characteristics. Note that there may be certain implementations where differentiation between the signals driven and simultaneously sensed via the e-pen sensor electrodes and the one or more touch sensors1510may be limited by design. For example, there may be certain implementations where differentiation is desired based on only one characteristic. Other implementations may operate based on differentiation based on two or more characteristics (and generally up to n characteristics, where n is a positive integer greater than or equal to 2). Note that there may be some processing latency introduced in some examples when differentiation between the respective signals is based on multiple different parameters. For example, when identifying a particular signal, processing may be performed across a variety of characteristics to ensure proper detection of the signal when there is differentiation between the respective signals in multiple dimensions. 
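As an illustrative, non-limiting example of differentiation across multiple characteristics, the following sketch matches a received signal against known combinations of frequency, DC offset, and oscillating amplitude. The signature table, tolerances, and measurement helpers are hypothetical.

    # Hypothetical sketch: classify a received signal by several characteristics at once.
    import numpy as np

    SIGNATURES = {                       # hypothetical characteristic tuples per known source
        "e-pen electrode 1":  {"freq_hz": 100e3, "dc": 0.0, "amp": 1.0},
        "e-pen electrode 2":  {"freq_hz": 100e3, "dc": 0.5, "amp": 1.0},
        "touch sensor row 7": {"freq_hz": 120e3, "dc": 0.0, "amp": 0.5},
    }
    TOLERANCES = {"freq_hz": 2e3, "dc": 0.1, "amp": 0.1}

    def measure(samples, fs_hz):
        """Estimate (dominant frequency, DC offset, oscillating amplitude) of the samples."""
        x = samples - samples.mean()
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs_hz)
        k = int(np.argmax(spectrum))
        return {"freq_hz": freqs[k], "dc": samples.mean(), "amp": 2.0 * spectrum[k] / len(x)}

    def classify(samples, fs_hz):
        m = measure(samples, fs_hz)
        for name, sig in SIGNATURES.items():
            if all(abs(m[key] - sig[key]) <= TOLERANCES[key] for key in sig):
                return name, m
        return None, m

    fs = 1_000_000
    t = np.arange(4000) / fs
    rx = 0.5 + 1.0 * np.sin(2 * np.pi * 100e3 * t)   # DC offset 0.5, amplitude 1.0, 100 kHz
    print(classify(rx, fs))                          # matches "e-pen electrode 2"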
In addition, note that adaptation between the different respective characteristics may be made. For example, at or during a first time, differentiation may be made based on a first one of the characteristics (e.g., frequency). Then, at or during a second time, differentiation may be based on a second one of the characteristics (e.g., DC offset). Then, at or during a third time, differentiation may be based on a third one of the characteristics (e.g., modulation/MCS), and so on. Various aspects, embodiments, and/or examples of the invention (and/or their equivalents) provide for individualization and uniqueness with respect to the different respective signals that are driven and simultaneously sensed via the respective DSCs28. Appropriate detection, processing, and identification based on these one or more characteristics allows for differentiation and identification of the different respective signals as well as the e-pen sensor electrodes and the one or more touch sensors1510via which those signals are driven and simultaneously sensed. For example, consider an implementation in which a mapping of which signal is provided via which DSC28is known; then detection of a particular signal also allows for identification of which DSC28provided that signal. Also, when a mapping of which DSC28is connected to which e-pen sensor electrode or which of the one or more touch sensors1510is known, then detection of a particular signal also allows for identification of that particular e-pen sensor electrode or a particular touch sensor of the one or more touch sensors1510. Note that the transmission, reception, detection, driving, and sensing of signals by one or more of the various DSCs28allows for detection in both directions between the e-pen sensor electrodes and the one or more touch sensors1510. Various aspects, embodiments, and/or examples of the invention (and/or their equivalents) are provided herein by which such detection, processing, identification, etc. is performed by one or more processing modules associated with an e-pen, associated with one or more touch sensors (e.g., a device that includes the one or more touch sensors), or cooperatively associated with both the e-pen and the one or more touch sensors (e.g., a device that includes the one or more touch sensors). FIG.16is a schematic block diagram of embodiments1600of computing devices operative with different types of e-pens in accordance with the present invention. Different respective groups of one or more touch sensors1610,1620, and1630are shown. Note that the respective groups of one or more touch sensors1610,1620, and1630may be included within any of a number of devices as described herein including any one or more of touchscreen, pad device, laptop, cell phone, smartphone, whiteboard, interactive display, navigation system display, in vehicle display, etc., and/or any other device in which one or more touch sensors1610,1620, and1630may be implemented. In the upper left-hand portion of the diagram, a tethered e-pen1612is electrically connected to the one or more touch sensors1610. In this example, note that the DSCs are implemented remotely from the tethered e-pen1612. For example, the electrodes in the tethered e-pen1612are coupled to remotely implemented DSCs, such as may be implemented within a device that includes the one or more touch sensors1610. In the upper right-hand portion of the diagram, a tethered e-pen1622is electrically connected to the one or more touch sensors1620. In this example, note that the DSCs are implemented within or integrated into the tethered e-pen1622. For example, the electrodes in the tethered e-pen1622are coupled to locally implemented DSCs within the tethered e-pen1622. 
For example, the DSCs in the tethered e-pen1622are coupled to a power source within the tethered e-pen1622. In some examples, the power source is a battery-powered power source within the tethered e-pen1622. In other examples, the power source is a power supply that is energized remotely from a device that includes the one or more touch sensors1620. In addition, with respect to the diagrams shown at the top portion of the diagram, when tethering is implemented between a tethered e-pen and a device that includes one or more touch sensors, note that the DSCs that drive the respective electrodes within the e-pen may alternatively be implemented and distributed between the e-pen itself and the device that includes the one or more touch sensors. For example, a first DSC may be implemented within the device that includes the one or more touch sensors and drives a first signal via the tethering and via a first electrode within the tethered e-pen. A second DSC may be locally implemented within the e-pen and is configured simultaneously to drive and to sense a second signal via a second electrode within the tethered e-pen. In the bottom portion of the diagram, a wireless e-pen1632is in communication with the one or more touch sensors1630. In this example, note that the DSCs are implemented within or integrated into the wireless e-pen1632. For example, the electrodes in the wireless e-pen1632are coupled to locally implemented DSCs within the wireless e-pen1632. For example, the DSCs in the wireless e-pen1632are coupled to a power source within the wireless e-pen1632. In some examples, the power source is a battery-powered power source within the wireless e-pen1632. This diagram shows various examples by which DSCs may be implemented within a device that includes one or more touch sensors, implemented within an e-pen that may be of different types including a tethered e-pen, a wireless e-pen, etc., or alternatively be distributed among both the device that includes the one or more touch sensors and the e-pen. FIG.17Ais a schematic block diagram of an embodiment1701of an e-pen in accordance with the present invention. This diagram includes an e-pen1712that may optionally include a power source1718therein (e.g., such as to provide power to one or more DSCs coupled to the respective sensor electrodes). The e-pen1712includes multiple respective sensor electrodes. For example, the e-pen1712includes sensor electrodes0,1,2,3, and4. The sensor electrode0may be viewed as being a center sensor electrode0, a primary sensor electrode0, implemented within a pivoting chassis allowing movement within the e-pen1712. Sensor electrodes1-4, which may be viewed as being secondary sensor electrodes of the e-pen1712, are implemented around the sensor electrode0. As the primary sensor electrode0moves and pivots such as when in use, its relative location with respect to the secondary sensor electrodes1-4will change. For example, the distances between the primary sensor electrode0and the secondary sensor electrodes1-4will change as the primary sensor electrode0moves within a pivot-capable chassis. Different respective DSCs are implemented simultaneously to drive and to sense respective signals via the respective sensor electrodes. For example, a first DSC is implemented simultaneously to drive and to sense a first signal via the primary sensor electrode0. 
A second DSC is implemented simultaneously to drive and to sense a second signal via the secondary sensor electrode1, a third DSC is implemented simultaneously to drive and to sense a third signal via the secondary sensor electrode2, a fourth DSC is implemented simultaneously to drive and to sense a fourth signal via the secondary sensor electrode3, and a fifth DSC is implemented simultaneously to drive and to sense a fifth signal via the secondary sensor electrode4. As these respective signals are driven via the respective sensor electrodes of the e-pen1712, when the e-pen1712is interacting with one or more touch sensors (e.g., of such a device that includes the one or more touch sensors), the respective signals provided from the respective sensor electrodes of the e-pen1712are coupled into the one or more touch sensors, which are located sufficiently close to the respective sensor electrodes of the e-pen1712for signal coupling, such that one or more DSCs associated with those one or more touch sensor electrodes will be able to detect the respective signals provided from the respective sensor electrodes of the e-pen1712. For example, consider the first signal that is driven via the primary sensor electrode0. As the e-pen1712is interacting with the one or more touch sensors, those touch sensors that are within proximity of the primary sensor electrode0will detect that first signal. The one or more DSCs associated with those one or more touch sensor electrodes will be able to detect the first signal provided from the primary sensor electrode0of the e-pen1712. Similarly, other signals driven via the other respective sensor electrodes of the e-pen1712, when those other respective sensor electrodes of the e-pen1712are sufficiently close to the one or more touch sensors, will also be coupled into those one or more touch sensors. Also, with respect to this diagram as well as other examples, embodiments, and their equivalents described herein, note that signal driving and detection may be performed from an e-pen to the one or more touch sensors and also from the one or more touch sensors to the e-pen. Note that while a DSC is configured to perform simultaneous driving and sensing of a respective signal via an e-pen sensor electrode or a touch sensor, detection of other signals that are coupled into that e-pen sensor electrode or touch sensor is also performed. In general, any signal that gets coupled into an e-pen sensor electrode or a touch sensor via which a DSC is configured to perform simultaneous driving and sensing may be detected. That is to say, not only is simultaneous driving and sensing of the signal that is provided from the DSC to the e-pen sensor electrode or the touch sensor performed, but also detection, sensing, processing, etc. is performed of any other signal that is coupled into that e-pen sensor electrode or touch sensor. In general, while this diagram shows four secondary sensor electrodes encompassing the primary sensor electrode0within the e-pen1712, note that any desired number of secondary sensor electrodes may be implemented within alternative embodiments of an e-pen (e.g., 3, 5, etc. or any other number of secondary sensor electrodes). FIG.17Bis a schematic block diagram of another embodiment1702of an e-pen in accordance with the present invention. This diagram has some similarities to the prior diagram. 
This diagram includes an e-pen1722that may optionally include a power source1728therein (e.g., such as to provide power to one or more DSCs coupled to the respective sensor electrodes). The e-pen1722includes multiple respective sensor electrodes. For example, the e-pen1722includes sensor electrodes0,1a,2a,3a, and4a. The sensor electrode0may be viewed as being a center sensor electrode0, a primary sensor electrode0, implemented within a pivoting chassis allowing movement within the e-pen1722. This diagram is different from the prior diagram in that the secondary sensor electrodes1-4of the prior diagram are instead implemented as sets of secondary sensor electrodes. For example, the secondary sensor electrode1of the prior diagram is instead implemented as a set of secondary electrodes1a,1b, and optionally up to1n(where n is a positive integer greater than or equal to 3). Similarly, the other secondary sensor electrodes2,3,4are replaced by different respective sets of secondary electrodes. For example, secondary sensor electrode2of the prior diagram is instead implemented as a set of secondary electrodes2a,2b, and optionally up to2n(where n is a positive integer greater than or equal to 3). Similarly, the other secondary sensor electrodes are instead implemented with respective sets of secondary electrodes. The use of more than one secondary sensor electrode around the primary sensor electrode0allows for greater granularity regarding the position, orientation, tilt, etc. of the primary sensor electrode0within the e-pen1722. For example, as the primary sensor electrode0tilts within the e-pen1722, information provided from signals that are driven and simultaneously sensed via the respective secondary sensor electrodes surrounding the primary sensor electrode0allows for greater determination of the position, orientation, tilt, etc. of the primary sensor electrode0within the e-pen1722. In general, while these diagrams show sets of four secondary sensor electrodes encompassing the primary sensor electrode0within the e-pen1722, note that any desired number of secondary sensor electrodes may be implemented within alternative embodiments of an e-pen (e.g., 3, 5, etc. or any other number of sets of secondary sensor electrodes). Note also that the arrangement of the different respective secondary sensor electrodes may not be uniform throughout the e-pen1722. For example, a first set of secondary electrodes encompassing the primary sensor electrode0within the e-pen1722may include 4 secondary sensor electrodes, and a second set of secondary electrodes encompassing the primary sensor electrode0within the e-pen1722may include 5 secondary sensor electrodes, etc. The arrangement and configuration of the different respective secondary sensor electrodes may vary along the axial length of the primary sensor electrode0within the e-pen1722. 
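As an illustrative, non-limiting example of the greater granularity described above, the following sketch combines per-electrode coupling magnitudes from a ring of secondary sensor electrodes into a simple two-dimensional tilt estimate for the primary sensor electrode. The geometry, magnitudes, and scaling are hypothetical.

    # Hypothetical sketch: estimate tilt direction from secondary electrode magnitudes.
    import numpy as np

    def estimate_tilt(secondary_magnitudes):
        """Given per-electrode coupling magnitudes for electrodes spaced evenly around the
        primary electrode, return a 2-D tilt vector (unitless) pointing toward the electrodes
        that the primary electrode has moved closer to (stronger coupling)."""
        mags = np.asarray(secondary_magnitudes, dtype=float)
        angles = 2 * np.pi * np.arange(len(mags)) / len(mags)  # electrode positions around the ring
        weights = mags - mags.mean()                            # deviation from the untilted baseline
        return np.array([np.sum(weights * np.cos(angles)), np.sum(weights * np.sin(angles))])

    # Four secondary electrodes: equal magnitudes imply no tilt; a stronger electrode 1
    # implies a tilt toward electrode 1.
    print(estimate_tilt([0.30, 0.30, 0.30, 0.30]))   # approximately [0, 0]
    print(estimate_tilt([0.42, 0.30, 0.18, 0.30]))   # approximately [0.24, 0]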
The following two diagrams have some similarity to the previous two diagrams with at least one difference being that one or more sensor electrodes are implemented at an erasing end of an e-pen. For example, the previous two diagrams show various examples of sensor electrodes implemented at a writing end of an e-pen, and the following two diagrams include both writing and erasing capability. For example, writing capability includes operation of the e-pen, display device, etc. in such a way as to produce content on the display device based on interaction of the e-pen with the display device, whereas erasing capability includes operation of the e-pen, display device, etc. in such a way as to remove content from the display device based on interaction of the e-pen with the display device. For example, consider an example where writing operation produces content that is visible on the display device based on interaction of the e-pen with the display device (e.g., provides content based on the path that the e-pen travels on a touchscreen, display, etc.). In contradistinction, consider an example where erasing operation removes content that is visible on the display device based on interaction of the e-pen with the display device (e.g., removes content based on the path that the e-pen travels on a touchscreen, display, etc.). In certain examples, note that different respective ends of an e-pen are implemented for writing and erasing operation. However, in some examples, note that both writing operation and erasing operation may be implemented using the same end of an e-pen (e.g., such as by toggling between the two operations based on user selection, operation of a switch, a toggle between writing operation and erasing operation, etc.). In some examples, the e-pen includes one or more means thereon (e.g., one or more buttons, one or more switches, etc.) by which a user may select writing operation or erasing operation. In other examples, user interaction with a touch sensor device associated with the e-pen effectuates selection of writing operation or erasing operation (e.g., a button, user interface, etc. shown on a display of a touch sensor device allows a user to select writing operation or erasing operation). FIG.18Ais a schematic block diagram of another embodiment1801of an e-pen in accordance with the present invention. This diagram includes an e-pen1822that may optionally include a power source1818therein (e.g., such as to provide power to one or more DSCs coupled to the respective sensor electrodes). The e-pen1822includes multiple respective sensor electrodes, including sensor electrodes at both ends of the e-pen1822. For example, the e-pen1822includes sensor electrodes01,11a,21a,31a, and41aand optionally up to11n,21n,31n, and41non the writing end of the e-pen1822. The sensor electrode01may be viewed as being a center sensor electrode01, a primary sensor electrode01, implemented within a pivoting chassis allowing movement within the e-pen1822on the writing end of the e-pen1822. Similarly, on an erasing end of the e-pen1822, the e-pen1822includes sensor electrodes02,12a,22a,32a, and42aand optionally up to12n,22n,32n, and42n. The sensor electrode02may be viewed as being a center sensor electrode02, a primary sensor electrode02, implemented within a pivoting chassis allowing movement within the e-pen1822on the erasing end of the e-pen1822. The construction and implementation of the different respective ends of the e-pen1822may be similar, but the functionality thereof is different. On one end of the e-pen1822, the respective signals that are driven and simultaneously sensed via the DSCs associated with those sensor electrodes correspond to writing operations. On the other end of the e-pen1822, the respective signals that are driven and simultaneously sensed via the DSCs associated with those sensor electrodes correspond to erasing operations. FIG.18Bis a schematic block diagram of another embodiment1802of an e-pen in accordance with the present invention. 
This diagram includes an e-pen1822-1that may optionally include a power source1828therein (e.g., such as to provide power to one or more DSCs coupled to the respective sensor electrodes). The e-pen1822-1includes multiple respective sensor electrodes including a singular sensor electrode at the erasing end of the e-pen1822-1. For example, the e-pen1822-1includes sensor electrodes01,11a,21a,31a, and41aand optionally up to11n,21n,31n, and41non the writing end of the e-pen1822-1. The sensor electrode01may be viewed as being a center sensor electrode01, a primary sensor electrode01, implemented within a pivoting chassis allowing movement within the e-pen1822-1on the writing end of the e-pen1822-1. However, on an erasing end of the e-pen1822-1, the e-pen1822-1includes a single sensor electrode02-1, which may be viewed as being a center sensor electrode02-1, a primary sensor electrode02-1, implemented on the erasing end of the e-pen1822-1. The construction and implementation of the different respective ends of the e-pen1822-1is different in this diagram. On one end of the e-pen1822-1, the respective signals that are driven and simultaneously sensed via the DSCs associated with those sensor electrodes correspond to writing operations. On the other end of the e-pen1822-1, the singular signal provided via the DSC associated with the single sensor electrode02-1corresponds to erasing operations. In general, note that any combination of writing and/or erasing ends and implementations thereof of an e-pen may be implemented. For example, an e-pen may include a primary sensor electrode surrounded by one single set of secondary sensor electrodes. An e-pen may alternatively include a primary sensor electrode surrounded by multiple sets of secondary sensor electrodes. Any such e-pen may include any desired implementation of an erasing end (e.g., which may be implemented using a single sensor electrode or multiple sensor electrodes). FIG.19is a schematic block diagram of embodiments1900of different sensor electrode arrangements within e-pens in accordance with the present invention. This diagram shows different respective numbers of secondary electrodes implemented to surround a primary sensor electrode. Note that such a primary sensor electrode may be implemented in a pivot-capable type chassis that allows movement of it with respect to the secondary sensor electrodes that surround it. As can be seen with respect to reference numeral1901, 4 secondary sensor electrodes are implemented around a primary sensor electrode. In this example, the respective secondary sensor electrodes are approximately and/or substantially of a common size and shape, being rectangular in shape, and distributed evenly around the primary sensor electrode. Reference numeral1902shows 6 secondary electrodes implemented around the primary sensor electrode. In this example, the respective secondary sensor electrodes are also approximately and/or substantially of a common size and shape, being rectangular in shape, and distributed evenly around the primary sensor electrode. Reference numeral1903shows 8 secondary electrodes implemented around the primary sensor electrode. In this example, the respective secondary sensor electrodes are also approximately and/or substantially of a common size and shape, being rectangular in shape, and distributed evenly around the primary sensor electrode. Reference numeral1904shows 4 secondary sensor electrodes implemented around a primary sensor electrode. 
In this example, the respective secondary sensor electrodes are not of a common size and shape. The sensor electrodes (SEs)1and3are approximately and/or substantially of a common first size and first shape, and the SEs2and4are approximately and/or substantially of a common second size and second shape. Note that while they are all approximately and/or substantially rectangular in shape, they are of different sizes and shapes. Reference numeral1905shows 3 secondary electrodes implemented around the primary sensor electrode. In this example, the respective secondary sensor electrodes are also approximately and/or substantially of a common size and shape, each being curved and partially concentric around the primary sensor electrode and distributed evenly around the primary sensor electrode. Reference numeral1906shows 4 secondary electrodes implemented around the primary sensor electrode. In this example, the respective secondary sensor electrodes are also approximately and/or substantially of a common size and shape, each being curved and partially concentric around the primary sensor electrode and distributed evenly around the primary sensor electrode. With respect to any of these examples or their equivalents, note that more than one set of secondary sensor electrodes may be implemented along the axis of the primary sensor electrode. For example, as described above with respect to certain diagrams that include more than one set of secondary sensor electrodes encompassing the primary sensor electrode, any of these examples or their equivalents may also include more than one set of secondary sensor electrodes encompassing the primary sensor electrode. Note also that, along the axis of the primary sensor electrode, the different respective sets of secondary sensor electrodes may vary. For example, a set of secondary sensor electrodes based on the implementation of reference numeral1901may be implemented first, a set of secondary sensor electrodes based on the implementation of reference numeral1905may be implemented next, and so on. Any desired implementation having varied types of secondary sensor electrodes may be implemented as desired in various embodiments and examples. FIG.20is a schematic block diagram of an embodiment2000of an e-pen interacting with touch sensors in accordance with the present invention. This diagram shows an e-pen2012implemented to interact with one or more touch sensors2010(e.g., touch sensor electrodes). Note that the one or more touch sensors2010may be implemented within any type of device as described herein. This diagram shows a cross-section of rows and columns of a touchscreen and portions of the associated one or more touch sensors2010. Multiple DSCs are implemented simultaneously to drive and to sense signals provided via the respective sensor electrodes of the e-pen2012and the touch sensors. Note that the different respective signals provided to the respective sensor electrodes of the e-pen2012and the touch sensors may be differentiated using any one or more characteristics as described herein including frequency, amplitude, DC offset, modulation, FEC/ECC, type, etc. and/or any other characteristic that may be used to differentiate signals provided to different respective e-pen sensor electrodes and touch sensors. For example, unique respective signals are provided to the column and row sensor electrodes of a touchscreen. 
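As a non-limiting illustration of how such unique signal assignments might be organized when frequency is the differentiating characteristic, a short sketch follows. The electrode labels mirror the sr, sc, and sp designations used in this description; the base frequency, spacing, and function name are assumptions for illustration only, not values taken from any embodiment.

```python
# Hypothetical sketch: assign a unique frequency to every row, column, and e-pen
# sensor electrode so that simultaneously driven-and-sensed signals can be told apart.
# The base frequency and spacing are illustrative assumptions.

def assign_frequencies(num_rows: int, num_cols: int, pen_electrodes: list,
                       base_hz: float = 100_000.0,
                       spacing_hz: float = 5_000.0) -> dict:
    """Return a mapping from electrode name (sr*, sc*, sp*) to a unique frequency."""
    names = (
        [f"sr{i + 1}" for i in range(num_rows)] +
        [f"sc{i + 1}" for i in range(num_cols)] +
        [f"sp{e}" for e in pen_electrodes]
    )
    return {name: base_hz + i * spacing_hz for i, name in enumerate(names)}

if __name__ == "__main__":
    table = assign_frequencies(3, 3, ["0", "1", "2"])
    for name, freq in table.items():
        print(f"{name}: {freq / 1000:.0f} kHz")
```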
The signals provided to the row sensor electrodes of the touchscreen are depicted as sr1, sr2, and so on, and the signals provided to the column sensor electrodes are depicted as sc1, sc2, and so on. The signals provided to the sensor electrodes of the e-pen2012are depicted as sp0, sp1, sp2, and so on. Again, note that coupling of signals from the sensor electrodes of the e-pen2012may be made into the column and row sensor electrodes of the touchscreen, and vice versa. For example, signals coupled from the column and row sensor electrodes of the touchscreen into the sensor electrodes of the e-pen2012may be detected by the one or more DSCs that are configured simultaneously to drive and to sense signals via the respective sensor electrodes of the e-pen2012. One or more processing modules associated with a device that includes the one or more touch sensors2010and the e-pen2012is configured to process information associated with the signals that are driven and simultaneously sensed by the DSCs that are associated with the column and row sensor electrodes of the touchscreen and the sensor electrodes of the e-pen2012to determine various information including the location of the e-pen2012with respect to the touchscreen, the orientation, tilt, etc. of the e-pen2012, which particular signals are coupled from the one or more touch sensors2010to the e-pen2012, and vice versa, the amount of signal coupling from the one or more touch sensors2010to the e-pen2012, and vice versa, etc. In an example of operation and implementation, one or more processing modules is configured to process information corresponding to one or more signals that are detected as being coupled from the e-pen2012to the row and column electrodes of the touch sensors2010. Based on a mapping (e.g., x-y, a two-dimensional mapping) of the row and column electrodes relative to the touchscreen, and based on the particular locations at which those one or more signals are detected as being coupled from the e-pen2012to the row and column electrodes, the one or more processing modules is configured to determine particularly the locations of the sensor electrodes of the e-pen2012based on the row and column electrodes of the touch sensors2010. For example, the location of coupling of a signal from a sensor electrode of the e-pen2012may be determined based on that signal being detected within a particular row electrode and column electrode. The cross-section of that row electrode and that column electrode, based on the mapping of the row and column electrodes, provides the location of the sensor electrode of the e-pen2012. This process may be performed with respect to any one or more of the different respective signals coupled from the sensor electrodes of the e-pen2012to the row and column electrodes of the touch sensors2010. FIG.21is a schematic block diagram of another embodiment2100of an e-pen interacting with touch sensors in accordance with the present invention. The top portion of this diagram includes a side view of an e-pen2122, which may optionally include a power source2128as needed, that is interacting with one or more touch sensors2110. This includes a cross-section of rows and columns of a touchscreen. In this implementation, the e-pen2122includes a primary sensor electrode and one or more sets of secondary sensor electrodes that surround the primary sensor electrode. The tilt, angle, etc. 
of the e-pen2122relative to the touchscreen changes the capacitance between the respective sensor electrodes of the e-pen and the row and column sensor electrodes of the touchscreen. Considering an example in which the e-pen2122is perfectly normal to the surface of the touchscreen, meaning the axis of the primary sensor electrode is perpendicular to the surface of the touchscreen in all respects, then the capacitance between the secondary sensor electrodes and the row and column electrodes of the touchscreen would be the same (e.g., assuming a uniform implementation of the secondary sensor electrodes within the e-pen2122). However, as the e-pen2122is tilted relative to the surface of the touchscreen, then some of the secondary sensor electrodes will be closer to the row and column electrodes of the touchscreen than others. This can be seen in the diagram when considering the secondary sensor electrodes closer to the writing end/tip of the e-pen2122that surround the primary sensor electrode as the e-pen2122is tilted. For example, consider secondary sensor electrodes SE11and SE21. As the e-pen2122is tilted, SE11will be farther from the surface of the touchscreen than SE21. More effective capacitive coupling will be provided from SE21to the proximately located row and column electrodes of the touchscreen than from SE11in this instance. The bottom portion of this diagram includes a top view of the coupling of signals from the e-pen sensor electrodes to the row and column electrodes of the touchscreen, and vice versa. Note that there will be a spatial mapping of signals coupled from the e-pen2122to the row and column electrodes of the touchscreen, and vice versa. This diagram does not specifically show the different intensities of the respective signals on a per sensor electrode basis, but shows the general area via which coupling of signals is made between the e-pen2122and the row and column electrodes of the touchscreen. Note that as the e-pen2122is interacting with the touchscreen and as the location of the e-pen2122changes, such as from user control of the e-pen2122in writing, drawing, erasing, etc. operations, the profile of coupling of signals between the e-pen2122and the row and column electrodes of the touchscreen will be changing. A dynamic mapping of the profile of signals between the e-pen2122and the row and column electrodes of the touchscreen, as well as identification of particular signals being within the profile, may be used to provide for the specific location of the sensor electrodes of the e-pen2122at any given time and as a function of time. The spatial mapping of these signals provides information related to the location of the e-pen2122with respect to the touchscreen and also provides information related to the tilt, orientation, position, etc. of the e-pen2122. As described with respect to other embodiments, examples, etc., note that particular configurations of sensor electrodes within an e-pen may provide for greater granularity and resolution regarding its particular location with respect to the touchscreen and also information related to its tilt, orientation, position, etc. FIG.22is a schematic block diagram of another embodiment2200of an e-pen interacting with touch sensors in accordance with the present invention. This diagram shows particularly the coupling between the respective sensor electrodes of an e-pen and the row and column electrodes of the touchscreen. 
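A minimal sketch of the location determination described above is provided below, assuming the per-row and per-column coupled-signal magnitudes have already been extracted by the DSC processing. The weighted-average refinement, the electrode pitch value, and the function name are illustrative assumptions rather than details of the depicted embodiment.

```python
# Hypothetical sketch: locate an e-pen sensor electrode from the row and column
# electrodes at which its coupled signal is detected, refined by a
# magnitude-weighted average. The magnitudes and pitch value are illustrative.

def locate_electrode(row_magnitudes: dict, col_magnitudes: dict,
                     pitch_mm: float = 4.0) -> tuple:
    """Return an (x, y) estimate from the detected coupling magnitudes."""
    def weighted_index(mags: dict) -> float:
        total = sum(mags.values())
        return sum(i * m for i, m in mags.items()) / total if total else 0.0

    x = weighted_index(col_magnitudes) * pitch_mm
    y = weighted_index(row_magnitudes) * pitch_mm
    return x, y

if __name__ == "__main__":
    rows = {4: 0.1, 5: 0.8, 6: 0.1}    # coupled-signal magnitude per row electrode
    cols = {9: 0.2, 10: 0.7, 11: 0.1}  # coupled-signal magnitude per column electrode
    print(locate_electrode(rows, cols))  # approximate (x, y) of the pen electrode
```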
At the top of the diagram, the profile of the coupling of the signals is shown, which is similar to the profile of coupling of signals in the prior diagram. At the bottom portion of the diagram, an enlargement of the coupling of the respective sensor electrodes of the e-pen is shown. The e-pen of this diagram may be viewed as being similar to the e-pen1722ofFIG.17B, the e-pen1822ofFIG.18A(writing end), or the e-pen1822-1ofFIG.18B(writing end) that includes a primary sensor electrode01and multiple sets of secondary sensor electrodes (e.g., a first set including secondary sensor electrodes11,21,31,41, and optionally up to an nth set including secondary sensor electrodes1n,2n,3n,4n). In an example of operation and implementation, consider an example in which the e-pen is perfectly normal to the surface of the touchscreen, meaning the axis of the primary sensor electrode is perpendicular to the surface of the touchscreen in all respects; then the capacitance between the secondary sensor electrodes and the row and column electrodes of the touchscreen would be the same. There would be very strong capacitive coupling of the signal driven via the primary sensor electrode of the e-pen to the row and/or column electrodes of the touchscreen closest to the primary sensor electrode of the e-pen. In addition, the capacitive coupling from the respective secondary sensor electrodes11,21,31,41would be approximately and/or substantially uniform to the row and/or column electrodes of the touchscreen surrounding the location of the primary sensor electrode of the e-pen. In another example of operation and implementation, based on the location and orientation of the e-pen in the prior diagram, the coupling via the primary sensor electrode01would be greatest among the sensor electrodes of the e-pen given that it is in physical contact with the surface of the touchscreen. As the e-pen2122is tilted, sensor electrode (SE)11will be farther from the surface of the touchscreen than SE21. More effective capacitive coupling will be provided from SE21to the proximately located row and column electrodes of the touchscreen than from SE11in this instance. In addition, more effective coupling will be provided from the sensor electrodes (SEs)11,21,31,41than from the SEs1n,2n,3n,4nbased on the location and orientation of the e-pen in the prior diagram. For example, analysis of a signal profile of the coupling from the sensor electrodes aligned along the e-pen (e.g., SEs11,12, up to1n) provides information regarding the angular position of the e-pen relative to the surface of the touchscreen. Based on a signal profile of the coupling from those sensor electrodes aligned along the e-pen (e.g., SEs11,12, up to1n) being uniform, meaning approximately and/or substantially the same signal coupling from each of those sensor electrodes aligned along the e-pen (e.g., SEs11,12, up to1n), then a determination that the axis of the e-pen is parallel to the surface of the touchscreen may be made. In addition, analysis of how much signal coupling is provided from the respective sensor electrodes will provide information regarding the proximity of the e-pen to the surface of the touchscreen. 
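To illustrate the kind of profile analysis described here, a rough sketch follows that converts the coupling magnitudes of the axially aligned secondary sensor electrodes into an estimated tilt angle. The linear mapping from coupling magnitude to height above the surface and the specific numbers are assumptions for illustration; the geometric relationship is in the spirit of the sin x estimation discussed next.

```python
import math

# Hypothetical sketch: estimate tilt from the coupling profile of secondary sensor
# electrodes aligned along the e-pen axis (e.g., SE11, SE12, ..., SE1n).
# The conversion from coupling magnitude to effective height is an illustrative
# assumption; a real implementation would rely on calibrated electrode data.

def estimate_tilt_deg(couplings: list, electrode_spacing_mm: float,
                      mm_per_coupling_unit: float = 1.0) -> float:
    """Approximate the angle between the e-pen axis and the touchscreen surface.

    A uniform profile (equal couplings) maps to ~0 degrees (axis parallel to the
    surface); a steeply decaying profile maps to a larger angle.
    Assumes at least two coupling values are supplied.
    """
    # Convert each coupling magnitude into an assumed height above the surface.
    heights = [(couplings[0] - c) * mm_per_coupling_unit for c in couplings]
    # Average rise per electrode step along the pen axis.
    rises = [heights[i + 1] - heights[i] for i in range(len(heights) - 1)]
    mean_rise = sum(rises) / len(rises)
    # sin(angle) ~= opposite (rise) / hypotenuse (electrode spacing), clamped.
    ratio = max(-1.0, min(1.0, mean_rise / electrode_spacing_mm))
    return math.degrees(math.asin(ratio))

if __name__ == "__main__":
    print(estimate_tilt_deg([1.0, 1.0, 1.0], 2.0))  # ~0 degrees: parallel profile
    print(estimate_tilt_deg([1.0, 0.4, 0.1], 2.0))  # decaying profile: tilted pen
```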
In general, analysis of the location, signal strength, intensity, and/or other characteristics associated with the different respective signals coupled from the sensor electrodes of the e-pen to the row and column electrodes of the touchscreen provides information regarding the location of the e-pen with respect to the surface of the touchscreen as well as the tilt, orientation, etc. of the e-pen. Considering the two extreme examples described above, one in which the e-pen is normal to the surface of the touchscreen and another in which the e-pen is parallel to the surface of the touchscreen, when the e-pen is located somewhere in between those two extremes, such as in the tilted implementation shown in the prior diagram, analysis of the relationships between those respective signals will provide the information regarding the location of the e-pen with respect to the surface of the touchscreen as well as the tilt, orientation, etc. of the e-pen. In some examples, analysis of the various signals that are coupled from the sensor electrodes of the e-pen to the row and column electrodes of the touchscreen may be associated geometrically with respect to the tilt, orientation, etc. of the e-pen. As an example, considering a signal profile in which the coupling from those sensor electrodes aligned along the e-pen (e.g., SEs11,12, up to1n) degrades by 3 dB as a function of distance (e.g., one half as much capacitive coupling of the signal via SE12is made as the signal via SE11, and one half as much capacitive coupling of the signal via SE13is made as the signal via SE12, etc.), then an estimation of the angle of the e-pen with respect to the surface of the touchscreen can be made. In one example, an estimation of that angle x is made based on the geometric function sin x where the vertical component of a right triangle opposite the angle x corresponds to the difference between the signal coupling via SE12and via SE13(e.g., height, h1, of the right triangle), and the hypotenuse of that same right triangle corresponds to the distance between SE12and SE13within the e-pen (e.g., hypotenuse, h2, of the right triangle). Other geometric estimations may be made using various geometric functions such as cos x, tan x, etc. based on the known or determined physical parameters of an e-pen (e.g., the physical configuration of the sensor electrodes therein, their relationship to one another, their spacing, etc.) and associating those relationships to the characteristics associated with the signals detected as being capacitively coupled from the sensor electrodes of the e-pen to the row and column electrodes of the touchscreen. In an example of operation and implementation, one or more processing modules is configured to perform appropriate processing of the relative signal strengths, intensities, magnitudes, etc. of the signals coupled from the sensor electrodes of the e-pen to the row and column sensor electrodes of the touchscreen to determine the location of the e-pen with respect to the touchscreen as well as the tilt, orientation, etc. of the e-pen. FIG.23is a schematic block diagram of an embodiment of a method2300for execution by one or more devices in accordance with the present invention. The method2300operates in step2310by transmitting a first signal having a first frequency via a sensor electrode of one or more touch sensors. The method2300also operates in step2320by detecting a change of the first signal having the first frequency via the sensor electrode of the one or more touch sensors. 
Note that the operations depicted within the steps2310and2320may be performed in accordance with any of the variations, examples, embodiments, etc. of one or more DSCs as described herein that is/are configured to perform simultaneous transmit and receipt of signals (simultaneous drive and detect of signals). The method2300continues in step2330by processing the change of the first signal having the first frequency to generate digital information corresponding to user interaction and/or e-pen interaction with the sensor electrode of the one or more touch sensors. In some examples, note that such operations as depicted within the steps2310,2320, and2330may be performed using one or more additional signals and one or more sensor electrodes. For example, in some instances, a second signal having a second frequency is associated with a first sensor electrode of an e-pen. In such examples, the method2300also operates in step2314by transmitting the second signal having the second frequency via the first sensor electrode of an e-pen. The method2300also operates in step2324by detecting a change of the second signal having the second frequency via the sensor electrode of the e-pen. The method2300continues in step2334by processing the change of the second signal having the second frequency to generate other digital information corresponding to user interaction with the sensor electrode of the e-pen. As also described elsewhere herein with respect to other examples, embodiments, etc., note that coupling of signals may be performed from sensor electrodes of the e-pen to sensor electrodes of the touch sensors, and vice versa. Detection of signals being coupled from the e-pen to the sensor electrodes of the touch sensors, and vice versa, may be performed by appropriate signal processing including analysis of the digital information corresponding to such user and/or e-pen interaction with the various sensor electrodes. In this method2300, differentiation between the different respective signals provided via the sensor electrode of the touch sensors and the sensor electrode of the e-pen is made in frequency. Variants of the method2300operate by operating a first drive-sense circuit (DSC) of the e-pen, which includes a plurality of e-pen sensor electrodes including a first e-pen sensor electrode and a second e-pen sensor electrode, and a plurality of drive-sense circuits (DSCs), including the first DSC and a second DSC, operably coupled to the plurality of e-pen sensor electrodes, to drive a first e-pen signal having a first frequency via a first single line coupling to the first e-pen sensor electrode and simultaneously sense, via the first single line, the first e-pen signal, wherein based on interaction of the e-pen with a touch sensor device, the first e-pen signal is coupled into at least one touch sensor electrode of the touch sensor device. This also involves operating the first DSC of the e-pen to process the first e-pen signal to generate a first digital signal that is representative of a first electrical characteristic of the first e-pen sensor electrode. This also involves operating the second DSC to drive a second e-pen signal having a second frequency that is different than the first frequency via a second single line coupling to the second e-pen sensor electrode and simultaneously sense, via the second single line, the second e-pen signal, wherein based on the interaction of the e-pen with the touch sensor device, the second e-pen signal is coupled into the at least one touch sensor electrode. 
In addition, this also involves operating the second DSC of the e-pen to process the second e-pen signal to generate a second digital signal that is representative of a second electrical characteristic of the second e-pen sensor electrode. In some examples, this also involves processing at least one of the first digital signal or the second digital signal to detect the interaction of the e-pen with the touch sensor device. Certain other examples also operate by operating a third DSC operably coupled to a first touch sensor electrode of the at least one touch sensor electrode to drive a touch sensor signal having a third frequency via a third single line coupling to the first touch sensor electrode and simultaneously sense, via the third single line, the touch sensor signal, wherein based on the interaction of the e-pen with the touch sensor device, sensing the touch sensor signal includes sensing at least one of the first e-pen signal that is coupled from the first e-pen sensor electrode into the first touch sensor electrode or the second e-pen signal that is coupled from the second e-pen sensor electrode into the first touch sensor electrode. This also involves operating the third DSC process the touch sensor signal to generate a third digital signal that is representative of a third electrical characteristic of the first touch sensor electrode. Also, such examples operate by processing the third digital signal to determine location of at least one of the first e-pen sensor electrode or the second e-pen sensor electrode based on the interaction of the e-pen with the touch sensor device. Even other examples involve a touch sensor device that also includes another plurality of DSCs, including a third DSC and a fourth DSC, operably coupled to a plurality of touch sensor electrodes, including a first touch sensor electrode and a second touch sensor electrode, including the at least one touch sensor electrode. In such examples, the operations also involve operating the third DSC to drive a first touch sensor signal having a third frequency via a third single line coupling to the first touch sensor electrode and simultaneously sense, via the third single line, the first touch sensor signal, wherein based on the interaction of the e-pen with the touch sensor device, sensing the first touch sensor signal includes sensing at least one of the first e-pen signal that is coupled from the first e-pen sensor electrode into the first touch sensor electrode or the second e-pen signal that is coupled from the second e-pen sensor electrode into the second touch sensor electrode. This also involves operating the third DSC to process the first touch sensor signal to generate a third digital signal that is representative of a third electrical characteristic of the first touch sensor electrode. In such examples, this also involves operating the fourth DSC to drive a second touch sensor signal having a fourth frequency that is different than the first frequency via a fourth single line coupling to the second touch sensor electrode and simultaneously sense, via the fourth single line, the second touch sensor signal, wherein based on the interaction of the e-pen with the touch sensor device, sensing the second touch sensor signal includes sensing at least one of the first e-pen signal that is coupled from the first e-pen sensor electrode into the first touch sensor electrode or the second e-pen signal that is coupled from the second e-pen sensor electrode into the second touch sensor electrode. 
Note that this also involves operating the fourth DSC to process the second touch sensor signal to generate a fourth digital signal that is representative of a fourth electrical characteristic of the second touch sensor electrode. In addition, certain variants also include processing the third digital signal and the fourth digital signal to determine location of at least one of the first e-pen sensor electrode or the second e-pen sensor electrode based on the interaction of the e-pen with the touch sensor device and also based on a two-dimensional mapping of a touchscreen of the touch sensor device that uniquely identifies an intersection of the first touch sensor electrode and the second touch sensor electrode. FIG.24is a schematic block diagram of another embodiment of a method2400for execution by one or more devices in accordance with the present invention. This diagram has similarity to the previous diagram with at least one difference being that more than one signal is driven via more than one sensor electrode of the touch sensors, and more than one signal is driven via more than one sensor electrode of the e-pen. In general, note that any desired number of signals may be simultaneously driven and sensed, and differentiated from one another in frequency, via the respective sensor electrodes of the touch sensors and the e-pen. In this example as well as others, note that more than one e-pen may be operative at a given time in conjunction with a given one or more touch sensors. For example, more than one e-pen associated with more than one user may be interactive and operative with a touch sensor device at a time. The method2400operates in step2410by transmitting a first signal having a first frequency via a first sensor electrode of one or more touch sensors. The method2400also operates in step2420by detecting a change of the first signal having the first frequency via the first sensor electrode of the one or more touch sensors. The method2400continues in step2430by processing the change of the first signal having the first frequency to generate digital information corresponding to user interaction and/or e-pen interaction with the first sensor electrode of the one or more touch sensors. In addition, when multiple sensor electrodes of the one or more touch sensors are implemented in a device (e.g., up to n, where n is a positive integer greater than or equal to 2), similar operations as performed with respect to the first sensor electrode may be performed with respect to the one or more additional sensor electrodes of the one or more touch sensors. The method2400operates in step2412by transmitting an nth signal having an nth frequency via an nth sensor electrode of the one or more touch sensors. The method2400also operates in step2422by detecting a change of the nth signal having the nth frequency via the nth sensor electrode of the one or more touch sensors. The method2400continues in step2432by processing the change of the nth signal having the nth frequency to generate digital information corresponding to user interaction and/or e-pen interaction with the nth sensor electrode of the one or more touch sensors. In some examples, note that such operations as depicted within the steps2410,2420, and2430(and optionally2412,2422, and2432) may be performed using one or more additional signals and one or more sensor electrodes. For example, in some instances, additional signals having additional frequencies are associated with respective sensor electrodes of an e-pen. 
In such examples, the method2400also operates in step2414by transmitting an nth signal having an nth frequency via a first sensor electrode of an e-pen. The method2400also operates in step2424by detecting a change of the nth signal having the nth frequency via the first sensor electrode of the e-pen. Note that the operations depicted within the steps2414and2424may be performed in accordance with any of the variations, examples, embodiments, etc. of one or more DSCs as described herein that is/are configured to perform simultaneous transmit and receipt of signals (simultaneous drive and detect of signals). The method2400continues in step2434by processing the change of the nth signal having the nth frequency to generate other digital information corresponding to user interaction with the first sensor electrode of the e-pen. In addition, when multiple sensor electrodes of the e-pen are implemented (e.g., up to x, where x is a positive integer greater than or equal to (x minus n)), similar operations as performed with respect to the first sensor electrode of the e-pen may be performed with respect to the one or more additional sensor electrodes of the e-pen. In such examples, the method2400also operates in step2416by transmitting an xth signal having an xth frequency via a yth sensor electrode of an e-pen (e.g., where x and y are positive integers appropriately selected based on n, n+1, etc.). The method2400also operates in step2426by detecting a change of the xth signal having the xth frequency via the yth sensor electrode of the e-pen. Note that the operations depicted within the steps2416and2426may be performed in accordance with any of the variations, examples, embodiments, etc. of one or more DSCs as described herein that is/are configured to perform simultaneous transmit and receipt of signals (simultaneous drive and detect of signals). The method2400continues in step2436by processing the change of the xth signal having the xth frequency to generate other digital information corresponding to user interaction with the yth sensor electrode of the e-pen. In general, note that the different respective signals that are simultaneously driven and sensed via the respective sensor electrodes of the touch sensors and/or the e-pen are differentiated in terms of frequency. Note that the operations depicted within the steps2410and2420,2412and2422,2414and2424, and2416and2426may be performed in accordance with any of the variations, examples, embodiments, etc. of one or more DSCs as described herein that is/are configured to perform simultaneous transmit and receipt of signals (simultaneous drive and detect of signals). The two diagrams below have some similarities to the previous two diagrams with at least one difference being that the respective e-pen and touch sensor signals are differentiated by one or more characteristics that may include any one or more of frequency, amplitude, DC offset, modulation, modulation & coding set/rate (MCS), forward error correction (FEC) and/or error checking and correction (ECC), type, etc. FIG.25is a schematic block diagram of another embodiment of a method2500for execution by one or more devices in accordance with the present invention. The method2500operates in step2510by transmitting a first signal having a first one or more characteristics via a sensor electrode of one or more touch sensors. 
The method2500also operates in step2520by detecting a change of the first signal having the first one or more characteristics via the sensor electrode of the one or more touch sensors. Note that the operations depicted within the steps2510and2520may be performed in accordance with any of the variations, examples, embodiments, etc. of one or more DSCs as described herein that is/are configured to perform simultaneous transmit and receipt of signals (simultaneous drive and detect of signals). The method2500continues in step2530by processing the change of the first signal having the first one or more characteristics to generate digital information corresponding to user interaction and/or e-pen interaction with the sensor electrode of the one or more touch sensors. In some examples, note that such operations as depicted within the steps2510,2520, and2530may be performed using one or more additional signals and one or more sensor electrodes. For example, in some instances, a second signal having a second one or more characteristics is associated with a first sensor electrode of an e-pen. In such examples, the method2500also operates in step2514by transmitting the second signal having the second one or more characteristics via the first sensor electrode of the e-pen. The method2500also operates in step2524by detecting a change of the second signal having the second one or more characteristics via the sensor electrode of the e-pen. The method2500continues in step2534by processing the change of the second signal having the second one or more characteristics to generate other digital information corresponding to user interaction with the sensor electrode of the e-pen. As also described elsewhere herein with respect to other examples, embodiments, etc., note that coupling of signals may be performed from sensor electrodes of the e-pen to sensor electrodes of the touch sensors, and vice versa. Detection of signals being coupled from the e-pen to the sensor electrodes of the touch sensors, and vice versa, may be performed by appropriate signal processing including analysis of the digital information corresponding to such user and/or e-pen interaction with the various sensor electrodes. In this method2500, differentiation between the different respective signals provided via the sensor electrode of the touch sensors and the sensor electrode of the e-pen is made based on the one or more characteristics. FIG.26is a schematic block diagram of another embodiment of a method2600for execution by one or more devices in accordance with the present invention. This diagram has similarity to the previous diagram with at least one difference being that more than one signal is driven via more than one sensor electrode of the touch sensors, and more than one signal is driven via more than one sensor electrode of the e-pen. In general, note that any desired number of signals may be simultaneously driven and sensed, and differentiated from one another based on one or more characteristics, via the respective sensor electrodes of the touch sensors and the e-pen. In this example as well as others, note that more than one e-pen may be operative at a given time in conjunction with a given one or more touch sensors. For example, more than one e-pen associated with more than one user may be interactive and operative with a touch sensor device at a time. The method2600operates in step2610by transmitting a first signal having a first one or more characteristics via a first sensor electrode of one or more touch sensors. 
The method2600also operates in step2620by detecting a change of the first signal having the first one or more characteristics via the first sensor electrode of the one or more touch sensors. The method2600continues in step2630by processing the change of the first signal having the first one or more characteristics to generate digital information corresponding to user interaction and/or e-pen interaction with the first sensor electrode of the one or more touch sensors. In addition, when multiple sensor electrodes of the one or more touch sensors are implemented in a device (e.g., up to n, where n is a positive integer greater than or equal to 2), similar operations as performed with respect to the first sensor electrode may be performed with respect to the one or more additional sensor electrodes of the one or more touch sensors. The method2600operates in step2612by transmitting an nth signal having an nth one or more characteristics via an nth sensor electrode of the one or more touch sensors. The method2600also operates in step2622by detecting a change of the nth signal having the nth one or more characteristics via the nth sensor electrode of the one or more touch sensors. The method2600continues in step2632by processing the change of the nth signal having the nth one or more characteristics to generate digital information corresponding to user interaction and/or e-pen interaction with the nth sensor electrode of the one or more touch sensors. In some examples, note that such operations as depicted within the steps2610,2620, and2630(and optionally2612,2622, and2632) may be performed using one or more additional signals and one or more sensor electrodes. For example, in some instances, additional signals having additional one or more characteristics are associated with respective sensor electrodes of an e-pen. In such examples, the method2600also operates in step2614by transmitting an nth signal having an nth one or more characteristics via a first sensor electrode of an e-pen. The method2600also operates in step2624by detecting a change of the nth signal having the nth one or more characteristics via the first sensor electrode of the e-pen. Note that the operations depicted within the steps2614and2624may be performed in accordance with any of the variations, examples, embodiments, etc. of one or more DSCs as described herein that is/are configured to perform simultaneous transmit and receipt of signals (simultaneous drive and detect of signals). The method2600continues in step2634by processing the change of the nth signal having the nth one or more characteristics to generate other digital information corresponding to user interaction with the first sensor electrode of the e-pen. In addition, when multiple sensor electrodes of the e-pen are implemented (e.g., up to x, where x is a positive integer greater than or equal to (x minus n)), similar operations as performed with respect to the first sensor electrode of the e-pen may be performed with respect to the one or more additional sensor electrodes of the e-pen. In such examples, the method2600also operates in step2616by transmitting an xth signal having an xth one or more characteristics via a yth sensor electrode of an e-pen (e.g., where x and y are positive integers appropriately selected based on n, n+1, etc.). The method2600also operates in step2626by detecting a change of the xth signal having the xth one or more characteristics via the yth sensor electrode of the e-pen. 
Note that the operations depicted within the steps2616and2626may be performed in accordance with any of the variations, examples, embodiments, etc. of one or more DSCs as described herein that is/are configured to perform simultaneous transmit and receipt of signals (simultaneous drive and detect of signals). The method2600continues in step2636by processing the change of the xth signal having the xth one or more characteristics to generate other digital information corresponding to user interaction with the yth sensor electrode of the e-pen. In general, note that the different respective signals that are simultaneously driven and sensed via the respective sensor electrodes of the touch sensors and/or the e-pen are differentiated in terms of one or more characteristics. Note that the operations depicted within the steps2610and2620,2612and2622,2614and2624, and2616and2626may be performed in accordance with any of the variations, examples, embodiments, etc. of one or more DSCs as described herein that is/are configured to perform simultaneous transmit and receipt of signals (simultaneous drive and detect of signals). In addition, it is noted that with respect to any of the various embodiments, examples, etc. described herein and their equivalents, there may be instances in which a first at least one signal is simultaneously driven and sensed in accordance with DSC operation as described herein, or its equivalent, while a second at least one signal is only driven or transmitted. For example, note that alternative variations may include situations in which one or more signals are implemented using DSC operation as described, or its equivalent, and one or more other signals are implemented using an alternative technology including only transmission capability. Note that any combination of one or more DSCs and one or more other circuitries implemented to operate two or more signals within a system may be employed in a desired embodiment. FIG.27is a schematic block diagram of an embodiment2700of directional mapping determination (e.g., North, South, East, and West (NSEW)) and orientation determination of an e-pen in accordance with the present invention. This diagram shows a user interacting with one or more touch sensors2710of a device using an e-pen2702. With respect to the device that includes the one or more touch sensors2710, NSEW directionality is shown as North being towards the top, South being towards the bottom, West being towards the left, and East being towards the right of the device that includes the one or more touch sensors2710. Note that alternative types of directionality may be used including those that have different numbers of subdivisions and granularity. For example, with respect to the NSEW directionality described, subdivisions may be included such as a Northwest directionality between North and West, a Northeast directionality between North and East, a North Northwest directionality between North and Northwest, etc. In general, any desired directionality and granularity may be used in accordance with such a device. Alternatively, other nomenclature of directionality may be used such as a direction1, direction2, direction3, direction4, etc. In an example of operation and implementation, as a user interacts with a device that includes one or more touch sensors2710using an e-pen2702, two different tests are performed in accordance with the e-pen2702interaction with the device that includes the one or more touch sensors2710. 
As described above with respect to other examples, embodiments, etc., note that one or more processing modules, which may include integrated memory and/or be coupled to memory, is in communication with one or more DSCs that are implemented to perform simultaneous driving and sensing of signals via sensor electrodes of the one or more touch sensors and one or more sensor electrodes of the e-pen2702. The first test (test1) corresponds to determining the e-pen NSEW mapping with respect to the device, as shown by reference numeral2750. The second test (test2) corresponds to determining the e-pen orientation (e.g., tilt, angle, etc.), as shown by reference numeral2760. Note that these different respective tests may be performed in a variety of different manners. In various examples, the e-pen2702, the touch sensor device with which the e-pen2702is configured to interact, or both the e-pen and the touch sensor device are configured to facilitate the test1and test2. In one example, one or more processing modules associated with the e-pen2702is configured to facilitate both test1and test2. In another example, one or more processing modules associated with the e-pen2702is configured to facilitate test1, and one or more processing modules associated with the touch sensor device is configured to facilitate test2. In yet another example, one or more processing modules associated with the touch sensor device is configured to facilitate test1, and one or more processing modules associated with the e-pen2702is configured to facilitate test2. In yet another example, one or more processing modules associated with the touch sensor device is configured to facilitate both test1and test2. In yet another example, one or more processing modules associated with both the e-pen2702and the touch sensor device is configured to facilitate both test1and test2. In general, any cooperation of one or more processing modules associated with either the e-pen2702or the touch sensor device may be configured to facilitate test1and test2. As also described elsewhere herein with respect to other embodiments, examples, etc., note that different respective signals may be associated with the different respective electrodes (e.g., row and column electrodes) of the one or more touch sensors2710and the sensor electrodes of the e-pen. In general, the first test associated with the e-pen NSEW mapping determination corresponds to the determination of the positions of the respective sensor electrodes of the e-pen2702with respect to the touchscreen. For example, this may involve determination of where on the touchscreen the respective signals from the sensor electrodes of the e-pen2702are being coupled into the sensor electrodes of the one or more touch sensors2710. When information corresponding to the assignment of signals to the respective sensor electrodes of the e-pen2702is known, then based on detection of where those signals are being coupled into the touchscreen, the mapping of the respective sensor electrodes of the e-pen2702with respect to the NSEW directionality of the touchscreen may be determined. 
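As a non-limiting sketch of the first test, the following assumes the touchscreen locations at which each assigned e-pen signal is detected are already known, and classifies each secondary electrode as North, South, East, or West of the primary electrode. The signal names, coordinates, and the convention that y increases toward the top of the device are illustrative assumptions.

```python
# Hypothetical sketch of the test1 idea: derive the NSEW direction of each
# secondary e-pen electrode relative to the primary electrode from where its
# assigned signal is detected on the touchscreen grid.

def nsew_mapping(detected: dict, primary: str = "sp0") -> dict:
    """Map each secondary e-pen signal to N, S, E, or W of the primary signal.

    Assumes y increases toward the top (North) of the device.
    """
    px, py = detected[primary]
    mapping = {}
    for name, (x, y) in detected.items():
        if name == primary:
            continue
        dx, dy = x - px, y - py
        if abs(dx) >= abs(dy):
            mapping[name] = "E" if dx > 0 else "W"
        else:
            mapping[name] = "N" if dy > 0 else "S"
    return mapping

if __name__ == "__main__":
    # Touchscreen coordinates where each assigned pen signal was detected.
    detections = {"sp0": (50.0, 50.0), "sp1": (50.2, 53.1),
                  "sp2": (53.4, 50.1), "sp3": (50.1, 46.8), "sp4": (46.5, 49.9)}
    print(nsew_mapping(detections))  # e.g., {'sp1': 'N', 'sp2': 'E', 'sp3': 'S', 'sp4': 'W'}
```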
Note that if the sensor electrode mapping within the e-pen2702is unknown, testing may be performed including coupling a primary signal via a primary sensor electrode of the e-pen2702to establish a base/reference location of the e-pen with respect to the touchscreen, and then one or more secondary signals may be coupled via one or more secondary sensor electrodes of the e-pen2702to determine the orientation of the e-pen2702including where particularly the secondary sensor electrodes are located with respect to the touchscreen. If desired in some embodiments, note that signals may be driven simultaneously via two or more of the sensor electrodes of the e-pen2702in accordance with making such determinations. In addition, note that time multiplexed operation may be performed such that the first signal is preliminarily driven via the primary sensor electrode of the e-pen2702, then a second signal is subsequently driven via a first secondary sensor electrode of the e-pen2702, and so on, such that only one particular signal is driven through one of the sensor electrodes of the e-pen2702at a given time in this testing procedure for the e-pen NSEW mapping determination2750. In such a case when only one particular signal is driven through one of the sensor electrodes of the e-pen2702at a given time, then differentiation between that signal and others may not be needed. For example, when only one signal is operated at a given time, then the need to differentiate that signal from others may be obviated. However, when simultaneous operation is performed by driving more than one signal via more than one sensor electrode of the e-pen2702, differentiation between the respective signals will facilitate better performance and allow for simultaneous detection and processing. Note that the differentiation between the respective signals may be made using any of the various means described herein including frequency, amplitude, DC offset, modulation, modulation & coding set/rate (MCS), forward error correction (FEC) and/or error checking and correction (ECC), type, etc. Note also that when a signal is driven continually via the primary sensor electrode of the e-pen, then based on detection of that signal being coupled into the touchscreen, and based on knowledge of the physical mapping of the secondary sensor electrodes within the e-pen2702, appropriately driving signals via the secondary sensor electrodes of the e-pen in a known manner, accompanied by detection of those signals as they are being coupled into the touchscreen, will provide for the e-pen NSEW mapping determination2750. With respect to the second test, the e-pen orientation determination2760, depending on the orientation, tilt, angle, etc. of the e-pen relative to the touchscreen surface, there will be different capacitances between the sensor electrodes of the e-pen2702and the sensor electrodes of the touchscreen. These different capacitances will result in different degrees of capacitive coupling between signals that are transmitted via the sensor electrodes of the e-pen2702to the sensor electrodes of the touchscreen. Based on the e-pen NSEW mapping determination2750, changes and differences of the capacitances between the respective sensor electrodes of the e-pen2702as well as between the sensor electrodes of the e-pen2702and the sensor electrodes of the touchscreen may be detected based on the simultaneous driving and sensing of signals via these respective electrodes. 
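The time multiplexed variant of this testing procedure might be organized as sketched below, where only one e-pen sensor electrode is driven at a time. The drive and locate callables are placeholders standing in for DSC control and touchscreen-side detection; they are not an actual DSC API, and the dwell time is an arbitrary illustrative value.

```python
import time

# Hypothetical sketch: drive the primary electrode first to establish a reference
# location, then drive each secondary electrode in turn so that no two pen
# signals are active at once and no signal differentiation is required.

def time_multiplexed_probe(electrodes: list, drive, locate,
                           dwell_s: float = 0.001) -> dict:
    """Return the detected touchscreen location for each e-pen electrode."""
    locations = {}
    for electrode in electrodes:       # electrodes[0] is the primary/reference
        drive(electrode, enable=True)  # only this electrode is driven now
        time.sleep(dwell_s)            # allow the coupled signal to be detected
        locations[electrode] = locate(electrode)
        drive(electrode, enable=False)
    return locations

if __name__ == "__main__":
    # Stand-in callables for demonstration purposes only.
    fake_positions = {"sp0": (50.0, 50.0), "sp1": (50.2, 53.1), "sp2": (53.4, 50.1)}
    print(time_multiplexed_probe(["sp0", "sp1", "sp2"],
                                 drive=lambda e, enable: None,
                                 locate=lambda e: fake_positions[e]))
```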
Once again, note that the coupling of signals may be performed not only from the sensor electrodes of the e-pen2702to the sensor electrodes of the touchscreen, but also from the sensor electrodes of the touchscreen to the sensor electrodes of the e-pen2702. The precision and capability of the simultaneous driving and sensing as may be performed using DSCs as described herein and their equivalents allows for highly accurate detection of particularly which signals are being coupled from the e-pen2702to the touchscreen, or vice versa, and also the specific location via which that coupling is being made. Various methods are described within certain of the following diagrams showing different manners by which the test1and test2may be implemented. In some examples, certain of the operations are performed using an independent/smart e-pen and/or a dependent e-pen. In other examples, some of the operations are performed using an independent/smart e-pen, while others of the operations are performed using a dependent e-pen. In addition, note that there may be instances in which a handshake, association, etc. between the e-pen and the touch sensor device is performed preliminarily, such as providing feedback from one to the other, or vice versa, before one or both of the tests is performed. For example, in an implementation in which one or more processing modules associated with the e-pen is implemented to perform processing of the two tests, then, for example, for the first test, the e-pen receives some feedback from the touch sensor device of at least one of the signals to be coupled from the touch sensor device to the e-pen. For example, when the e-pen is to perform identification and processing of signals coupled from the touch sensor device to the e-pen, and based on the e-pen knowing the physical mapping of the sensor electrodes of the e-pen, the e-pen is then configured to receive those signals and associate them with the physical mapping of the sensor electrodes of the e-pen, process that information in accordance with performing the second test, and then transmit that determined information back to the touch sensor device (e.g., via one or more of the sensor electrodes of the e-pen). FIG.28is a schematic block diagram of another embodiment of a method2800for execution by one or more devices in accordance with the present invention. The method2800operates in step2810by transmitting e-pen signals having different characteristic(s) via sensor electrodes of e-pen (e.g., primary e-pen signal via primary sensor electrode, secondary signal(s) via one or more secondary sensor electrodes, erasure signal(s) via one or more erasure sensor electrodes, etc.). The method2800operates in step2820by detecting change(s) of e-pen signals having different characteristic(s) via sensor electrodes of e-pen. Note that this also includes detection of any change(s) such as those that may include effects caused by touch sensor signals. The method2800operates in step2830by transmitting touch sensor signals having different characteristic(s) via sensor electrodes of touch sensors (e.g., rows and columns sensor electrodes of touchscreen). The method2800operates in step2840by detecting change(s) of touch sensor signals having different characteristic(s) via sensor electrodes of touch sensors (e.g., any change(s) include effects caused by e-pen signals). 
The method2800operates in step2850by processing the change(s) of touch sensor signals having different characteristic(s) via sensor electrodes of touch sensors and/or change(s) of e-pen signals having different characteristic(s) via sensor electrodes of e-pen to generate digital information corresponding to e-pen NSEW mapping and e-pen orientation. Note that the operations depicted within the steps2810and2820,2830and2840, may be performed in accordance with any of the variations, examples, embodiments, etc. of one or more DSCs as described herein that is/are configured to perform simultaneous transmit and receipt of signals (simultaneous drive and detect of signals). FIG.29is a schematic block diagram of another embodiment of a method2900for execution by one or more devices in accordance with the present invention. The method2900operates in step2910by transmitting, via primary sensor electrode of e-pen, primary e-pen signal having primary characteristic(s) (e.g., primary e-pen signal via primary sensor electrode0). The method2900operates in step2920by transmitting, via one or more secondary sensor electrodes of e-pen, one or more secondary e-pen signals having secondary characteristic(s) (e.g., first secondary e-pen signal via secondary sensor electrode1, second secondary e-pen signal via secondary sensor electrode2, etc.). The method2900operates in step2930by detecting, via touch sensor electrodes, the primary e-pen signal having primary characteristic(s). The method2900operates in step2940by identifying, using processing module(s) associated with the touch sensor electrodes, location of the primary sensor electrode0(e.g., approx. cross-section where primary e-pen signal detected) and associated touch sensor electrodes. The method2900operates in step2950by transmitting, via the associated touch sensor electrodes, one or more touch sensor signals having touch sensor characteristic(s) (e.g., first touch sensor signal via touch sensor electrode1, second touch sensor signal via touch sensor electrode2, etc.). The method2900operates in step2960by detecting, via e-pen sensor electrodes, the one or more touch sensor signals having touch sensor characteristic(s). The method2900operates in step2970by processing, using processing module(s) associated with the e-pen, the one or more touch sensor signals having touch sensor characteristic(s) to generate digital information corresponding to e-pen NSEW mapping and e-pen orientation. The method2900operates in step2980by detecting, via e-pen sensor electrodes, change(s) of the e-pen signals (e.g., primary e-pen signal and the one or more secondary e-pen signals). The method2900operates in step2990by processing, using processing module(s) associated with the e-pen, the change(s) of e-pen signals (e.g., primary e-pen signal and the one or more secondary e-pen signals) to generate digital information corresponding to e-pen orientation (e.g., tilt, orientation of e-pen changes capacitance between respective sensor electrodes of e-pen). FIG.30is a schematic block diagram of another embodiment of a method3000for execution by one or more devices in accordance with the present invention. The method3000operates in step3010by transmitting, via primary sensor electrode of e-pen, primary e-pen signal having primary characteristic(s) (e.g., primary e-pen signal via primary sensor electrode0). 
The method3000operates in step3020by transmitting, via one or more secondary e-pen sensor electrodes, one or more secondary e-pen signals having secondary characteristic(s) (e.g., first secondary e-pen signal via secondary sensor electrode1, second secondary e-pen signal via secondary sensor electrode2, etc.). The method3000operates in step3030by detecting, via electrodes of touch sensors, the primary e-pen signal having primary characteristic(s) and at least one of the one or more secondary e-pen signals having secondary characteristic(s). The method3000operates in step3040by processing, using processing module(s) associated with the touch sensors, the primary e-pen signal having primary characteristic(s) and at least one of the one or more secondary e-pen signals having secondary characteristic(s) to generate digital information corresponding to e-pen NSEW mapping. The method3000operates in step3050by identifying, using processing module(s) associated with the touch sensors, location of the primary sensor electrode based on the sensor electrodes of the touch sensors (e.g., row/column sensor electrodes) and associated touch sensor electrodes (e.g., approx. cross-section of touch sensors where primary e-pen signal detected). The method3000operates in step3060by transmitting, via the associated touch sensor electrodes, one or more touch sensor signals having touch sensor characteristic(s) (e.g., first touch sensor signal via touch sensor electrode1, second touch sensor signal via touch sensor electrode2, etc.). The method3000operates in step3070by detecting, via the associated touch sensor electrodes, change(s) of the one or more touch sensor signals. The method3000operates in step3080by processing, using processing module(s) associated with the touch sensors, change(s) of the one or more touch sensor signals to generate digital information corresponding to e-pen orientation (e.g., tilt, orientation of e-pen changes capacitance between respective sensor electrodes of e-pen). FIG.31is a schematic block diagram of another embodiment of a method3100for execution by one or more devices in accordance with the present invention. The method3100operates in step3110by transmitting, via primary e-pen sensor electrode, primary e-pen signal having primary characteristic(s) (e.g., primary e-pen signal via primary sensor electrode0). The method3100operates in step3120by transmitting, via one or more secondary e-pen sensor electrodes, one or more secondary e-pen signals having secondary characteristic(s) (e.g., first secondary e-pen signal via secondary sensor electrode1, second secondary e-pen signal via secondary sensor electrode2, etc.). The method3100operates in step3130by detecting, via sensor electrodes of e-pen, change(s) of e-pen signals (e.g., primary e-pen signal and one or more secondary e-pen signals) having different characteristic(s). The method3100operates in step3140by detecting, via electrodes of touch sensors, the primary e-pen signal having primary characteristic(s) and at least one of the one or more secondary e-pen signals having secondary characteristic(s). The method3100operates in step3150by processing, using processing module(s) associated with the touch sensors, the primary e-pen signal having primary characteristic(s) and at least one of the one or more secondary e-pen signals having secondary characteristic(s) to generate digital information corresponding to e-pen NSEW mapping. 
The method3100operates in step3160by processing, using processing module(s) associated with the e-pen, the change(s) of e-pen signals (e.g., primary e-pen signal and the one or more secondary e-pen signals) to generate digital information corresponding to e-pen orientation (e.g., tilt, orientation of e-pen changes capacitance between respective sensor electrodes of e-pen). The method3100operates in step3170by transmitting, via one of the e-pen sensor electrodes and at least one of the sensor electrodes of the touch sensors, one or more signals including the e-pen orientation to the processing module(s) associated with the touch sensors. Note that the operations depicted within the steps3110and3120, and3130and3140, may be performed in accordance with any of the variations, examples, embodiments, etc. of one or more DSCs as described herein that is/are configured to perform simultaneous transmit and receipt of signals (simultaneous drive and detect of signals). FIG.32Ais a schematic block diagram of an embodiment3201of signal assignment to signals associated with e-pen sensor electrodes in accordance with the present invention. In general, while certain embodiments, examples, etc. provided herein are described with respect to specific implementations of one or more e-pens (e.g., such that different respective signals are provided to different respective e-pen sensor electrodes), note that such principles may be applied to any system, computing device, device, etc. that includes more than one sensor electrode that may be used for various applications (e.g., touch sensor, e-pen, etc.). For example, a user that is associated with e-pen3202interacts with a touch sensor device that includes one or more touch sensors and that is configured to detect one or more signals from one or more e-pens. Operation within this diagram includes e-pen detection, as shown by reference numeral3250, followed by signal assignment to the e-pen sensor electrodes, as shown by reference numeral3260. Note that the e-pen detection is performed by a touch sensor device in some examples. Note also that the signal assignment to the e-pen sensor electrodes may be performed by the touch sensor device, by the e-pen, or cooperatively by both the touch sensor device and the e-pen in various examples. In some examples, a handshake, association, etc. between the e-pen and the touch sensor device is performed by which the e-pen3202is detected by the touch sensor device, and one or more signals are assigned to the one or more sensor electrodes of the e-pen. In some examples, an association process is performed within a certain time period such as X seconds (e.g., X=500 micro-seconds, 1 milli-second, etc.) that allows the touch sensor device and the e-pen to perform various operations including assigning signal(s) to the sensor electrode(s) of the e-pen, learning which signal(s) are assigned to the sensor electrode(s) of the e-pen, etc. Generally, any signal assignment may be performed to the respective sensor electrodes of the e-pen3202based on signals having any of a variety of properties. In some examples, the signals are differentiated based on frequency. In other examples, they are differentiated based on one or more other characteristics, alone or in combination with frequency, including amplitude, DC offset, modulation, modulation & coding set/rate (MCS), forward error correction (FEC) and/or error checking and correction (ECC), type, etc. 
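As a point of illustration of the signal assignment just described, the following minimal sketch assigns one distinct signal, characterized by one or more differentiating properties, to each e-pen sensor electrode during an association window. The SignalDescriptor fields, the function name, and the greedy assignment strategy are assumptions made for this example and are not taken from the specification.

```python
from dataclasses import dataclass

# Hypothetical signal descriptor: a signal may be differentiated by one or
# more characteristics (frequency, amplitude, DC offset, modulation, FEC/ECC,
# etc.); only a few of them are modeled here.
@dataclass(frozen=True)
class SignalDescriptor:
    frequency_hz: float
    amplitude_v: float = 1.0
    modulation: str = 'none'

def assign_signals(electrodes, available_signals):
    """Assign one distinct available signal to each e-pen sensor electrode.

    electrodes: list of electrode names, e.g. ['SE0', 'SE1', 'SE2'].
    available_signals: signals not in use by the touch sensor device or by
    other e-pens. The assignment could be made by the touch sensor device,
    by the e-pen, or cooperatively by both.
    """
    if len(available_signals) < len(electrodes):
        raise ValueError('not enough distinct signals for all electrodes')
    # Simple greedy assignment performed during the association window.
    return dict(zip(electrodes, available_signals))

if __name__ == '__main__':
    # The last two pool entries reuse a frequency but differ in modulation,
    # i.e. differentiation need not rely on frequency alone.
    pool = [SignalDescriptor(100e3),
            SignalDescriptor(150e3),
            SignalDescriptor(150e3, modulation='QPSK')]
    print(assign_signals(['SE0', 'SE1', 'SE2'], pool))
```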
In some examples, when time division multiplex operation is implemented, a given signal may be reused based on it being employed at different times. In an example of operation and implementation, a signal is assigned to a first e-pen sensor electrode and is operative at or during a first time period. Then, that same signal is assigned to a second e-pen sensor electrode and is operative at or during a second time period. Alternatively, a first signal is assigned to a first e-pen sensor electrode, and a second signal that is differentiated from the first signal is assigned to a second e-pen sensor electrode, and both the first signal and the second signal are operative at or during a first time period. Then, those same signals are assigned to third and fourth e-pen sensor electrodes and are operative at or during a second time period. When time division multiplex operation is performed, one or more signals may be reused for different sensor electrodes at different times. When simultaneous operation is performed, differentiation between the different respective signals assigned to the different sensor electrodes of the e-pen is performed. Examples of one or more characteristics associated with signals that may be assigned to sensor electrodes of the e-pen may include any one or more of frequency, amplitude, DC offset, modulation, modulation & coding set/rate (MCS), forward error correction (FEC) and/or error checking and correction (ECC), type, etc. FIG.32Bis a schematic block diagram of an embodiment3202of frequency assignment to signals associated with e-pen sensor electrodes in accordance with the present invention. In this diagram, consider a usable frequency range of X Hz (where X may be any desired number and may include frequencies ranging from DC to any frequency within the radio spectrum, such as the radio spectrum including 3 Hz to 3000 GHz/3 THz, and the usable frequency range may be located anywhere within the radio spectrum and may optionally include DC). Generally speaking, such a usable frequency range of X Hz may include any portion of any frequency spectrum via which signaling may be made (e.g., such as varying from DC to any frequency within the radio spectrum, such as varying from DC to frequencies at, near, or within the visible frequency spectrum, etc.). Within this usable frequency range, consider a number of particular frequencies, shown as fc, fb, and so on up to fx. Based upon a determination of which frequencies within the usable frequency range are available for use, they may be assigned to respective e-pen sensor electrodes. Availability of a particular frequency may be determined based on a number of considerations including whether or not that frequency is being used by a touch sensor device, whether that frequency is problematic such as being susceptible to noise, interference, etc., and/or other considerations. In some examples, a first frequency that is determined to be susceptible to noise, interference, etc. is not selected or used, while a second frequency that is not determined to be susceptible to noise, interference, etc. is selected and used. In an example of operation and implementation, one or more processing modules3230is coupled to drive-sense circuits (DSCs)28that are respectively coupled to one or more sensor electrodes. Note that the one or more processing modules3230may include integrated memory and/or be coupled to other memory. At least some of the memory stores operational instructions to be executed by the one or more processing modules3230. 
For example, the first DSC28is coupled to a primary sensor electrode (SE0), a second DSC28is coupled to a first secondary electrode (SE1), and optionally one or more additional DSCs28is coupled to one or more other sensor electrodes. The one or more processing modules3230is configured to identify and perform assignment of different respective signals to the different respective sensor electrodes. The identification and selection of which signals are to be assigned to which sensor electrodes may be dynamic such that different assignments are made to the different respective sensor electrodes at different times. In certain instances, note that not all of the sensor electrodes have a signal assigned thereto. For example, fewer than all of the sensor electrodes may have a signal assigned to them at a given time. Considering the example at the right-hand side of the diagram, signal assignment based on frequency is performed dynamically to the respective e-pen sensor electrodes. For example, at or during time a, the frequency fa is assigned to a primary sensor electrode0of the e-pen, the frequency fe is assigned to a sensor electrode1of the e-pen, and so on as shown in the table of the diagram. Then, at or during time b, the frequency fx is assigned to a primary sensor electrode0of the e-pen, the frequency fd is assigned to a sensor electrode1of the e-pen, and so on as shown in the table of the diagram. Note that there may be instances in which a signal having the same frequency is assigned to the same sensor electrode at different times, while signals assigned to another sensor electrode may change in frequency based on assignments made at different times. Note that while this diagram provides an example of assignment of signals to different respective sensor electrodes of an e-pen based on frequency of the signals, assignment of signals may be made to the different respective sensor electrodes of the e-pen based on a number of different characteristics as alternatives to frequency or in combination with frequency. Various examples are included herein, including with respect to the following diagrams, which illustrate assignment of different respective signals having different respective characteristics to different sensor electrodes of an e-pen. In some examples, the assignment is dynamic such that different respective signals having different characteristics are assigned to a given sensor electrode of the e-pen at different times. FIG.33Ais a schematic block diagram of an embodiment3301of forward error correction (FEC)/error checking and correction (ECC) assignment to signals associated with e-pen sensor electrodes in accordance with the present invention. This diagram shows adaptive encoding for different respective signals associated with different respective e-pen sensor electrodes. In this diagram, one or more processing modules3330(which may be implemented to include memory and/or be coupled to memory) is implemented to perform encoding processing using any one or more of different types of FEC codes or ECCs. The one or more processing modules3330is configured to generate two or more encoded signals based on the various FEC codes or ECCs. In some examples, two or more encoded signals are based on the same FEC code or ECC. In other examples, two or more encoded signals are based on different FEC codes or ECCs. 
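The general idea of assigning the same or different FEC codes or ECCs to the encoded signals of different sensor electrodes, and of changing that assignment over time, can be sketched briefly before the specific example of FIG. 33A continues below. This is a hypothetical illustration only: two toy codes (repetition and single parity check) stand in for the LDPC, RS, BCH, and turbo codes of the surrounding discussion, and all names are invented for the example.

```python
# Hypothetical sketch of per-electrode, per-time-interval FEC selection.
# Real LDPC/RS/turbo/BCH encoders are not reproduced here; two toy codes
# (repetition and single parity check) stand in so the sketch stays runnable.

def repetition_encode(bits, n=3):
    """Toy stand-in encoder: repeat each bit n times."""
    return [b for bit in bits for b in [bit] * n]

def parity_encode(bits):
    """Toy stand-in encoder: append one even-parity bit."""
    return bits + [sum(bits) % 2]

# Code "catalog" the processing module(s) may select from; in the embodiment
# these would be LDPC, Reed-Solomon, BCH, turbo, etc.
CODES = {'repetition': repetition_encode, 'parity': parity_encode}

def encode_streams(assignment, payloads):
    """Encode each electrode's payload with the code currently assigned to it."""
    return {se: CODES[code](payloads[se]) for se, code in assignment.items()}

if __name__ == '__main__':
    payloads = {'SE0': [1, 0, 1], 'SE1': [0, 1, 1]}

    # At or during a first time: one code per electrode.
    time1 = {'SE0': 'repetition', 'SE1': 'parity'}
    print(encode_streams(time1, payloads))

    # At or during a second time: adapt the selection (e.g., after noise is observed).
    time2 = {'SE0': 'parity', 'SE1': 'repetition'}
    print(encode_streams(time2, payloads))
```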
In this example, at or during a first time, the one or more processing modules3330generates a first encoded signal based on low density parity check (LDPC) code, and a second encoded signal based on Reed-Solomon (RS) code. Generally, any number of additional encoded signals may be generated based on any one or more FEC codes or ECCs (e.g., up to an nth encoded signal based on turbo code). Note that these encoded signals subsequently are provided, respectively, to one or more DSCs28that are in communication with one or more sensor electrodes. In an example of operation and implementation, a first encoded signal is provided via a first DSC28to a primary sensor electrode (SE)0. In some examples, this first encoded signal is an LDPC coded signal. Also, a second encoded signal is provided via a second DSC28to a secondary SE1. In some examples, this second encoded signal is an RS coded signal. If desired, in some embodiments, an nth (e.g., where n is a positive integer greater than or equal to 3) encoded signal is provided via an nth DSC28to another SE. In some examples, this nth encoded signal is a turbo coded signal. FIG.33Bis a schematic block diagram of another embodiment3302of FEC/ECC assignment to signals associated with e-pen sensor electrodes in accordance with the present invention. The operations of this diagram may be viewed as being at or during a different time than the first time ofFIG.33A. Based on one or more considerations, the one or more processing modules3330adapts one or more of the FEC codes or ECCs used to generate the two or more encoded signals. For example, based on the one or more considerations, the one or more processing modules3330selects different one or more FEC codes or ECCs to generate the two or more encoded signals. In an example of operation and implementation, at or during a second time, the one or more processing modules3330generates the first encoded signal based on BCH (Bose and Ray-Chaudhuri, and Hocquenghem) code, the second encoded signal based on RS code, and an nth encoded signal based on turbo code. Generally speaking, adaptation between different FEC codes or ECCs may be made for the various encoded signals at different times based on different criteria. FIG.34Ais a schematic block diagram of an embodiment3401of different types of modulations or modulation coding sets (MCSs) used for modulation of different bit or symbol streams. In this diagram, different types of modulations or modulation coding sets (MCSs) are used for modulation of different bit or symbol streams. Information, data, signals, etc. may be modulated using various modulation coding techniques. Examples of such modulation coding techniques may include binary phase shift keying (BPSK), quadrature phase shift keying (QPSK) or quadrature amplitude modulation (QAM), 8-phase shift keying (PSK), 16 quadrature amplitude modulation (QAM), 32 amplitude and phase shift keying (APSK), 64-QAM, etc., uncoded modulation, and/or any other desired types of modulation including higher ordered modulations that may include an even greater number of constellation points (e.g., 1024 QAM, etc.). 
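As a point of reference for the modulation types just listed, the number of bits carried per symbol grows with the constellation size (bits per symbol equals the base-2 logarithm of the number of constellation points). The following minimal sketch makes that standard relationship concrete; the table and names are assembled for illustration only and are not taken from the figures.

```python
import math

# Bits carried per symbol for several of the modulation types named above;
# bits/symbol = log2(number of constellation points). This is a standard
# relationship, not a structure defined by the specification.
CONSTELLATION_POINTS = {
    'BPSK': 2, 'QPSK': 4, '8-PSK': 8, '16-QAM': 16,
    '32-APSK': 32, '64-QAM': 64, '1024-QAM': 1024,
}

def bits_per_symbol(modulation):
    return int(math.log2(CONSTELLATION_POINTS[modulation]))

if __name__ == '__main__':
    for name in CONSTELLATION_POINTS:
        print(name, bits_per_symbol(name))   # e.g. QPSK carries 2 bits/symbol
```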
Generally speaking, and considering a communication system type implementation including at least two devices (e.g., a transmitting device and a recipient device), a device that generates two or more transmission streams based on different parameters can generate a first transmission stream based on a first at least one parameter such as a first MCS that is relatively more robust and provides for relatively lower throughput than a second transmission stream based on a second at least one parameter such as a second MCS that is relatively less robust and provides for relatively higher throughput. Relatively lower-ordered modulation/MCS (e.g., relatively fewer bits per symbol, relatively fewer constellation points per constellation, etc.) may be used for the first transmission stream to ensure reception by a recipient device and so that the recipient device can successfully recover information therein (e.g., being relatively more robust, easier to demodulate, decode, etc.). Relatively higher-ordered modulation/MCS (e.g., relatively more bits per symbol, relatively more constellation points per constellation, etc.) may be used for the second transmission stream so that any recipient device that can successfully recover information therefrom can use it as well. This second information within the second transmission stream may be separate and independent from first information included within the first transmission stream or may be intended for use in conjunction with the first information included within the first transmission stream. FIG.34Bis a schematic block diagram of an embodiment3402of different labeling of constellation points in a constellation. This diagram includes an example of different labeling of constellation points in a constellation. This diagram uses an example of a QPSK/QAM shaped constellation having different labeling of the constellation points therein that may be used at different times. In an example operation, a device generates a transmission stream based on the labeling1at or during a first time and based on the labeling2at or during a second time. The particular labeling of constellation points within a constellation is one example of a parameter that may be used to generate a transmission stream and that may change and vary over time. FIG.34Cis a schematic block diagram of an embodiment3403of different arrangements of constellation points in a type of constellation. This diagram includes different arrangements of constellation points in a type of constellation. This diagram also uses an example of a QPSK/QAM shaped constellation but with varying placement of the four constellation points based on different forms of QPSK (e.g., QPSK1, QPSK2, and QPSK3). Note that the relative distance of the four constellation points may be scaled differently at different times, yet such that each constellation point is included within a separate quadrant. Comparing QPSK2to QPSK1, the constellation points of QPSK2are relatively further from the origin than QPSK1. Comparing QPSK3to QPSK2, the constellation points of QPSK3are shifted up or down relative to QPSK2. Note that any other type of shape of constellation may similarly be varied based on the principles described with respect toFIG.34BandFIG.34C. For example, the labeling and/or placement of the constellation points within an 8-PSK type constellation, a 16 QAM type constellation, and/or any other type constellation may change and vary as a function of time based on any desired consideration as well. 
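The effect of re-labeling the constellation points of a QPSK/QAM-shaped constellation, as discussed for FIG. 34B, can be illustrated with a short sketch. The two labelings below are illustrative placeholders rather than the labeling1and labeling2of the figure, and the point names and function name are invented for the example.

```python
import math

# Hypothetical sketch of QPSK symbol mapping under two different labelings of
# the same four constellation points; the specific labelings are illustrative
# and are not taken from the figures.
POINTS = {  # one constellation point per quadrant, unit energy
    'Q1': complex(1, 1) / math.sqrt(2),
    'Q2': complex(-1, 1) / math.sqrt(2),
    'Q3': complex(-1, -1) / math.sqrt(2),
    'Q4': complex(1, -1) / math.sqrt(2),
}

LABELING_1 = {(0, 0): 'Q1', (0, 1): 'Q2', (1, 1): 'Q3', (1, 0): 'Q4'}  # Gray-coded
LABELING_2 = {(0, 0): 'Q1', (0, 1): 'Q2', (1, 0): 'Q3', (1, 1): 'Q4'}  # alternative

def map_bits(bits, labeling):
    """Map an even-length bit list to QPSK symbols using the given labeling."""
    if len(bits) % 2:
        raise ValueError('QPSK maps two bits per symbol')
    pairs = zip(bits[0::2], bits[1::2])
    return [POINTS[labeling[pair]] for pair in pairs]

if __name__ == '__main__':
    bits = [1, 0, 0, 1]
    # The same bit stream lands on different constellation points depending on
    # which labeling is in force at a given time.
    print(map_bits(bits, LABELING_1))   # points Q4, Q2
    print(map_bits(bits, LABELING_2))   # points Q3, Q2
```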
FIG.34Dis a schematic block diagram of an embodiment3404of adaptive symbol mapping/modulation for different transmission streams. This diagram includes adaptive symbol mapping/modulation for different transmission streams. In this diagram, one or more processing modules3430of a device is/are implemented to perform symbol mapping or modulation based on different modulations, symbol mappings, MCSs, etc. at or during different times. In some examples, two or more encoded streams are based on the same modulation, symbol mapping, MCS, etc. In other examples, two or more encoded streams are based on different modulations, symbol mappings, MCSs, etc. In this example, at or during a first time, the one or more processing modules3430generates a first symbol stream based on a first QAM/QPSK mode (e.g., QPSK1ofFIG.34C) and a second symbol stream based on a second QAM/QPSK mode (e.g., QPSK2ofFIG.34C). Generally, any number of additional symbol streams may be generated based on any one or more modulations, symbol mappings, MCSs, etc. (e.g., up to an nth symbol stream based on 16 QAM). In an example of operation and implementation, a first signal having a first modulation is provided via a first DSC28to a primary SE0. In some examples, this first modulated signal is a 1stQAM signal (e.g., QAM1/QPSK1). Also, a second signal is provided via a second DSC28to a secondary SE1. In some examples, this second signal is a 2ndQAM signal (e.g., QAM2/QPSK2). If desired, in some embodiments, an nth (e.g., where n is a positive integer greater than or equal to 3) signal is provided via an nth DSC28to another SE. In some examples, this nth signal is a 16 QAM signal. In general, note that any number, type, etc. of various modulations and/or MCSs may be implemented and used by the one or more processing modules3430. Also, note that characteristics of a particular type of modulation may be varied to generate different variants of a common type of modulation (e.g., using different constellation point labeling/mapping such as with respect toFIG.34Bas applied for QPSK, and such principles may be applied to any type, shape, etc. of modulation; differently located constellation points such as with respect toFIG.34Cas applied for QPSK, and such principles may be applied to any type, shape, etc. of modulation; etc.). In addition, note that alternative forms of modulation may be used such as frequency-shift keying (FSK) (e.g., a frequency modulation scheme using discrete frequency changes of a carrier signal/wave in accordance with a frequency modulation, a simplest form of which is binary FSK using a pair of frequencies corresponding to binary information [e.g., first frequency for transmitting 0s and second frequency for transmitting 1s]; such a scheme may be used in a continuous time, very fast signaling approach, such as in a multi-frequency analog system, in which multiple continuous transmissions may be performed over carrier(s), etc.), multiple frequency-shift keying (MFSK) (e.g., a variant of FSK using two or more frequencies, e.g., such as being an M-ary orthogonal modulation, where M is a positive integer), amplitude-shift keying (ASK) (e.g., an amplitude modulation implemented to represent digital data based on changes of the amplitude in a carrier signal/wave), etc. and/or any other form of analog modulation, digital modulation, hierarchical modulation, etc. FIG.34Eis a schematic block diagram of an embodiment3405of adaptive symbol mapping/modulation for different transmission streams. 
This diagram includes adaptive symbol mapping/modulation for different transmission streams. The operations of this diagram may be viewed as being at or during a different time than the first time ofFIG.34D. Based on one or more considerations, the one or more processing modules3430adapts one or more of the modulations, symbol mappings, MCSs, etc. used to generate the two or more symbol streams. For example, based on feedback provided from a recipient device to which the two or more encoded streams ofFIG.6Dhave been transmitted, the one or more processing modules3430selects different one or more modulations, symbol mappings, MCSs, etc. to generate the two or more symbol streams. In this example, at or during a second time, the one or more processing modules3430generates the first symbol stream based on 16 QAM, the second symbol stream based on 64 QAM, and optionally an nth symbol stream based on 256 QAM. In an example of operation and implementation, at or during a second time, the one or more processing modules3430generates a first symbol stream based on 16 QAM and a second symbol stream based on 64 QAM. Generally, any number of additional symbol streams may be generated based on any one or more modulations, symbol mappings, MCSs, etc. (e.g., up to an nth symbol stream based on 256 QAM). FIG.35is a schematic block diagram of another embodiment of a method3500for execution by one or more devices in accordance with the present invention. The method3500begins in step3510by monitoring for an e-pen. In some examples, this is performed using a touch sensor device. For example, one or more processing modules implemented within a touch sensor device or operative with the touch sensor device is configured to perform processing of signals associated with monitoring for the e-pen. For example, the one or more processing modules is configured to perform monitoring for one or more signals being coupled from one or more sensor electrodes of an e-pen to the row and column electrodes of the touchscreen. Then, based on no detection of an e-pen in step3520, the method3500loops back to the step3510to continue monitoring for an e-pen. Alternatively, based on detection of an e-pen in step3520, the method3500branches to step3530and operates by determining signal availability. For example, the determination of signal availability may be made based upon a variety of considerations. In some examples, one or more signal characteristics (e.g., frequency, amplitude, DC offset, modulation, modulation & coding set/rate (MCS), forward error correction (FEC) and/or error checking and correction (ECC), type, etc.) of one or more signals being used is considered and is the basis, at least in part, by which signal availability is determined. In other examples, one or more signals not being used by one or more e-pens and/or the touch sensor device is the basis, at least in part, by which signal availability is determined. Generally speaking, signal availability determination is based on determining which signals may be available to be used within the system. For example, those signals that are not currently being used and are available may be selected and used for one or more purposes. The method3500also operates in step3540by assigning one or more signals to e-pen sensor electrodes based on the signal availability that is determined. 
For example, in an implementation in which the e-pen does not have specific signals already assigned and associated with the e-pen sensor electrodes therein, dynamic assignment of one or more signals, among those signals that are available, may be made to the e-pen sensor electrodes. This assigning operation is performed by the touch sensor device and is communicated to the e-pen. The method3500also operates in step3550by operating the e-pen and/or the touch sensor device based on the assigned signals. This method3500provides a means by which different respective signals may be assigned for various purposes within an e-pen and touch sensor device system based on signal availability. Note that adaptation and reassignment of signals may be made at different times and based on various considerations. For example, based on a determination that operation using a first one or more assigned signals compares unfavorably to one or more performance criteria (e.g., poor performance, noise, noisy signaling, interference, latency, etc.), a different assignment of signals may be made for subsequent operation of the e-pen and/or the touch sensor device. FIG.36is a schematic block diagram of another embodiment of a method3600for execution by one or more devices in accordance with the present invention. The method3600operates in step3610by monitoring for a predetermined signal of an e-pen. For example, the predetermined signal may be a signal having any one or more predetermined characteristics such as frequency, amplitude, DC offset, modulation, modulation & coding set/rate (MCS), forward error correction (FEC) and/or error checking and correction (ECC), type, etc. Based on no detection of the predetermined signal in step3620, the method3600loops back to step3610to continue monitoring for the predetermined signal. Alternatively, based on detection of the predetermined signal in step3620, the method3600branches to step3630and operates by processing the predetermined signal to determine whether an e-pen associated with the predetermined signal is an independent e-pen. Examples of different types of e-pens include independent e-pens and dependent e-pens. Generally speaking, an independent e-pen may be viewed as an e-pen that is a smart e-pen and that includes intelligence and associated processing capability therein. In some examples, an independent e-pen includes one or more processing modules that may include integrated memory and/or be coupled to other memory. At least some of the memory stores operational instructions to be executed by the one or more processing modules. Generally speaking, a dependent e-pen may be viewed as an e-pen that is not a smart e-pen and that does not include intelligence and associated processing capability therein. Based on a determination that the e-pen is an independent e-pen in step3640, the method3600branches to perform various operations associated with independent e-pen/smart e-pen operation. For example, determination of formatting of information to be provided via communications between the e-pen and the touch sensor device is made. For example, the method3600operates in step3680by coordinating formatting for e-pen/touch sensor device communications. Examples of formatting information related to communications between the e-pen and the touch sensor device may include any one or more of packet type, format, header format, payload, body, size, length, modulation, FEC/ECC, etc. 
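One way to picture the formatting coordination of step3680is with a small, entirely hypothetical packet layout. The field names, sizes, and packet types below are assumptions made for this example and are not the format defined by the specification.

```python
import struct

# Hypothetical e-pen <-> touch sensor device packet layout agreed during the
# handshake: 1-byte packet type, 1-byte payload length, then the payload.
# The layout, field names, and sizes are illustrative assumptions only.
PACKET_TYPES = {'orientation': 0x01, 'pressure': 0x02, 'status': 0x03}

def build_packet(packet_type, payload):
    """Serialize a packet using the agreed header format."""
    if len(payload) > 255:
        raise ValueError('payload too long for 1-byte length field')
    header = struct.pack('BB', PACKET_TYPES[packet_type], len(payload))
    return header + payload

def parse_packet(data):
    """Parse a packet back into (type name, payload)."""
    type_code, length = struct.unpack('BB', data[:2])
    names = {v: k for k, v in PACKET_TYPES.items()}
    return names[type_code], data[2:2 + length]

if __name__ == '__main__':
    pkt = build_packet('orientation', bytes([0x12, 0x34]))  # e.g. tilt x/y bytes
    print(pkt.hex())          # 01021234
    print(parse_packet(pkt))  # ('orientation', ...)
```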
This formatting determination may include the number of bits or bytes of the packet (e.g., 4-bits, 16-bits, etc.), the number of bits or bytes of various portions of the packet including different respective fields, information related to interpretation of the respective fields of the packet used in those communications, etc. Details regarding the type of communications provided between the e-pen and the touch sensor device may be determined in a variety of ways. In one example, they are determined independently by an independent e-pen and communicated to the touch sensor device. In another example, they are determined cooperatively by both the touch sensor device and the independent e-pen. For example, this may be performed based on a handshake between the e-pen and the touch sensor device. When operating based on this path of the method3600, the method3600also operates in step3684by determining signal availability. This availability determination is performed by an independent e-pen or cooperatively using the touch sensor device and the independent e-pen. The signal availability is based on a number of factors as described herein including information regarding those signals already assigned to or in use by the touch sensor device, favorable or unfavorable comparison to one or more performance criteria (e.g., poor performance, noise, noisy signaling, interference, latency, etc.), etc. Then, based on one or more signals being available for use, the method3600operates in step3684by assigning one or more of those signals to the one or more e-pen sensor electrodes. This assigning operation is performed by an independent e-pen or cooperatively using the touch sensor device and the independent e-pen. Alternatively, based on a determination that the e-pen is a dependent e-pen in step3640, the method3600branches to step3650and operates by determining signal availability. For example, the determination of signal availability may be made based upon a variety of considerations. In some examples, one or more signal characteristics (e.g., frequency, amplitude, DC offset, modulation, modulation & coding set/rate (MCS), forward error correction (FEC) and/or error checking and correction (ECC), type, etc.) of one or more signals being used is considered and is the basis, at least in part, by which signal availability is determined. In other examples, one or more signals not being used by one or more e-pens and/or the touch sensor device is the basis, at least in part, by which signal availability is determined. Generally speaking, signal availability determination is based on determining which signals may be available to be used within the system. For example, those signals that are not currently being used and are available may be selected and used for one or more purposes. The method3600also operates in step3660by assigning one or more signals to e-pen sensor electrodes based on the signal availability that is determined. For example, in an implementation in which the e-pen does not have specific signals already assigned and associated with the e-pen sensor electrodes therein, dynamic assignment of one or more signals, among those signals that are available, may be made to the e-pen sensor electrodes. This assigning operation is performed by the touch sensor device and is communicated to the dependent e-pen. The method3600also operates in step3670by operating the e-pen and/or the touch sensor device based on the assigned signals. 
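A compact sketch of the branching in method3600is given below. It is a hypothetical rendering of the flow only: the way the independent capability is detected, the function names, and the example signals are assumptions for illustration, not structures defined by the specification.

```python
# Hypothetical sketch of the method3600branch: classify a detected e-pen from
# its predetermined signal, note which side performs the availability and
# assignment steps, then assign available signals to the e-pen sensor electrodes.

def classify_e_pen(predetermined_signal):
    """Toy classifier: an 'independent' flag in the signal descriptor stands in
    for whatever property of the predetermined signal identifies a smart e-pen."""
    return 'independent' if predetermined_signal.get('independent') else 'dependent'

def determine_availability(candidate_signals, in_use):
    """Signals not currently used by the touch sensor device or other e-pens."""
    return [s for s in candidate_signals if s not in in_use]

def assign(electrodes, available):
    if len(available) < len(electrodes):
        raise ValueError('not enough available signals')
    return dict(zip(electrodes, available))

def handle_detected_pen(predetermined_signal, electrodes, candidate_signals, in_use):
    kind = classify_e_pen(predetermined_signal)
    if kind == 'independent':
        # On this path, formatting for e-pen/touch-device communications would
        # also be coordinated (cf. step3680).
        performer = 'independent e-pen, alone or cooperatively with the touch sensor device'
    else:
        performer = 'touch sensor device, with the result communicated to the dependent e-pen'
    available = determine_availability(candidate_signals, in_use)
    return kind, performer, assign(electrodes, available)

if __name__ == '__main__':
    result = handle_detected_pen({'independent': True}, ['SE0', 'SE1'],
                                 ['f1', 'f2', 'f3'], in_use={'f2'})
    print(result)  # ('independent', ..., {'SE0': 'f1', 'SE1': 'f3'})
```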
FIG.37is a schematic block diagram of another embodiment of a method3700for execution by one or more devices in accordance with the present invention. The method3700begins in step3710by monitoring for an e-pen. In some examples, this is performed using a touch sensor device. For example, one or more processing modules implemented within a touch sensor device or operative with the touch sensor device is configured to perform processing of signals associated with monitoring for the e-pen. For example, the one or more processing modules is configured to perform monitoring for one or more signals being coupled from one or more sensor electrodes of an e-pen to the row and column electrodes of the touchscreen. Then, based on no detection of an e-pen in step3720, the method3700loops back to the step3710to continue monitoring for an e-pen. Alternatively, based on detection of an e-pen in step3720, the method3700branches to step3730and operates by processing the e-pen signal that is detected to determine whether the e-pen signal corresponds to write or erase operation. For example, based on assignment of different respective signals to different respective sensor electrodes of an e-pen, detection, processing, and analysis of a signal will provide information regarding whether or not the e-pen signal corresponds to write or erase operation. For example, different respective sensor electrodes of an e-pen are implemented for various operations including write and/or erase operation. When a signal is detected, and that signal corresponds to a sensor electrode of the e-pen implemented for write operation, then the e-pen signal is determined to correspond to write operation. Alternatively, when a signal is detected, and that signal corresponds to a sensor electrode of the e-pen implemented for erase operation, then the e-pen signal is determined to correspond to erase operation. The method3700continues via step3740and branches to step3750by processing the e-pen signal based on write operation, based on a determination that the e-pen signal corresponds to write operation. The method3700continues via step3740and step3760and branches to step3760by processing the e-pen signal based on erase operation, based on a determination that the e-pen signal corresponds to erase operation. Alternatively, based on a failure to determine whether the e-pen signal corresponds to write or erase operation, the method3700continues via step3740and step3760and ends. In an alternative implementation, the method3700continues via step3740and step3760and loops back to the step3710or the step3730to continue monitoring for an e-pen signal (step3710) or alternatively processing that e-pen signal to determine whether the e-pen signal corresponds to write or erase operation (step3730). For example, a failure in processing of the e-pen signal may result in a failure to perform proper identification of the e-pen signal corresponding to write or erase operation. Additional processing, reprocessing, etc. of the e-pen signal may result in proper determination of the correspondence of the e-pen signal. In some alternative embodiments of the various methods, systems, devices, etc. described herein, note that all touch sensing related processing is performed using the one or more processing modules included within a sensor device or associated with the touch sensor device, and all e-pen related processing is performed using one or more processing modules of the e-pen or that are associated with the e-pen. 
In other alternative embodiments of the various methods, systems, devices, etc. described herein, note that all touch sensing related processing as well as e-pen related processing is performed using the one or more processing modules included within a sensor device or associated with the touch sensor device. In yet other alternative embodiments of the various methods, systems, devices, etc. described herein, note that all touch sensing related processing as well as e-pen related processing is performed using the one or more processing modules of the e-pen or that are associated with the e-pen. In further alternative embodiments of the various methods, systems, devices, etc. described herein, note that the touch sensing related processing and the e-pen related processing is distributed among one or more processing modules included within a sensor device or associated with the touch sensor device and/or one or more processing modules of the e-pen or that are associated with the e-pen (e.g., in a hybrid implementation in which some processing for each of the touch sensing related processing and the e-pen related processing is performed in a distributed manner involving one or both of one or more processing modules corresponding to the touch sensor device and/or the e-pen). In addition, note that the functionality, methods, operations, capability, etc. described in association with the various embodiments, examples, etc. and/or their equivalents may be performed in a variety of different ways. In certain implementations, one or more processing modules is/are coupled to one or more drive-sense circuits (DSCs). Note that the one or more processing modules may include integrated memory and/or be coupled to other memory. At least some of the memory stores operational instructions to be executed by the one or more processing modules. In some examples, one or more of the DSCs generates digital information corresponding to one or more signals being simultaneously transmitted and sensed to one or more elements (e.g., such as to e-pen sensor electrodes, touch sensor electrodes, etc. and via single respective lines via which both transmission and sensing is performed simultaneously). Note that such one or more processing modules may also be in communication with and interact with one or more other elements in a given implementation. For example, one or more processing modules may be in communication with one or more other processing modules via various communication means (e.g., communication links, communication networks, etc.). Certain of the various functionality, methods, operations, capability, etc. described in association with the various embodiments, examples, etc. and/or their equivalents as described herein may be implemented using one or more appropriately connected processing modules, DSCs, and sensors (e.g., such as e-pen sensor electrodes, touch sensor electrodes, etc.). The one or more processing modules is configured to process one or more digital signals, provided from the one or more DSCs, that are representative of one or more electrical characteristics of the one or more elements via which one or more signals are simultaneously driven and sensed (simultaneously transmitted and received) from the one or more DSCs. 
For example, considering a physical implementation in which one or more processing modules is in communication with one or more DSCs that are in communication with one or more e-pen sensor electrodes, touch sensor electrodes, sensors, transducers, etc., appropriate operation of the one or more DSCs, such as may be directed by the one or more processing modules or cooperatively performed with the one or more processing modules, facilitates implementation of various embodiments, examples, etc. and/or their equivalents as described herein. It is noted that terminologies as may be used herein such as bit stream, stream, signal sequence, etc. (or their equivalents) have been used interchangeably to describe digital information whose content corresponds to any of a number of desired types (e.g., data, video, speech, text, graphics, audio, etc. any of which may generally be referred to as ‘data’). As may be used herein, the terms “substantially” and “approximately” provide an industry-accepted tolerance for their corresponding terms and/or relativity between items. For some industries, an industry-accepted tolerance is less than one percent and, for other industries, the industry-accepted tolerance is 10 percent or more. Other examples of industry-accepted tolerance range from less than one percent to fifty percent. Industry-accepted tolerances correspond to, but are not limited to, component values, integrated circuit process variations, temperature variations, rise and fall times, thermal noise, dimensions, signaling errors, dropped packets, temperatures, pressures, material compositions, and/or performance metrics. Within an industry, tolerance variances of accepted tolerances may be more or less than a percentage level (e.g., dimension tolerance of less than +/−1%). Some relativity between items may range from a difference of less than a percentage level to a few percent. Other relativity between items may range from a difference of a few percent to a magnitude of differences. As may also be used herein, the term(s) “configured to”, “operably coupled to”, “coupled to”, and/or “coupling” includes direct coupling between items and/or indirect coupling between items via an intervening item (e.g., an item includes, but is not limited to, a component, an element, a circuit, and/or a module) where, for an example of indirect coupling, the intervening item does not modify the information of a signal but may adjust its current level, voltage level, and/or power level. As may further be used herein, inferred coupling (i.e., where one element is coupled to another element by inference) includes direct and indirect coupling between two items in the same manner as “coupled to”. As may even further be used herein, the term “configured to”, “operable to”, “coupled to”, or “operably coupled to” indicates that an item includes one or more of power connections, input(s), output(s), etc., to perform, when activated, one or more of its corresponding functions and may further include inferred coupling to one or more other items. As may still further be used herein, the term “associated with”, includes direct and/or indirect coupling of separate items and/or one item being embedded within another item. As may be used herein, the term “compares favorably”, indicates that a comparison between two or more items, signals, etc., provides a desired relationship. 
For example, when the desired relationship is that signal1has a greater magnitude than signal2, a favorable comparison may be achieved when the magnitude of signal1is greater than that of signal2or when the magnitude of signal2is less than that of signal1. As may be used herein, the term “compares unfavorably”, indicates that a comparison between two or more items, signals, etc., fails to provide the desired relationship. As may be used herein, one or more claims may include, in a specific form of this generic form, the phrase “at least one of a, b, and c” or of this generic form “at least one of a, b, or c”, with more or less elements than “a”, “b”, and “c”. In either phrasing, the phrases are to be interpreted identically. In particular, “at least one of a, b, and c” is equivalent to “at least one of a, b, or c” and shall mean a, b, and/or c. As an example, it means: “a” only, “b” only, “c” only, “a” and “b”, “a” and “c”, “b” and “c”, and/or “a”, “b”, and “c”. As may also be used herein, the terms “processing module”, “processing circuit”, “processor”, “processing circuitry”, and/or “processing unit” may be a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions. The processing module, module, processing circuit, processing circuitry, and/or processing unit may be, or further include, memory and/or an integrated memory element, which may be a single memory device, a plurality of memory devices, and/or embedded circuitry of another processing module, module, processing circuit, processing circuitry, and/or processing unit. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. Note that if the processing module, module, processing circuit, processing circuitry, and/or processing unit includes more than one processing device, the processing devices may be centrally located (e.g., directly coupled together via a wired and/or wireless bus structure) or may be distributedly located (e.g., cloud computing via indirect coupling via a local area network and/or a wide area network). Further note that if the processing module, module, processing circuit, processing circuitry and/or processing unit implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory and/or memory element storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. Still further note that, the memory element may store, and the processing module, module, processing circuit, processing circuitry and/or processing unit executes, hard coded and/or operational instructions corresponding to at least some of the steps and/or functions illustrated in one or more of the Figures. Such a memory device or memory element can be included in an article of manufacture. 
One or more embodiments have been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claims. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claims. One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof. In addition, a flow diagram may include a “start” and/or “continue” indication. The “start” and “continue” indications reflect that the steps presented can optionally be incorporated in or otherwise used in conjunction with one or more other routines. In addition, a flow diagram may include an “end” and/or “continue” indication. The “end” and/or “continue” indications reflect that the steps presented can end as described and shown or optionally be incorporated in or otherwise used in conjunction with one or more other routines. In this context, “start” indicates the beginning of the first step presented and may be preceded by other activities not specifically shown. Further, the “continue” indication reflects that the steps presented may be performed multiple times and/or may be succeeded by other activities not specifically shown. Further, while a flow diagram indicates a particular ordering of steps, other orderings are likewise possible provided that the principles of causality are maintained. The one or more embodiments are used herein to illustrate one or more aspects, one or more features, one or more concepts, and/or one or more examples. A physical embodiment of an apparatus, an article of manufacture, a machine, and/or of a process may include one or more of the aspects, features, concepts, examples, etc. described with reference to one or more of the embodiments discussed herein. Further, from figure to figure, the embodiments may incorporate the same or similarly named functions, steps, modules, etc. that may use the same or different reference numbers and, as such, the functions, steps, modules, etc. may be the same or similar functions, steps, modules, etc. or different ones. Unless specifically stated to the contrary, signals to, from, and/or between elements in a figure of any of the figures presented herein may be analog or digital, continuous time or discrete time, and single-ended or differential. 
For instance, if a signal path is shown as a single-ended path, it also represents a differential signal path. Similarly, if a signal path is shown as a differential path, it also represents a single-ended signal path. While one or more particular architectures are described herein, other architectures can likewise be implemented that use one or more data buses not expressly shown, direct connectivity between elements, and/or indirect coupling between other elements as recognized by one of average skill in the art. The term “module” is used in the description of one or more of the embodiments. A module implements one or more functions via a device such as a processor or other processing device or other hardware that may include or operate in association with a memory that stores operational instructions. A module may operate independently and/or in conjunction with software and/or firmware. As also used herein, a module may contain one or more sub-modules, each of which may be one or more modules. As may further be used herein, a computer readable memory includes one or more memory elements. A memory element may be a separate memory device, multiple memory devices, or a set of memory locations within a memory device. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. The memory device may be in a form of a solid-state memory, a hard drive memory, cloud memory, thumb drive, server memory, computing device memory, and/or other physical medium for storing digital information. While particular combinations of various functions and features of the one or more embodiments have been expressly described herein, other combinations of these features and functions are likewise possible. The present disclosure is not limited by the particular examples disclosed herein and expressly incorporates these other combinations. | 218,841
11861083 | FIG.1shows an input device1according to the invention, with a touchscreen functioning as a capacitive detection device2. The detection device2defines a detection surface10facing towards the operator B, on which a handling means3is disposed so as to be mounted rotatably about an axis of rotation D by means of the supporting means, which are not shown inFIG.1for better clarity, thus forming a so-called rotary adjuster. The capacitive detection device2has array electrodes X1to X3that extend parallel to each other, and array electrodes Y1to Y3extending perpendicularly thereto as counter electrodes, whereby a first array is formed. The first array of array electrodes X1to X3, Y1to Y3is not depicted in full and to scale in the Figures and is only supposed to serve for the schematic illustration of the general structure. The crossing points of the array electrodes X1to X3with the array electrodes Y1to Y3each form an imaginary junction point which is in each case the starting point of a capacitive measuring field. For reasons of clarity, only one junction point, i.e. K31, is labeled more clearly in the figure. The numbering of the other junction points is analogous therewith. An electronic evaluation unit14is electrically connected to the array electrodes X1to X3and Y1to Y3, which, for generating an associated measuring field, applies an associated potential in each case to some of the array electrodes, e.g. to the electrodes X1to X3, selectively and in a sequence in time, in order to detect a touch by the operator B or, depending on the position of the respective junction points relative to the handling means3, a position of the handling means3, based on the influence on these measuring fields. In order to influence the respective measuring fields, the handling means3has on the side thereof facing towards the detection surface10a position indicator4, which in the present embodiment is disposed in an electrically insulated manner with respect to the operator B while the latter touches the handling means3, and which, instead of the potential of the operator being applied thereto, is coupled to the electrical field of at least one of the array electrodes. Several predefined positions are provided, in particular ones that are uniformly distributed across the adjustment path of the handling means3, of which one possible position is shown inFIG.1. These positions are predefined by a latching device that is not shown. For an improved capacitive coupling between the position indicator4and, depending on the position, one of the measuring fields located at the junction points K11to K33, a coupling device5disposed in a stationary manner on the detection surface10is provided. It has a first surface facing towards the detection surface10and a second surface facing towards the position indicator4. For example, the first surface is disposed adjacent to the detection surface. Two possible embodiments of the first surface are shown inFIGS.4and5. An embodiment of the second surface is shown inFIG.3. The first surface carries a second array of coupling electrodes6a,6b,6c, of which only a portion is shown inFIG.1, while the second surface carries a third array of contact surfaces7a,7b,7c, only a portion of which is also shown inFIG.1. 
The placement of the coupling electrodes6a,6b,6cof the second array on the first side is not congruent with the placement of the contact surfaces7a,7b,7cof the third array on the second side, which can be ascribed to the fact that the placement of the contact surfaces7a,7b,7cis subject to different requirements from those of the coupling electrodes6a,6b,6c. In order to obtain an effective coupling, the latter are guided by the grid structure of the first array, so that the geometric center point of the coupling electrodes6a,6b,6cis in each case opposite a junction point, e.g. K31fromFIG.1, without the coupling electrodes6a,6b,6cand the array electrodes X1to X3and Y1to Y3of the touchscreen touching each other. In contrast, the contact surfaces7a,7b,7cfollow the track of the position indicator4along which the latter moves during the manual movement of the handling means3and, depending on the position, establishes a touching contact with at least one of the contact surfaces7a,7b,7c. In order to now capacitively influence, in a position-dependent manner, one of the measuring fields of the array electrodes by means of the position indicator4via one of the coupling electrodes6a,6b,6c, one electrically conductive connection8a,8b,8c, respectively, is provided, which starts at one contact surface7a,7b,7cand extends towards one coupling electrode6a,6b,6c. In order to solve the problem of the arrangement of the coupling electrodes6a,6b,6con the one hand and the contact surfaces7a,7b,7con the other hand, the coupling device5has a substrate9a, which is shown in a cross-section inFIG.2, which is made from an electrically insulating material, and on which the conductive connections8a,8b,8care formed, in each case at least in some portions, as a conductor path8a,8b,8cprovided by a conductive coating of the substrate9a. In the present configuration, the substrate9ais a fiber reinforced plastic or a plastic sheet and a part of the layer structure9associated with the coupling device. The conductor paths8a,8b,8care preferably integrated into the layer structure. The contact surfaces7a,7b,7cas well as the coupling electrodes6a,6b,6care in each case formed as conductive coatings of the outer layers of the layer structure9. For example, the layer structure is a multi-layer circuit board in which the conductor paths8a,8b,8care embedded in the multi-layer. Even though only one of the conductor paths8a,8b,8cfromFIG.1is shown, it also becomes clear fromFIG.2that the conductor paths8a,8b,8cextend substantially parallel to the detection surface10fromFIG.1and in each case serve for bridging the offset between the geometric center point of the contact surface7a,7b,7cand the geometric center point of the coupling electrodes6a,6b,6c. It also becomes clear that at least two connections differ with respect to the length of their conductor paths, i.e.8band8c. The structure of the coupling device5on its second surface, which faces towards the position indicator4, becomes clear fromFIG.3. For illustration purposes, the position indicator4is shown in a superposed manner in four adjacent, predefined positions, with the points12identifying the position of its spring tongues with which, depending on the position, it touch-contacts the contact surfaces7a,7b,7cbut also touches an annular feed contact surface11in order thus to be coupled to an electrical field of at least one array electrode, because the feed contact surface11is in electrical contact with several feed electrodes13disposed on the first surface of the coupling device5. 
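To illustrate how the contacted contact surface, the coupling electrode connected to it, and the junction point underneath can be chained together for position detection by the evaluation unit14, a small sketch follows. All table contents and names are invented for illustration and do not reflect the actual geometry or firmware of the input device1.

```python
# Hypothetical sketch of the detection chain: the contacted contact surface
# couples the position indicator to one coupling electrode, whose geometric
# center sits above one junction point of the first array; the evaluation unit
# then reports the predefined position whose measuring field is influenced.
# All table contents are invented for illustration.

CONTACT_TO_ELECTRODE = {'7a': '6a', '7b': '6b', '7c': '6c'}
ELECTRODE_TO_JUNCTION = {'6a': (3, 1), '6b': (3, 2), '6c': (3, 3)}
JUNCTION_TO_POSITION = {(3, 1): 0, (3, 2): 1, (3, 3): 2}

def expected_junction(contact_surface):
    """Junction point whose measuring field should change for this contact."""
    return ELECTRODE_TO_JUNCTION[CONTACT_TO_ELECTRODE[contact_surface]]

def decode_position(influenced_junction):
    """Predefined position the evaluation unit would report for the junction
    point whose measuring field shows the capacitive influence."""
    return JUNCTION_TO_POSITION.get(influenced_junction)

if __name__ == '__main__':
    # The position indicator touches contact surface 7b; the measuring field at
    # the junction point corresponding to K32 is influenced; position 1 results.
    print(expected_junction('7b'))   # (3, 2)
    print(decode_position((3, 2)))   # 1
```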
The position and the outline of the coupling electrodes6a,6b,6c, which are located on the first surface of the coupling device5and which are contacted via the conductor paths8a,8b,8cintegrated into the layer structure9, is shown with a dotted line. FIG.4is a view of the first side of the coupling device5facing towards the detection surface10, wherein the view onto the detection surface10with the first array of array electrodes X1to Xn, Y1to Yn associated therewith is superposed thereon for illustrating the position of the coupling electrodes6a,6b,6c. The array forms an imaginary, regular grid structure, wherein the position of the junction points defines a smallest periodicity determined by the smallest distance a between most closely adjacent junction points. It becomes apparent that the coupling electrodes are arranged with their geometric center point above an associated junction point in each case. For example, Kin is marked as one of the many junction points inFIG.4. In accordance with the nomenclature, the coupling electrode6ais associated with the junction point K71. Given a corresponding position of the handling means3, the coupling electrodes6ato6cserve for providing a capacitive coupling between the position indicator4and the measuring fields located at the junction points, so that the respective measuring fields are influenced in the area of the junction points, which can be detected by the evaluation unit14and serves for the position detection of the evaluation unit14, so that the latter is capable of outputting a positional information or at least movement information. WhileFIG.4shows an embodiment of the coupling device5in which a number of coupling electrodes6a,6b,6cmatches that of the number of predefined positions of the handling means in order thus to be able to detect the position in absolute terms, embodiments are conceivable in which only a relative detection, in which only the extent of the rotation and/or the direction of rotation has to be detected as movement information, is of importance. Such an embodiment, in which several contact surfaces are connected to one coupling electrode6a,6b, is shown inFIG.5. Thus, as depicted, only two coupling electrodes6a,6bare provided, wherein the contact surfaces7a,7b,7care connected in an alternating manner to one of the coupling electrodes6a,6bin the peripheral direction about the axis of rotation D. The associated electrical connections in this case have conductor paths8a,8b,8c, which differ with respect to their length by a multiple of the smallest distance a between most closely adjacent junction points. | 9,426 |
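The relative-detection variant of FIG. 5 can be pictured as counting detents each time the influenced coupling electrode alternates between6aand6b. The sketch below is an assumption-laden illustration: the decoding rule, names and scan results are hypothetical, and it deliberately notes that a single alternating pair by itself does not resolve the direction of rotation.

```python
# A minimal sketch, not from the patent: relative decoding for the two-electrode
# variant, where contact surfaces alternate between coupling electrodes 6a and 6b.

def update_relative_position(previous, current, detents):
    """Advance a detent counter each time the influenced electrode alternates."""
    if previous is None or current == previous:
        return detents                 # no detent change detected
    # With only two alternating electrodes, one pair alone cannot distinguish
    # clockwise from counter-clockwise; a real design would add a phase-shifted
    # pair or evaluate finer capacitance profiles for the direction of rotation.
    return detents + 1                 # count detents traversed

detents, last = 0, None
for influenced in ["6a", "6a", "6b", "6a", "6b"]:   # hypothetical scan results
    detents = update_relative_position(last, influenced, detents)
    last = influenced
print(detents)   # -> 3 detents of movement detected
```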
11861084 | DETAILED DESCRIPTION Detailed embodiments of the claimed structures and methods are disclosed herein; however, it can be understood that the disclosed embodiments are merely illustrative of the claimed structures and methods that may be embodied in various forms. This invention may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. In the description, details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the presented embodiments. It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces unless the context clearly dictates otherwise. Embodiments of the present invention relate to the field of computing, and more particularly to a system for splitting a mobile device display and mapping content with a single hand of a user. The following described exemplary embodiments provide a system, method, and program product to, among other things, create a plurality of split displays on a mobile device based on movement of a first finger of the user and, accordingly, map one or more items of a plurality of content to one or more of the plurality of split displays based on movement of a second finger of the user. Therefore, the present embodiment has the capacity to improve the technical field of mobile devices by dynamically splitting a mobile device display and mapping content using a single hand of the user. In addition, the present embodiment has the capacity to improve the graphical user interface (GUI) of a mobile device by displaying the plurality of content to be mapped adjacent to the second finger of the user. As previously described, in the late 20thcentury and during the beginning of the 21stcentury, mobile devices, especially cell phones, had small display surfaces (i.e., screens). These mobile devices had a physical keypad which took up a majority of the front portion of the mobile device. More recently, manufacturers have produced smart devices (e.g., smartphones and tablets) with larger display surfaces. These smart devices, particularly smartphones, have most if not all of their functionality on the display surface. For example, the physical keyboard has been replaced with a virtual keyboard. Since the display surfaces of current mobile devices are larger, these display surfaces may be split into different portions, with each portion displaying a variety of content. It is often difficult to split the display surface and map content to the split sections with one free hand when the other hand is engaged (e.g., carrying a briefcase). Many mobile device manufacturers have an option where the user can long press (i.e., tap and hold) an application and then tap on another option in a pop-up window to show the application is split-screen view. However, this long press option fails to consider that the user may only have one free hand and may not be able to tap on an application oriented in the center of the mobile device display. It may therefore be imperative to have a system in place to enable the user to split the mobile device display into multiple sections and map content to those sections with a single hand. 
Thus, embodiments of the present invention may provide advantages including, but not limited to, enabling the user to split the mobile device display into multiple sections and map content to those sections with a single hand, presenting content to the user based on a contextual situation of the user, and historically learning a correlation among content the user maps to the split displays. The present invention does not require that all advantages be incorporated into every embodiment of the invention. According to at least one embodiment, when a user is holding a mobile device with one hand while the other hand is engaged, configuration criteria regarding mobile device display splitting and content mapping may be received from the user, and a first location of a first finger and a second location of a second finger of the user on a frame of the mobile device may be identified. In embodiments of the present invention, the first finger may be used in the mobile device display splitting and the second finger may be used in the content mapping in accordance with the configuration criteria. In response to determining the user is moving the first finger in a first direction along the frame of the mobile device from the first location, a plurality of split displays on the mobile device may be created based on the movement of the first finger in the first direction. Upon creating the plurality of split displays, a contextual situation of the user may be identified in order to display a plurality of content adjacent to the second finger of the user based on the contextual situation. The plurality of content may be displayed adjacent to the second finger of the user such that the user is able to map one or more items of the plurality of content to one or more of the plurality of split displays by moving the second finger along the frame of the mobile device from the second location. According to at least one embodiment, a knowledge corpus of a correlation among the one or more mapped items of the plurality of content may be created based on historical learning. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. 
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed concurrently or substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The following described exemplary embodiments provide a system, method, and program product to create a plurality of split displays on a mobile device based on movement of a first finger of the user and, accordingly, map one or more items of a plurality of content to one or more of the plurality of split displays based on movement of a second finger of the user. Referring toFIG.1, an exemplary networked computer environment100is depicted, according to at least one embodiment. The networked computer environment100may include client computing device102, a server112, and Internet of Things (IoT) Device118interconnected via a communication network114. According to at least one implementation, the networked computer environment100may include a plurality of client computing devices102and servers112, of which only one of each is shown for illustrative brevity. 
The communication network114may include various types of communication networks, such as a wide area network (WAN), local area network (LAN), a telecommunication network, a wireless network, a public switched network and/or a satellite network. The communication network114may include connections, such as wire, wireless communication links, or fiber optic cables. It may be appreciated thatFIG.1provides only an illustration of one implementation and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environments may be made based on design and implementation requirements. Client computing device102may include a processor104and a data storage device106that is enabled to host and run a software program108and a mobile device display splitting program110A and communicate with the server112and IoT Device118via the communication network114, in accordance with one embodiment of the invention. Client computing device102may be, for example, a mobile device, a telephone, a personal digital assistant, a netbook, a laptop computer, a tablet computer, a desktop computer, or any type of computing device capable of running a program and accessing a network. As will be discussed with reference toFIG.4, the client computing device102may include internal components402aand external components404a, respectively. The server computer112may be a laptop computer, netbook computer, personal computer (PC), a desktop computer, or any programmable electronic device or any network of programmable electronic devices capable of hosting and running a mobile device display splitting program110B and a database116and communicating with the client computing device102and IoT Device118via the communication network114, in accordance with embodiments of the invention. As will be discussed with reference toFIG.4, the server computer112may include internal components402band external components404b, respectively. The server112may also operate in a cloud computing service model, such as Software as a Service (SaaS), Platform as a Service (PaaS), or Infrastructure as a Service (IaaS). The server112may also be located in a cloud computing deployment model, such as a private cloud, community cloud, public cloud, or hybrid cloud. IoT Device118may be a plurality of sensors embedded in the client computing device102, including pressure sensors and motion sensors, and/or any other IoT Device118known in the art for capturing facial and/or hand gestures of a user that is capable of connecting to the communication network114, and transmitting and receiving data with the client computing device102and the server112. According to the present embodiment, the mobile device display splitting program110A,110B may be a program capable of receiving configuration criteria regarding mobile device display splitting and content mapping from a user, creating a plurality of split displays on a mobile device based on movement of a first finger of the user, mapping one or more items of a plurality of content to one or more of the plurality of split displays based on movement of a second finger of the user, enabling the user to split the mobile device display into multiple sections and map content to those sections with a single hand, presenting content to the user based on a contextual situation of the user, and historically learning a correlation among content the user maps to the split displays. 
The mobile device display splitting method is explained in further detail below with respect toFIG.2. Referring now toFIG.2, an operational flowchart for splitting a mobile device display and mapping content with a single hand of a user in a mobile device display splitting and mapping process200is depicted according to at least one embodiment. At202, the mobile device display splitting program110A,110B receives the configuration criteria regarding mobile device display splitting and content mapping from the user. The configuration criteria may be provided by the user via a user interface (UI), allowing the user to customize various parameters. It may be appreciated that in embodiments of the present invention, “screen” and “display” are used interchangeably. According to at least one embodiment, the configuration criteria may include which finger is to be used in the mobile device display splitting and which finger is to be used in the content mapping. In embodiments of the present invention, references are made to a first finger and a second finger of the user. The first finger may be used in the mobile device display splitting, and the second finger may be used in the content mapping. The user may define which finger is the first finger and which finger is the second finger. The first finger may be an index finger, and the second finger may be a thumb. Alternatively, the first finger may be the thumb, and the second finger may be the index finger. In the description that follows, it may be appreciated that for illustrative purposes and brevity the first finger is the index finger, and the second finger is the thumb. According to at least one other embodiment, the configuration criteria may include an action to be performed by the mobile device display splitting program110A,110B based on a direction of movement of the first finger along the frame of the mobile device. In embodiments of the present invention, references are made to a first direction and a second direction. Movement of the first finger in the first direction along the frame of the mobile device may be associated with the creation of the plurality of split displays, and movement of the first finger in the second direction may be associated with a removal of one or more of the plurality of split displays. The user may define the first direction and the second direction. The first direction may be a downward direction (e.g., from a top of the mobile device to a bottom of the mobile device) along the frame of the mobile device, and the second direction may be an upward direction (e.g., from the bottom of the mobile device to the top of the mobile device) along the frame of the mobile device. Alternatively, the first direction may be the upward direction along the frame of the mobile device, and the second direction may be the downward direction along the frame of the mobile device. In the description that follows, it may be appreciated that for illustrative purposes and brevity the first direction is the downward direction along the frame of the mobile device, and the second direction is the upward direction along the frame of the mobile device. According to at least one further embodiment, the configuration criteria may include whether the user would like a number of split displays to be created to be displayed on the mobile device. 
In this embodiment, the user may specify that as the user moves the first finger in the downward direction, the number of split displays to be created on the mobile device be displayed in real-time. Alternatively, the user may elect not to display the number of split displays to be created. According to at least one other further embodiment, the configuration criteria may include the user specifying a movement distance required of the first finger in the first direction to create the plurality of split displays, and a movement distance required of the first finger in the second direction to remove one or more of the plurality of split displays. For example, the user may specify the first finger should move at one inch intervals in the first direction to create the plurality of split displays. In another example, the user may specify the first finger should move at half inch intervals in the first direction to create the plurality of split displays. It may be appreciated that the examples described above are not intended to be limiting, and that in embodiments of the present invention the distance intervals may be configured differently by the user. Then, at204, the mobile device display splitting program110A,110B identifies the first location of the first finger and the second location of the second finger of the user on the frame of the mobile device. Once the configuration criteria are received from the user, the mobile device display splitting program110A,110B may identify the first location and the second location using the plurality of sensors described above with respect toFIG.1. As described above, the IoT Device118may be a plurality of sensors embedded in the client computing device102(e.g., the mobile device), including pressure sensors and motion sensors, and/or any other IoT Device118known in the art for capturing facial and/or hand gestures of the user. In embodiments of the present invention, references are made to the first location of the first finger and the second location of the second finger of the user. The first location may be a particular location of the first finger on the frame of the mobile device on one side of the mobile device, and the second location may be a particular location of the second finger on the frame of the mobile device on an opposite side of the mobile device. For example, if the user is right-handed, the first finger (e.g., the index finger) may be contacting the frame of the mobile device on the left side of the mobile device, and the second finger (e.g., the thumb) may be contacting the frame of the mobile device on the right side of the mobile device. Continuing the example, if the user is left-handed, the first finger may be contacting the frame of the mobile device on the right side of the mobile device, and the second finger may be contacting the frame of the mobile device on the left side of the mobile device. According to at least one embodiment, the user may specify, in the configuration criteria described above with respect to step202, whether the user is left-handed or right-handed. This information may be stored in a user profile for future use. According to at least one other embodiment, where the user does not specify whether the user is left-handed or right-handed, the plurality of sensors embedded in the mobile device may be used by the mobile device display splitting program110A,110B to infer whether the user is left-handed or right-handed.
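The configuration criteria gathered at step202 lend themselves to a simple data structure. The sketch below is illustrative only; the field names and defaults are assumptions and are not prescribed by the disclosure.

```python
# A minimal sketch, assuming hypothetical field names, of one way to hold the
# configuration criteria received from the user at step 202.
from dataclasses import dataclass

@dataclass
class SplitConfig:
    split_finger: str = "index"       # first finger: creates/removes splits
    map_finger: str = "thumb"         # second finger: maps content
    split_direction: str = "down"     # first direction: create split displays
    remove_direction: str = "up"      # second direction: remove split displays
    show_split_count: bool = True     # display the count in real time
    interval_inches: float = 1.0      # finger travel per additional split
    handedness: str = "unspecified"   # optionally stored in the user profile

config = SplitConfig(interval_inches=0.5)   # e.g., half-inch intervals
```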
For example, based on a gripping pattern of the user and where pressure is applied on the frame of the mobile device, the plurality of sensors may be able to determine that the user is holding the mobile device with the left-hand, and thus the first finger is on the right side of the mobile device, and the second finger is on the left side of the mobile device. Additional details on the first location and the second location are described in further detail below with respect to steps206and214. Next, at206, the mobile device display splitting program110A,110B determines whether the user is moving the first finger in the first direction along the frame of the mobile device from the first location. As described above with respect to step204, the user may be gripping the mobile device in such a manner where the first finger is contacting the frame on the right side of the mobile device, or where the first finger is contacting the frame on the left side of the mobile device. According to at least one embodiment, when the user wishes to view content in a split-screen view, the user may move the first finger in the first direction along the frame of the mobile device from the first location. Additionally, as described above with respect to step202, as the user moves the first finger along the frame, the number of split displays to be created may be displayed to the user in real-time, which is illustrated inFIG.3. The plurality of sensors may be utilized by the mobile device display splitting program110A,110B to detect a change in pressure along the frame of the mobile device. This change in pressure may be used to infer the user is moving the first finger along the frame of the mobile device. For example, when the user is holding the mobile device in an initial gripping pattern, there would be a degree of pressure applied at the first location. As the user moves the first finger, the pressure of the first finger would be applied at a position other than the first location. According to at least one other embodiment, when the user does not wish to view content in the split-screen view, the user may simply refrain from moving the first finger from the first location. In this manner, the pressure applied at the first location is constant, which may be used to infer that the user is not moving the first finger along the frame of the mobile device. In response to determining the user is moving the first finger in the first direction (step206, “Yes” branch), the mobile device display splitting and mapping process200proceeds to step208to create the plurality of split displays on the mobile device. In response to determining the user is not moving the first finger in the first direction (step206, “No” branch), the mobile device display splitting and mapping process200ends, since it is inferred that the user does not wish to view content in a split-screen view. Then, at208, the mobile device display splitting program110A,110B creates the plurality of split displays on the mobile device based on the movement of the first finger in the first direction. The plurality of split displays may be created using conventional techniques. For example, the display of the mobile device may be split into sections with parallel lines separating the sections, illustrated inFIG.3. As shown inFIG.3, the user may move the first finger in the first direction from the first location. 
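The gripping-pattern inference at step204 and the movement decision at step206 can be pictured with a short sketch. The following is a minimal illustration under assumed sensor readings, units and thresholds; the function names are hypothetical and not part of the disclosure.

```python
# A minimal sketch, assuming a hypothetical pressure-sensor API: inferring
# handedness from the gripping pattern (step 204) and deciding at step 206
# whether the first finger is moving in the first (downward) direction.

def infer_grip(left_pressures, right_pressures, threshold=0.2):
    """A single dominant pressure point marks the thumb side; several smaller
    points mark the opposing fingers."""
    left_peaks = sum(1 for p in left_pressures if p > threshold)
    right_peaks = sum(1 for p in right_pressures if p > threshold)
    if right_peaks == 1 and left_peaks > 1:
        return {"hand": "right", "first_finger_side": "left", "second_finger_side": "right"}
    if left_peaks == 1 and right_peaks > 1:
        return {"hand": "left", "first_finger_side": "right", "second_finger_side": "left"}
    return {"hand": "unknown"}

def first_finger_moving_down(first_location_mm, current_location_mm, tolerance_mm=2.0):
    """Step 206: constant pressure at the first location means no split is wanted;
    a pressure point shifted downward (larger coordinate) triggers split creation."""
    return (current_location_mm - first_location_mm) > tolerance_mm

print(infer_grip([0.4, 0.5, 0.6, 0.3], [0.8, 0.05, 0.1]))  # -> right-handed grip
print(first_finger_moving_down(40.0, 65.0))                # -> True: create splits
```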
As described above with respect to steps202and206, as the user moves the first finger along the frame, the number of split displays to be created may be displayed to the user in real-time. According to at least one embodiment, the number of split displays created is directly proportional to the distance the first finger moves from the first location. For example, the greater the distance the first finger travels from the first location, the greater the number of split displays may be created. Information regarding how the specific number of split displays to be created is selected will be described in further detail below with respect toFIG.3. According to at least one other embodiment, after the plurality of split displays are created, the user may move the first finger in the second direction along the frame of the mobile device to remove one or more of the plurality of split displays. For example, if five split displays are created, and the user only wanted three split displays, the user may move the first finger in the second direction. As the user moves the first finger in the second direction, the number of split displays may be reduced sequentially. Next, at210, the mobile device display splitting program110A,110B identifies the contextual situation of the user. In embodiments of the present invention, the contextual situation may be an activity in which the user is engaged. For example, the user may be in the process of booking a reservation. Examples of such reservations may include, but are not limited to, a hotel reservation, an airline reservation, and/or a restaurant reservation. In another example, the user may be engaged in a different type of activity, such as listening to music and/or watching a video online. It may be appreciated that the examples described above are not intended to be limiting, and that in embodiments of the present invention the user may be engaged in a broad variety of activities. According to at least one embodiment, the contextual situation may be identified based on applications recently opened by the user. For example, many mobile devices keep records of when users open applications, how long these applications are open, how much battery power the applications draw, as well as the data obtained by the applications. Some mobile devices even have a “recently used apps” shortcut where the user can view and open recently opened applications. In this embodiment, the contextual situation may be identified from one or more recently used apps and/or a query entered into a search engine. For example, the user may open an application associated with a mass transit agency. Such an application may have features such as departure times for trains and buses, departure tracks and gates, service advisories, and/or fare information. In this example, the user may also enter the query “How do I get to Union Station?” into the search engine. Thus, the identified contextual situation may be that the user is looking to ride on mass transit. According to at least one other embodiment, the contextual situation may be specified by the user. For example, the user may open the same UI where the user set the configuration criteria. Continuing the example, the user may speak or type “I am looking for mass transit options” into a designated field of the UI. The contextual situation may be utilized to display the plurality of content, described in further detail below with respect to step212. 
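The proportional relationship at step208 can be expressed as a small calculation. The sketch below is an illustration only; the interval, cap and rounding rule are assumptions rather than values taken from the disclosure.

```python
# A minimal sketch, assuming a configured travel interval: deriving the number
# of split displays at step 208 from how far the first finger has moved.

def split_count(travel_inches, interval_inches=1.0, max_splits=5):
    """More downward travel yields more splits, capped at an assumed maximum;
    travel in the second (upward) direction reduces the count toward zero."""
    if travel_inches <= 0:
        return 0
    return min(int(travel_inches // interval_inches) + 1, max_splits)

print(split_count(2.3))        # -> 3 sections for ~2.3 inches of downward travel
print(split_count(2.3, 0.5))   # -> 5 sections with a half-inch interval (capped)
```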
Then, at212, the mobile device display splitting program110A,110B displays the plurality of content adjacent to the second finger of the user. The plurality of content displayed is based on the contextual situation of the user. Continuing the example above where the user opens the mass transit application, the plurality of content displayed may be relevant to mass transit. For example, the displayed content may include, but is not limited to, a map application so that a user can see a location of the station and ETA to the station, a ride-share application so that the user can book a ride to the station, a mobile ticketing application to buy tickets in advance, and/or other related content. Continuing the example above where the user is booking a hotel reservation, the plurality of content displayed may be relevant to hotel bookings. For example, the displayed content may include, but is not limited to, a map application so that a user can see the location of the hotel, a ride-share application so that the user can book a ride to the hotel, paragraphs of text regarding reviews of the hotel, and/or other related content. In this manner, any content irrelevant to the current activity may not be displayed adjacent to the finger of the user. According to at least one embodiment, the plurality of content may be displayed in aligned rows and columns along the edge of the display surface of the mobile device on the side of the second finger of the user. According to at least one other embodiment, the plurality of content may be displayed in an elliptical cluster along the edge of the display surface of the mobile device on the side of the second finger of the user. Upon displaying the plurality of content, individual items of this plurality of content may be mapped to one or more of the plurality of split displays, described in further detail below with respect to step214. Next, at214, the mobile device display splitting program110A,110B maps the one or more items of the plurality of content to the one or more of the plurality of split displays. The mapping may be executed based on the movement of the second finger along the frame of the mobile device from the second location. As described above with respect to step204, the second location may be a particular location of the second finger on the frame of the mobile device on an opposite side of the mobile device from the first location. According to at least one embodiment, the user may press and hold the second finger against the frame of the mobile device at the second location. In response to the press and hold action, an initial item of content is highlighted with a colored indicator (e.g., a yellow line around the perimeter of the item of content), and if the user continues the press and hold action, the highlight indicator cycles through different items of content until the user releases the second finger from the frame of the mobile device. In this manner, the user can select an item of content so that the item can be moved to one of the plurality of split displays. In this embodiment, once an item is highlighted, the user may move the item by moving the second finger along the frame of the mobile device in either an upward or downward direction, depending on the desired split display to which the user would like to map the item. 
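The link between the identified contextual situation (step210) and the content displayed adjacent to the second finger (step212) can be sketched as a simple lookup. The context keys, application names and matching rule below are hypothetical examples, not an implementation of the disclosed program.

```python
# A minimal sketch, assuming hypothetical context keys and app names: choosing
# which content to display next to the second finger from the user's context.

CONTEXT_CONTENT = {
    "mass_transit": ["maps", "ride_share", "mobile_ticketing", "service_advisories"],
    "hotel_booking": ["maps", "ride_share", "hotel_reviews"],
}

def content_for_context(recent_apps, search_query=""):
    """Pick a context from recent activity and return the content to display."""
    text = (" ".join(recent_apps) + " " + search_query).lower()
    if "transit" in text or "station" in text:
        return CONTEXT_CONTENT["mass_transit"]
    if "hotel" in text:
        return CONTEXT_CONTENT["hotel_booking"]
    return []   # nothing relevant to the current activity is displayed

print(content_for_context(["transit_app"], "How do I get to Union Station?"))
```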
According to at least one other embodiment, the mobile device display splitting program110A,110B may use gaze analytics to highlight an item of content which has been previously mapped to one of the plurality of split displays. In this manner, the user may move the item of content between the plurality of split displays. In this embodiment, the user may focus their eyes on a particular item in one of the split displays for a pre-defined period of time (e.g., five seconds). Once the pre-defined period of time is met, the particular item may be highlighted. In order to move the particular item to a different split display, the user may hover the second finger over the screen of the mobile device and, without touching the screen, make a motion in the air in either an upward or downward direction, depending on the desired split display to which the user would like to map the item. Additionally, in this embodiment, once a particular item already mapped to one of the plurality of the split displays is highlighted, the user may adjust a zoom-level of the particular item by moving the second finger along the frame of the mobile device. For example, moving the second finger along the frame in an upward direction may zoom-in on the particular item, whereas moving the second finger along the frame in a downward direction may zoom-out on the particular item. Then, at216, the mobile device display splitting program110A,110B creates the knowledge corpus of the correlation among the one or more mapped items of the plurality of content. The knowledge corpus may be created based on historical learning. For example, the mobile device display splitting program110A,110B may historically gather information for each contextual situation including, but not limited to, how many split displays were created, the items of content that were mapped to each split display, the zoom-level of each item that was mapped, and/or the arrangement of each item. For example, when the user is riding mass transit, the user may map a transit application to one split display, and map a ride-sharing application to the same or different split display. This arrangement may be stored in the knowledge corpus, and in the future when the contextual situation is the same, the mobile device display splitting program110A,110B may automatically arrange the items of content in the split displays without user input. According to at least one embodiment, different items of content in the same application may be mapped individually to the plurality of split displays. For example, many applications have different tabs, with each tab displaying different content. Continuing the example above, the mass transit application may have a tab to display service advisories, and another tab to display a trip planner. The user may map the service advisory content to one split display, and map the trip planner to another split display. This arrangement may also be stored in the knowledge corpus. Referring now toFIG.3, a diagram300depicting a user creating split display screens and mapping content to the split display screens is shown according to at least one embodiment. In the diagram300, the user may grip the mobile device with a single hand302. The user may move the thumb (i.e., the second finger) in either direction304to map individual items of content. 
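The press-and-hold selection at step214 and the zoom adjustment in the alternate embodiment can be pictured with a short sketch. The timing, state handling and function names below are illustrative assumptions only.

```python
# A minimal sketch, assuming hypothetical callbacks: cycling the highlight
# through the displayed content while the second finger is pressed and held
# (step 214), and adjusting zoom from the direction of finger movement.
import itertools
import time

def cycle_highlight(content_items, is_pressed, cycle_seconds=0.8):
    """Highlight items one after another until the finger is released."""
    highlighted = None
    for item in itertools.cycle(content_items):
        if not is_pressed():
            return highlighted          # released: this item stays selected
        highlighted = item
        # a real implementation would redraw the colored indicator here
        time.sleep(cycle_seconds)

def adjust_zoom(zoom_level, finger_direction, step=0.1):
    """Move the second finger up to zoom in, down to zoom out."""
    return zoom_level + step if finger_direction == "up" else zoom_level - step

presses = iter([True, True, True, False])
print(cycle_highlight(["maps", "tickets", "reviews"], lambda: next(presses), 0))
print(adjust_zoom(1.0, "up"))   # -> 1.1
```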
The user may also move the index finger (i.e., the first finger) in either direction306, wherein in the first direction the plurality of split displays may be created, and in the second direction one or more of the plurality of split displays may be removed. As the user moves the first finger in the first direction along the frame of the mobile device, the number of split displays to be created308may be presented to the user on the edge of the screen of the mobile device adjacent to the first finger of the user. In order to select the desired number of split displays to be created308, the user may apply pressure (e.g., press and hold) to the frame at a current position310next to the displayed number. For example, in the diagram300, the user applies pressure next to the number “3.” Thus, the screen of the mobile device may be split into three sections. Once the plurality of split displays are created, the user may move the thumb in either direction304to map the individual items of content to at least one of the plurality of split displays. For example, as illustrated in the diagram300, the user may move the thumb downward312to map the individual items of content. When the desired items of content are mapped, the selected content314may be shown in one or more of the plurality of split displays. It may be appreciated thatFIGS.2and3provide only an illustration of one implementation and do not imply any limitations with regard to how different embodiments may be implemented. Many modifications to the depicted environments may be made based on design and implementation requirements. FIG.4is a block diagram400of internal and external components of the client computing device102and the server112depicted inFIG.1in accordance with an embodiment of the present invention. It should be appreciated thatFIG.4provides only an illustration of one implementation and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environments may be made based on design and implementation requirements. The data processing system402,404is representative of any electronic device capable of executing machine-readable program instructions. The data processing system402,404may be representative of a smart phone, a computer system, PDA, or other electronic devices. Examples of computing systems, environments, and/or configurations that may be represented by the data processing system402,404include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, network PCs, minicomputer systems, and distributed cloud computing environments that include any of the above systems or devices. The client computing device102and the server112may include respective sets of internal components402a,band external components404a,billustrated inFIG.4. Each of the sets of internal components402includes one or more processors420, one or more computer-readable RAMs422, and one or more computer-readable ROMs424on one or more buses426, and one or more operating systems428and one or more computer-readable tangible storage devices430.
The one or more operating systems428, the software program108and the mobile device display splitting program110A in the client computing device102and the mobile device display splitting program110B in the server112are stored on one or more of the respective computer-readable tangible storage devices430for execution by one or more of the respective processors420via one or more of the respective RAMs422(which typically include cache memory). In the embodiment illustrated inFIG.4, each of the computer-readable tangible storage devices430is a magnetic disk storage device of an internal hard drive. Alternatively, each of the computer-readable tangible storage devices430is a semiconductor storage device such as ROM424, EPROM, flash memory or any other computer-readable tangible storage device that can store a computer program and digital information. Each set of internal components402a,balso includes an R/W drive or interface432to read from and write to one or more portable computer-readable tangible storage devices438such as a CD-ROM, DVD, memory stick, magnetic tape, magnetic disk, optical disk or semiconductor storage device. A software program, such as the mobile device display splitting program110A,110B, can be stored on one or more of the respective portable computer-readable tangible storage devices438, read via the respective R/W drive or interface432, and loaded into the respective hard drive430. Each set of internal components402a,balso includes network adapters or interfaces436such as TCP/IP adapter cards, wireless Wi-Fi interface cards, 3G or 4G wireless interface cards, or other wired or wireless communication links. The software program108and the mobile device display splitting program110A in the client computing device102and the mobile device display splitting program110B in the server112can be downloaded to the client computing device102and the server112from an external computer via a network (for example, the Internet, a local area network or other wide area network) and respective network adapters or interfaces436. From the network adapters or interfaces436, the software program108and the mobile device display splitting program110A in the client computing device102and the mobile device display splitting program110B in the server112are loaded into the respective hard drive430. The network may comprise copper wires, optical fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. Each of the sets of external components404a,bcan include a computer display monitor444, a keyboard442, and a computer mouse434. External components404a,bcan also include touch screens, virtual keyboards, touch pads, pointing devices, and other human interface devices. Each of the sets of internal components402a,balso includes device drivers440to interface to computer display monitor444, keyboard442, and computer mouse434. The device drivers440, R/W drive or interface432, and network adapter or interface436comprise hardware and software (stored in storage device430and/or ROM424). It is understood in advance that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed.
Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service. Service Models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. 
The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as follows:Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises.Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes. Referring now toFIG.5, illustrative cloud computing environment50is depicted. As shown, cloud computing environment50comprises one or more cloud computing nodes100with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone54A, desktop computer54B, laptop computer54C, and/or automobile computer system54N may communicate. Nodes100may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment50to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices54A-N shown inFIG.5are intended to be illustrative only and that computing nodes100and cloud computing environment50can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now toFIG.6, a set of functional abstraction layers600provided by cloud computing environment50is shown. It should be understood in advance that the components, layers, and functions shown inFIG.6are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer60includes hardware and software components. Examples of hardware components include: mainframes61; RISC (Reduced Instruction Set Computer) architecture based servers62; servers63; blade servers64; storage devices65; and networks and networking components66. In some embodiments, software components include network application server software67and database software68. 
Virtualization layer70provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers71; virtual storage72; virtual networks73, including virtual private networks; virtual applications and operating systems74; and virtual clients75. In one example, management layer80may provide the functions described below. Resource provisioning81provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing82provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may comprise application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal83provides access to the cloud computing environment for consumers and system administrators. Service level management84provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment85provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer90provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation91; software development and lifecycle management92; virtual classroom education delivery93; data analytics processing94; transaction processing95; and splitting a mobile device display and mapping content with a single hand of a user96. Splitting a mobile device display and mapping content with a single hand of a user96may relate to creating a plurality of split displays on a mobile device based on movement of a first finger of the user in order to map one or more items of a plurality of content to one or more of the plurality of split displays based on movement of a second finger of the user. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. | 53,808 |
11861085 | DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS To help a person skilled in the art better understand the solutions of the present disclosure, the following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are a part rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present disclosure. It is understood that terminologies, such as “center,” “longitudinal,” “horizontal,” “length,” “width,” “thickness,” “upper,” “lower,” “before,” “after,” “left,” “right,” “vertical,” “horizontal,” “top,” “bottom,” “inner,” “outer,” “clockwise,” and “counterclockwise,” are locations and positions regarding the figures. These terms merely facilitate and simplify descriptions of the embodiments instead of indicating or implying the device or components to be arranged on specified locations, to have specific positional structures and operations. These terms shall not be construed in an ideal or excessively formal meaning unless it is clearly defined in the present specification. In addition, the term “first”, “second” are for illustrative purposes only and are not to be construed as indicating or imposing a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature that limited by “first”, “second” may expressly or implicitly include at least one of the features. In the description of the present disclosure, the meaning of “plural” is two or more, unless otherwise specifically defined. All of the terminologies containing one or more technical or scientific terminologies have the same meanings that persons skilled in the art understand ordinarily unless they are not defined otherwise. For example, “arrange,” “couple,” and “connect,” should be understood generally in the embodiments of the present disclosure. For example, “firmly connect,” “detachably connect,” and “integrally connect” are all possible. It is also possible that “mechanically connect,” “electrically connect,” and “mutually communicate” are used. It is also possible that “directly couple,” “indirectly couple via a medium,” and “two components mutually interact” are used. All of the terminologies containing one or more technical or scientific terminologies have the same meanings that persons skilled in the art understand ordinarily unless they are not defined otherwise. For example, “upper” or “lower” of a first characteristic and a second characteristic may include a direct touch between the first and second characteristics. The first and second characteristics are not directly touched; instead, the first and second characteristics are touched via other characteristics between the first and second characteristics. Besides, the first characteristic arranged on/above/over the second characteristic implies that the first characteristic arranged right above/obliquely above or merely means that the level of the first characteristic is higher than the level of the second characteristic. 
The first feature being arranged under/below/beneath the second feature implies that the first feature is arranged right under or obliquely under the second feature, or merely means that the level of the first feature is lower than the level of the second feature. Different methods or examples are introduced to elaborate different structures in the embodiments of the present disclosure. To simplify the description, only specific components and devices are elaborated by the present disclosure. These embodiments are merely exemplary and do not limit the present disclosure. Identical reference numbers and/or letters are used repeatedly in different examples for simplicity and clarity; this repetition does not by itself imply any relation between the discussed methods and/or arrangements. The present disclosure provides a variety of examples with a variety of processes and materials. However, persons of ordinary skill in the art understand that other processes and/or other kinds of materials may also be used. Please refer toFIG.1andFIG.2.FIG.1is a top view of a touch panel according to an embodiment of the present invention.FIG.2is a diagram of a cross-section along the A-A direction shown inFIG.1. As shown inFIG.1andFIG.2, a touch panel1is disclosed. The touch panel1comprises a substrate10, a touch layer20and a photochromic layer30. The touch layer20is positioned on the substrate10. The photochromic layer30is positioned on the touch layer20. The photochromic layer30comprises a photosensitive resistor layer31and a color-changing cathode layer32. The photosensitive resistor layer31is positioned on the touch layer20and connected to the touch layer20. The color-changing cathode layer32is positioned on the photosensitive resistor layer31. As previously mentioned, a display having an electronic whiteboard function is desired, especially in a meeting scenario or in an education scenario with ambient light. A non-contact-type touch function with a wide viewing angle is desired to achieve a laser-pointer-like effect, allowing the teacher to point out a location on the electronic whiteboard. However, the conventional TV display module cannot realize the non-contact-type touch effect. In this embodiment, the photochromic layer30is positioned on the touch layer20and comprises the photosensitive resistor layer31and the color-changing cathode layer32. The photosensitive resistor layer31in the photochromic layer30changes its resistance when a specific light is applied to the photosensitive resistor layer31. This changes the voltage of the color-changing cathode layer32and thus the color-changing cathode layer32becomes a specific color. In this way, the light passing through the color-changing cathode layer32becomes that specific color. This realizes the display effect of showing a specific color in response to a non-contact-type touch in the touch display device. The photochromic layer30and the touch layer20constitute a photochromic device. The color-changing cathode layer32in the photochromic layer30could be regarded as a cathode of the photochromic device. The photosensitive resistor layer31in the photochromic layer30could be regarded as the photosensitive resistor of the photochromic device. The touch layer20could be regarded as an anode. The touch layer20not only has the touch function but also could be used as an anode to realize the non-contact-type touch function.
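For illustration only (this example is not part of the original disclosure), the relationship described above, in which illumination lowers the resistance of the photosensitive resistor layer31and thereby shifts the voltage on the color-changing cathode layer32, can be pictured with a minimal Python sketch. The voltage-divider topology, the resistance values and the color threshold used here are assumptions introduced for the example rather than values taken from the patent.

# Illustration only: the disclosure describes the photosensitive resistor layer 31 changing
# its resistance under the specific light, which shifts the voltage on the color-changing
# cathode layer 32. The divider topology, resistances and threshold below are assumptions.

def cathode_voltage(v_anode, r_photo, r_reference=1e6):
    # Model the cathode node as a divider between the anode (touch layer 20) and a fixed
    # reference path; r_photo is the photosensitive resistor value in ohms.
    return v_anode * r_reference / (r_photo + r_reference)

def cathode_color(v_cathode, v_threshold=3.0):
    # Assume the cathode shows its specific color (e.g. deep blue for WO3) above a threshold.
    return "specific color (e.g. deep blue)" if v_cathode >= v_threshold else "transparent"

v_anode = 5.0
dark_resistance = 50e6   # assumed resistance without the specific light
lit_resistance = 0.1e6   # assumed resistance under the specific (e.g. UV) light

for label, r in (("dark", dark_resistance), ("illuminated", lit_resistance)):
    v = cathode_voltage(v_anode, r)
    print(f"{label}: cathode at {v:.2f} V -> {cathode_color(v)}")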
Specifically, at least a part of the touch layer20could be used in a multiplexing way in the non-contact-type touch circuit and the conventional contact-type touch circuit. Apparently, this structure could reduce the thickness of the touch panel1and make the touch panel1more compact. In an embodiment, as shown inFIG.2, the touch layer20comprises a plurality of transmitter (Tx) electrodes21and a plurality of receiver (Rx) electrodes22. The transmitter electrodes21and the receiver electrodes22are arranged in a crisscross pattern on the substrate10. The photochromic layer30is positioned on the Tx electrodes21and connected to the Tx electrodes21. The orthographic projection of the photochromic layer30on the substrate10does not overlap with the orthographic projection of the Rx electrodes22on the substrate10. It could be understood that, in the touch layer20, the Rx electrodes22are used to receive an external touch signal to sense an external touch event. As previously mentioned, the photochromic layer30is positioned on the Tx electrodes21and connected to the Tx electrodes21. The orthographic projection of the photochromic layer30on the substrate10does not overlap with the orthographic projection of the Rx electrodes22on the substrate10. This arrangement could well prevent the photochromic layer30from being blocked by the Rx electrodes22such that the touch display function of the touch layer20is not influenced. In this way, the touch display function and the non-contact-type touch display function could be well integrated in the touch panel1. In addition, because the photochromic layer30is positioned on the Tx electrodes21and connected to the Tx electrodes21, the Tx electrodes21could be used in a multiplexing way in the non-contact-type touch circuit and the contact-type touch circuit. In the touch layer20, the Tx electrodes21and the Rx electrodes22could be positioned in the same layer or in different layers. When the Tx electrodes21and the Rx electrodes22are positioned in different layers, an insulating layer could be further included in the touch layer20. The Tx electrodes21and the Rx electrodes22are respectively positioned on the two sides of the insulating layer. Specifically, when the Tx electrodes21are positioned above the insulating layer and the Rx electrodes22are positioned under the insulating layer, the photochromic layer30could be directly positioned on the Tx electrodes21. When the Tx electrodes21are positioned under the insulating layer and the Rx electrodes22are positioned above the insulating layer, the photochromic layer30could be positioned on the insulating layer and between the Rx electrodes22. In this embodiment, the photochromic layer30could be connected to the Tx electrodes21through vias. The detailed structure is omitted here for simplicity. In an embodiment, as shown inFIG.2, the photosensitive resistor layer31comprises a plurality of photosensitive resistors311, positioned in areas between the Rx electrodes22and on the Tx electrodes21. The color-changing cathode layer32comprises a plurality of color-changing cathodes321. Each of the photosensitive resistors311corresponds to one color-changing cathode321of the color-changing cathodes321. The photosensitive resistors311are positioned in the areas between the Rx electrodes22as a matrix, and the Tx electrodes21, the photosensitive resistors311and the color-changing cathodes321are stacked.
In this embodiment, each photosensitive resistor311and each color-changing cathode321are distributed in the panel in a one-to-one correspondence. This allows the photosensitive resistor layer31to accurately sense the position of the touch panel1where a specific light is applied and to change to a specific color through the color-changing cathodes321. In this way, the light passing through the color-changing cathodes321becomes the specific color as well, such that the display effect of showing a specific color on the triggered portion of the touch panel1in response to a non-contact-type touch in the touch display device could be realized. Other positions of the touch panel1do not have any response because the specific light is not applied on those positions. In an embodiment, some of the Tx electrodes21could have the photosensitive resistors311on them. In another embodiment, all of the Tx electrodes21could have the photosensitive resistors311on them. In addition, in an embodiment, the orthogonal projection of the photosensitive resistor311on the substrate10could completely cover the orthogonal projection of the Tx electrode21on the substrate10. In another embodiment, the orthogonal projection of the photosensitive resistor311on the substrate10could partially cover the orthogonal projection of the Tx electrode21on the substrate10. This could be adjusted according to the actual demands. In this embodiment, the orthogonal projection of the photosensitive resistor311on the substrate10completely covers the orthogonal projection of the Tx electrode21on the substrate10. This could maximize the area of the photosensitive resistor311such that the photosensitive resistor311could more easily receive the external specific light and be triggered accordingly to change its resistance. In an embodiment, the material of the photosensitive resistor layer31is an ultraviolet (UV) photochromic material or an infrared photochromic material. In the actual implementation, the photosensitive resistor layer31changes its resistance when it is illuminated by the specific light. Here, when the material of the photosensitive resistor layer31is a UV photochromic material, the specific light is a UV light. When the material of the photosensitive resistor layer31is an infrared photochromic material, the specific light is an infrared light. In addition, the UV photochromic material is zinc oxide, an alloy of zinc oxide and magnesium, cadmium sulfide or cadmium selenide. The infrared photochromic material is lead sulfide, lead telluride, lead selenide or indium antimonide. In this embodiment, the material of the photosensitive resistor layer31is a UV photochromic material. Specifically, the UV photochromic material is zinc oxide or an alloy of zinc oxide and magnesium. Thus, the photosensitive resistor layer31is transparent. Furthermore, the amount of magnesium in the alloy of zinc oxide and magnesium could be adjusted to change the targeted UV light wavelength that the photosensitive resistor layer31could sense. By adopting zinc oxide or an alloy of zinc oxide and magnesium as the material of the photosensitive resistor layer31, the photosensitive resistor layer31is transparent, which prevents it from blocking light and affecting the aperture ratio of the touch display device. In an embodiment, the color-changing cathode layer32is transparent in its normal state. Here, the material of the color-changing cathode layer32could be an organic photochromic material or an inorganic photochromic material. In this embodiment, the normal state of the color-changing cathode layer32is set to be transparent.
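For illustration only, the effect of the magnesium content on the UV wavelength that the photosensitive resistor layer31can sense can be sketched with the standard photon-energy relation, the cutoff wavelength in nanometers being approximately 1240 divided by the bandgap in electron volts. The linear bandgap model and its coefficients in the sketch below are illustrative assumptions for a zinc oxide and magnesium alloy, not values disclosed in the patent.

# Rough illustration of how alloying ZnO with magnesium can shift the UV wavelength the
# photosensitive resistor layer 31 responds to. The conversion lambda_nm ~ 1240 / Eg_eV is
# the standard photon-energy relation; the linear bandgap model and its coefficients below
# are illustrative assumptions, not values taken from the patent.

def mgzno_bandgap_ev(mg_fraction):
    # Assumed linear model: pure ZnO near 3.37 eV, widening as the Mg content increases.
    return 3.37 + 2.1 * mg_fraction

def cutoff_wavelength_nm(bandgap_ev):
    # Photons with wavelengths shorter than this carry enough energy to be absorbed.
    return 1240.0 / bandgap_ev

for x in (0.0, 0.1, 0.2, 0.3):
    eg = mgzno_bandgap_ev(x)
    print(f"Mg fraction {x:.1f}: Eg ~ {eg:.2f} eV, cutoff ~ {cutoff_wavelength_nm(eg):.0f} nm")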
Specifically, the material of the color-changing cathode layer32is tungsten oxide or a viologen derivative, such that the normal state of the color-changing cathode layer32is transparent. In the actual implementation, the transparent color-changing cathode layer32avoids blocking light and affecting the aperture ratio of the touch display device. Furthermore, the photosensitive resistor layer31changes its resistance when the specific light is applied on the photosensitive resistor layer31. This changes the voltage of the color-changing cathode layer32and thus the color-changing cathode layer32becomes a specific color. Here, the specific color could be selected according to the selected material. For example, when the material of the color-changing cathode layer32is tungsten oxide (WO3), an inorganic photochromic material, the specific color is deep blue. That is, when the specific light is applied on the photosensitive resistor layer31, the photosensitive resistor layer31changes its resistance and the color of the tungsten oxide becomes deep blue because the voltage of the color-changing cathode layer32changes. According to an embodiment of the present invention, a touch display device is disclosed. As shown inFIG.3, the touch display device comprises a display panel2, a backlight module3, and the above-mentioned touch panel1. The touch panel1and the backlight module3are respectively positioned on the two sides of the display panel2. The photosensitive resistor layer31changes its resistance when the specific light (such as UV light) is applied on it. This changes the voltage of the color-changing cathode layer32and the color-changing cathode layer32becomes a specific color. In this way, the light passing through the color-changing cathode layer32becomes the specific color as well. This realizes the display effect of showing a specific color in response to a non-contact-type touch in the touch display device. In addition, the operations of the touch display device in this embodiment are similar to those described above for the touch panel1, and further illustrations are omitted here for simplicity. From the above, the touch panel1comprises a substrate10, a touch layer20and a photochromic layer30. The touch layer20is positioned on the substrate10. The photochromic layer30is positioned on the touch layer20. The photochromic layer30comprises a photosensitive resistor layer31and a color-changing cathode layer32. The photosensitive resistor layer31is positioned on the touch layer20and connected to the touch layer20. The color-changing cathode layer32is positioned on the photosensitive resistor layer31. The photosensitive resistor layer31in the photochromic layer30changes its resistance when a specific light is applied on the photosensitive resistor layer31. This changes the voltage of the color-changing cathode layer32and thus the color-changing cathode layer32becomes a specific color. In this way, the light passing through the color-changing cathode layer32becomes that specific color. This realizes the display effect of showing a specific color in response to a non-contact-type touch in the touch display device. Above are embodiments of the present invention, which do not limit the scope of the present invention. Any modifications, equivalent replacements or improvements within the spirit and principles of the embodiments described above should be covered by the protection scope of the invention. | 17,112
11861086 | DETAILED DESCRIPTION In order to make objects, technical solutions and advantages of the embodiments of the disclosure apparent, the technical solutions of the embodiments of the disclosure will be described in a clearly and fully understandable way in connection with the drawings related to the embodiments of the disclosure. It is obvious that the described embodiments are just a part but not all of the embodiments of the disclosure. Based on the described embodiments herein, those skilled in the art can obtain other embodiment(s) without any inventive effort, which should be within the scope of the disclosure. Unless otherwise defined, the technical terms or scientific terms used herein should have the general meanings as understood by those ordinarily skilled in the art. In the present disclosure, words such as "first", "second" and the like do not denote any order, quantity, or importance, but rather are used for distinguishing different components. Similarly, words such as "include" or "comprise" and the like denote that the element or object appearing before such a word covers the elements or objects, or equivalents thereof, enumerated after such a word, without excluding other elements or objects. Words such as "connected" or "connecting" and the like are not limited to physical or mechanical connections, but may include electrical connections, either direct or indirect. Words such as "up", "down", "left", "right", etc. are only used to indicate relative position relationships; when the absolute position of a described object changes, the relative position relationships may also change accordingly. For example, liquid crystal display devices or organic light emitting diode (OLED) display devices include a variety of signal lines such as scanning lines, data lines, etc. In addition to the above-mentioned signal lines, the display device includes other types of lines, such as power lines, etc. The lines achieve electrical connection with a circuit board at least partially through a bonding method. At present, as functions of a display device increase, the number of lines of the display device also increases. For example, for a touch display device, in order to realize a touch-control function, the touch display device needs to be provided with corresponding touch-control lines to transmit touch signals. The touch-control lines are electrically connected to a touch-control chip also by way of bonding, for example. For example, for a 6.5-inch mobile phone product, the numbers of data lines and touch-control lines are shown in Table 1.
TABLE 1
Resolution | Data lines | Touch-control lines | Number of lines
720*1280 (HD) | 2160 (mux 1:1) | 648 | 2808
1440*2560 (QHD) | 1440 (mux 1:3) | 648 | 2088
2160*3840 (UHD) | 2160 (mux 1:3) | 648 | 2808
For example, in the above table, mux 1:1 indicates that one sub-pixel corresponds to one data line, so that in a case where each pixel unit includes a red sub-pixel R, a green sub-pixel G, and a blue sub-pixel B, and the RGB sub-pixels of pixel units in a same column are connected to 3 data lines, respectively, 720 columns of pixel units correspond to 2160 data lines. Mux 1:3 indicates that three sub-pixels are controlled by a switch so as to correspond to one data line, i.e., three sub-pixels in one pixel unit correspond to one data line, so that 1440 columns of pixel units correspond to 1440 data lines. Table 1 shows that, for each connection method, the number of lines to be bonded increases by more than 20 percent compared to a stand-alone (display-only) device.
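As a quick check of the arithmetic behind Table 1 (illustration only, not part of the disclosure), the following Python sketch derives the data-line count from the horizontal resolution and the mux ratio, with three sub-pixels per pixel, and adds the 648 touch-control lines given in the table.

# Arithmetic behind Table 1: data lines follow from the horizontal resolution (3 sub-pixels
# per pixel) and the mux ratio; the bonded line count is data lines plus touch-control lines.
# The 648 touch-control lines are taken as given in the table.

def data_lines(horizontal_pixels, mux_ratio):
    return horizontal_pixels * 3 // mux_ratio

def total_lines(horizontal_pixels, mux_ratio, touch_lines=648):
    return data_lines(horizontal_pixels, mux_ratio) + touch_lines

print(total_lines(720, 1))   # HD,  mux 1:1 -> 2160 + 648 = 2808
print(total_lines(1440, 3))  # QHD, mux 1:3 -> 1440 + 648 = 2088
print(total_lines(2160, 3))  # UHD, mux 1:3 -> 2160 + 648 = 2808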
In COF (Chip On Flex or Chip On Film) products, respective lines are connected to the driving circuit board via corresponding contact pads in bonding regions, so that as the number of lines increases, the number of contact pads correspondingly increases. Therefore, the pad pitches between the contact pads are correspondingly reduced if the size of the display substrate remains unchanged. Corresponding to the reduction of the pad pitches, the pitches between the contact pads or pins on the circuit board or driver chip which bond with these contact pads also need to be reduced accordingly. For example, when the resolution of the display device is 3240*3240, the relationship between the pad pitches and the number of lines is shown in Table 2.
TABLE 2
Resolution | Data lines | Touch-control lines | Number of lines | Pad pitch
3240*3240 | 3240 (mux 1:3) | 648 | 3888 | 8.5 um
3240*3240 | 3240 (mux 1:3) | — | 3240 | 10 um
As can be seen in Table 2, in products with higher resolution (e.g., 3K or 4K), due to the increased number of lines on the panel, the number of contact pads on a driver IC (Integrated Circuit) increases accordingly, and the pad pitches on the driver IC become too small, which is difficult to implement with current IC packaging processes or which lowers the yield. Therefore, how to reduce the number of contact pads on display devices (e.g., driving circuit boards and display panels) has become an urgent problem to be solved. To solve the above problem, at least one embodiment of the present disclosure provides a touch display panel comprising: a plurality of first data lines; a plurality of touch-control lines; a plurality of first contact pads; and a plurality of first selection switches connected to the plurality of first contact pads in a one-to-one correspondence manner. Each first selection switch is electrically connected to one first contact pad, one first data line, and one touch-control line, and the first selection switch is configured to receive a first control signal and, according to the first control signal, electrically connect the first contact pad to the first data line during a first time period and electrically connect the first contact pad to the touch-control line during a second time period, where the first time period and the second time period do not overlap. Some embodiments of the present disclosure further provide a driving circuit board, a touch display device, and a driving method corresponding to the touch display panel described above. The touch display panel provided by the above embodiments of the present disclosure enables one first data line and one touch-control line to share one contact pad, so that the number of contact pads can be reduced and thus the pad pitches can be increased, which is advantageous to the realization of high-resolution display. Embodiments of the present disclosure and examples thereof are described in detail below with reference to the accompanying drawings. FIG.1is a schematic diagram of the touch display panel provided by at least one embodiment of the present disclosure. The touch display panel provided by at least one embodiment of the present disclosure is described in detail below with reference toFIG.1. As shown inFIG.1, the touch display panel10includes: a display circuit array11, a touch-control circuit array12, a plurality of first contact pads P1and a plurality of first selection switches131.
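For illustration only, the pad-pitch figures in Table 2 can be reproduced by dividing an assumed bonding-region width by the number of pads; the roughly 33 mm width used below is inferred from the table itself (3888 pads at about 8.5 um) and is not a value stated in the disclosure.

# Illustrative pad-pitch arithmetic consistent with Table 2. Only the pad counts and the
# approximate pitches come from the table; the bonding-region width is an assumption.

BONDING_WIDTH_UM = 33_000  # assumed usable bonding width, inferred from Table 2

def pad_pitch_um(num_pads, width_um=BONDING_WIDTH_UM):
    return width_um / num_pads

print(f"{pad_pitch_um(3888):.1f} um")  # separate pads for 3240 data + 648 touch lines -> ~8.5 um
print(f"{pad_pitch_um(3240):.1f} um")  # shared pads (touch lines reuse data pads)     -> ~10 um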
For example, the display circuit array11is used to implement display operations, and includes signal lines such as a plurality of first data lines DL1. The plurality of first data lines DL1are provided to transmit data signals. The touch-control circuit array12is used to enable touch-control operations and includes a plurality of touch-control lines TL to transmit touch-control signals (e.g., touch-control sensing signals). Each first contact pad P1is used to bond with a corresponding contact pad on the driving circuit board (e.g. referring toFIG.7) to implement an electrical connection, so that a data driving circuit and a touch detection circuit included in the driving circuit board can be electrically connected to the plurality of first contact pads P1, respectively, so that the touch display panel10and the driving circuit board may transmit electrical signals between each other. For example, the first selection switch131is electrically connected to one first contact pad P1, one first data line DL1and one touch-control line TL and configured to receive a first control signal and, according to the first control signal, to electrically connect the first contact pad P1and the first data line DL1during a first time period, and to electrically connect the first contact pad P1to the touch-control line TL during a second time period. The first time period and the second time period do not overlap, in order to achieve time-sharing multiplexing for the first contact pad P1. For example, the first time period is a display phase, and the second time period is a touch-control phase. For example, during the display phase, in response to the first control signal, the first selection switch131electrically connects the first contact pad P1and the first data line DL1, so that a data signal provided by the data driving circuit on the driving circuit board is transmitted via the first contact pad P1and the first data line DL1to a pixel unit in the display circuit array11, in order to drive the pixel unit to emit light according to corresponding data signal (e.g., grayscale voltage data); in the touch-control phase, in response to the first control signal, the first selection switch131electrically connects the first contact pad P1to the touch-control line TL so that a touch signal generated in the touch-control circuit array12(e.g., the touch-sensing signal) is transmitted via the touch-control line TL and the first contact pad P1to the touch detection circuit on the driving circuit board, in order to determine, for example, a touch position of a finger or a stylus on the touch display panel according to the touch signal (e.g., capacitance variation data), thereby realizing a touch-control function. Thus, in this embodiment, by controlling a switch position of the first selection switch131, the touch-control line and first data line can be connected to the same first contact pad in different phases, thereby reducing the number of first contact pads and increasing the pad pitches. FIG.2Ais a schematic circuit diagram of an implementation example of the first selection switch shown inFIG.1. As shown inFIG.2A, this first selection switch131includes a first transistor M1and a second transistor M2. It should be noted that the transistors shown inFIG.2Aare all illustrated as N-type transistors. The embodiments of the present disclosure are not limited thereto, and the transistors may also be P-type transistors. 
For example, the first electrode of the first transistor M1is connected to the touch-control line TL, the second electrode of the first transistor M1is connected to the first contact pad P1via a wiring1311, and the gate of the first transistor M1is connected to a first switch signal terminal MUX1to receive a first switch signal. For example, the first transistor M1is turned on in response to the first switch signal, and causes the touch-control line TL and the first contact pad P1to be connected, so that the touch-control signal transmitted by the touch-control line TL is transmitted to the first contact pad P1. The first electrode of the second transistor M2is connected to the first data line DL1, the second electrode of the second transistor M2is connected to the first contact pad P1via a wiring, and the gate of the second transistor M2is connected to a second switch signal terminal MUX2to receive a second switch signal. For example, the second transistor M2is turned on in response to the second switch signal, and causes the first data line DL1and the first contact pad P1to be connected, so that the data signal transmitted to the first contact pad P1is transmitted to the first data line DL1, and is transmitted to a pixel unit in the display circuit array11to drive the pixel unit to emit light. For example, in this example, the first control signal includes the first switch signal and the second switch signal. For example, when the first transistor M1and the second transistor M2are of the same type (as shown inFIG.2A, the first transistor M1and the second transistor M2are both N-type transistors, although they can both be P-type transistors, and the embodiments of the present disclosure are not limited thereto), the first switch signal and the second switch signal are different signals. For example, one is of high level with respect to a reference level, and the other is of low level with respect to the reference level. When the first transistor M1and the second transistor M2are of different types (as shown inFIG.2BorFIG.2C, the first transistor M1is an N-type transistor and the second transistor M2is a P-type transistor; and of course, the first transistor M1can be a P type transistor, and the second transistor M2is an N-type transistor), the first switch signal and second switch signal are the same signal. For example, the first switch signal and second switch signal are both of high level or low level with respect to the reference level. Embodiments of the present disclosure are not limited thereto. For example, in other examples, when the first transistor M1and the second transistor M2are of different types, the two transistors can be controlled to switch on or off by one switch signal.FIG.2Bis a schematic circuit diagram of another implementation example of the first selection switch provided by some embodiments of the present disclosure.FIG.2Cis a schematic circuit diagram of yet another implementation example of the first selection switch provided by some embodiments of the present disclosure. It should be noted that inFIGS.2B and2C, it is taken as an example that the first transistor M1is an N-type transistor and the second transistor M2is a P-type transistor. The embodiments of the present disclosure are not limited thereto. Also, the first transistor M1may be a P-type transistor and the second transistor M2may be an N-type transistor. 
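To make the time-sharing behaviour of the first selection switch131concrete, the following Python sketch (illustration only, not part of the disclosure) models the two N-type transistors ofFIG.2Aas simple on/off paths controlled by the first switch signal terminal MUX1and the second switch signal terminal MUX2, and reports which line the first contact pad P1is routed to. The boolean high/low modelling is a simplification introduced for this example.

# Behavioural sketch of the first selection switch 131 of FIG. 2A: M1 routes the touch-control
# line TL to the first contact pad P1, M2 routes the first data line DL1 to P1. Signal names
# mirror the text; the high/low modelling is an illustrative simplification.

def transistor_on(ttype: str, gate_high: bool) -> bool:
    # An N-type transistor conducts on a high gate level, a P-type transistor on a low level.
    return gate_high if ttype == "N" else not gate_high

def first_selection_switch(mux1_high: bool, mux2_high: bool,
                           m1_type: str = "N", m2_type: str = "N") -> str:
    # Return which line the first contact pad P1 is connected to.
    connects_tl = transistor_on(m1_type, mux1_high)   # M1: TL  <-> P1
    connects_dl = transistor_on(m2_type, mux2_high)   # M2: DL1 <-> P1
    if connects_tl and connects_dl:
        return "invalid: TL and DL1 shorted together"
    if connects_tl:
        return "P1 <-> touch-control line TL"
    if connects_dl:
        return "P1 <-> first data line DL1"
    return "P1 floating"

print(first_selection_switch(mux1_high=False, mux2_high=True))   # display phase
print(first_selection_switch(mux1_high=True,  mux2_high=False))  # touch-control phase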
For example, as shown inFIG.2B, the circuit structure of the first selection switch shown inFIG.2Bis substantially the same as that of the first selection switch shown inFIG.2A, with the difference that the gate of the second transistor M2is connected to the gate of the first transistor M1, i.e., to the first switch signal terminal MUX1, to be turned on or off under the control of the first switch signal provided by the first switch signal terminal MUX1. Of course, the gate of the second transistor M2and the gate of the first transistor M1may also both be connected to the second switch signal terminal MUX2(as shown inFIG.2C), to be turned on or off under the control of the second switch signal provided by the second switch signal terminal MUX2, and the embodiments of the present disclosure are not limited thereto. For example, in the example shown inFIG.2BorFIG.2C, the first control signal terminal is the first switch signal terminal MUX1or the second switch signal terminal MUX2, and the first control signal is the first switch signal or the second switch signal, and the embodiments of the present disclosure are not limited thereto. In the embodiments of the present disclosure, by means of this first selection switch, the first contact pad can be controlled to connect to different lines during different time periods, so that a plurality of lines may share one first contact pad, which can effectively reduce the number of first contact pads. Because the third contact pads on the driving circuit board and the first contact pads on the touch display panel are connected in an one-to-one correspondence manner, the number of third contact pads on the driving circuit board can also be effectively reduced, which can increase the pad pitches and reduce the difficulty of implementing the bonding process, thereby improving the yield of the products, and reducing manufacturing costs. For example, as shown in Tables 1 and 2 above, in some examples, because the number of touch-control lines is less than the number of first data lines, the touch-control lines TL shares only a part of the contact pads with the first data lines DL1in the touch display panel. For example, the part of the contact pads are the first contact pads P1. Another part of the contact pads are only connected to the second data lines DL2. For example, the other part of the contact pads are first data contact pads P11shown inFIG.1. For example, in some examples, because the first data contact pads P11are not connected to the touch-control lines TL, it is possible to directly connect the second data lines DL2to the first data contact pads P11. For example, as shown inFIG.1, in other examples, the touch display panel10further includes a plurality of first dummy selection switches132. For example, each first dummy selection switch132is electrically connected to one first data contact pad P11and one second data line DL2, and is configured to receive the first control signal and, according to the first control signal, electrically connect the first data contact pad P11and the second data line DL2. FIG.3is a schematic circuit diagram of an implementation example of the first dummy selection switch132shown inFIG.1. As shown inFIG.3, the first dummy selection switch132includes a first collocated transistor M11and a second collocated transistor M12. 
For example, the first electrode of the first collocated transistor M11is in a floating state (e.g., a state where the first collocated transistor M11is not connected to any signal lines), the second electrode of the first collocated transistor M11is connected to the first data contact pad P11, and the gate of the first collocated transistor M11is connected to the first switch signal terminal MUX1to receive the first switch signal. The first electrode of the second collocated transistor M12is connected to the second data line DL2, the second electrode of the second collocated transistor M12is connected to the first data contact pad P11, and the gate of the second collocated transistor M12is connected to the second switch signal terminal MUX2to receive the second switch signal. It should be noted that the first collocated transistor M11and the second collocated transistor M12may also have the same types, connection manners and operation principles as those of the first transistor M1and the second transistor M2as shown inFIG.2BorFIG.2C, i.e., when the type of the first collocated transistor M11and the type of the second collocated transistor M12are different, the gate of the first collocated transistor M11and the gate of the second collocated transistor M12may both be connected to the first control signal terminal (e.g., the first switch signal terminal or the second switch signal terminal), which will not be repeated herein. For example, by setting the first dummy selection switch132having the same structure as that of the first selection switch131at the first data contact pad, the load connected to each first data contact pad P11and the load connected to each first contact pad P1may be the same, so the impact on the data signals or the touch-control signals due to the difference between the loads connected to each contact pad can be avoided, thereby improving touch-control accuracy and display quality of the touch display panel. Another specific implementation example of the first dummy selection switch132includes only the first collocated transistor M11without including the second collocated transistor M12. FIG.4Ais a schematic diagram of a display circuit array provided by at least one embodiment of the present disclosure, andFIG.4Bis a schematic diagram of an example of a pixel unit shown inFIG.4A. It should be noted that the pixel unit shown inFIG.4Btakes a pixel unit used in a liquid crystal display panel as an example, while the embodiments of the present disclosure are not limited thereto. The pixel unit may also employ a pixel unit used in an organic light emitting diode display panel, and details are not described herein. The display circuit array provided by the present disclosure is described in detail below with reference toFIGS.4A and4B. As shown inFIG.4A, the display circuit array11includes a plurality of columns of pixel units110. Each column of pixel units110are connected to a same first data line DL1to receive a data signal. As shown inFIG.4B, each pixel unit110includes red, green, and blue (RGB) sub-pixels located in a same row, and sub-pixels in each column are connected to a same data line DL1or a same second data line DL2. Each sub-pixel includes at least one thin film transistor111, pixel electrode114and common electrode113. 
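The load-matching role of the first dummy selection switch132can be illustrated with a small count (a sketch only, using a deliberately simplified notion of load): with the dummy switch in place, a first data contact pad P11is tied to the same number of transistor electrodes as a shared first contact pad P1, whereas a bare direct connection of the second data line DL2would leave the loads mismatched.

# Simplified load count per bonding pad (illustration only): the "load" here is just the
# number of electrodes or wires landing on the pad, which is enough to show why the first
# dummy selection switch 132 mirrors the first selection switch 131.

def pad_load(connections):
    # Count the connections landing on the pad in this simplified model.
    return len(connections)

shared_pad_P1 = ["M1 second electrode", "M2 second electrode"]    # first selection switch 131
data_pad_P11  = ["M11 second electrode", "M12 second electrode"]  # first dummy selection switch 132
bare_data_pad = ["direct DL2 connection"]                         # DL2 wired straight to the pad

print(pad_load(shared_pad_P1))  # 2
print(pad_load(data_pad_P11))   # 2 -> matches the shared pad P1
print(pad_load(bare_data_pad))  # 1 -> mismatched without the dummy switch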
The thin film transistor111acts as a switch element, and includes a gate, a source and a drain, and the gate, the source and the drain are respectively connected to a gate line GL, the first data line DL1/the second data line DL2and the pixel electrode114. The pixel electrode114and the common electrode113form a capacitor. For example, the common electrode113is connected to a common electrode line112to receive a common voltage, and the thin film transistor111is turned on under the control of a gate scanning signal on the gate line GL, in order to apply the data signal on the first data line DL1or the second data line DL2to the pixel electrode114, to charge the capacitor formed by the pixel electrode114and the common electrode113, thereby forming an electric field to control deflections of liquid crystal molecules. FIG.5Ais a schematic diagram of a touch-control circuit array provided by at least one embodiment of the present disclosure, andFIG.5Bis a schematic diagram of another touch-control circuit array provided by at least one embodiment of the present disclosure. The touch-control circuit arrays provided by the embodiments of the present disclosure are described in detail below with reference toFIGS.5A and5B. For example, in some examples, as shown inFIG.5A, the touch-control circuit array12includes a plurality of first touch-control electrodes121. Each of the first touch-control electrodes121is connected to one touch-control line TL. For example, in this example, the plurality of first touch-control electrodes121are self-capacitive electrodes to implement touch control. For example, a touch sensing signal generated by each first touch-control electrode is transmitted, via the touch-control line TL connected to that electrode, to the touch detection circuit on the driving circuit board. For example, in some embodiments of the present disclosure, the first touch-control electrode121may be multiplexed as the common electrode113shown inFIG.4B. It should be noted that in other examples, the touch-control circuit array may further include touch sensors each forming a mutual capacitor to implement the touch detection, and the embodiments of the present disclosure are not limited thereto. For example, in this example, as shown inFIG.5B, the touch-control circuit array12includes a plurality of touch sensors122arranged in an array. Each touch sensor122includes a first touch-control electrode1221and a second touch-control electrode1222. The second touch-control electrodes1222of the touch sensors122in each column are connected to the same touch-control line TL. For example, the first touch-control electrode1221is a touch-control driving electrode, for example, to receive a touch-control driving signal. The second touch-control electrode1222is a touch-control sensing electrode, for example, to receive a touch-control sensing signal and transmit this touch-control sensing signal to the touch detection circuit via the touch-control line TL. Of course, in other examples, the first touch-control electrode1221is the touch-control sensing electrode and the second touch-control electrode1222is the touch-control driving electrode; the embodiments of the present disclosure are not limited thereto. For example, in some embodiments, the first touch-control electrode1221is identical to the first touch-control electrode121as shown inFIG.5A.
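As a generic illustration of how a touch detection circuit can interpret the two electrode schemes above (this is not the patent's own detection algorithm), the Python sketch below locates touches from per-electrode capacitance changes in the self-capacitance case ofFIG.5Aand from per-crossing changes in the mutual-capacitance case ofFIG.5B; the numeric values and the threshold are invented for the example.

# Generic illustration only: self-capacitance gives one reading per first touch-control
# electrode (FIG. 5A); mutual capacitance gives one reading per Tx/Rx crossing (FIG. 5B).
# Values and the threshold are made up for the example.

THRESHOLD = 5.0  # assumed minimum capacitance change that counts as a touch

def self_cap_touches(deltas):
    # deltas[i]: capacitance change on first touch-control electrode i.
    return [i for i, d in enumerate(deltas) if d > THRESHOLD]

def mutual_cap_touches(delta_matrix):
    # delta_matrix[row][col]: change at the crossing of driving row and sensing column.
    return [(r, c) for r, row in enumerate(delta_matrix)
            for c, d in enumerate(row) if d > THRESHOLD]

print(self_cap_touches([0.2, 7.9, 0.1, 6.3]))        # electrodes 1 and 3 touched
print(mutual_cap_touches([[0.1, 0.2, 0.3],
                          [0.2, 8.4, 0.1],
                          [0.1, 0.3, 6.1]]))          # crossings (1, 1) and (2, 2)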
For example, in some embodiments of the present disclosure, the display circuit array includes common electrodes, and the first touch-control electrodes1221may be multiplexed as the common electrodes shown inFIG.4B, and are configured to receive the common voltage. FIG.6Ais a schematic diagram of another touch display panel provided by at least one embodiment of the disclosure, andFIG.6Bis a schematic diagram of yet another touch display panel provided by at least one embodiment of the disclosure. For example, in some examples, as shown inFIG.6A, in the example shown inFIG.1, the touch display panel10further includes a common signal line113. For example, the common signal line113is connected to the first touch-control electrodes. For example, the common signal line113is connected to the first touch-control electrodes121shown inFIG.5A, or is connected to first touch-control electrodes1221in the touch sensor122shown inFIG.5B. In this embodiment, for example, the first touch-control electrode121or the first touch-control electrode1221can be multiplexed by driving the touch display panel in a manner of time-sharing. In this example, since the first touch-control electrodes121or the first touch-control electrodes1221are multiplexed as the common electrodes (as shown inFIG.4B) for the display circuit array11, during the display phase, the common signal line113provides the common voltage for display to the first touch-control electrodes121or the first touch-control electrodes1221, to enable the first touch-control electrodes121or the first touch-control electrodes1221to act as the common electrodes in this phase to drive pixel units to emit light. In the touch-control phase, the touch-control driving signal is provided to the first touch-control electrodes121or the first touch-control electrodes1221to enable touch detection. For example, in other examples, the touch display panel10further includes a second contact pad P2. For example, the second contact pad P2is connected to the common signal line113to provide a voltage signal to the common signal line113. For example, the voltage signal is used as a signal for providing the common voltage in the display phase and used as the touch driving signal in the touch-control phase, the embodiments of the present disclosure are no limited thereto. For example, as shown inFIG.6B, on the basis of the example shown in FIG.6A, each first selection switch131further includes a third transistor M3. For example, the first electrode of the third transistor M3is connected to the common signal line113, the second electrode of the third transistor M3is connected to one first touch-control electrode121/1221, and the gate of the third transistor M3is connected to a third switch signal terminal MUX3, to receive a third switch signal. For example, the third transistor M3is turned on under the control of the third switch signal, so that the common signal line113is connected to the first touch-control electrodes121/1221, to provide the signal for providing the common voltage or the touch driving signal to the first touch-control electrodes121/1221. For example, during the display phase, the third transistors M3in respective rows may be simultaneously turned on to transmit the signal for providing the common voltage provided by the common signal line113to the touch-control circuit array. 
For example, in the touch-control phase, when the touch-control circuit array12is of the structure shown inFIG.5B, i.e., when the touch detection is performed based on the mutual capacitance, the third transistors M3may be turned on line by line so that the touch-control driving signal on the common signal line113is input to the touch-control circuit array line by line, in order to achieve a scanning of the touch display panel line by line, thereby realizing the touch-control function. For example, in the touch-control phase, when the touch-control circuit array12is of the structure shown inFIG.5A, i.e., when the touch detection is performed based on the self-capacitance, it is possible to use the structure shown inFIG.6A, i.e., it is possible to simultaneously apply the touch-control driving signal to the first touch-control electrodes121via the common signal line113. At the same time, each first touch-control electrode121transmits the touch-control sensing signal generated by the first touch-control electrode121via the touch-control line TL connected thereto, respectively, to the touch detection circuit. Of course, for example, the structure shown inFIG.6Bcan also be adopted, such that the third transistors M3in respective rows are simultaneously turned on to apply the touch-control driving signal to all of the first touch-control electrodes121simultaneously. At least one embodiment of the present disclosure also provides a driving circuit board used in the touch display panel, the driving circuit board is integrated in the bonding region of the touch display device and connected correspondingly to the touch display panel10, to provide corresponding driving signals (e.g., the data signal, the gate scanning signal, the touch-control driving signal, the signal for providing the common voltage, and the signal for providing other supply voltages, etc.) to the touch display panel10. FIG.7is a schematic diagram of a driving circuit board provided by at least one embodiment of the present disclosure. As shown inFIG.7, the driving circuit board20includes a data driving circuit21, a touch detection circuit22, a plurality of third contact pads P3and a plurality of second selection switches133. For example, the plurality of second selection switches133are connected to the plurality of third contact pads P3in an one-to-one correspondence manner. For example, each second selection switch133is electrically connected to one third contact pad P3, the touch detection circuit22, and the data driving circuit21, and is configured to receive a second control signal, and based on the second control signal, the third contact pad P3and the data driving circuit21are electrically connected during a first time period. Also, the third contact pad P3is electrically connected to the touch detection circuit22during a second time period. The first time period and the second time period do not overlap. The driving circuit board20is bonded with the first contact pads P1of the touch display panel10shown inFIG.1, for example, via the third contact pads P3to establish an electrical connection, so that the touch display panel10and the driving circuit board can transmit electrical signals to each other. 
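For the mutual-capacitance scan described earlier in this passage, the line-by-line driving can be sketched as follows (illustration only, with a hypothetical read-out function standing in for the touch detection circuit): in each step of the touch-control phase the third transistors M3of one row are turned on, so only that row of first touch-control electrodes receives the touch-control driving signal while every touch-control line TL is sampled.

# Illustration only: one touch-control-phase scan for the mutual-capacitance arrangement,
# driving one row per step (that row's M3 transistors on) while all touch-control lines TL
# are read. The read-out function is hypothetical.

def touch_scan(num_rows, read_touch_lines):
    frame = []
    for row in range(num_rows):              # only this row's M3 transistors are on
        frame.append(read_touch_lines(row))  # sample every touch-control line TL
    return frame

# Hypothetical read-out: a touch near row 2, column 1 shows up only while row 2 is driven.
fake_readings = lambda row: [8.2 if (row, col) == (2, 1) else 0.1 for col in range(4)]
print(touch_scan(num_rows=4, read_touch_lines=fake_readings))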
For example, the data driving circuit21and the touch detection circuit22may be prepared directly on the substrate of the driving circuit board20, or realized as integrated circuit chips mounted on the substrate of the driving circuit board20in an appropriate manner (e.g., bonding) and electrically connected to the lines on the substrate, and thus connected with the third contact pads P3. For example, during the display phase, the second selection switch133is turned on in response to the second control signal, causing the data driving circuit21to be connected to the third contact pad P3, thereby transmitting the data signal generated by the data driving circuit21to the third contact pad P3. During the touch-control phase, the second selection switch133is turned on in response to the second control signal, causing the touch detection circuit22to connect to the third contact pad P3, thereby transmitting the touch sensing signal received by the third contact pad P3from the touch display panel10to the touch detection circuit, so that the touch detection circuit determines touch positions, for example, the touch position of a finger, stylus, etc., on the touch display panel, according to the capacitance changes derived from the touch sensing signal. Thus, in this embodiment, by controlling the on/off time of the second selection switch133, the touch detection circuit and the data driving circuit are connected to the same third contact pad during different phases, respectively, thus reducing the number of third contact pads on the driving circuit board, increasing the pad pitches, reducing the difficulty of the bonding process, improving the yield of products, and reducing manufacturing costs, which is advantageous for the realization of high-resolution display. FIG.8Ais a schematic circuit diagram of an implementation example of the second selection switch shown inFIG.7. As shown inFIG.8A, the second selection switch133includes a fourth transistor M4and a fifth transistor M5. It should be noted that the transistors shown inFIG.8Aare all illustrated as N-type transistors, for example; the embodiments of the present disclosure are not limited thereto, and the transistors may also be P-type transistors. For example, the first electrode of the fourth transistor M4is connected to the touch detection circuit22, the second electrode of the fourth transistor M4is connected to the third contact pad P3via a wiring1331, and the gate of the fourth transistor M4is connected to a fourth switch signal terminal MUX4to receive a fourth switch signal. For example, in some examples, the fourth transistor M4is turned on in response to the fourth switch signal, causing the touch detection circuit22to be connected to the third contact pad P3so that the touch sensing signal received by the third contact pad P3from the touch display panel10is transmitted to the touch detection circuit. For example, the first electrode of the fifth transistor M5is connected to the data driving circuit21, the second electrode of the fifth transistor M5is connected to the third contact pad P3via a wiring1332, and the gate of the fifth transistor M5is connected to a fifth switch signal terminal MUX5to receive a fifth switch signal. For example, the fifth transistor M5is turned on in response to the fifth switch signal, causing the data driving circuit21to be connected to the third contact pad P3, so that the data signal generated by the data driving circuit21is transmitted to the third contact pad P3.
For example, the second control signal includes the fourth switch signal and the fifth switch signal. For example, when the fourth transistor M4and the fifth transistor M5are of the same type (as shown inFIG.8A), the fourth switch signal and the fifth switch signal are different signals. For example, one is of high level, and the other is of low level. When the fourth transistor M4and the fifth transistor M5are of different types (as shown inFIG.8BorFIG.8C), the fourth switch signal and the fifth switch signal are the same signal, e.g., both of high level or both of low level. The embodiments of the present disclosure are not limited thereto. For example, in other examples, when the fourth transistor M4and the fifth transistor M5are of different types, one switch signal (e.g., the second control signal) can be used to control the on/off of the two transistors.FIG.8Bis a schematic circuit diagram of another implementation example of the second selection switch provided by some embodiments of the disclosure.FIG.8Cis a schematic circuit diagram of yet another implementation example of the second selection switch provided by some embodiments of the disclosure. It should be noted that inFIG.8BandFIG.8C, illustrations are provided by taking the case where the fourth transistor M4is an N-type transistor and the fifth transistor M5is a P-type transistor as an example, and the embodiments of the present disclosure are not limited thereto. The fourth transistor M4may also be a P-type transistor and the fifth transistor M5may also be an N-type transistor. For example, as shown inFIG.8B, the circuit structure of the second selection switch shown inFIG.8Bis substantially the same as the circuit structure of the second selection switch shown inFIG.8A, with a difference that the gate of the fourth transistor M4is connected to the gate of the fifth transistor M5, i.e., to the fourth switch signal terminal MUX4, so that both transistors are turned on or off under the control of the fourth switch signal provided by the fourth switch signal terminal MUX4. Of course, the gate of the fourth transistor M4and the gate of the fifth transistor M5may also both be connected to the fifth switch signal terminal MUX5(as shown inFIG.8C), so that both transistors are turned on or off under the control of the fifth switch signal provided by the fifth switch signal terminal MUX5, and the embodiments of the present disclosure are not limited thereto. For example, in this example, the fourth switch signal terminal MUX4or the fifth switch signal terminal MUX5may be used as the second control signal terminal, with the fourth switch signal or the fifth switch signal serving as the second control signal. For example, in this example, the fourth transistor M4is an N-type transistor and the fifth transistor M5is a P-type transistor, or the fourth transistor M4is a P-type transistor and the fifth transistor M5is an N-type transistor. The embodiments of the present disclosure are not limited thereto. As shown inFIG.7, the driving circuit board20further includes a plurality of second dummy selection switches134and a plurality of second data contact pads P31. Each second dummy selection switch134is electrically connected to one second data contact pad P31and the data driving circuit21, and is configured to receive the second control signal and to electrically connect the second data contact pad P31and the data driving circuit21according to the second control signal.
FIG.8Dis a schematic circuit diagram of the second dummy selection switch provided by at least one embodiment of the present disclosure. As shown inFIG.8D, the second dummy selection switch includes a third collocated transistor M15and a fourth collocated transistor M14. The first electrode of the third collocated transistor M15is connected to the data driving circuit21, the second electrode of the third collocated transistor M15is connected to the second data contact pad P31via the wiring1332, and the gate of the third collocated transistor M15is connected to the fifth switch signal terminal MUX5to receive the fifth switch signal. The first electrode of the fourth collocated transistor M14is in a floating state, the second electrode of the fourth collocated transistor M14is connected to the second data contact pad P31via the wiring1331, and the gate of the fourth collocated transistor M14is connected to the fourth switch signal terminal MUX4to receive the fourth switch signal. It should be noted that the second dummy selection switch may include only the third collocated transistor M15, and the embodiments of the present disclosure are not limited thereto. The connection structure of the second dummy selection switch is not limited thereto either, and may also be similar to the connection structure of the respective transistors in the second selection switch shown inFIGS.8B and8C, which will not be described in detail herein. FIG.9is a schematic diagram of another driving circuit board provided by at least one embodiment of the present disclosure. For example, as shown inFIG.9, on the basis of the example shown inFIG.7, the driving circuit board20further includes a fourth contact pad P4and a voltage signal circuit23connected to the fourth contact pad P4. For example, the voltage signal circuit23is configured to provide a supply voltage (e.g., the common voltage or a touch-control driving voltage or other supply voltages) to the fourth contact pad P4. For example, the voltage signal circuit23may be prepared directly on the substrate of the driving circuit board20, or implemented as an integrated circuit chip mounted on the substrate of the driving circuit board20in an appropriate manner (e.g., bonding) and electrically connected to the lines on the substrate, and then electrically connected to the fourth contact pad P4. In the driving circuit board provided by the above embodiments of the present disclosure, the data driving circuit21and the touch detection circuit22may share at least part of the contact pads, thereby allowing one first data line and one touch-control line of the touch display panel to share one contact pad, thus reducing the number of contact pads and increasing the pad pitches, which is advantageous for the implementation of high-resolution display. The transistors used in the embodiments of the present disclosure can all be thin film transistors or field effect transistors or other switch devices with the same characteristics. The embodiments of the present disclosure are illustrated by taking thin-film transistors as an example. The source and drain of each transistor used herein can be symmetrical in structure, so that there may be no difference in structure between the source and the drain. In the embodiments of the present disclosure, in order to distinguish between the two electrodes of each transistor other than the gate, one of the two electrodes is directly described as the first electrode, and the other as the second electrode.
In addition, the transistors can be divided into N-type and P-type transistors according to the characteristics of the transistors. When a transistor is a P-type transistor, a turn-on voltage thereof is a low-level voltage and a turn-off voltage thereof is a high-level voltage. When a transistor is an N-type transistor, the turn-on voltage thereof is a high-level voltage and the turn-off voltage thereof is a low-level voltage. In addition, the transistors in the embodiments of the present disclosure are all illustrated as N-type transistors, and in this case, the first electrode of each transistor is the drain and the second electrode is the source. It should be noted that the present disclosure is not limited to this. For example, one or more transistors of the respective selection switches provided by the embodiments of the present disclosure may also adopt P-type transistors, wherein the first electrode of each transistor is the source and the second electrode is the drain. It is only required to connect the electrodes of the selected type of transistors by referring to the connections of the respective electrodes of the transistors in the embodiments of the present disclosure, and to supply corresponding high or low voltages to the corresponding voltage terminals. If N-type transistors are used, Indium Gallium Zinc Oxide (IGZO) can be used as the active layer of the thin-film transistors, and compared with use of low-temperature polysilicon (LTPS) or amorphous silicon (e.g., hydrogenated amorphous silicon) as the active layer of the thin-film transistors, the size of the transistors can be effectively reduced and current leakage is prevented. At least one embodiment of the present disclosure also provides a touch display device. For example, in some examples, the touch display device includes, for example, the touch display panel as shown inFIG.1and the driving circuit board as shown inFIG.7. For example, in other examples, the touch display device may include the touch display panel as shown inFIG.6AorFIG.6Band the driving circuit board as shown inFIG.9. FIG.10is a schematic diagram of the touch display device provided by at least one embodiment of the present disclosure.FIG.11is a signal timing diagram of the touch display device provided by at least one embodiment of the present disclosure. As shown inFIG.10, the touch display device includes the touch display panel as shown inFIG.6Band the driving circuit board as shown inFIG.9. The touch display device shown inFIG.10is illustrated below by taking the first selection switch inFIG.2Aand the second selection switch inFIG.8Aas an example. The embodiments of the present disclosure are not limited thereto. Hereinafter, the operating principle of the touch display device provided by the embodiments of the present disclosure is described in detail with reference toFIGS.10and11. For example, as shown inFIG.10, the plurality of first contact pads P1and the plurality of third contact pads P3are electrically connected in a one-to-one correspondence manner, and the second contact pad P2and the fourth contact pad P4are connected in a one-to-one correspondence manner, thereby enabling the bonding of the touch display panel10and the driving circuit board20.
For example, in some examples, when the touch display panel10includes a plurality of first data contact pads P11(only one first data contact pad P11is shown by way of example), the plurality of first data contact pads P11may also be connected in one-to-one correspondence with the third contact pads P3(not shown in the figure). For example, as shown inFIG.10, when the touch display panel10further includes a plurality of first data contact pads P11and the driving circuit board20further includes a plurality of second data contact pads P31, the plurality of first data contact pads P11and the plurality of second data contact pads P31are connected in a one-to-one correspondence manner. For example, as shown inFIG.11, during the display phase t1, the first switch signal terminal MUX1provides a signal of low level, the second switch signal terminal MUX2provides a signal of high level, the fourth switch signal terminal MUX4provides a signal of low level, and the fifth switch signal terminal MUX5provides a signal of high level. Therefore, each first transistor M1and each fourth transistor M4are turned off, and each second transistor M2, each second collocated transistor M12, each fifth transistor M5and each fifth collocated transistor M15are turned on, such that the first data lines DL1and the first contact pads P1are connected, the second data lines DL2are connected to the first data contact pads P11, and the third contact pads P3are electrically connected to the data driving circuit21. Since the first contact pads P1and the third contact pads P3are connected, and the first data contact pads P11and the second data contact pads P31are connected, the first data lines DL1and the second data lines DL2are electrically connected to the data driving circuit21. Thus, the data signal Vdata provided by the data driving circuit21is transmitted to the first data lines DL1via the third contact pads P3and the first contact pads P1, and to the second data lines DL2via the first data contact pads P11and the second data contact pads P31; the data signal is then transmitted to the pixel electrodes114of the pixel units in the display circuit array11via the first data lines DL1and the second data lines DL2, to drive the pixel units to emit light. During this phase, the common signal line113provides the common signal Vcom to the touch-control circuit array, so that the touch-control circuit array (e.g., the first touch-control electrodes) may be multiplexed as the common electrodes. Thus, during this phase, it is possible to charge the capacitor formed by the pixel electrode114and the common electrode113in the pixel unit110shown inFIG.4B, thereby forming an electric field to control the deflections of the liquid crystal molecules. During the touch stage t2, the first switch signal terminal MUX1provides a signal of high level, the second switch signal terminal MUX2provides a signal of low level, the fourth switch signal terminal MUX4provides a signal of high level, and the fifth switch signal terminal MUX5provides a signal of low level, so that the first transistor M1and the fourth transistor M4are turned on and the second transistor M2and the fifth transistor M5are turned off; as a result, the touch-control lines TL are connected to the first contact pads P1, and the third contact pads P3are electrically connected to the touch detection circuit22.
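The switch-state logic of the display phase t1 and the touch phase t2 described above can be summarized with a small, illustrative sketch. This is only a simplified model written for this description, not part of the disclosed driving circuit: the mapping of gates to switch signal terminals follows the text, and the assumption that a high level turns a transistor on follows the N-type convention stated earlier.

```python
# Minimal sketch of the phase-dependent switch states described above.
# Assumes all listed transistors are N-type, so a high-level switch signal turns them on.

MUX_LEVELS = {
    "display_t1": {"MUX1": "low", "MUX2": "high", "MUX4": "low", "MUX5": "high"},
    "touch_t2":   {"MUX1": "high", "MUX2": "low", "MUX4": "high", "MUX5": "low"},
}

# Switch signal terminal driving the gate of each transistor, per the text.
GATE_OF = {"M1": "MUX1", "M2": "MUX2", "M4": "MUX4", "M5": "MUX5", "M15": "MUX5"}

def conducting(phase: str) -> dict:
    """Return True for each transistor whose gate sees a high level in the given phase."""
    levels = MUX_LEVELS[phase]
    return {name: levels[gate] == "high" for name, gate in GATE_OF.items()}

def routing(phase: str) -> str:
    """Describe what the shared contact pads are connected to in each phase."""
    on = conducting(phase)
    if on["M2"] and on["M5"]:
        return "P1 <-> DL1, P3 <-> data driving circuit (display path)"
    if on["M1"] and on["M4"]:
        return "P1 <-> TL, P3 <-> touch detection circuit (touch path)"
    return "undefined"

for phase in MUX_LEVELS:
    print(phase, "->", routing(phase))
```

Running the sketch prints the display path for t1 and the touch path for t2, matching the connections described above.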
Since the first contact pads P1and the third contact pads P3are connected, the touch-control lines TL are electrically connected to the touch detection circuit22, such that the touch-control sensing signal generated by the touch-control circuit array12is transmitted to the touch detection circuit22on the driving circuit board20via the touch-control lines TL, the first contact pads P1and the third contact pads P3, and the touch detection circuit22determines touch positions (for example, the touch position of a finger or stylus, etc.) on the touch display panel10according to the capacitance changes in the touch-control sensing signal, thereby enabling the touch function. During this phase, the common signal line113provides the touch-control driving signal Tx to the touch-control circuit array to generate the touch-control sensing signal. Thus, during this phase, the touch-control circuit array acts as the touch-control electrodes. For example, when the touch-control circuit array12performs the touch detection based on the self-capacitance shown inFIG.5A, the third switch signal terminal provides a signal of high level during both the display phase t1and the touch phase t2, to allow the third transistors M3in respective rows to be turned on simultaneously, so that the signal providing the common voltage or the touch-control driving signal is transmitted to the touch-control circuit array12via the common signal line113. When the touch-control circuit array12performs the touch detection based on the mutual-capacitance shown inFIG.5B, the third switch signal terminal provides a signal of high level during the display phase t1, to allow the third transistors M3in respective rows to be turned on simultaneously and provide the common voltage to the touch-control circuit array via the common signal line113. During the touch-control phase, the third switch signal terminal provides a signal of high level line by line to control the third transistors M3to be turned on line by line, so that the touch driving signal on the common signal line113is input to the touch-control circuit array line by line; the touch display panel is thus scanned line by line, enabling the touch-control function. For example, the touch-control phase t2may be located during a blanking phase in a frame of the display. For example, both the second transistor M2and the fifth transistor M5are turned off during this phase, and the gate scanning signal is of a turn-off level to turn off the display, so that the touch detection of the touch display device will not be affected. It should be noted that, for the sake of clarity and brevity, the embodiments of the present disclosure do not describe all of the constituent units of the touch display device30. In order to realize the basic functions of the touch display device30, those skilled in the art can provide and arrange other structures not shown, and the embodiments of the present disclosure are not limited thereto. Technical effects of the touch display device provided by the above-mentioned embodiments may be referred to those of the touch display panel or the driving circuit board provided in the embodiments of the present disclosure, which will not be described in detail herein. At least one embodiment of the present disclosure further provides a driving method for driving the touch display device as shown inFIG.10. For example, in some examples, the driving method comprises the following operations.
During the display phase, the first selection switch(es)131electrically connects the first contact pad(s) P1to the first data line(s) DL1in response to a first control signal, and the second selection switch(es)133electrically connects the third contact pad(s) P3to the data driving circuit21in response to a second control signal. During the touch phase, the first selection switch(es)131electrically connects the first contact pad(s) P1to the touch-control line(s) TL, in response to the first control signal, and the second selection switch(es)133electrically connects the third contact pad(s) P3to the touch detection circuit22in response to the second control signal. For example, in other examples, when the touch display device30includes a common electrode line, the driving method further comprises the following operations. During the display phase, the common signal line113provides a common voltage to the touch-control circuit array12. During the touch phase, the common signal line113provides a touch signal (e.g., the touch driving signal) to the touch-control circuit array12. It should be noted that, in the embodiments of the present disclosure, the process of the driving method may include more or fewer operations. The operations can be executed sequentially or in parallel. The driving method described above may be executed once or may be executed multiple times according to predetermined conditions. The technical effects of the driving methods provided in the above embodiments can be referred to the technical effects of the touch display device provided in the embodiments of the present disclosure and will not be described in detail herein. It should be noted that:(1) the drawings of the examples of the present disclosure relate only to the structures involved with the embodiments of the present disclosure, and other structures may refer to the usual designs.(2) without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other to obtain new embodiments. The foregoing description merely illustrates exemplary embodiments of the present disclosure and is not intended to limit the scope of protection of the present disclosure, which is determined by the appended claims. | 53,416 |
11861087 | It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention. Although embodiments of the invention are not limited in this regard, discussions utilizing terms such as, for example, “processing,” “computing,” “calculating,” “determining,” “establishing”, “analyzing”, “checking”, or the like, may refer to operation(s) and/or process(es) of a computer, a computing platform, a computing system, or other electronic computing device, that manipulates and/or transforms data represented as physical (e.g., electronic) quantities within the computer's registers and/or memories into other data similarly represented as physical quantities within the computer's registers and/or memories or other information non-transitory storage medium that may store instructions to perform operations and/or processes. Although embodiments of the invention are not limited in this regard, the terms “plurality” and “a plurality” as used herein may include, for example, “multiple” or “two or more”. The terms “plurality” or “a plurality” may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like. Unless explicitly stated, the method embodiments described herein are not constrained to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof can occur or be performed simultaneously, at the same point in time, or concurrently. In accordance with the exemplary embodiment shown inFIG.1, a foldable computing device11is shown with a first flexible display segment31and a second flexible display segment18that can both fold flat against each other through hinge35, which is situated below and in between both segments. The diagram ofFIG.1further illustrates a camera and sensor module15located at the edge of flexible display segment31which also includes a speaker19. On the opposing side of the device where flexible display segment18is located, a fold over camera window33is situated along the edge with the same geometry as camera and sensor module15, such that when the device is folded, as shown in position38, the window33aligns with camera and sensor module15to provide transparency so that the camera and sensors from module15can maintain functionality when the device is in a folded state. The window can be made from a transparent material such as glass or acrylic, but it also may be just an opening absent of any material. In the first position36, foldable computing device11is shown in an unfolded state where camera and sensor module15are positioned along the same surface plane as fold over camera window33. 
The middle position37shows foldable computing device11in a partially folded state where its peripheral port32and microphone and speaker openings37are more fully shown at the base of the device. To allow speaker19to be accessed when the device is in a compact state as shown in position38, a small opening at the center of camera window33could also be integrated so that the device could be used as a handheld phone when it is in a folded state. The features of foldable computing device11are further shown inFIG.2through a front, back, and side view. A rigid display39may be integrated at the back side of the device so that it can still be used as a phone or for notifications and other applications when foldable computing device11is in a folded state. An additional camera51is integrated at the back side of flexible display segment17so that it can be used when the device is unfolded or folded.FIG.3shows a front, back, and side view of foldable computing device11in a folded state to emphasize how the fold over camera window33aligns in front of camera and sensor module15. FIG.4is a diagram showing a folding sequence of foldable computing device55, which is similar to foldable computing device11fromFIG.1in that it has a similar camera and sensor module57with a fold over camera window65on the opposite side. In the case of foldable computing device11, a flip phone form factor configuration is implemented, whereas inFIG.4, a larger tablet form factor is implemented and shown in an expanded state in position73, where flexible display segments61and67of flexible display63can fold against each other through hinge60such that the device can transition into a handheld phone configuration, as shown in its transitional partially folded position74and then in its final folded position75. Rigid display78and speaker79are also shown on the back side of foldable computing device55, which further illustrates how the device can be used with a phone form factor in its folded state. Another embodiment that could utilize the fold over camera window is a flexible display device that is able to fold with its two structural segments facing each other in the folded state while its display segments face outward, such that one of the flexible display segments can still be used to view the camera application. The window itself does not have to be limited to the position in which it is shown within the embodiments. It could also be located offset from the edge and in other shapes, such as a circle, to align with the circular geometry of the camera. Various other shapes could be implemented as well. Similar toFIGS.2&3,FIGS.5&6each show a front, back, and side view of foldable computing device55in the unfolded state and folded state to further illustrate its core features. Additionally, just as foldable computing device11fromFIGS.2&3shows the additional camera51on the back side of structural segment50, which is opposite structural segment53, inFIGS.5&6an additional camera81is included on the back side of structural support83, which is opposite structural support85on foldable computing device55.
FIGS.7-9show a third embodiment with foldable computing device87transitioning from an unfolded tablet state in position101to a partially folded state in position102and then to a folded phone state in position103, where fold over camera window97is instead situated at the corner of the device along edge92next to flexible display segment91so that it can fold over the camera and sensor module95located along edge96next to flexible display segment90through hinge93. This provides transparency and functionality when the foldable computing device87is configured into a folded state, as shown in position103. This ultimately allows camera and sensor module95to be used in the unfolded tablet state and the folded phone state. Similar to foldable computing devices11and55, foldable computing device87also has an additional camera105which is situated on the same face as rigid display107where speaker109is also located. The flexible display integrated with foldable computing device11may also be implemented with different aspect ratios beyond what is shown in the drawings and through different types of flexible display technologies. The ratios may include ranges that would result in a rectangular unfolded state shape when the flexible display segments are approximately square in shape, as is illustrated with segments17and18inFIG.3and a square unfolded state shape, when flexible display segments are rectangular in shape, as is shown with segments61and67, and90and91fromFIGS.5and8. These aspect ratios may range from approximately 22:9 to 1:1 and are applicable to the full flexible display, the segments that make up the flexible display, and the rigid display as well. The flexible display technology may include, but is not limited to OLED, Mini-LED, and Micro-LED technology. | 8,351 |
11861088 | DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS The embodiments of the present disclosure are clearly and completely described below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only a part of the embodiments of the present disclosure, but not all embodiments. All other embodiments obtained by those ordinary skilled in the art based on the embodiments of the present disclosure without creative effort are not departing from the spirit and scope of the present disclosure. In the description of the present application, it is to be understood that terms such as “central”, “longitudinal”, “lateral”, “length”, “width”, “thickness”, “upper”, “lower”, “front”, “rear”, “left”, “right”, “vertical”, “horizontal”, “top”, “bottom”, “inner”, “outer”, “clockwise”, “counterclockwise” should be construed to refer to the orientation as then described or as shown in the drawings. These terms are merely for convenience and concision of description and not intended to indicate or imply that that the present disclosure be constructed or operated in a particular orientation. Accordingly, it should be understood that the present disclosure is not limited thereto. Moreover, terms such as “first” and “second” are used herein for purposes of description and are not intended to indicate or imply relative importance or significance or to imply the number of indicated technical features. Thus, the feature defined with “first” and “second” may comprise one or more of the features. In the description of the present disclosure, “a plurality of” means two or more than two, unless specified otherwise. In the description of the present disclosure, it should be understood that, unless specified or limited otherwise, the terms “mounted”, “connected”, and “coupled” are used broadly, and may be, for example, fixed connections, detachable connections, or integral connections; may also be mechanical or electrical connections; may also be direct connections or indirect connections via intervening structures; may also be inner communications of two elements, which can be understood by those skilled in the art according to the detail embodiment of the present disclosure. In the present invention, unless specified or limited otherwise, a structure in which a first feature is “on” or “below” a second feature may include an embodiment in which the first feature is in direct contact with the second feature, and may also include an embodiment in which the first feature and the second feature are not in direct contact with each other, but are contacted via an additional feature formed therebetween. Furthermore, a first feature “on”, “above”, or “on top of” a second feature may include an embodiment in which the first feature is right or obliquely “on”, “above”, or “on top of” the second feature, or just means that the first feature is at a height higher than that of the second feature; while a first feature “below”, “under”, or “on bottom of” a second feature may include an embodiment in which the first feature is right or obliquely “below”, “under”, or “on bottom of” the second feature, or just means that the first feature is at a height lower than that of the second feature. Various embodiments and examples are provided in the following description to implement different structures of the present disclosure. In order to simplify the present disclosure, certain elements and settings are described. 
However, these elements and settings are only by way of example and are not intended to limit the present disclosure. In addition, reference numerals may be repeated in different examples in the present disclosure. This repeating is for the purpose of simplification and clarity and does not refer to relations between different embodiments and/or settings. Furthermore, examples of different processes and materials are provided in the present disclosure. However, it would be appreciated by those skilled in the art that other processes and/or materials may be also applied. Reference is made toFIG.1. One aspect of the present application is to provide a display panel, which includes an array substrate10, a driving circuit30, a plurality of touch electrode blocks201, and a plurality of touch signal lines40. The array substrate10is provided with a display area101and a non-display area102, in which the non-display area102surrounds the display area101. The driving circuit30is disposed at one end of the non-display area102to realize the touch-driving function; specifically, the driving circuit30is disposed along a first direction E. Each of the plurality of touch signal lines40is connected with the driving circuit30and one of the touch electrode blocks201. Specifically, each touch signal line40extends from the driving circuit30to one touch electrode block201along a second direction F. The plurality of touch electrode blocks201are disposed in an array in the display area101. Specifically, the plurality of touch electrode blocks201are arrayed along the first direction E and the second direction F, and the first direction E and the second direction F are perpendicular to each other. Exemplarily, the first direction E is a horizontal direction and the second direction F is a vertical direction. Each of the touch electrode blocks201includes a touch electrode21and a compensation electrode22which are disposed to be insulated. The orthographic projections of the touch electrode21and the compensation electrode22projected on the array substrate10have an overlapping area. The overlapping area on the touch electrode block201close to the driving circuit30is larger than the overlapping area on the touch electrode block201away from the driving circuit30, such that the impedance difference of each of the touch electrode blocks is within a preset range, which can be set according to the actual needs, such as 0%-10%. Exemplarily, the range of the impedance difference between the touch electrode blocks201of each column is 5%, thereby improving the uniformity of touch of the display panel. In a display panel and a display device provided by the present disclosure, the compensation electrode22is added on the basis of the original touch electrode21of the touch electrode block201, so as to change the capacitance of the touch electrode block201by the overlapping area of the compensation electrode22and the touch electrode21on the array substrate10. Since the touch electrode block201close to the driving circuit30has small impedance and the touch electrode block201away from the driving circuit30has large impedance, the impedance difference between each of the touch electrode blocks is to be within the preset range by providing the large overlapping area on the touch electrode block201close to the driving circuit30and the small overlapping area on the touch electrode block201away from the driving circuit30. 
As a result, the impedance difference between the touch electrode blocks201at the near end and the far end is compensated and the touch performance difference caused by the impedance of the touch signal line40is improved, thereby improving the touch performance of the display panel. In some embodiments, as shown inFIG.1in conjunction withFIG.4, the first area is the projection area of each of the touch electrodes21projected on the array substrate10. When the area of each touch electrode21is the same, the multiple first areas have the same size, and the first area is larger than the overlapping area, in which the overlapping area is the projection area of each compensation electrode22projected on the array substrate10. The area of the touch electrode21is provided to remain unchanged, and the increased amount of the capacitance of the touch electrode block can be controlled by merely changing the area of the compensation capacitor, thereby simplifying the processing of the touch electrode blocks. In some embodiments, as shown inFIG.1, in the direction from a side close to the driving circuit30to a side away from the driving circuit30(i.e., in the second direction F), the overlapping area of the orthographic projections of the touch electrode21and the compensation electrode22projected on the array substrate10decreases successively. Without adding the compensation electrodes, the impedance increases successively along the second direction F. The capacitance of the compensation capacitor is determined based on the overlapping area: the smaller the overlapping area is, the smaller the capacitance of the compensation capacitor. Therefore, the compensation capacitance resulting from the compensation electrode22decreases successively along the direction F, because the overlapping area of the orthographic projections of the touch electrode21and the compensation electrode22projected on the array substrate10is provided to decrease successively. As a result, the impedance difference between the touch electrode blocks201at the near end and the far end is compensated, and the touch performance difference caused by the impedance of the touch signal line40is improved, thereby improving the touch performance of the display panel. In some embodiments, the display panel further includes a cathode layer53, which is disposed between the array substrate10and the touch electrode21. The distance between the cathode layer53and the touch electrode21is greater than the distance between the touch electrode and the compensation electrode. Specifically, in some embodiments, as shown inFIG.3, the display panel further includes a light-emitting layer50, a packaging layer60, and a touch electrode layer20. The light-emitting layer50is disposed on the array substrate10; specifically, the light-emitting layer50includes an anode layer51, an organic light-emitting layer52, and the cathode layer53, which are disposed in order on the array substrate10. The anode layer51is disposed on the substrate, the organic light-emitting layer52is located on the anode, the cathode layer53covers the organic light-emitting layer52, and an electric field is formed by the cathode layer53and the anode layer51. The distance between the cathode layer53and the touch electrode21is greater than the distance between the touch electrode and the compensation electrode. The capacitance of a parallel plate capacitor is given by the formula C = k·S/d, where k is the dielectric constant, S is the relative (overlapping) area, and d is the spacing between the plates.
As shown inFIG.2in conjunction withFIG.3, suppose that the relative area between the touch electrode21and the compensation electrode22, i.e., the overlapping area of the projections of the touch electrode21and the compensation electrode22projected on the array substrate10, is defined as S1and the spacing is D1; then the capacitance C1is formed between the touch electrode21and the compensation electrode22. Suppose further that the relative area between the touch electrode21and the cathode of the light-emitting layer, i.e., the area of the orthographic projection of the touch electrode21projected on the cathode, is defined as S2and the spacing is D2; then the capacitance C2is formed between the touch electrode21and the cathode. When S1=S2and D1<D2, that is, when the distance between the cathode layer53and the touch electrode21is greater than the distance between the touch electrode and the compensation electrode, then C1>C2. Since S1+S2is fixed, S2decreases when S1increases, and |ΔC1|>|ΔC2|, in which ΔC1is the increase of the capacitance caused by the increased S1, and ΔC2is the decrease of the capacitance caused by the decreased S2; as a result, for the touch electrode block201, the total capacitance is increased. Therefore, the impedance difference between the touch electrode blocks201at the near end and the far end can be compensated by increasing the capacitance of the touch electrode block201, so as to improve the consistency of RC loading for the display panel, thereby improving the uniformity of the touch performance of the display panel. The packaging layer60is disposed on the light-emitting layer50and covers the light-emitting layer50. The touch electrode21and the compensation electrode22are disposed on the packaging layer60. The touch electrode layer20is disposed on the packaging layer60, and the touch signal line40and a plurality of the touch electrode blocks201, which are disposed to be insulated from each other, are disposed on the touch electrode layer20. In some embodiments, the display panel further includes an insulating layer23and a through hole41. The insulating layer23is disposed between the touch electrode21and the compensation electrode22. The compensation electrode22and the touch signal line40are disposed to be insulated from each other, such that the touch signal line40works normally without affecting the touch performance, thereby ensuring the realization of the touch function of the touch electrode21. Specifically, the compensation electrode22and the touch signal line40are kept insulated from each other. There are various ways of providing the compensation electrode22and the touch signal line40to be insulated on the touch electrode layer20. Exemplarily, as shown inFIG.3, the touch signal line40and the compensation electrode22are disposed on the same layer, the compensation electrode22and the touch signal line40are spaced apart and disposed on the packaging layer60, and the touch electrode21is disposed on the compensation electrode22and the touch signal line40. Certainly, there are further other ways of arranging the compensation electrode22and the touch signal line40. In some embodiments, as shown inFIG.5, the touch electrode21is disposed on the packaging layer60, the touch signal line40and the compensation electrode22are disposed on the same layer, and the compensation electrode22and the touch signal line40are spaced apart and disposed on the touch electrode21.
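The area-swap argument above can be checked with a quick numeric sketch based on the parallel-plate relation C = k·S/d. All numbers below are illustrative assumptions chosen only to satisfy the stated conditions (S1 + S2 fixed, D1 < D2); they are not taken from the embodiments.

```python
# Illustrative check: enlarging the overlap S1 raises the total capacitance of a block.
# All values are assumed (arbitrary units); only the relations S1 + S2 = const and D1 < D2 matter.

k = 1.0             # dielectric constant
D1, D2 = 1.0, 3.0   # spacing: touch <-> compensation electrode (D1) < touch <-> cathode (D2)
S_TOTAL = 10.0      # S1 + S2 is assumed fixed

def total_capacitance(S1: float) -> float:
    """C1 (toward the compensation electrode) plus C2 (toward the cathode)."""
    S2 = S_TOTAL - S1
    C1 = k * S1 / D1
    C2 = k * S2 / D2
    return C1 + C2

# |dC1| = k*dS/D1 exceeds |dC2| = k*dS/D2 because D1 < D2, so the sum grows with S1.
for S1 in (2.0, 4.0, 6.0):
    print(f"S1 = {S1}: total capacitance = {total_capacitance(S1):.3f}")
```

The printed totals increase with S1, which is why a larger overlap is assigned to the near-end blocks whose trace resistance is smallest, so that the RC loading of all blocks stays within the preset range.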
The distance between the cathode layer53and the touch electrode21is greater than the distance between the touch electrode and the compensation electrode, such that C1>C2when S1=S2, thereby increasing the total capacitance. As a result, the impedance difference between the touch electrode blocks201at the near end and the far end can be compensated by the compensation electrode22, so as to improve the consistency of RC loading for the display panel, thereby improving the uniformity of the touch performance of the display panel. The touch signal line40and the compensation electrode22are disposed on the same layer, which makes the film structure more compact. The through hole41is defined by the insulating layer23, and the touch signal line40is connected to the corresponding touch electrode21through the through hole41. Specifically, as also shown inFIG.2, one end of the touch signal line40is connected to the driving circuit30, and the other end of the touch signal line40is connected to the through hole41on the touch electrode block201. Connecting the touch signal line40to the touch electrode block201through the through hole41may shorten the length of the touch signal line40and is advantageous for further preventing the impedance of the far-end touch electrode block201from increasing, so as to further improve the uniformity of the touch performance. In some embodiments, as shown inFIG.1, the number of the touch signal lines40is the same as the number of the touch electrode blocks201, such that the structure of the touch panel is compact and no redundant touch signal lines are added, which is advantageous for preventing the impedance of the far-end touch electrode block201from increasing and further improving the uniformity of the touch performance. In some embodiments, the touch electrode21and the compensation electrode22are made of conductive materials. These may be transparent conductive oxide materials, such as aluminum-doped zinc oxide (AZO) and indium zinc oxide (IZO), or thin metal materials, such as Mg/Ag, Ca/Ag, Sm/Ag, Al/Ag, Ba/Ag and other composite materials. They may also be formed of non-transparent materials, such as titanium/aluminum/titanium (Ti/Al/Ti) and aluminum alloy. Since the touch electrode21is required to be patterned, when non-transparent materials are used the pattern can be arranged to avoid the sub light-emitting units, so as to ensure the display effect. Another aspect of the present disclosure further provides a display device, which includes the display panel. Since the display device has the aforementioned display panel, the display device has the same beneficial effects, and further description is not given herein. The embodiments of the present disclosure place no specific restriction on the application of the display device, which can be any product or component with a display function, such as TVs, notebooks, tablets, wearable display devices (e.g., smart bracelets, smart watches, etc.), mobile phones, virtual reality devices, augmented reality devices, vehicle displays, and advertising light boxes. In the aforementioned embodiments, the description of each embodiment has its own emphasis. The part not detailed in one embodiment may refer to the related description of other embodiments. A display panel and a display device provided by the embodiments of the present disclosure are described in detail above.
The principles and implementations of the present disclosure are described using specific examples in this disclosure. The description of the embodiments is merely intended to better understand the methods and core concepts of the present disclosure. Those of ordinary skill in the art should realize that the technical solutions described in the aforementioned embodiments still can be modified, or some of the technical features can be equivalently replaced; and these modifications or replacements can be made to the structure of the present disclosure without departing from the scope or spirit of the disclosure. | 18,137 |
11861089 | DESCRIPTION OF THE EMBODIMENTS Reference will now be made in detail to the present preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or like parts. The term “coupling/coupled” used in this specification (including claims) of the disclosure may refer to any direct or indirect connection means. For example, “a first device is coupled to a second device” should be interpreted as “the first device is directly connected to the second device” or “the first device is indirectly connected to the second device through other devices or connection means.” In addition, the term “signal” can refer to a current, a voltage, a charge, a temperature, data, electromagnetic wave or any one or multiple signals. FIG.1illustrates a schematic diagram of a touch panel and a sensor driving circuit according to an embodiment of the invention.FIG.2illustrates a schematic waveform of a signal for driving sensor pads according to an embodiment of the invention. Referring toFIG.1andFIG.2, a touch panel300of the present embodiment includes a plurality of sensor pads310. The sensor pads310are arranged in an array. A sensor driving circuit120is coupled to the sensor pads310. The sensor driving circuit120drives the sensor pads310with a modulated driving signal as illustrated inFIG.2during a sensing period in the present embodiment. To be specific, a whole common electrode of a display panel is divided into the plurality of sensor pads310in the present embodiment. In a display period, the sensor pads310serve as common electrodes. In the sensing period, the sensor pads310serve as sensor electrodes. The sensor driving circuit120modulates voltage signals VML, VMM and VMH on a driving signal VCOM to generate the modulated driving signal as illustrated inFIG.2. In the present embodiment, the driving signal VCOM may be a signal applied to the common electrodes in the display period. When the sensor pads310serve as the sensor electrodes in the sensing period, the sensor driving circuit120drives the sensor pads310with the modulated driving signal230via sensor trace312. Next, an analog-front-end (AFE) circuit calculates capacitive variations of each of the sensor pads310relative to ground, so as to determine whether a touch event happens. In the present embodiment, the touch panel300may be embedded into the display panel in a manner of in-cell or on-cell, and the invention is not limited thereto. Enough teaching, suggestion, and implementation illustration for the aforesaid touch panel may be obtained with reference to common knowledge in the related art, which is not repeated hereinafter. FIG.3illustrates a schematic diagram of parasitic capacitances between sensor electrodes and panel elements according to an embodiment of the invention. Referring toFIG.3, sensor pads SP-A, SP-B, SP-C and SP-D are disposed above a plurality of data lines DL[n] to DL[n+3], a plurality of gate lines GL[n] to GL[n+2], and a plurality of pixel electrodes520, where n is an integer large than or equal to 1. Each of the sensor pads SP-A, SP-B, SP-C and SP-D includes a plurality of sub-common electrodes521in the present embodiment. The plurality of sub-common electrodes521are electrically connected to form a single sensor pad, e.g. the sensor pad SP-A, SP-B, SP-C or SP-D. 
In addition, sensor traces ST[n] and ST[n+1] connect the sensor pads SP-A, SP-B, SP-C and SP-D to a sensor driving circuit. In the present embodiment, parasitic capacitances Csd, Cdg, Csg and Csp may be generated among the sensor electrodes and the panel elements. For example, the parasitic capacitance Csd may be generated between the sensor electrode and the data line, the parasitic capacitance Cdg may be generated between the data line and the gate line, the parasitic capacitance Csg may be generated between the sensor electrode and the gate line, and the parasitic capacitance Csp may be generated between the sensor electrode and the pixel electrode. FIG.4illustrates an equivalent circuit diagram of the parasitic capacitances depicted inFIG.3.FIG.5illustrates schematic waveforms of signals for driving sensor pads and panel elements according to an embodiment of the invention. Referring toFIG.1,FIG.4andFIG.5, the parasitic capacitances Csd and Cdg are coupled in series in the present embodiment. When the sensor driving circuit120drives the sensor pads310with the modulated third driving signal230via the sensor trace312, the data lines, e.g. DL[n] to DL[n+3], are controlled to be electrically floating during the sensing period. In the present embodiment, the pixel electrodes520may also be controlled to be electrically floating during the sensing period. At the same time, a first driving signal VGH and a second driving signal VGL are modulated with the voltage signals VML, VMM and VMH to generate the modulated first driving signal210and the modulated second driving signal220, respectively. The gate lines, e.g. GL[n] to GL[n+2], are driven by the modulated first driving signal210and the modulated second driving signal220during the sensing period. In the present embodiment, the first driving signal VGH and the second driving signal VGL may be signals that are respectively applied to a VGH power line and a VGL power line in the display period. In the present embodiment, the waveforms of the modulated first driving signal210, the modulated second driving signal220, and the modulated third driving signal230are substantially identical, as shown inFIG.5. For example, during the sensing period, each of the modulated first driving signal210, the modulated second driving signal220, and the modulated third driving signal230may have a plurality of step waveforms located at corresponding timings. In the present embodiment, since the data lines are electrically floating and the waveforms of the first driving signal VGH and the second driving signal VGL are modulated to be similar to that of the third driving signal VCOM during the sensing period, the parasitic capacitances Csd and Cdg are effectively reduced. In the present embodiment, voltage levels of the first driving signal VGH, the second driving signal VGL and the third driving signal VCOM may be the same or different according to design requirements, and the invention is not limited thereto. FIG.6illustrates a schematic diagram of a display touch apparatus having a low temperature poly-silicon (LTPS) touch panel according to an embodiment of the invention. Referring toFIG.6, data lines DL are controlled by multiplexer circuits634on an LTPS touch panel640, and gate lines GL are controlled by a gate control circuit614in the present embodiment. Operation voltages and control/driving signals of the multiplexer circuits634and the gate control circuit614are provided by an external gate driver612.
The gate driver612is arranged outside the LTPS touch panel640. The gate driver612controls the multiplexer circuits634located on the LTPS touch panel640to turn off the output of the multiplexer circuits634during the sensing period, and thus the data lines DL are floating in the present embodiment. In one embodiment, the outputs SOUT[1] to SOUT[N] of the source driver632may be coupled to the data lines DL via a switch circuit, and the switch circuit is controlled by a control signal to make the data lines DL electrically floating during the sensing period, where N is an integer larger than or equal to 4. In the present embodiment, the gate driver612also controls the output of the gate control circuit614located on the LTPS touch panel640to turn off the gate terminals of the thin film transistors, e.g.300depicted inFIG.3, during the sensing period, and thus the pixel electrodes, e.g.520depicted inFIG.3, are floating in the present embodiment. In the present embodiment, the waveforms of the first driving signal VGH and the second driving signal VGL are modulated to be similar to that of the third driving signal VCOM during the sensing period as illustrated inFIG.5, and thus the parasitic capacitances Csd and Cdg are effectively reduced. FIG.7illustrates a schematic diagram of a display touch apparatus having an amorphous silicon (a-Si) touch panel according to an embodiment of the invention. Referring toFIG.7, data lines DL and gate lines GL are respectively controlled by an external source driver730and an external gate driver710in the present embodiment. The source driver730and the gate driver710are arranged outside the a-Si touch panel740.
In the present embodiment, the sensor driving circuit820modulates the voltage signals VML, VMM and VMH on a third driving signal VCOM, and drives the sensor pads, e.g. SP depicted inFIG.6orFIG.7, with the modulated third driving signal230during the sensing period. In the present embodiment, the data lines, e.g. DL depicted inFIG.6orFIG.7, are controlled to be electrically floating during the sensing period, and waveforms of the modulated first driving signal210, the modulated second driving signal220, and the modulated third driving signal230are substantially identical. In addition, the voltage signals VML, VMM and VMH and the driving signals VGH, VGL, VCOM and GND may be provided by a power generator circuit (not shown) in the present embodiment. To be specific, the signal generating circuit810includes a gate driver circuit812, a first signal modulation circuit814, and a control circuit816in the present embodiment. The gate driver circuit812is coupled to the gate lines. The gate driver circuit812operates between the modulated first driving signal210and the modulated second driving signal220during the sensing period, and outputs the modulated first driving signal210and the modulated second driving signal220to the coupled gate lines. The first signal modulation circuit814is coupled to the gate driver circuit812. The first signal modulation circuit814receives the voltage signals VML, VMM and VMH, and modulates the voltage signals VML, VMM and VMH on the first driving signal VGH and the second driving signal VGL. In the present embodiment, the first signal modulation circuit814includes a first modulation channel815and a second modulation channel817. The first modulation channel815receives the voltage signals VML, VMM and VMH, and modulates the voltage signals VML, VMM and VMH on the first driving signal VGH. The second modulation channel817receives the voltage signals VML, VMM and VMH, and modulates the voltage signals VML, VMM and VMH on the second driving signal VGL. In the present embodiment, each of the first modulation channel815and the second modulation channel817includes a capacitor and a multiplexer circuit. Taking the first modulation channel815for example, the capacitor C1is coupled to the gate driver circuit812. The capacitor C1modulates the voltage signals VML, VMM and VMH on the first driving signal VGH. The multiplexer circuit MUX1is coupled to the gate driver circuit812via the capacitor C1. The multiplexer circuit MUX1is controlled to sequentially transmit the voltage signals VML, VMM and VMH to the capacitor C1by one of a plurality of control signals S1. Elements and operations of the second modulation channel817may be deduced by analogy according to descriptions of the first modulation channel815, and it is not further described herein. In the present embodiment, the control circuit816outputs the plurality of control signals S1to control the multiplexer circuits MUX1and MUX2. The multiplexer circuits MUX1and MUX2select one of the voltage signals VML, VMM, VMH and GND according to the control signals S1, and thus output the selected signal to the capacitors C1and C2, respectively. In the present embodiment, the sensor driving circuit820includes a second signal modulation circuit822, and the second signal modulation circuit822includes a plurality of third modulation channels823. In the present embodiment, each of the third modulation channels823includes a multiplexer circuit MUX3. 
The multiplexer circuits MUX3receive the voltage signals VML, VMM and VMH, and modulate the voltage signals VML, VMM and VMH on the third driving signal VCOM according to the plurality of control signals S1. In the present embodiment, the sensor pads are grouped into active sensor pads and non-active sensor pads during the sensing period. The multiplexer circuits MUX3coupled to the non-active sensor pads, i.e. the multiplexer circuits MUX3located in the non-active sensing region, are controlled to sequentially transmit the voltage signals VML, VMM, VMH, GND and VCOM to the touch panel840by the plurality of control signals S1. The multiplexer circuits MUX3coupled to the active sensor pads, i.e. the multiplexer circuits MUX3located in the active sensing region, are controlled to transmit sensing signals S3to a determination circuit900by the plurality of control signals S1. In the present embodiment, the determination circuit900may include a plurality of analog-front-end (AFE) circuits respectively denoted by AFE[a], AFE[b], AFE[c] and AFE[d], as illustrated inFIG.9.FIG.9illustrates a block diagram of a determination circuit according to an embodiment of the invention. Enough teaching, suggestion, and implementation illustration for the aforesaid determination circuit and AFE circuits may be obtained with reference to common knowledge in the related art, which is not repeated hereinafter. In the present embodiment, the waveforms of the first driving signal VGH and the second driving signal VGL are modulated to be similar to that of the third driving signal VCOM during the sensing period as illustrated inFIG.5, and the data lines, e.g. DL depicted inFIG.6orFIG.7, are controlled to be electrically floating during the sensing period. Therefore, the parasitic capacitances Csd and Cdg are effectively reduced. FIG.10is a flowchart illustrating steps in a method for driving a display panel having a touch panel according to an embodiment of the invention. Referring toFIG.6toFIG.8andFIG.10, the method for driving the display panel having the touch panel of the present embodiment is at least adapted to one of the display touch apparatus600ofFIG.6, the display touch apparatus700ofFIG.7, and the display touch apparatus800ofFIG.8, but the invention is not limited thereto. Taking the display touch apparatus800ofFIG.8for example, in step S100, the driving circuit830modulates a plurality of voltage signals VML, VMM and VMH on a first driving signal VGH, a second driving signal VGL, and a third driving signal VCOM during a sensing period. In step S110, the driving circuit830drives the gate lines GL with the modulated first driving signal210and the modulated second driving signal220during the sensing period, and drives the sensor pads SP with the modulated third driving signal230during the sensing period. In step S120, the driving circuit830controls the data lines DL to be electrically floating during the sensing period. Besides, the method for driving the display panel having the touch panel described in the present embodiment of the invention is sufficiently taught, suggested, and embodied in the embodiments illustrated inFIG.1toFIG.9, and therefore no further description is provided herein. In summary, in the exemplary embodiment of the invention, the first driving signal and the second driving signal are modulated to drive the gate lines of the display panel during the sensing period, and the third driving signal is also modulated to drive the sensor pads of the touch panel. 
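The flowchart steps S100 to S120 summarized above, and the reason the series parasitic path Csd-Cdg stops loading the sensor driver, can be illustrated with a short sketch. All numeric values (modulation levels, rail voltages, capacitances) are assumptions made up for illustration; only the structure follows the text: identical step waveforms coupled onto VGH, VGL and VCOM while the data lines are left floating.

```python
# Sketch of steps S100-S120: modulate VGH/VGL/VCOM with the same step sequence,
# drive gate lines and sensor pads with the modulated signals, float the data lines.
# All numbers are illustrative assumptions, not values from the embodiments.

VML, VMM, VMH = 0.0, 1.0, 2.0          # assumed modulation levels
STEPS = [VML, VMM, VMH, VMM, VML]      # assumed step sequence during the sensing period

def modulate(base: float) -> list:
    """S100: couple the same step sequence onto a base driving signal."""
    return [base + s for s in STEPS]

VGH, VGL, VCOM = 7.0, -7.0, 0.0        # assumed rail levels
mod_vgh = modulate(VGH)                # S110: drives the gate lines
mod_vgl = modulate(VGL)                # S110: drives the gate lines
mod_vcom = modulate(VCOM)              # S110: drives the sensor pads

# S120: a floating data line simply follows the capacitive divider formed by
# Csd (sensor <-> data) and Cdg (data <-> gate).
Csd, Cdg = 1.0, 2.0                    # assumed parasitic capacitances (arbitrary units)

def floating_data_line(v_sensor: float, v_gate: float) -> float:
    return (Csd * v_sensor + Cdg * v_gate) / (Csd + Cdg)

for v_pad, v_gate in zip(mod_vcom, mod_vgh):
    v_data = floating_data_line(v_pad, v_gate)
    # Because the pad and the gate line step together, V(pad) - V(data) stays constant,
    # so no extra charge Q = Csd * dV has to be supplied by the sensor driving circuit.
    print(f"V(pad) - V(data) = {v_pad - v_data:.3f}")
```

Every printed difference is the same, which is the sense in which the effective parasitic load seen by the sensor driver is reduced when all three modulated waveforms are substantially identical.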
The waveforms of the modulated first driving signal, the modulated second driving signal, and the modulated third driving signal are substantially identical. The data lines of the display panel are controlled to be electrically floating during the sensing period. Therefore, the parasitic capacitances between the sensor pads and the data lines and the parasitic capacitances between the data lines and the gate lines are effectively reduced. It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present invention without departing from the scope or spirit of the invention. In view of the foregoing, it is intended that the present invention cover modifications and variations of this invention provided they fall within the scope of the following claims and their equivalents. | 18,304 |
11861090 | DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS The following describes this application by using specific embodiments. FIG.1is a schematic diagram of a plane structure of a touch display panel10according to an embodiment of this application. The touch display panel10includes an active area AA (active area) and a non-active area NA (non-active area). The active area AA is disposed on the touch display panel10as a picture display area, and is configured to display an image. The non-active area NA is configured to dispose function modules such as a display drive control module and a touch drive control module. The touch display panel10may be applied to a touch display apparatus, for example, an electronic apparatus, such as a mobile phone or a tablet computer, that can perform display and touch functions. FIG.2is a schematic diagram of a cross-sectional structure of the touch display panel10shown inFIG.1along an II-II line. As shown inFIG.2, the touch display panel10is configured to implement image display and touch operation detection. In this embodiment, the touch display panel10includes an array substrate11, a display medium layer13, and a package substrate15that are sequentially stacked from bottom to top in the figure. The display medium layer13is sandwiched between the array substrate11and the package substrate15. In this embodiment, the display medium layer13is an organic light-emitting diode (Organic Light-Emitting Diode, OLED); pixel areas that are arranged in a matrix form are disposed on the array substrate11, and a drive circuit configured to drive the display medium layer13to emit light is disposed in each pixel area; and the package substrate15is configured to package the display medium layer13. The drive circuit is configured to drive a material of the display medium layer to emit light to display an image. In this embodiment, the package substrate15includes two opposite surfaces: a first surface151and a second surface152. The first surface is adjacent to the display medium layer13, and the second surface152is far away from the display medium layer13. A touch sensing layer17and a protective layer19are sequentially disposed on the second surface152. The touch sensing layer17is configured to identify a location of touch performed on the touch display panel10. The protective layer19is configured to protect layer structures such as the touch sensing layer17and the package layer substrate15. In this embodiment, when the display medium layer13is an organic light-emitting diode, the touch display panel10may be made into a flexible and bent panel structure, and therefore can be applied to a flexible touch display apparatus, for example, a foldable mobile phone or a tablet computer. FIG.3is a schematic diagram of a plane structure of a pixel area on the array substrate11shown inFIG.1. As shown inFIG.3, in the active area AA, a plurality of pixel areas (not shown) that are arranged in a matrix form in a first direction X and a second direction Y are disposed on a surface that is of the array substrate11and that is adjacent to the display medium layer13. A drive circuit is disposed in each pixel area. A thin-film transistor and a capacitor included in the drive circuit may be formed by depositing and etching a semiconductor material on the surface of the array substrate. In this embodiment, a shape of the pixel area may be set according to an actual requirement, for example, may be a square, a diamond, a pentagon, or a hexagon. 
Certainly, the foregoing shape of the pixel area is merely an example for description, and no limitation is imposed thereto. The drive circuit in each pixel area can drive a light-emitting material that faces the pixel area and that is included in the display medium layer13to emit light. In this embodiment, a drive circuit in a pixel area cooperates with the corresponding display medium layer13to form one pixel unit (Pixel). Adjacent pixel units (Pixels) may correspond to different light-emitting materials included in the display medium layer13to emit light of different colors. Preferably, there is a black matrix (BM) between adjacent pixel units (Pixels), to prevent light emitted between the adjacent pixel units (Pixels) from interfering with each other. In the non-active area NA, a display drive circuit configured to drive the drive circuit in each pixel area and a touch controller TC (FIG.4) are disposed. The display drive circuit includes a data drive circuit configured to provide an image data signal, a scan drive circuit configured to perform line scanning, and a timing controller (Tcon) configured to control operating timing of the data drive circuit and the scan drive circuit. FIG.4is a schematic diagram of a plane structure of the touch sensing layer17disposed on the package substrate15shown inFIG.1. As shown inFIG.4, the touch sensing layer17includes a plurality of conductive patterns P1that are arranged in a matrix form in the first direction X and the second direction Y. Each conductive pattern P1is electrically connected to the touch controller TC through a signal transmission line Li extending in the second direction Y. In this embodiment, the conductive pattern P1is used to sense a first sensing signal generated in response to user touch, and transmit the first sensing signal to the touch controller TC through the signal transmission line Li. The touch controller TC identifies a location of the touch operation based on the first sensing signal. In this embodiment, the touch sensing layer17implements self-capacitance touch sensing by using the conductive pattern. In this embodiment, the conductive pattern is of a grid shape formed by metal conducting wires. More specifically,FIG.5andFIG.6are respectively schematic diagrams of side surface structures of two adjacent conductive patterns on the touch display panel shown inFIG.4along a V-V line.FIG.5is a schematic diagram of a side surface structure of one conductive pattern in the two adjacent conductive patterns shown inFIG.4along the V-V line.FIG.6is a schematic diagram of a side surface structure of the other conductive pattern in the two adjacent conductive patterns shown inFIG.4along the V-V line. As shown inFIG.5andFIG.6, the touch sensing layer17includes a first metal layer171, an insulation dielectric layer173, and a second metal layer172that are sequentially stacked. The first metal layer171includes the signal transmission line Li shown inFIG.4. The second metal layer172includes a plurality of conductive patterns P1that are arranged in a matrix form. The first metal layer171and the second metal layer172belong to different layer structures. As shown inFIG.5, at a location at which the signal transmission line Li is electrically connected to the conductive pattern, a through-hole H1is disposed at the insulation dielectric layer173, and the first metal layer171is electrically connected to the second metal layer172by using a conductive material in the first through-hole H1. 
As shown inFIG.6, at a location at which the signal transmission line Li is not electrically connected to the conductive pattern P1, the insulation dielectric layer173insulates the first metal layer171from the second metal layer172, to prevent the first metal layer from being electrically connected to the second metal layer. In this embodiment, a first conductive pattern P1and the signal transmission line Li may be formed through etching or printing by using a patterned photomask. FIG.7is a schematic diagram of a plane structure of any conductive pattern P1shown inFIG.4. As shown inFIG.7, an area in which each conductive pattern P is located includes a first area A1and a second area A2that do not overlap. Some metal conducting wires in the conductive pattern P1and the signal transmission line Li are correspondingly disposed in the first area A1. In this case, the second area A2does not include the signal transmission line Li, but includes only a metal conducting wire in the conductive pattern. In other words, the signal transmission line Li is disposed only in the first area A1but not in the second area A2, and does not overlap the second area A2. FIG.8is a schematic diagram of an enlarged structure of the first area A1in the any conductive pattern shown inFIG.7along an xx line according to a first embodiment of this application. As shown inFIG.8, in the first area A1, a plurality of first metal sub-conducting wires C11extending in the first direction X are disposed at the second metal layer172, and the plurality of first metal conducting wires C11are disposed at a preset distance from each other. In other words, the plurality of first metal sub-conducting wires C11are disposed in parallel in the second direction Y, and two adjacent first metal sub-conducting wires C11are spaced at the preset distance. In the second area A2, at least one second metal sub-conducting wire C12extending in a direction different from the first direction X is disposed, and the second metal sub-conducting wire C12is electrically connected to a plurality of first metal sub-conducting wires C11in the first area A1. Therefore, the first metal sub-conducting wires C11discretely disposed in the first area A1are electrically connected to and conducted with the metal conducting wire in the second area A2, so that all metal conducting wires in the first conductive pattern P1are electrically connected and are at a same potential. A plurality of second metal conducting wires C2extending in the second direction Y are disposed in the first area A1corresponding to the first metal layer171. The second metal conducting wire C2does not continuously overlap the first metal sub-conducting wire C11in the extension direction (the second direction Y) of the second metal conducting wire. In other words, the second metal conducting wire C2and the first metal sub-conducting wire C11do not overlap except for a point of intersection between the second metal conducting wire and the first metal sub-conducting wire in the extension directions thereof. Any second metal conducting wire C2is electrically connected to one conductive pattern. In this embodiment, the second metal conducting wire C2is used as the signal transmission line Li shown inFIG.7, and is configured to transmit a first sensing signal provided by the conductive pattern electrically connected to the second metal conducting wire to the touch controller TC. 
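To make the self-capacitance readout described above concrete, the following is a minimal sketch of how a touch controller such as the touch controller TC could locate a touch from the first sensing signals reported by the matrix of conductive patterns P1. The baseline values, the detection threshold, and the centroid method are illustrative assumptions and are not details taken from this application.

# Illustrative model (not from this application): each conductive pattern P1 reports one
# self-capacitance reading over its signal transmission line Li; the controller flags
# patterns whose change from an untouched baseline exceeds a threshold and estimates the
# touch location as the weighted centroid of the flagged patterns.
from typing import List, Optional, Tuple

def locate_touch(
    baseline: List[List[float]],   # untouched readings, one per conductive pattern
    frame: List[List[float]],      # current readings received over the signal transmission lines
    threshold: float = 5.0,        # assumed detection threshold (arbitrary units)
) -> Optional[Tuple[float, float]]:
    """Return an estimated (row, column) touch position, or None if no touch is detected."""
    weight_sum, row_acc, col_acc = 0.0, 0.0, 0.0
    for r, (base_row, cur_row) in enumerate(zip(baseline, frame)):
        for c, (base, cur) in enumerate(zip(base_row, cur_row)):
            delta = cur - base            # a finger increases the pattern's self-capacitance
            if delta > threshold:
                weight_sum += delta
                row_acc += delta * r
                col_acc += delta * c
    if weight_sum == 0.0:
        return None
    return row_acc / weight_sum, col_acc / weight_sum

# Example: a 4 x 4 array of conductive patterns with a touch centered near pattern (1, 2).
baseline = [[100.0] * 4 for _ in range(4)]
frame = [row[:] for row in baseline]
frame[1][2] += 30.0
frame[1][1] += 10.0
print(locate_touch(baseline, frame))      # approximately (1.0, 1.75)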
In addition, a plurality of second metal conducting wires C2extending in the second direction Y are disposed at the first metal layer171, and when the first metal layer171and the second metal layer172are stacked, in other words, when the plurality of first metal sub-conducting wires C11at the second metal layer172are projected onto the first metal layer171in a direction perpendicular to the first metal layer171, the plurality of first metal sub-conducting wires C11intersect the plurality of second metal conducting wires C2to form a plurality of closed metal grids. In an embodiment of this application, each metal grid faces one pixel unit (Pixel), and a shape of the metal grid is the same as a shape of the pixel unit, so that a metal conducting wire is located on a light shield layer. This prevents a metal conducting wire from blocking a light-emitting area of a pixel unit (Pixel), and ensures transmittance and intensity of light emitted by the pixel unit (Pixel) and image brightness. In the first area A1, a first dielectric layer includes a first via H1, and the second metal conducting wire C2is electrically connected to the conductive pattern P1through the first via H1. In other words, the second metal conducting wire C2used as the signal transmission line Li is electrically connected to the conductive pattern P1through the first via H1. FIG.9(a),FIG.9(b), andFIG.9(c)are schematic diagrams of an enlarged structure of the second area A2in the any conductive pattern shown inFIG.7along an XI line. As shown inFIG.9(a),FIG.9(b), andFIG.9(c), in addition to a plurality of first metal sub-conducting wires C11extending in the first direction X, the second area A2further includes a plurality of second metal sub-conducting wires C12extending in the second direction Y. The plurality of first metal sub-conducting wires C11intersect the plurality of second metal sub-conducting wires C12to form patterns of a plurality of metal grids. The plurality of first metal sub-conducting wires C11and the plurality of second metal sub-conducting wires C12are disposed at a same layer, and are electrically connected at intersection locations. In this embodiment, a pattern shape of a metal grid formed in the second area A2may be a hexagon formed by intersecting continuous trapezoidal first metal sub-conducting wires C11extending in the first direction X and triangular-wave-shaped second metal conducting wires C2extending in the second direction Y, as shown inFIG.9(a); or may be an irregular shape formed by intersecting continuous trapezoidal first metal sub-conducting wires C11extending in the first direction X and rectilinear second metal conducting wires C2extending in the second direction Y, as shown inFIG.9(b); or may be a hexagon formed by intersecting continuous triangular-wave-shaped first metal sub-conducting wires C11extending in the first direction X and continuous trapezoidal second metal conducting wires C2extending in the second direction Y, as shown inFIG.9(c). Certainly, the shape of the metal grid in the second area A2is not limited to the foregoing enumerated shapes, and it only needs to be ensured that the shape of the metal grid and the shape of the pixel area are the same and fully overlap. The metal grid pattern formed in the second area A2may be obtained by patterning a material of the second metal layer172. In this embodiment, the first metal sub-conducting wire C11, the second metal sub-conducting wire C12, and the second metal conducting wire are all disposed to face the black matrix BM.
This effectively prevents display brightness of a pixel unit (Pixel) from being affected when a metal conducting wire fully overlaps an edge of a pixel area. At a location corresponding to the second metal conducting wire C2at the second metal layer172other than a location of the first metal sub-conducting wire C11, a floating metal conducting wire having a same material as the first metal sub-conducting wire C11may be disposed. Certainly, at a location corresponding to the second metal conducting wire C2at the second metal layer172, the first metal sub-conducting wire C11may alternatively not be disposed. Specifically,FIG.10is a schematic diagram of a cross-sectional structure of the conductive pattern shown inFIG.8along a B-B line. As shown inFIG.10, in this embodiment, at a location corresponding to the second metal conducting wire C2at the second metal layer172other than a location of the first metal sub-conducting wire C11, a floating (floating) metal conducting wire having a same material as the first metal sub-conducting wire C11is disposed. In this embodiment, a part of the floating metal conducting wire is not electrically connected to a ground terminal of the touch display panel. The floating metal conducting wire is disposed, so that the part of the metal conducting wire is not affected by electrical performance of the ground terminal. This further reduces signal interference between the first metal sub-conducting wires C11and between the first metal sub-conducting wire C11and the second metal conducting wire C2, so that the second metal conducting wire C2used as the signal transmission line Li accurately transmits the first sensing signal sensed by the conductive pattern to the touch sensing module TC. FIG.11is a schematic diagram of a cross-sectional structure of the conductive pattern shown inFIG.8along a B-B line according to a second embodiment of this application. As shown inFIG.11, in this embodiment, at a location corresponding to the second metal conducting wire C2at the second metal layer172other than a location of the first metal sub-conducting wire C11, the first metal sub-conducting wire C11is not disposed at the second metal layer172. This prevents a case in which the second metal conducting wire C2used as the signal transmission line Li is electrically connected to the conductive pattern to mistakenly transmit an electrical signal sensed by the conductive pattern to the touch sensing module TC, and ensures accuracy of a touch sensing signal. FIG.12is a schematic diagram of an enlarged structure of the any conductive pattern shown inFIG.7along an xx line according to a third embodiment of this application.FIG.13is a schematic diagram of an exploded structure of the first metal sub-conducting wire C11and the second metal conducting wire C2in the conductive pattern shown inFIG.12. As shown inFIG.12andFIG.13, the first metal sub-conducting wire C11is a square-wave-shaped metal conducting wire extending in the first direction X, and the second metal conducting wire C2is a rectilinear metal conducting wire extending in the second direction Y. Two adjacent first metal sub-conducting wires C11intersect two adjacent second metal conducting wires C2to form one square metal grid, and a shape and a size of one metal grid are substantially the same as a shape and a size of a pixel unit (Pixel). In this embodiment, preferably, the metal grid faces the black matrix BM (FIG.3) and surrounds the pixel unit (Pixel). 
This effectively prevents display brightness of a pixel unit (Pixel) from being affected when a metal conducting wire fully overlaps an edge of a pixel area. FIG.14is a schematic diagram of an enlarged structure of the any conductive pattern shown inFIG.7along an xx line according to a fourth embodiment of this application.FIG.15is a schematic diagram of an exploded structure of the first metal sub-conducting wire C11and the second metal conducting wire C2in the conductive pattern shown inFIG.14. As shown inFIG.14andFIG.15, the first metal sub-conducting wire C11is a plurality of continuous trapezoidal metal conducting wires extending in the first direction X, and the second metal conducting wire C2is a rectilinear metal conducting wire extending in the second direction Y. Two first metal sub-conducting wires C11intersect two adjacent second metal conducting wires C2to form one irregular polygonal metal grid, and a shape and a size of one metal grid are substantially the same as a shape and a size of a pixel unit (Pixel). In this embodiment, the metal grid faces the black matrix BM and surrounds the pixel unit (Pixel). This effectively prevents display brightness of a pixel unit (Pixel) from being affected when a metal conducting wire fully overlaps an edge of a pixel area. FIG.16is a schematic diagram of an enlarged structure of the any conductive pattern shown inFIG.7along an xx line according to a fifth embodiment of this application.FIG.17is a schematic diagram of an exploded structure of the first metal sub-conducting wire C11and the second metal conducting wire C2in the conductive pattern shown inFIG.16. As shown inFIG.16andFIG.17, the first metal sub-conducting wire C11is a triangular-wave-shaped metal conducting wire extending in the first direction X, and the second metal conducting wire C2is a plurality of continuous trapezoidal metal conducting wires extending in the second direction Y. Two first metal sub-conducting wires C11intersect two adjacent second metal conducting wires C2to form one hexagonal metal grid, and a shape and a size of one metal grid are substantially the same as a shape and a size of a pixel unit (Pixel). In this embodiment, the metal grid faces the black matrix BM and surrounds the pixel unit (Pixel). This effectively prevents display brightness of a pixel unit (Pixel) from being affected when a metal conducting wire fully overlaps an edge of a pixel area. In another embodiment of this application, compared with those shown inFIG.16andFIG.17, the extension direction of the first metal sub-conducting wire C11and the extension direction of the second metal conducting wire C2may be exchanged. In other words, the first metal sub-conducting wire C11shown inFIG.16is a triangular-wave-shaped metal conducting wire extending in the second direction Y, and the second metal conducting wire C2is a plurality of continuous trapezoidal metal conducting wires extending in the first direction X. FIG.18is a schematic diagram of an enlarged structure of the any conductive pattern shown inFIG.7along an xx line according to a sixth embodiment of this application.FIG.19is a schematic diagram of an exploded structure of the first metal sub-conducting wire C11and the second metal conducting wire C2in the conductive pattern shown inFIG.18. As shown inFIG.18andFIG.19, the first metal sub-conducting wire C11and the second metal conducting wire C2are continuous Z-shaped metal conducting wires extending in a same direction. 
In this embodiment, the first metal sub-conducting wire C11is a triangular-wave-shaped metal conducting wire extending in the second direction Y, and the second metal conducting wire C2is a triangular-wave-shaped metal conducting wire extending in the second direction Y. In another embodiment of this application, the first metal sub-conducting wire C11is a triangular-wave-shaped metal conducting wire extending in the first direction X, and the second metal conducting wire C2is a triangular-wave-shaped metal conducting wire extending in the first direction X. In this embodiment, one second metal conducting wire C2is disposed between two adjacent first metal sub-conducting wires C11, and one first metal sub-conducting wire C11is disposed between two adjacent second metal conducting wires C2. Therefore, one first metal sub-conducting wire C11intersects one second metal conducting wire C2to form quadrilateral metal grids that are sequentially arranged in the second direction Y. In addition, two adjacent first metal sub-conducting wires C11intersect two adjacent second metal conducting wires C2to form a quadrilateral including four metal grids. In this embodiment, the metal grid is a diamond arranged in the second direction Y, the four metal grids are arranged in a diamond shape, and a shape and a size of one metal grid are substantially the same as a shape and a size of a pixel unit (Pixel). In this embodiment, the metal grid faces the black matrix BM. This effectively prevents display brightness of a pixel unit (Pixel) from being affected when a metal conducting wire fully overlaps an edge of a pixel area. FIG.20is a schematic diagram of an enlarged structure of the any conductive pattern shown inFIG.7along an xx line according to a seventh embodiment of this application.FIG.21is a schematic diagram of an exploded structure of the first metal sub-conducting wire C11and the second metal conducting wire C2in the conductive pattern shown inFIG.20. As shown inFIG.20andFIG.21, the first metal sub-conducting wire C11is a metal conducting wire that extends in the first direction X and that forms a plurality of closed grids, and the second metal conducting wire C2is a triangular-wave-shaped metal conducting wire extending in the second direction Y. In this embodiment, the closed grid is a quadrilateral, and a diagonal line of the quadrilateral is parallel to the first direction X, or a diagonal line of the quadrilateral is perpendicular to the first direction X. In other words, a plurality of closed grids formed by the first metal sub-conducting wire C11extending in the first direction X are diamonds continuously arranged in the first direction X. In the extension direction of the second metal conducting wire C2, the second metal conducting wire C2does not fully overlap a grid line that forms a metal grid. This effectively reduces an area of full overlapping between the second metal conducting wire used as the signal transmission line Li and a metal conducting wire in a conductive pattern, and effectively reduces drive load of the touch drive module TC. In this embodiment, one metal grid corresponds to four pixel units (Pixels). In another embodiment of this application, a quantity of pixel units corresponding to one metal grid is not limited thereto. For example, one metal grid corresponds to eight pixel units (Pixels). 
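The recurring point that limiting the overlap between the second metal conducting wire C2 (used as the signal transmission line Li) and the grid wires of a conductive pattern reduces the drive load can be illustrated with a simple parallel-plate estimate: the coupling capacitance that the touch drive circuit must charge grows with the metal-to-metal overlap area through the insulation dielectric layer173. The sketch below is only a rough order-of-magnitude comparison; every dimension and the permittivity are assumed values, not figures from this application.

# Parallel-plate estimate C = eps0 * eps_r * overlap_area / dielectric_thickness,
# comparing a signal line routed directly on top of a grid wire with one that
# crosses grid wires only at discrete intersection points.
EPS0 = 8.854e-12           # vacuum permittivity, F/m
EPS_R = 3.5                # assumed relative permittivity of the insulation dielectric layer
THICKNESS = 0.5e-6         # assumed dielectric thickness between the two metal layers, m
LINE_WIDTH = 3e-6          # assumed metal conducting wire width, m
LINE_LENGTH = 10e-3        # assumed routed length of C2 over one column of patterns, m

def coupling_capacitance(overlap_area_m2: float) -> float:
    return EPS0 * EPS_R * overlap_area_m2 / THICKNESS

# Case 1: C2 runs on top of a grid wire for its full routed length.
full_overlap_c = coupling_capacitance(LINE_WIDTH * LINE_LENGTH)
# Case 2: C2 crosses grid wires only at points (assume about 200 crossings,
# each roughly LINE_WIDTH x LINE_WIDTH in area).
point_overlap_c = coupling_capacitance(200 * LINE_WIDTH * LINE_WIDTH)

print(f"full-overlap routing  : {full_overlap_c * 1e12:.2f} pF")
print(f"point-crossing routing: {point_overlap_c * 1e12:.3f} pF")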
Specifically, for a case in which one metal grid corresponds to four pixel units (Pixels), refer toFIG.22andFIG.23.FIG.22is a schematic diagram of a plane structure obtained after the conductive grid and the corresponding pixel unit shown inFIG.20are exploded.FIG.23is a schematic diagram of a plane structure obtained after the conductive grid and the pixel unit shown inFIG.20overlap. As shown inFIG.22, each metal grid corresponds to four pixel units (Pixels), and only a metal conducting wire that forms a metal grid fully overlaps an edge area of a pixel unit (Pixel). In other words, the metal grid faces the black matrix BM. This effectively prevents display brightness of a pixel unit (Pixel) from being affected when a metal conducting wire fully overlaps an edge of a pixel unit (Pixel) corresponding to a pixel area. Certainly, in another embodiment of this application, for a metal grid pattern shown inFIG.22orFIG.23, the second metal conducting wire C2may further face and fully overlap an edge of a metal grid on the first metal sub-conducting wire C11in the second direction Y. This can further enable both the first metal sub-conducting wire C11and the second metal conducting wire C2to face a black matrix BM between pixel units (Pixels) and surround a pixel unit (Pixel), prevent a metal grid from blocking a light-emitting area of a pixel unit (Pixel), and further improve light transmittance and image display brightness of the touch display panel10. FIG.24is a schematic diagram of a cross-sectional structure of the conductive pattern at III-III shown inFIG.20. As shown inFIG.24, at a corresponding location at which the second metal conducting wire C2is disposed in an area inside each metal grid, a floating metal conducting wire having a same material as the first metal sub-conducting wire C11is disposed at the second metal layer172. In this embodiment, a part of the floating metal conducting wire is not electrically connected to a ground terminal of the touch display panel. The floating metal conducting wire is disposed, so that the part of the metal conducting wire is not affected by electrical performance of the ground terminal. This further reduces signal interference between the first metal sub-conducting wires C11and between the first metal sub-conducting wire C11and the second metal conducting wire C2, so that the second metal conducting wire C2used as the signal transmission line Li accurately transmits the first sensing signal sensed by the conductive pattern to the touch sensing module TC. FIG.25is a schematic diagram of a cross-sectional structure of the conductive pattern at III-III shown inFIG.20according to another embodiment of this application. As shown inFIG.25, at a corresponding location at which the second metal conducting wire C2is disposed in an area inside each metal grid, the first metal sub-conducting wire C11is not disposed at the second metal layer172. This reduces procedure complexity, and further improves light transmittance and image display brightness of the touch display panel. FIG.26is a schematic diagram of a cross-sectional structure of the conductive pattern at IV-IV shown inFIG.20. As shown inFIG.26, in a black matrix BM between corresponding adjacent pixel units (Pixels), at a location at which the first metal layer171and the second metal conducting wire C2are not disposed on the second surface152, a floating metal conducting wire having a same material as the first metal sub-conducting wire C11is disposed at the second metal layer172.
This reduces signal interference between the first metal sub-conducting wires C11and between the first metal sub-conducting wire C11and the second metal conducting wire C2. FIG.27is a schematic diagram of a cross-sectional structure of the conductive pattern at IV-IV shown inFIG.20according to another embodiment of this application. As shown inFIG.27, in a black matrix BM between corresponding adjacent pixel units (Pixels), at a location at which the first metal layer171and the second metal conducting wire C2are not disposed on the second surface152, the first metal sub-conducting wire C11is not disposed at the second metal layer172, and the second metal conducting wire C2is not disposed at the first metal layer171. This reduces procedure complexity. FIG.28is a schematic diagram of an enlarged structure of the any conductive pattern shown inFIG.7along an xx line according to an eighth embodiment of this application.FIG.29is a schematic diagram of an exploded structure of the first metal sub-conducting wire C11and the second metal conducting wire C2in the conductive pattern shown inFIG.28. As shown inFIG.28andFIG.29, the first metal sub-conducting wire C11is a metal conducting wire that extends in the first direction X and that forms a plurality of closed grids, and the second metal conducting wire C2is a metal conducting wire that extends in the second direction Y and that forms a plurality of closed grids. In this embodiment, the closed grid is a quadrilateral, and a diagonal line of the quadrilateral is parallel to the first direction X, or a diagonal line of the quadrilateral is perpendicular to the first direction X. In other words, a plurality of closed grids formed by the first metal sub-conducting wire C11extending in the first direction X are diamonds continuously arranged in the first direction X. In the extension direction of the second metal conducting wire C2, the second metal conducting wire C2fully overlaps a grid line that forms a metal grid. This effectively reduces an area of full overlapping between the second metal conducting wire used as the signal transmission line Li and a metal conducting wire in a conductive pattern, and effectively reduces drive load of the touch drive module TC. FIG.30is a schematic diagram of an enlarged structure of the any conductive pattern shown inFIG.7along an xx line according to a ninth embodiment of this application.FIG.31is a schematic diagram of an exploded structure of the first metal sub-conducting wire C11and the second metal conducting wire C2in the conductive pattern shown inFIG.30. As shown inFIG.30andFIG.31, the first metal sub-conducting wire C11is a metal conducting wire that extends in the first direction X and that forms a plurality of closed grids, and the second metal conducting wire C2is a triangular-wave-shaped metal conducting wire extending in the second direction Y. In this embodiment, the closed grid is a quadrilateral, and a diagonal line of the quadrilateral is parallel to the first direction X, or a diagonal line of the quadrilateral is perpendicular to the first direction X. In other words, a plurality of closed grids formed by the first metal sub-conducting wire C11extending in the first direction X are diamonds continuously arranged in the first direction X. In this embodiment, in the first metal sub-conducting wire C11, there is one metal connection point CP between any two adjacent metal grids in the first direction X.
The second conducting wire C2is a metal conducting wire that extends in the second direction Y and that forms a plurality of closed metal grids. When the second conducting wire C2and the first metal sub-conducting wire C11fully overlap, the second conducting wire and the first metal sub-conducting wire do not fully overlap except for an intersection point generated due to different extension directions. Each metal grid on the second conducting wire C2surrounds one metal conducting wire connection point CP, and there is at least one metal connection point CP between any two adjacent second conducting wires C2. In the extension direction of the second metal conducting wire C2, the second metal conducting wire C2does not fully overlap a grid line that forms a metal grid. This effectively reduces an area of full overlapping between the second metal conducting wire used as the signal transmission line Li and a metal conducting wire in a conductive pattern, and effectively reduces drive load of the touch drive module TC. In this embodiment, a shape and a size of a metal grid on the first metal sub-conducting wire C11are substantially the same as a shape and a size of one pixel unit (Pixel), and the metal grid faces the black matrix BM and surrounds the pixel unit (Pixel). This effectively prevents display brightness of a pixel unit (Pixel) from being affected when a metal conducting wire fully overlaps an edge of a pixel area. In this embodiment, in the first metal sub-conducting wire C11except for the location of intersection between the first metal sub-conducting wire C11and the second metal conducting wire C2, a shape and a size of a metal grid on the first metal sub-conducting wire C11are substantially the same as a shape and a size of one pixel unit (Pixel), and the metal grid faces the black matrix BM and surrounds the pixel unit (Pixel). In other words, because there is at least one metal connection point CP between any two adjacent second conducting wires C2, some metal grids on the first metal sub-conducting wire C11do not intersect a metal grid on the second metal conducting wire C2between any two adjacent second conducting wires C2. Therefore, a shape and a size of a metal grid that is in the first metal sub-conducting wire C11and that does not intersect a metal grid on the second metal conducting wire C2are substantially the same as a shape and a size of one pixel unit (Pixel), and the metal grid surrounds one pixel unit (Pixel). This effectively increases an effective area for performing touch sensing by the first metal sub-conducting wire C11, increases a quantity of output first sensing signals, and ensures touch operation identification accuracy. FIG.32is a schematic diagram of an enlarged structure of the any conductive pattern shown inFIG.7along an xx line according to a tenth embodiment of this application.FIG.33is a schematic diagram of an exploded structure of the first metal sub-conducting wire C11and the second metal conducting wire C2in the conductive pattern shown inFIG.32. As shown inFIG.32andFIG.33, the first metal sub-conducting wire C11is a metal conducting wire that extends in the first direction X and that forms a plurality of closed grids, and the second metal conducting wire C2is a triangular-wave-shaped metal conducting wire extending in the second direction Y. 
In this embodiment, the closed grid is a quadrilateral, and a diagonal line of the quadrilateral is parallel to the first direction X, or a diagonal line of the quadrilateral is perpendicular to the first direction X. In other words, a plurality of closed grids formed by the first metal sub-conducting wire C11extending in the first direction X are diamonds continuously arranged in the first direction X. In this embodiment, there is one metal connection point CP between any two adjacent metal grids in the first direction X. The second conducting wire C2is a metal conducting wire that extends in the second direction Y and that forms a plurality of closed metal grids. When the second conducting wire C2and the first metal sub-conducting wire C11fully overlap, the second conducting wire and the first metal sub-conducting wire do not fully overlap except for an intersection point generated due to different extension directions. Each metal grid on the second conducting wire C2surrounds one metal conducting wire connection point CP, and metal connection points CP surrounded by any two adjacent second conducting wires C2are adjacent in the first direction X. In the extension direction of the second metal conducting wire C2, the second metal conducting wire C2does not fully overlap a grid line that forms a metal grid. This effectively reduces an area of full overlapping between the second metal conducting wire used as the signal transmission line Li and a metal conducting wire in a conductive pattern, and effectively reduces drive load of the touch drive module TC. In this embodiment, a shape and a size of a metal grid on the first metal sub-conducting wire C11are substantially the same as a shape and a size of one pixel unit (Pixel), one metal grid faces one pixel unit (Pixel), and the metal grid faces the black matrix BM and surrounds the pixel unit (Pixel). This effectively prevents display brightness of a pixel unit (Pixel) from being affected when a metal conducting wire fully overlaps an edge of a pixel area. At a corresponding location at which the second metal conducting wire C2is disposed in an area inside each metal grid, a floating metal conducting wire having a same material as the first metal sub-conducting wire C11is disposed at the second metal layer172. For example, for cross sections of the location along the VI-VI line shown inFIG.30and the location along the VIII-VIII line shown inFIG.32, refer toFIG.24. In this embodiment, a part of the floating metal conducting wire is not electrically connected to a ground terminal of the touch display panel. The floating metal conducting wire is disposed, so that the part of the metal conducting wire is not affected by electrical performance of the ground terminal. This further reduces signal interference between the first metal sub-conducting wires C11and between the first metal sub-conducting wire C11and the second metal conducting wire C2, so that the second metal conducting wire C2used as the signal transmission line Li accurately transmits the first sensing signal sensed by the conductive pattern to the touch sensing module TC. In another embodiment of this application, at a corresponding location at which the second metal conducting wire C2is disposed in an area inside each metal grid, the first metal sub-conducting wire C11is not disposed at the second metal layer172.
For example, for cross sections of the location along the VI-VI line shown inFIG.30and the location along the VIII-VIII line shown inFIG.32, refer toFIG.25. Because the first metal sub-conducting wire C11is not disposed at the location, procedure complexity can be effectively reduced, and light transmittance and image display brightness of the touch display panel can be further improved. At a corresponding location at which the second metal conducting wire C2is not disposed in an area inside each metal grid, a floating metal conducting wire having a same material as the first metal sub-conducting wire C11is disposed at the second metal layer172. At a location at which the first metal layer171and the second metal conducting wire C2are not disposed on the second surface152, a floating metal conducting wire having a same material as the first metal sub-conducting wire C11is disposed at the second metal layer172. For example, for cross sections of the location along the VII-VII line shown inFIG.30and the location along the IX-IX line shown inFIG.32, refer toFIG.26. The floating metal conducting wire is disposed, so that the part of the metal conducting wire is not affected by electrical performance of the ground terminal. This further reduces signal interference between the first metal sub-conducting wires C11and between the first metal sub-conducting wire C11and the second metal conducting wire C2, so that the second metal conducting wire C2used as the signal transmission line Li accurately transmits the first sensing signal sensed by the conductive pattern to the touch sensing module TC. At a corresponding location at which the second metal conducting wire C2is disposed in an area inside each metal grid, the first metal sub-conducting wire C11is not disposed at the second metal layer172. For example, for cross sections of the location along the VII-VII line shown inFIG.30and the location along the IX-IX line shown inFIG.32, refer toFIG.27. Because the first metal sub-conducting wire C11and the second metal conducting wire C2are not disposed, procedure complexity can be effectively reduced, and light transmittance and image display brightness of the touch display panel can be further improved. The foregoing descriptions are embodiments of this application. It should be noted that a person of ordinary skill in the art may still make several improvements or polishing without departing from the principle of this application, and the improvements or polishing shall also fall within the protection scope of this application. | 40,744 |
11861091 | DETAILED DESCRIPTION Advantages and characteristics of the present disclosure and a method of achieving the advantages and characteristics will be clear by referring to exemplary embodiments described below in detail together with the accompanying drawings. However, the present disclosure is not limited to the exemplary embodiments disclosed herein but will be implemented in various forms. The exemplary embodiments are provided by way of example only so that those skilled in the art can fully understand the disclosures of the present disclosure and the scope of the present disclosure. Therefore, the present disclosure will be defined only by the scope of the appended claims. The shapes, sizes, ratios, angles, numbers, and the like illustrated in the accompanying drawings for describing the exemplary embodiments of the present disclosure are merely examples, and the present disclosure is not limited thereto. Like reference numerals generally denote like elements throughout the specification. Further, in the following description of the present disclosure, a detailed explanation of known related technologies may be omitted to avoid unnecessarily obscuring the subject matter of the present disclosure. The terms such as “including,” “having,” and “comprising” used herein are generally intended to allow other components to be added unless the terms are used with the term “only”. Any references to singular may include plural unless expressly stated otherwise. Components are interpreted to include an ordinary error range even if not expressly stated. When the position relation between two parts is described using the terms such as “on”, “above”, “below”, and “next”, one or more parts may be positioned between the two parts unless the terms are used with the term “immediately” or “directly”. When an element or layer is disposed “on” another element or layer, another layer or another element may be interposed directly on the other element or therebetween. Although the terms “first”, “second”, and the like are used for describing various components, these components are not confined by these terms. These terms are merely used for distinguishing one component from the other components. Therefore, a first component to be mentioned below may be a second component in a technical concept of the present disclosure. Like reference numerals generally denote like elements throughout the specification. A size and a thickness of each component illustrated in the drawing are illustrated for convenience of description, and the present disclosure is not limited to the size and the thickness of the component illustrated. The features of various embodiments of the present disclosure can be partially or entirely adhered to or combined with each other and can be interlocked and operated in technically various ways, and the embodiments can be carried out independently of or in association with each other. Hereinafter, the present disclosure will be described in detail with reference to accompanying drawings. FIG.1is a plan view of a display device according to an exemplary embodiment of the present disclosure. For the convenience of description, inFIG.1, among various components of the display device100, a substrate110, a plurality of flexible films160, and a plurality of printed circuit boards170are illustrated. 
Referring toFIG.1, the display device100according to the exemplary embodiment of the present disclosure includes a substrate110, a plurality of flexible films160, and a plurality of printed circuit boards170. The substrate110is a substrate which supports and protects a plurality of components of the display device100. The substrate110may be formed of a glass or a plastic material having flexibility. When the substrate110is formed of a plastic material, for example, the substrate may be formed of polyimide (PI), but it is not limited thereto. The substrate110includes an active area AA and a non-active area NA. The active area AA is disposed at a center portion of the substrate110and images are displayed in the active area of the display device100. In the active area AA, a display element and various driving elements for driving the display element may be disposed. For example, the display element may be configured by a light emitting diode ED (e.g., a light emitting element) including an anode AN, an emission layer EL, and a cathode CT. Further, various driving elements for driving the display element, such as transistors TR1, TR2, TR3, a capacitor SC, or wiring lines may be disposed in the active area AA. A plurality of sub pixels SP may be included in the active area AA. The sub pixel SP is a minimum unit which configures a screen and each of the plurality of sub pixels SP may include a light emitting diode ED and a driving circuit. The plurality of sub pixels SP may be defined as intersecting areas of a plurality of gate lines GL disposed in a first direction and a plurality of data lines DL disposed in a second direction which is different from the first direction. Here, the first direction may be a horizontal direction ofFIG.1and the second direction may be a vertical direction ofFIG.1, but are not limited thereto. Each of the plurality of sub pixels SP may emit light having different wavelengths. For example, the plurality of sub pixels SP includes a red sub pixel SPR, a green sub pixel SPG, a blue sub pixel SPB, and a white sub pixel SPW. The driving circuit of the sub pixel SP is a circuit for controlling the driving of the light emitting diode ED. For example, the driving circuit may include a switching transistor, a driving transistor, and a capacitor SC. The driving circuit may be electrically connected to signal lines such as a gate line GL and a data line DL which are connected to a gate driver and a data driver disposed in the non-active area NA. The non-active area NA is disposed in a circumferential area of the substrate110and in the non-active area, images are not displayed. The non-active area NA is disposed so as to enclose the active area AA but is not limited thereto. Various components for driving a plurality of sub pixels SP disposed in the active area AA may be disposed in the non-active area NA. For example, a driver, a driving circuit, a signal line, and a flexible film160which supply a signal for driving the plurality of sub pixels SP may be disposed. The plurality of flexible films160are disposed at one end of the substrate110. The plurality of flexible films160are electrically connected to one end of the substrate110. The plurality of flexible films160are films in which various components are disposed on a base film having malleability to supply a signal to the plurality of sub pixels SP of the active area AA. 
One end of the plurality of flexible films160is disposed in the non-active area NA of the substrate110to supply a data voltage to the plurality of sub pixels SP of the active area AA. In the meantime, even though the plurality of flexible films160is four inFIG.1, the number of flexible films160may vary depending on the design but is not limited thereto. In the plurality of flexible films160, a driver such as a gate driver or a data driver may be disposed. The driver is a component which processes data for displaying images and a driving signal for processing the data. The driver may be disposed by a chip on glass (COG), a chip on film (COF), or a tape carrier package (TCP) technique depending on a mounting method. In the present specification, for the convenience of description, it is described that the driver is mounted on the plurality of flexible films160by a chip on film technique but is not limited thereto. The printed circuit board170is connected to the plurality of flexible films160. The printed circuit board170is a component which supplies signals to the driver. Various components may be disposed in the printed circuit board170to supply various driving signals such as a driving signal or a data voltage to the driver. In the meantime, even though two printed circuit boards170are illustrated inFIG.1, the number of printed circuit boards170may vary depending on the design and is not limited thereto. In the meantime, the display device100according to the exemplary embodiment of the present disclosure may be a display device with a touch structure. Accordingly, the display device100may further include a touch driver. The touch driver is disposed in the gate driver or disposed in the printed circuit board170. When the touch driver is disposed in the gate driver, the gate driver may be mounted in the non-active area NA of the substrate110in a gate in panel (GIP) manner or attached to the non-active area NA. In the meantime, the display device100may be configured by a top emission type or a bottom emission type, depending on an emission direction of light which is emitted from the light emitting diode. According to the top emission type, light emitted from the light emitting diode is emitted above the substrate on which the light emitting diode is disposed. In the case of the top emission type, a reflective layer may be formed below the anode to allow the light emitted from the light emitting diode to travel above the substrate, that is, toward the cathode. According to the bottom emission type, light emitted from the light emitting diode is emitted below the substrate on which the light emitting diode is disposed. In the case of the bottom emission type, the anode may be formed only of a transparent conductive material and the cathode may be formed of the metal material having a high reflectance to allow the light emitted from the light emitting diode to travel below the substrate. Hereinafter, for the convenience of description, the description will be made by assuming that the display device100according to an exemplary embodiment of the present disclosure is a bottom emission type display device, but it is not limited thereto. FIG.2is a circuit diagram of a sub pixel of a portion A ofFIG.1according to one embodiment. Referring toFIG.2, the plurality of sub pixels SP includes a red sub pixel SPR, a white sub pixel SPW, a blue sub pixel SPB, and a green sub pixel SPG. 
Further, the driving circuit for driving the light emitting diodes ED of sub pixels SPR, SPW, SPB, SPG includes a first transistor TR1, a second transistor TR2, a third transistor TR3, and a storage capacitor SC. In order to drive the driving circuit, a plurality of wiring lines including a gate line GL, a data line DL, a high potential power line VDD, a sensing line SL, and a reference line RL is disposed on the substrate110. Each sub pixel SPR, SPW, SPB, SPG has the same structure so that the red sub pixel SPR will be described below as a reference. Each of the first transistor TR1, the second transistor TR2, and the third transistor TR3 included in the driving circuit of the red sub pixel SPR includes a gate electrode, a source electrode, and a drain electrode. The first transistor TR1, the second transistor TR2, and the third transistor TR3 may be P-type thin film transistors or N-type thin film transistors. For example, since in the P-type thin film transistor, holes flow from the source electrode to the drain electrode, the current flows from the source electrode to the drain electrode. Since in the N-type thin film transistor, electrons flow from the source electrode to the drain electrode, the current flows from the drain electrode to the source electrode. Hereinafter, the description will be made under the assumption that the first transistor TR1, the second transistor TR2, and the third transistor TR3 are N-type thin film transistors in which the current flows from the drain electrode to the source electrode, but the present disclosure is not limited thereto. The first transistor TR1 includes a first active layer, a first gate electrode, a first source electrode, and a first drain electrode. The first gate electrode is connected to a first node N1, the first source electrode is connected to the anode of the light emitting diode ED, and the first drain electrode is connected to the high potential power line VDD. When a voltage of the first node N1 is greater than a threshold voltage, the first transistor TR1 is turned on and when the voltage of the first node N1 is less than the threshold voltage, the first transistor TR1 is turned off. When the first transistor TR1 is turned on, a driving current may be transmitted to the light emitting diode ED through the first transistor TR1. Therefore, the first transistor TR1 which controls the driving current transmitted to the light emitting diode ED may be referred to as a driving transistor. The second transistor TR2 includes a second active layer, a second gate electrode, a second source electrode, and a second drain electrode. The second gate electrode is connected to the gate line GL, the second source electrode is connected to the first node N1, and the second drain electrode is connected to a first data line DL1. The second transistor TR2 may be turned on or off based on a gate voltage from the gate line GL. When the second transistor TR2 is turned on, a data voltage from the data line DL may be charged in the first node N1. Therefore, the second transistor TR2 which is turned on or turned off by the gate line GL may also be referred to as a switching transistor.
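As a rough illustration of how the switching and driving transistors cooperate, the sketch below models the first transistor TR1 with a textbook square-law saturation equation: once the second transistor TR2 has charged the data voltage onto the first node N1, the gate-source voltage of TR1 sets the current delivered to the light emitting diode ED. The transconductance parameter and threshold voltage are assumed values, and the square-law model itself is a simplification that is not taken from this disclosure.

# Hypothetical numbers and a textbook square-law model (not from this disclosure)
# relating the voltage stored on the first node N1 to the driving current that
# the first transistor TR1 supplies to the light emitting diode ED.
K_N = 2e-6      # assumed transconductance parameter of TR1, A/V^2
V_TH = 1.0      # assumed threshold voltage of TR1, V

def driving_current(v_n1: float, v_n2: float) -> float:
    """Saturation-region estimate of the current TR1 sources into the light emitting diode ED."""
    v_gs = v_n1 - v_n2                  # gate-source voltage of the N-type driving transistor
    if v_gs <= V_TH:
        return 0.0                      # TR1 is off: no driving current, the pixel stays dark
    return 0.5 * K_N * (v_gs - V_TH) ** 2

# When TR2 is turned on by the gate line GL, the data voltage is charged into N1;
# a larger data voltage produces a larger driving current.
for v_data in (1.0, 3.0, 5.0):
    print(f"V(N1) = {v_data:.1f} V -> I_drive = {driving_current(v_data, 0.0) * 1e6:.1f} uA")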
In the meantime, in the case of the white sub pixel SPW, a second drain electrode of the second transistor TR2 is connected to a second data line DL2, in the case of the blue sub pixel SPB, a second drain electrode of the second transistor TR2 is connected to a third data line DL3, and in the case of the green sub pixel SPG, a second drain electrode of the second transistor TR2 is connected to a fourth data line DL4. The third transistor TR3 includes a third active layer, a third gate electrode, a third source electrode, and a third drain electrode. The third gate electrode is connected to the sensing line SL, the third source electrode is connected to the second node N2, and the third drain electrode is connected to the reference line RL. The third transistor TR3 may be turned on or off based on a sensing voltage from the sensing line SL. When the third transistor TR3 is turned on, a reference voltage Vref from the reference line RL may be transmitted to the second node N2 and the storage capacitor SC. Therefore, the third transistor TR3 may also be referred to as a sensing transistor. In the meantime, even though inFIG.3, it is illustrated that the gate line GL and the sensing line SL are separate wiring lines, the gate line GL and the sensing line SL may be implemented as one wiring line, but it is not limited thereto. The storage capacitor SC is connected between the first gate electrode and the first source electrode of the first transistor TR1. That is, the storage capacitor SC may be connected between the first node N1 and the second node N2. The storage capacitor SC maintains a potential difference between the first gate electrode and the first source electrode of the first transistor TR1 while the light emitting diode ED emits light, so that a constant driving current may be supplied to the light emitting diode ED. The storage capacitor SC includes a plurality of capacitor electrodes and for example, one of a plurality of capacitor electrodes is connected to the first node N1 and the other one is connected to the second node N2. The light emitting diode ED includes an anode, an emission layer, and a cathode. The anode of the light emitting diode ED is connected to the second node N2 and the cathode is connected to the low potential power line VSS. The light emitting diode ED is supplied with a driving current from the first transistor TR1 to emit light. In the meantime, inFIG.2, it is described that the driving circuit of each sub pixel SPR, SPW, SPB, SPG of the display device100according to the exemplary embodiment of the present disclosure has a 3T1C structure including three transistors and one storage capacitor SC. However, the number and a connection relationship of the transistors and the storage capacitor may vary in various ways depending on the design and are not limited thereto. FIG.3is a diagram of a touch electrode of a display device according to an exemplary embodiment of the present disclosure.FIG.4is an enlarged view of a portion T1 ofFIG.3according to an exemplary embodiment of the present disclosure. InFIG.3, for the convenience of description, touch electrodes TE1, TE2, transistors for touching TC1, TC2, TS1, TS2, touch gate lines TG1, TG2, and six reference lines RL1, RL2, RL3-1, RL3-2, RL3-3, RL3-4 are simply illustrated. Referring toFIGS.3and4, the display device100according to the exemplary embodiment of the present disclosure includes a plurality of touch electrode blocks T1, T2, T3, T4, . . . , Tn. The plurality of touch electrode blocks T1, T2, T3, T4, . 
. . , Tn may be arranged in the first direction and the second direction while overlapping the plurality of sub pixels SP in the active area AA. The plurality of touch electrode blocks T1, T2, T3, T4, . . . , Tn has the same structure so that the first touch electrode block T1 will be described below as a reference. The touch electrode block T1 includes a first touch electrode TE1, a second touch electrode TE2, and a plurality of transistors for touching TC1, TC2, TS1, TS2. Further, the plurality of transistors for touching TC1, TC2, TS1, TS2 is connected to a first reference line RL1, a second reference line RL2, a plurality of third reference lines RL3-1, RL3-2, RL3-3, RL3-4, a first touch gate line TG1, and a second touch gate line TG2. The first touch electrode TE1 includes a plurality of first sub electrodes121extending in the first direction and a first connection electrode122which extends in the second direction to connect together the plurality of first sub electrodes121. The plurality of first sub electrodes121may be disposed to be spaced apart from each other in the second direction. Each of the plurality of first sub electrodes121may overlap the plurality of sub pixels SP disposed in the first direction. The second touch electrode TE2 is disposed to be spaced apart from the first touch electrode TE1. The second touch electrode TE2 includes a plurality of second sub electrodes123which extends in the first direction and a second connection electrode124which extends in the second direction to connect together the plurality of second sub electrodes123. The plurality of second sub electrodes123may be disposed to be spaced apart from each other in the second direction. Each of the plurality of second sub electrodes123may overlap the plurality of sub pixels SP disposed in the first direction. The plurality of first sub electrodes121and the plurality of second sub electrodes123may be alternately disposed in the second direction. That is, the second sub electrodes123are interleaved between the first sub electrodes121such that at least one first sub electrode is disposed between two second sub electrodes. Further, in one sub pixel SP, one of the plurality of first sub electrodes121and one of the plurality of second sub electrodes123are disposed. In one embodiment, the first sub electrode121is disposed above the sub pixel SP and the second sub electrode123is disposed below the sub pixel SP, but the positions of the first sub electrode121and the second sub electrode123are not limited thereto. The plurality of transistors for touching TC1, TC2, TS1, TS2 includes a first charging transistor TC1, a second charging transistor TC2, a first sensing transistor TS1, and a second sensing transistor TS2. The first charging transistor TC1 includes a fourth active layer, a fourth gate electrode, a fourth source electrode, and a fourth drain electrode. The fourth gate electrode is connected to the first touch gate line TG1, the fourth source electrode is connected to the first touch electrode TE1, and the fourth drain electrode is connected to the first reference line RL1. The first charging transistor TC1 is turned on or off based on a first touch gate signal from the first touch gate line TG1. When the first charging transistor TC1 is turned on, a first touching voltage from the first reference line RL1 is charged in the first touch electrode TE1. The second charging transistor TC2 includes a fifth active layer, a fifth gate electrode, a fifth source electrode, and a fifth drain electrode.
The fifth gate electrode is connected to the first touch gate line TG1, the fifth source electrode is connected to the second touch electrode TE2, and the fifth drain electrode is connected to the second reference line RL2. The second charging transistor TC2 is turned on or off based on the first touch gate signal from the first touch gate line TG1. When the second charging transistor TC2 is turned on, a second touching voltage from the second reference line RL2 is charged in the second touch electrode TE2. The first sensing transistor TS1 includes a sixth active layer, a sixth gate electrode, a sixth source electrode, and a sixth drain electrode. The first sensing transistor TS1 has the same structure as the first charging transistor TC1. Specifically, the sixth gate electrode is connected to the second touch gate line TG2, the sixth drain electrode is connected to the first touch electrode TE1, and the sixth source electrode is connected to the 3-1st reference line RL3-1. The first sensing transistor TS1 is turned on or off based on a second touch gate signal from the second touch gate line TG2. When the first sensing transistor TS1 is turned on, the touch sensing signal from the first touch electrode TE1 is sensed through the 3-1st reference line RL3-1. The second sensing transistor TS2 includes a seventh active layer, a seventh gate electrode, a seventh source electrode, and a seventh drain electrode. The second sensing transistor TS2 has the same structure as the second charging transistor TC2. Specifically, the seventh gate electrode is connected to the second touch gate line TG2, the seventh drain electrode is connected to the second touch electrode TE2, and the seventh source electrode is connected to the 3-4th reference line RL3-4. The second sensing transistor TS2 is turned on or off based on a second touch gate signal from the second touch gate line TG2. When the second sensing transistor TS2 is turned on, the touch sensing signal from the second touch electrode TE2 is sensed through the 3-4th reference line RL3-4. In the meantime, in the present disclosure, it is assumed that the first charging transistor TC1, the second charging transistor TC2, the first sensing transistor TS1, and the second sensing transistor TS2 are N type thin film transistors in which the current flows from the drain electrodes to the source electrodes, but it is not limited thereto. A plurality of first charging transistors TC1, a plurality of second charging transistors TC2, a plurality of first sensing transistors TS1, and a plurality of second sensing transistors TS2 may be provided. That is, one touch electrode block T1 may be provided not with only one first charging transistor, one second charging transistor, one first sensing transistor, and one second sensing transistor, but with a plurality of first charging transistors TC1, second charging transistors TC2, first sensing transistors TS1, and second sensing transistors TS2. Therefore, in one touch electrode block T1, the plurality of first charging transistors TC1, second charging transistors TC2, first sensing transistors TS1, and second sensing transistors TS2 are provided so that the load is suppressed from being concentrated in any one of the first charging transistors TC1, second charging transistors TC2, first sensing transistors TS1, and second sensing transistors TS2 of the touch electrode block T1. A plurality of first touch gate lines TG1 and a plurality of second touch gate lines TG2 are provided.
At this time, the plurality of first touch gate lines TG1 disposed in one touch electrode block T1 is electrically connected to each other to apply the same first touch gate signal to the plurality of first charging transistors TC1 and the plurality of second charging transistors TC2. Further, the plurality of second touch gate lines TG2 disposed in one touch electrode block T1 is electrically connected to each other to apply the same second touch gate signal to the plurality of first sensing transistors TS1 and the plurality of second sensing transistors TS2. The first touch gate lines TG1 and the second touch gate lines TG2 extend in the first direction. Further, the first touch gate lines TG1 and the second touch gate lines TG2 are alternately disposed. For example, one of the first touch gate line TG1 and the second touch gate line TG2 is disposed between the plurality of sub pixels which is adjacent to each other in the second direction. Further, the first touch gate line TG1 is disposed between the plurality of sub pixels SP in the first line and the plurality of sub pixels SP in the second line and the second touch gate line TG2 is disposed between the plurality of sub pixels SP in the second line and the plurality of sub pixels SP in the third line. This structure is alternately repeated. Alternatively, one first sub electrode121and one second sub electrode123form one sub electrode pair and one of the first touch gate line TG1 and the second touch gate line TG2 is disposed between sub electrode pairs which are adjacent to each other in the second direction. Further, the first touch gate line TG1 is disposed between the first sub electrode pair and the second sub electrode pair and the second touch gate line TG2 is disposed between the second sub electrode pair and the third sub electrode pair, and this structure is alternately repeated. The first reference line RL1, the second reference line RL2, and the plurality of third reference lines RL3-1, RL3-2, RL3-3, and RL3-4 extend in the second direction. The first reference line RL1, the second reference line RL2, and the plurality of third reference lines RL3-1, RL3-2, RL3-3, and RL3-4 may be the same reference line RL described in FIG.2. That is, the reference line RL applies a reference voltage Vref to the plurality of sub pixels SP during the display period and transmits or receives signals for touching to or from the first touch electrode TE1 and the second touch electrode TE2 during the touch period. One first reference line RL1 which applies the first touching voltage to the first touch electrode TE1 is provided and one second reference line RL2 which applies the second touching voltage to the second touch electrode TE2 is provided. Further, a plurality of third reference lines RL3-1, RL3-2, RL3-3, and RL3-4 which transmits the touch sensing signals from the first touch electrode TE1 and the second touch electrode TE2 is provided. In the meantime, the third reference lines RL3-1, RL3-2, RL3-3, and RL3-4 may be wiring lines branched from the multiplexer MUX. At this time, the multiplexer MUX is disposed at the edge of the substrate110, but the present disclosure is not limited thereto. The first sensing transistor TS1 connected to the first touch electrode TE1 and the second sensing transistor TS2 connected to the second touch electrode TE2 are connected to different wiring lines among the plurality of third reference lines RL3-1, RL3-2, RL3-3, and RL3-4. 
Therefore, voltages of the first touch electrode TE1 and the second touch electrode TE2 may be sensed individually through different third reference lines RL3-1, RL3-2, RL3-3, and RL3-4. For example, as illustrated inFIGS.3and4, all the first sensing transistors TS1 of the first touch electrode block T1 are connected to the 3-1-th reference line RL3-1 and all the second sensing transistors TS2 are connected to the 3-4-th reference line RL3-4. Further, as illustrated inFIG.3, all the first sensing transistors TS1 of the second touch electrode block T2 are connected to the 3-2nd reference line RL3-2 and all the second sensing transistors TS2 are connected to the 3-1st reference line RL3-1. However, the connection relationship of the sensing transistors TS1 and TS2 and the third reference lines RL3-1, RL3-2, RL3-3, RL3-4 is not limited to those illustrated inFIGS.3and4. In the meantime, the number of first sub electrodes121, second sub electrodes123, first charging transistors TC1, second charging transistors TC2, first sensing transistors TS1, second sensing transistors TS2, first touch gate lines TG1, second touch gate lines TG2, third reference lines RL3-1, RL3-2, RL3-3, and RL3-4 is not limited to the number illustrated inFIG.4. That is, the number may vary depending on the design. FIG.5illustrates a schematic operation timing for explaining a driving method of a display device according to an exemplary embodiment of the present disclosure. InFIG.5, for the convenience of description, signals of two reference lines RL1 and RL2 and the first touch gate lines TG1-1, TG1-2, . . . , TG1-nand the second touch gate lines TG2-1, TG2-2, . . . , TG2-nincluded in each of the plurality of touch electrode blocks T1, T2, T3, T4, . . . , Tn are schematically illustrated. Here, TG1-nand TG2-ndenote the first touch gate line TG1 and the second touch gate line TG2 of the n-th touch electrode block Tn, respectively. Referring toFIG.5, the display device100is time-divisionally driven in the display period and the touch period in one frame. Here, the touch period includes a plurality of touch periods TP1, TP2, . . . , TPn. Specifically, the n-th touch period TPn may refer to a period in which signals are applied to a first touch gate line TG1-nand a second touch gate line TG2-nof the n-th touch electrode block Tn. First, in the display period, the same reference voltage Vref is applied to the plurality of sub pixels SP in the first reference line RL1, the second reference line RL2, and the third reference lines RL3-1, RL3-2, RL3-3, RL3-4. Further, even though not illustrated, during the display period, the gate signal may be applied to the plurality of gate lines GL. At this time, the first touch gate lines TG1-1, TG1-2, . . . , TG1-nand the second touch gate lines TG2-1, TG2-2, . . . , TG2-nare wiring lines to apply the touch gate signal in the touch period. Therefore, during the display period, a low level signal is input so that the plurality of transistors for touching TC1, TC2, TS1, TS2 is turned off. During the touch period, a first touching voltage V+is applied to the first reference line RL1 and a second touching voltage V−is applied to the second reference line RL2. Here, the first touching voltage V+is a sum (Vref+V0) of the reference voltage Vref and a predetermined voltage V0 and the second touching voltage V−is a difference (Vref−V0) of the reference voltage Vref and a predetermined voltage V0. 
At this time, the predetermined voltage V0 is an arbitrary voltage value and may be freely set depending on the design. Further, the first touch gate signal and the second touch gate signal may be sequentially applied to the first touch gate lines TG1-1, TG1-2, . . . , TG1-nand the second touch gate lines TG2-1, TG2-2, . . . , TG2-n. At this time, during each of the plurality of touch periods TP1, TP2, . . . , TPn, the first touch gate signal and the second touch gate signal are inverted signals. For example, during the first touch period TP1, the first touch gate signal and the second touch gate signal applied to the first touch gate line TG1-1 and the second touch gate line TG2-1 of the first touch electrode block T1 may be inverted signals from each other. During the remaining touch period excluding the first touch period TP1, a signal for turning off the touching transistors TC1, TC2, TS1, TS2 is applied to both the first touch gate line TG1-1 and the second touch gate line TG2-1 of the first touch electrode block T1. To be more specific, a high level of first touch gate signal is applied to the first touch gate line TG1-1 of the first touch electrode block T1. Therefore, the first charging transistor TC1 and the second charging transistor TC2 of the first touch electrode block T1 are turned on. The first touching voltage V+and the second touching voltage V−are charged in the first touch electrode TE1 and the second touch electrode TE2 through the first reference line RL1 and the second reference line RL2, respectively. Next, a high level of second touch gate signal is applied to the second touch gate line TG2-1 of the first touch electrode block T1. Therefore, the first sensing transistor TS1 and the second sensing transistor TS2 of the first touch electrode block T1 are turned on. The touch sensing signal from the first touch electrode TE1 is sensed by the third reference line RL3-1 connected to the first sensing transistor TS1 and the touch sensing signal from the second touch electrode TE2 is sensed by the third reference line RL3-4 connected to the second sensing transistor TS2. At this time, a low level of first touch gate signal is applied to the first touch gate line TG1-1 so that the first charging transistor TC1 and the second charging transistor TC2 are turned off. After sensing the signal in the first touch electrode block T1, a high level of touch gate signal is sequentially applied to the first touch gate line TG1-2 and the second touch gate line TG2-2 of the second touch electrode block T2. This operation may be sequentially performed to the n-th touch electrode block Tn. However, even though inFIG.5, it is described that the sensing in the touch electrode blocks T1, T2, T3, T4, . . . , Tn is sequentially performed, the present disclosure is not limited thereto. That is, the touch sensing may be simultaneously performed in the plurality of touch electrode blocks T1, T2, T3, T4, . . . , Tn. If the touch is performed in an area corresponding to a specific touch electrode block, voltages of the first touch electrode TE1 and the second touch electrode TE2 may vary. That is, a predetermined first touching voltage V+and second touching voltage V−are applied to the first touch electrode TE1 and the second touch electrode TE2 so that when the touch is not performed, the sensed voltage value may be within a predetermined range at all times. 
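Purely as an illustration of the charge-then-sense sequence described above, the following Python sketch models one touch period. The class, function, and variable names, as well as the numeric values used for Vref and V0, are assumptions introduced here for explanation only and do not correspond to any particular driving circuit of the present disclosure; the sketch merely mirrors the order of operations in which the first touch gate signal charges the touch electrodes to Vref+V0 and Vref−V0 through the first and second reference lines, after which the second touch gate signal reads the electrodes back through the assigned third reference lines.

    # Minimal sketch of the block-by-block touch drive described above.
    # All names and values are hypothetical and used for illustration only.
    VREF = 1.0  # reference voltage Vref (arbitrary units)
    V0 = 0.5    # predetermined voltage V0 (arbitrary units)

    class TouchBlock:
        """Hypothetical model of one touch electrode block Tn."""
        def __init__(self, name):
            self.name = name
            self.te1 = VREF  # voltage held on the first touch electrode TE1
            self.te2 = VREF  # voltage held on the second touch electrode TE2

    def drive_touch_period(blocks):
        """Charge and then sense each touch electrode block in sequence."""
        readings = {}
        for block in blocks:
            # First touch gate signal high: TC1/TC2 turn on, so the first and
            # second touching voltages are charged through RL1 and RL2.
            block.te1 = VREF + V0  # first touching voltage V+
            block.te2 = VREF - V0  # second touching voltage V-
            # First touch gate signal low, second touch gate signal high:
            # TS1/TS2 turn on and the electrode voltages are read back through
            # the third reference lines assigned to this block.
            readings[block.name] = (block.te1, block.te2)
        return readings

    blocks = [TouchBlock("T{}".format(i)) for i in range(1, 5)]
    print(drive_touch_period(blocks))

In an actual panel the values read back during the sensing step would be perturbed by a touch rather than returned unchanged; the sketch only fixes the ordering of the charging and sensing steps.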
When a finger of the user is located to be adjacent to the first touch electrode TE1 or the second touch electrode TE2 of a specific touch electrode block, a quantity of electric charges of the first touch electrode TE1 and the second touch electrode TE2 is changed. Specifically, when the voltage value sensed from the first touch electrode TE1 and the second touch electrode TE2 deviates from the predetermined range, it is determined that the touch operation is performed in an area corresponding to the specific touch electrode block. FIG.6Ais an enlarged plan view of a portion B ofFIG.4according to one embodiment.FIG.6Bis a cross-sectional view taken along VIb-VIb′ ofFIG.6Aaccording to one embodiment.FIG.6Cis a cross-sectional view taken along VIc-VIc′ ofFIG.6A.FIG.6Ais an enlarged plan view of a red sub pixel SPR, a white sub pixel SPW, a blue sub pixel SPB, and a green sub pixel SPG which configure one pixel. Referring toFIGS.6A to6C, the display device100according to the exemplary embodiment of the present disclosure includes a substrate110, a first touch electrode TE1, a second touch electrode TE2, a first transistor TR1, a second transistor TR2, a third transistor TR3, a storage capacitor SC, a first charging transistor TC1, a second charging transistor TC2, a first sensing transistor TS1, a second sensing transistor TS2, a light emitting diode ED, a gate line GL, a sensing line SL, a first touch gate line TG1, a second touch gate line TG2, a data line DL, a reference line RL, a high potential power line VDD, and a color filter CF. InFIGS.6A to6C, among the plurality of transistors for touching TC1, TC2, TS1, TS2, the first charging transistor TC1 is illustrated and among the touch electrodes TE1 and TE2, the first sub electrode121and the second sub electrode123are illustrated. Referring toFIG.6A, the plurality of sub pixels SP includes a red sub pixel SPR, a green sub pixel SPG, a blue sub pixel SPB, and a white sub pixel SPW. For example, the red sub pixel SPR, the white sub pixel SPW, the blue sub pixel SPB, and the green sub pixel SPG may be sequentially disposed along a first direction. However, the placement order of the plurality of sub pixels SP is not limited thereto. Each of the plurality of sub pixels SP includes an emission area EA and a circuit area CA. The emission area EA is an area where one color light is independently emitted and the light emitting diode ED may be disposed therein. Specifically, an area which is exposed from the bank116and allows light emitted from the light emitting diode ED to travel to the outside may be defined as the emission area EA. For example, as illustrated inFIGS.6Band6C, the anode AN is exposed by the bank116so that an area where the anode AN and the emission layer EL are in direct contact with each other may be an emission area EA. The circuit area CA is an area excluding the emission area EA, in which a driving circuit for driving the plurality of light emitting diodes ED and a plurality of wiring lines which transmit various signals to the driving circuit may be disposed. The circuit area CA in which the driving circuit, the plurality of wiring lines, and the bank116are disposed may be a non-emission area.
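Tying this back to the touch determination described above, the decision for one touch electrode block reduces to checking whether a sensed electrode voltage has left the range observed when no touch is made. The following minimal sketch illustrates only that comparison; the margin value and the function name are hypothetical and are not taken from the present disclosure.

    # Illustrative threshold check only; the margin is a hypothetical value.
    TOUCH_MARGIN = 0.05  # assumed half-width of the no-touch voltage range

    def is_touched(sensed, expected, margin=TOUCH_MARGIN):
        """Return True when a sensed voltage deviates from the charged value."""
        return abs(sensed - expected) > margin

    # Example: the voltage read back from TE1 of one block after a finger
    # changes the quantity of electric charges on the electrode.
    print(is_touched(sensed=1.38, expected=1.5))  # True: touch detected
    print(is_touched(sensed=1.49, expected=1.5))  # False: no touch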
For example, in the circuit area CA, a driving circuit including the first transistor TR1, the second transistor TR2, the third transistor TR3, the storage capacitor SC, the first charging transistor TC1, the second charging transistor TC2, the first sensing transistor TS1, and the second sensing transistor TS2, a plurality of high potential power lines VDD, a plurality of data lines DL, a plurality of reference lines RL, a plurality of gate lines GL, a sensing line SL, a plurality of touch gate lines TG1 and TG2, and a bank116are disposed. Referring toFIGS.6A to6C, the first touch electrode TE1 and the second touch electrode TE2 are disposed on the substrate110. As shown inFIGS.6A to6C, the first touch electrode TE1 and the second touch electrode TE2 are closer to the substrate than the light emitting diode ED. Specifically, the first sub electrode121of the first touch electrode TE1 and the second sub electrode123of the second touch electrode TE2 may be disposed to extend along the first direction in the plurality of sub pixels SP. The first sub electrode121includes a first main electrode unit121aoverlapping the anode AN in the emission area EA of each of the plurality of sub pixels SP and a first connection unit121bwhich connects to the first main electrode unit121a. The first main electrode unit121amay be formed to have a larger area than the first connection unit121b. The first connection unit121bmay be disposed in the circuit area CA between the emission areas EA of adjacent sub pixels SP to connect a first main electrode unit121aof a first sub pixel to another first main electrode unit121aof an adjacent second sub pixel SP. The second sub electrode123includes a second main electrode unit123aoverlapping the anode AN in the emission area EA of each of the plurality of sub pixels SP and a second connection unit123bwhich connects to the second main electrode unit123a. The second main electrode unit123amay be formed to have a larger area than the second connection unit123b. The second connection unit123bmay be disposed in the circuit area CA between the emission areas EA of adjacent sub pixels SP. Further, the second sub electrode123may further include an extension unit123cwhich extends from the second connection unit123bto be electrically connected to the second charging transistor TC2, which will be described below with reference toFIGS.7A to7C. In the emission area EA of one sub pixel SP, both the first sub electrode121and the second sub electrode123are disposed. Specifically, as illustrated inFIG.6A, the first sub electrode121and the second sub electrode123are disposed above and below each other in the emission area EA, respectively. However, the positions of the first sub electrode121and the second sub electrode123are not limited thereto. The first touch electrode TE1 and the second touch electrode TE2 are formed of a transparent conductive material. Therefore, light emitted from the light emitting diode ED may pass through the first touch electrode TE1 and the second touch electrode TE2 to be easily emitted. The first touch electrode TE1 and the second touch electrode TE2 may be configured by a transparent conducting oxide (TCO) such as indium tin oxide (ITO), indium zinc oxide (IZO), or indium zinc tin oxide (ITZO) or a transparent oxide semiconductor such as indium gallium zinc oxide (IGZO), indium gallium oxide (IGO), or indium tin zinc oxide (ITZO), but is not limited thereto.
In the meantime, even though it is not illustrated, a buffer layer may be disposed between the substrate110and the touch electrodes TE1 and TE2. The buffer layer may reduce permeation of moisture or impurities through the substrate110. The buffer layer may be configured by a single layer or a double layer of silicon oxide SiOx or silicon nitride SiNx, but is not limited thereto. Further, the buffer layer may be omitted depending on a type of the substrate110or a type of the transistor, but is not limited thereto. Referring toFIGS.6B and6C, a first insulating layer111is disposed on the touch electrodes TE1 and TE2. The first insulating layer111is a layer for insulating components disposed above and below the first insulating layer111and may be formed of an insulating material. For example, the first insulating layer111may be configured by a single layer or a double layer of silicon oxide SiOx or silicon nitride SiNx, but is not limited thereto. Referring toFIGS.6A to6C, a plurality of high potential power lines VDD, a plurality of data lines DL, a plurality of reference lines RL, and a light shielding layer LS are disposed on the first insulating layer111. The plurality of high potential power lines VDD, the plurality of data lines DL, the plurality of reference lines RL, and the light shielding layer LS are disposed on the same layer of the first insulating layer111and formed of the same conductive material. For example, the plurality of high potential power lines VDD, the plurality of data lines DL, the plurality of reference lines RL, and the light shielding layer LS may be configured by a conductive material such as copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but are not limited thereto. The plurality of high potential power lines VDD are wiring lines which transmit the high potential power signal to each of the plurality of sub pixels SP. The plurality of high potential power lines VDD extends in the second direction between the plurality of sub pixels SP. Two sub pixels SP which are adjacent to each other in the first direction may share one high potential power line VDD among the plurality of high potential power lines VDD. For example, one high potential power line VDD is disposed at a left side of the red sub pixel SPR to supply a high potential power voltage to the first transistor TR1 of each of the red sub pixel SPR and the white sub pixel SPW. The other high potential power line VDD is disposed at a right side of the green sub pixel SPG to supply a high potential power voltage to the first transistor TR1 of each of the blue sub pixel SPB and the green sub pixel SPG. The plurality of data lines DL extends between the plurality of sub pixels SP in the second direction to transmit a data voltage to each of the plurality of sub pixels SP. The plurality of data lines DL includes a first data line DL1, a second data line DL2, a third data line DL3, and a fourth data line DL4. The first data line DL1 is disposed between the red sub pixel SPR and the white sub pixel SPW to transmit a data voltage to the second transistor TR2 of the red sub pixel SPR. The second data line DL2 is disposed between the first data line DL1 and the white sub pixel SPW to transmit the data voltage to the second transistor TR2 of the white sub pixel SPW. The third data line DL3 is disposed between the blue sub pixel SPB and the green sub pixel SPG to transmit a data voltage to the second transistor TR2 of the blue sub pixel SPB. 
The fourth data line DL4 is disposed between the third data line DL3 and the green sub pixel SPG to transmit the data voltage to the second transistor TR2 of the green sub pixel SPG. The plurality of reference lines RL extends between the plurality of sub pixels SP in the second direction to transmit the reference voltage Vref to each of the plurality of sub pixels SP. The plurality of sub pixels SP which forms one pixel may share one reference line RL. For example, one reference line RL is disposed between the white sub pixel SPW and the blue sub pixel SPB to transmit the reference voltage Vref to a third transistor TR3 of each of the red sub pixel SPR, the white sub pixel SPW, the blue sub pixel SPB, and the green sub pixel SPG. The light shielding layer LS is disposed so as to overlap the first active layer ACT1 of at least the first transistor TR1 among the plurality of transistors TR1, TR2, and TR3 to block light incident onto the first active layer ACT1. If light is irradiated onto the first active layer ACT1, a leakage current is generated so that the reliability of the first transistor TR1 which is a driving transistor may be degraded. At this time, if the light shielding layer LS configured by an opaque conductive material such as copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof is disposed so as to overlap the first active layer ACT1, light incident from the lower portion of the substrate110onto the first active layer ACT1 may be blocked or at least reduced. Accordingly, the reliability of the first transistor TR1 may be improved. However, it is not limited thereto and the light shielding layer LS may be disposed so as to overlap the second active layer ACT2 of the second transistor TR2 and the third active layer ACT3 of the third transistor TR3. In the meantime, even though in the drawing, it is illustrated that the light shielding layer LS is a single layer, the light shielding layer LS may be formed as a plurality of layers. For example, the light shielding layer LS may be formed of a plurality of layers disposed so as to overlap each other with at least one of the first insulating layer111, a second insulating layer112, a gate insulating layer113, and a passivation layer114therebetween. The second insulating layer112is disposed on the plurality of high potential power lines VDD, the plurality of data lines DL, the plurality of reference lines RL, and the light shielding layer LS. The second insulating layer112is a layer for insulating components disposed above and below the second insulating layer112and may be formed of an insulating material. For example, the second insulating layer112may be configured by a single layer or a double layer of silicon oxide SiOx or silicon nitride SiNx, but is not limited thereto. Referring toFIGS.6A to6C, in each of the plurality of sub pixels SP, the first transistor TR1, the second transistor TR2, the third transistor TR3, and the storage capacitor SC are disposed on the second insulating layer112. First, the first transistor TR1 includes a first active layer ACT1, a first gate electrode GE1, a first source electrode SE1, and a first drain electrode DE1. The first active layer ACT1 is disposed on the second insulating layer112. The first active layer ACT1 may be formed of a semiconductor material such as an oxide semiconductor, amorphous silicon, or polysilicon, but is not limited thereto.
For example, when the first active layer ACT1 is formed of an oxide semiconductor, the first active layer ACT1 is formed by a channel region, a source region, and a drain region and the source region and the drain region may be conductive regions, but are not limited thereto. The gate insulating layer113is disposed on the first active layer ACT1. The gate insulating layer113is a layer for electrically insulating the first gate electrode GE1 from the first active layer ACT1 and may be formed of an insulating material. For example, the gate insulating layer113may be configured by a single layer or a double layer of silicon oxide SiOx or silicon nitride SiNx, but is not limited thereto. The first gate electrode GE1 is disposed on the gate insulating layer113so as to overlap the first active layer ACT1. The first gate electrode GE1 may be configured by a conductive material, such as copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but is not limited thereto. A first source electrode SE1 and a first drain electrode DE1 which are spaced apart from each other are disposed on the gate insulating layer113. The first source electrode SE1 and the first drain electrode DE1 may be electrically connected to the first active layer ACT1 through a contact hole formed on the gate insulating layer113. The first source electrode SE1 and the first drain electrode DE1 may be disposed on the same layer as the first gate electrode GE1 to be formed of the same conductive material, but are not limited thereto. For example, the first source electrode SE1 and the first drain electrode DE1 may be configured by copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but are not limited thereto. The first drain electrode DE1 is electrically connected to the high potential power lines VDD. For example, the first drain electrodes DE1 of the red sub pixel SPR and the white sub pixel SPW may be electrically connected to the high potential power line VDD at the left side of the red sub pixel SPR. The first drain electrodes DE1 of the blue sub pixel SPB and the green sub pixel SPG may be electrically connected to the high potential power line VDD at the right side of the green sub pixel SPG. At this time, an auxiliary high potential power line VDDa may be further disposed to electrically connect the first drain electrode DE1 with the high potential power line VDD. One end of the auxiliary high potential power line VDDa is electrically connected to the high potential power line VDD and the other end is electrically connected to the first drain electrode DE1 of each of the plurality of sub pixels SP. For example, when the auxiliary high potential power line VDDa is formed of the same material on the same layer as the first drain electrode DE1, one end of the auxiliary high potential power line VDDa is electrically connected to the high potential power line VDD through a contact hole formed in the gate insulating layer113and the second insulating layer112. The other end of the auxiliary high potential power line VDDa extends to the first drain electrode DE1 to be integrally formed with the first drain electrode DE1. At this time, the first drain electrode DE1 of the red sub pixel SPR and the first drain electrode DE1 of the white sub pixel SPW which are electrically connected to the same high potential power line VDD may be connected to the same auxiliary high potential power line VDDa.
The first drain electrode DE1 of the blue sub pixel SPB and the first drain electrode DE1 of the green sub pixel SPG may also be connected to the same auxiliary high potential power line VDDa. However, the first drain electrode DE1 and the high potential power line VDD may be electrically connected by another method, but it is not limited thereto. The first source electrode SE1 may be electrically connected to the light shielding layer LS through a contact hole formed on the gate insulating layer113and the second insulating layer112. Further, a part of the first active layer ACT1 connected to the first source electrode SE1 may be electrically connected to the light shielding layer LS through a contact hole formed in the second insulating layer112. If the light shielding layer LS is floated, a threshold voltage of the first transistor TR1 varies to affect the driving of the display device100. Accordingly, the light shielding layer LS is electrically connected to the first source electrode SE1 to apply a voltage to the light shielding layer LS and it does not affect the driving of the first transistor TR1. However, in the present specification, even though it has been described that both the first active layer ACT1 and the first source electrode SE1 are in contact with the light shielding layer LS, any one of the first source electrode SE1 and the first active layer ACT1 may be in direct contact with the light shielding layer LS. However, it is not limited thereto. In the meantime, even though inFIG.6B, it is illustrated that the gate insulating layer113is formed on the entire substrate110, the gate insulating layer113may be patterned so as to overlap the first gate electrode GE1, the first source electrode SE1, and the first drain electrode DE1, but is not limited thereto. The second transistor TR2 includes a second active layer ACT2, a second gate electrode GE2, a second source electrode SE2, and a second drain electrode DE2. The second active layer ACT2 is disposed on the second insulating layer112. The second active layer ACT2 may be formed of a semiconductor material such as an oxide semiconductor, amorphous silicon, or polysilicon, but is not limited thereto. For example, when the second active layer ACT2 is formed of an oxide semiconductor, the second active layer ACT2 may be formed by a channel region, a source region, and a drain region and the source region and the drain region may be conductive regions, but are not limited thereto. The second source electrode SE2 is disposed on the second insulating layer112. The second source electrode SE2 may be integrally formed with the second active layer ACT2 to be electrically connected to each other. For example, the semiconductor material is formed on the second insulating layer112and a part of the semiconductor material is conducted to form the second source electrode SE2. Therefore, a part of the semiconductor material which is not conducted may become a second active layer ACT2 and a conducted part becomes a second source electrode SE2. However, the second active layer ACT2 and the second source electrode SE2 are separately formed, but it is not limited thereto. The second source electrode SE2 is electrically connected to the first gate electrode GE1 of the first transistor TR1. The first gate electrode GE1 may be electrically connected to the second source electrode SE2 through a contact hole formed on the gate insulating layer113. 
Accordingly, the first transistor TR1 may be turned on or turned off by a signal from the second transistor TR2. The gate insulating layer113is disposed on the second active layer ACT2 and the second source electrode SE2 and the second drain electrode DE2 and the second gate electrode GE2 are disposed on the gate insulating layer113. The second gate electrode GE2 is disposed on the gate insulating layer113so as to overlap the second active layer ACT2. The second gate electrode GE2 may be electrically connected to the gate line GL and the second transistor TR2 may be turned on or turned off based on the gate voltage transmitted to the second gate electrode GE2. The second gate electrode GE2 may be configured by a conductive material such as copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but is not limited thereto. In the meantime, the second gate electrode GE2 extends from the gate line GL. That is, the second gate electrode GE2 is integrally formed with the gate line GL and the second gate electrode GE2 and the gate line GL may be formed of the same material. For example, the gate line GL may be configured by copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but is not limited thereto. The gate line GL is a wiring line which transmits the gate voltage to each of the plurality of sub pixels SP and intersects the circuit area CA of the plurality of sub pixels SP to extend in the first direction. The gate line GL extends in the first direction to intersect the plurality of high potential power lines VDD, the plurality of data lines DL, and the plurality of reference lines RL extending in the second direction. The second drain electrode DE2 is disposed on the gate insulating layer113. The second drain electrode DE2 is electrically connected to the second active layer ACT2 through a contact hole formed in the gate insulating layer113and is electrically connected to one of the plurality of data lines DL through a contact hole formed in the gate insulating layer113and the second insulating layer112, simultaneously. For example, the second drain electrode DE2 of the red sub pixel SPR is electrically connected to the first data line DL1 and the second drain electrode DE2 of the white sub pixel SPW is electrically connected to the second data line DL2. For example, the second drain electrode DE2 of the blue sub pixel SPB is electrically connected to the third data line DL3 and the second drain electrode DE2 of the green sub pixel SPG is electrically connected to the fourth data line DL4. The second drain electrode DE2 may be configured by a conductive material, such as copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but is not limited thereto. The third transistor TR3 includes a third active layer ACT3, a third gate electrode GE3, a third source electrode SE3, and a third drain electrode DE3. The third active layer ACT3 is disposed on the second insulating layer112. The third active layer ACT3 may be formed of a semiconductor material such as an oxide semiconductor, amorphous silicon, or polysilicon, but is not limited thereto. For example, when the third active layer ACT3 is formed of an oxide semiconductor, the third active layer ACT3 is formed by a channel region, a source region, and a drain region and the source region and the drain region may be conductive regions, but are not limited thereto. 
The gate insulating layer113is disposed on the third active layer ACT3 and the third gate electrode GE3, the third source electrode SE3, and the third drain electrode DE3 are disposed on the gate insulating layer113. The third gate electrode GE3 is disposed on the gate insulating layer113so as to overlap the third active layer ACT3. The third gate electrode GE3 is electrically connected to the sensing line SL and the third transistor TR3 may be turned on or turned off based on the sensing voltage transmitted to the third transistor TR3. The third gate electrode GE3 may be configured by a conductive material such as copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but is not limited thereto. In the meantime, the third gate electrode GE3 extends from the sensing line SL. That is, the third gate electrode GE3 is integrally formed with the sensing line SL and the third gate electrode GE3 and the sensing line SL may be formed of the same material. For example, the sensing line SL may be configured by copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but is not limited thereto. The sensing line SL transmits a sensing voltage to each of the plurality of sub pixels SP and extends between the plurality of sub pixels SP in the first direction. For example, the sensing line SL extends at a boundary between the plurality of sub pixels SP in the first direction to intersect the plurality of high potential power lines VDD, the plurality of data lines DL, and the plurality of reference lines RL extending in the second direction. The third source electrode SE3 may be electrically connected to the third active layer ACT3 through a contact hole formed on the gate insulating layer113. The third source electrode SE3 may be configured by a conductive material such as copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but is not limited thereto. Further, a part of the third active layer ACT3 which is in contact with the third source electrode SE3 may be electrically connected to the light shielding layer LS through a contact hole formed in the second insulating layer112. That is, the third source electrode SE3 may be electrically connected to the light shielding layer LS with the third active layer ACT3 therebetween. Therefore, the third source electrode SE3 and the first source electrode SE1 may be electrically connected to each other through the light shielding layer LS. The third drain electrode DE3 may be electrically connected to the third active layer ACT3 through a contact hole formed on the gate insulating layer113. The third drain electrode DE3 may be configured by a conductive material such as copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but is not limited thereto. The third drain electrode DE3 is electrically connected to the reference line RL. For example, the third drain electrodes DE3 of the red sub pixel SPR, the white sub pixel SPW, the blue sub pixel SPB, and the green sub pixel SPG may be electrically connected to the same reference line RL. That is, the plurality of sub pixels SP which forms one pixel may share one reference line RL. At this time, an auxiliary reference line RLa may be disposed to connect the reference line RL extending in the second direction to the plurality of sub pixels SP which is disposed in parallel along the first direction. 
The auxiliary reference line RLa may be disposed on the same layer as the gate line GL, the sensing line SL, the first touch gate line TG1, and the second touch gate line TG2. The auxiliary reference line RLa extends in the first direction to electrically connect the reference line RL and the third drain electrode DE3 of each of the plurality of sub pixels SP. One end of the auxiliary reference line RLa is electrically connected to the reference line RL through a contact hole formed in the second insulating layer112and the gate insulating layer113. The other end of the auxiliary reference line RLa is electrically connected to the third drain electrode DE3 of each of the plurality of sub pixels SP. For example, the auxiliary reference line RLa may be integrally formed with the third drain electrode DE3 of each of the plurality of sub pixels SP. Therefore, the reference voltage Vref from the reference line RL may be transmitted to the third drain electrode DE3 through the auxiliary reference line RLa. However, the auxiliary reference line RLa may be separately formed from the third drain electrode DE3, but is not limited thereto. The storage capacitor SC is disposed in the circuit area of the plurality of sub pixels SP. The storage capacitor SC may store a voltage between the first gate electrode GE1 and the first source electrode SE1 of the first transistor TR1 to allow the light emitting diode ED to continuously maintain a constant state for one frame. The storage capacitor SC includes a first capacitor electrode SC1 and a second capacitor electrode SC2. In each of the plurality of sub pixels SP, the first capacitor electrode SC1 is disposed between the first insulating layer111and the second insulating layer112. The first capacitor electrode SC1 is integrally formed with the light shielding layer LS and is electrically connected to the first source electrode SE1 through the light shielding layer LS. The second insulating layer112is disposed on the first capacitor electrode SC1 and the second capacitor electrode SC2 is disposed on the second insulating layer112. The second capacitor electrode SC2 may be disposed so as to overlap the first capacitor electrode SC1. The second capacitor electrode SC2 is integrally formed with the second source electrode SE2 to be electrically connected to the second source electrode SE2 and the first gate electrode GE1. For example, the semiconductor material is formed on the second insulating layer112and a part of the semiconductor material is conducted to form the second source electrode SE2 and the second capacitor electrode SC2. Accordingly, a part of the semiconductor material which is not conducted functions as a second active layer ACT2 and the conducted part functions as a second source electrode SE2 and the second capacitor electrode SC2. As described above, the first gate electrode GE1 is electrically connected to the second source electrode SE2 through the contact hole formed in the gate insulating layer113. Accordingly, the second capacitor electrode SC2 is integrally formed with the second source electrode SE2 to be electrically connected to the second source electrode SE2 and the first gate electrode GE1. In summary, the first capacitor electrode SC1 of the storage capacitor SC is integrally formed with the light shielding layer LS to be electrically connected to the light shielding layer LS, the first source electrode SE1, and the third source electrode SE3.
Accordingly, the second capacitor electrode SC2 is integrally formed with the second source electrode SE2 and the second active layer ACT2 to be electrically connected to the second source electrode SE2 and the first gate electrode GE1. Accordingly, the first capacitor electrode SC1 and the second capacitor electrode SC2 which overlap the second insulating layer112therebetween constantly maintain the voltage of the first gate electrode GE1 and the first source electrode SE1 of the first transistor TR1 while the light emitting diode ED emits light. By doing this, the constant state of the light emitting diode ED is maintained. Referring toFIGS.6A and6C, the first charging transistor TC1 is disposed on the second insulating layer112. The first charging transistor TC1 may be disposed in a boundary area of the plurality of sub pixels SP which is adjacent in the second direction. For example, the first charging transistor TC1 may be disposed in any one sub pixel SP in the boundary area of the plurality of sub pixels SP which is adjacent in the second direction. Specifically, one of the plurality of transistors for touching TC1, TC2, TS1, TS2 is disposed in the boundary area of plurality of sub pixels SP which is adjacent in the second direction. In the meantime, in the present disclosure, it is described that the first charging transistor TC1 is disposed in the boundary area of the red sub pixel SPR, but is not limited thereto. That is, the first charging transistor TC1 may be disposed in the boundary area of the adjacent white sub pixels SPW, the boundary area of the adjacent blue sub pixels SPB, or the boundary area of the adjacent green sub pixels SPG. The first charging transistor TC1 includes a fourth active layer ACT4, a fourth gate electrode GE4, a fourth source electrode SE4, and a fourth drain electrode DE4. The fourth active layer ACT4 is disposed on the second insulating layer112. The fourth active layer ACT4 may be formed of a semiconductor material such as an oxide semiconductor, amorphous silicon, or polysilicon, but is not limited thereto. For example, when the fourth active layer ACT4 is formed of an oxide semiconductor, the fourth active layer ACT4 is formed by a channel region, a source region, and a drain region and the source region and the drain region may be conductive regions, but are not limited thereto. The gate insulating layer113is disposed on the fourth active layer ACT4 and the fourth gate electrode GE4, the fourth source electrode SE4, and the fourth drain electrode DE4 are disposed on the gate insulating layer113. The fourth gate electrode GE4 is disposed on the gate insulating layer113so as to overlap the fourth active layer ACT4. The fourth gate electrode GE4 may be electrically connected to a first touch gate line TG1. Therefore, the first charging transistor TC1 is turned on or turned off based on the first touch gate signal transmitted to the fourth gate electrode GE4. The fourth gate electrode GE4 may be configured by a conductive material such as copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but is not limited thereto. In the meantime, the fourth gate electrode GE4 may extend from the first touch gate line TG1. That is, the fourth gate electrode GE4 is integrally formed with the first touch gate line TG1 and the fourth gate electrode GE4 and the first touch gate line TG1 may be formed of the same material. 
For example, the first touch gate line TG1 may be configured by copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but is not limited thereto. The first touch gate line TG1 is a wiring line which transmits the first touch gate voltage to each of the plurality of first charging transistors TC1. The first touch gate line TG1 is formed on the same layer, by the same process, of the same material as the plurality of gate lines GL and the plurality of sensing lines SL. The first touch gate line TG1 extends in the first direction while intersecting the circuit area CA of the plurality of sub pixels SP. Further, the first touch gate line TG1 may be disposed in the boundary area of the plurality of sub pixels SP. Specifically, as illustrated inFIG.6A, the first touch gate line TG1 may extend in the first direction in the boundary area of the plurality of sub pixels SP which is adjacent in the first direction. The first touch gate line TG1 intersects the plurality of high potential power lines VDD, the plurality of data lines DL, and the plurality of reference lines RL extending in the second direction. In the meantime, in the boundary area of the plurality of sub pixels SP, not only the first touch gate line TG1, but also the second touch gate line TG2 is disposed. The second touch gate line TG2 extends in the first direction in the boundary area of the plurality of sub pixels SP which is adjacent in the second direction. At this time, the first touch gate lines TG1 and the second touch gate lines TG2 are alternately disposed one by one in the boundary of the plurality of sub pixels SP. The fourth source electrode SE4 may be electrically connected to the fourth active layer ACT4 through a contact hole formed on the gate insulating layer113. The fourth source electrode SE4 may be configured by a conductive material, such as copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but is not limited thereto. The fourth source electrode SE4 is electrically connected to the first touch electrode TE1. For example, the fourth source electrode SE4 is electrically connected to the first main electrode unit121aof the first sub electrode121. That is, the fourth source electrode SE4 is connected to the first main electrode unit121athrough the contact hole formed in the first insulating layer111, the second insulating layer112, and the gate insulating layer113. Therefore, the first touching voltage V+which is supplied to the first charging transistor TC1 may be charged in the first touch electrode TE1. In the meantime, in the present disclosure, it has been described that the fourth source electrode SE4 is connected to the first main electrode unit121aof the first sub electrode121, but is not limited thereto. The fourth drain electrode DE4 may be electrically connected to the fourth active layer ACT4 through a contact hole formed on the gate insulating layer113. The fourth drain electrode DE4 may be configured by a conductive material, such as copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but is not limited thereto. The fourth drain electrode DE4 is electrically connected to the reference line RL. Specifically, the fourth drain electrode DE4 is electrically connected to the auxiliary reference line RLa. Specifically, as illustrated inFIG.6A, the fourth drain electrode DE4 extends from the third drain electrode DE3 of the red sub pixel SPR.
Therefore, the fourth drain electrode DE4 may be integrally formed with the auxiliary reference line RLa and the third drain electrode DE3 of the red sub pixel SPR. Therefore, the first touching voltage V+ supplied from the reference line RL may be transmitted to the fourth drain electrode DE4 through the auxiliary reference line RLa. Even though not illustrated inFIGS.6A to6C, the first sensing transistor TS1 and the first charging transistor TC1 may have the same structure. That is, all the plurality of transistors for touching TC1 and TS1 connected to the first touch electrode TE1 may have the same structure. However, the first sensing transistor TS1 may be connected to the second touch gate line TG2, rather than the first touch gate line TG1. Specifically, the sixth gate electrode of the first sensing transistor TS1 is connected to the second touch gate line TG2 and the sixth drain electrode is connected to one of the plurality of first main electrode units121aof the first sub electrode121, and the sixth source electrode is connected to one of the plurality of reference lines RL. The passivation layer114is disposed on the first transistor TR1, the second transistor TR2, the third transistor TR3, the storage capacitor SC, and the first charging transistor TC1. The passivation layer114is an insulating layer for protecting components below the passivation layer114. For example, the passivation layer114may be configured by a single layer or a double layer of silicon oxide SiOx or silicon nitride SiNx, but is not limited thereto. Further, the passivation layer114may be omitted depending on the exemplary embodiment. A plurality of color filters CF may be disposed in the emission area of each of the plurality of sub pixels SP on the passivation layer114. As described above, the display device100according to the exemplary embodiment of the present disclosure is a bottom emission type in which light emitted from the light emitting diode ED is directed to the lower portion of the light emitting diode ED, toward the substrate110. Accordingly, the plurality of color filters CF may be disposed below the light emitting diode ED. That is, the plurality of color filters CF may be disposed between the light emitting diode ED and the plurality of touch electrodes TE1 and TE2. Light emitted from the light emitting diode ED passes through the plurality of color filters CF and is implemented as various colors of light. In the meantime, a separate color filter CF is not disposed in the white sub pixel SPW and light emitted from the light emitting diode ED is emitted as it is. The plurality of color filters CF may include a red color filter, a blue color filter, and a green color filter. The red color filter is disposed in an emission area EA of a red sub pixel SPR of the plurality of sub pixels SP, the blue color filter is disposed in an emission area EA of the blue sub pixel SPB, and the green color filter is disposed in an emission area EA of the green sub pixel SPG. The planarization layer115is disposed on the passivation layer114and the plurality of color filters CF.
The planarization layer115is an insulating layer which planarizes an upper portion of the substrate110on which the first transistor TR1, the second transistor TR2, the third transistor TR3, the storage capacitor SC, the first charging transistor TC1, the plurality of high potential power lines VDD, the plurality of data lines DL, the plurality of reference lines RL, the plurality of gate lines GL, the plurality of sensing lines SL, and the plurality of touch gate lines TG1 and TG2 are disposed. The planarization layer115may be formed of an organic material, and for example, may be configured by a single layer or a double layer of polyimide or photo acryl, but is not limited thereto. The light emitting diode ED is disposed in an emission area EA of each of the plurality of sub pixels SP. The light emitting diode ED is disposed on the planarization layer115in each of the plurality of sub pixels SP. The light emitting diode ED includes an anode AN, an emission layer EL, and a cathode CT. The anode AN is disposed on the planarization layer115in the emission area EA. The anode AN supplies holes to the emission layer EL and thus may be formed of a conductive material having a high work function. For example, the anode AN may be formed of a transparent conductive material such as indium tin oxide (ITO) and indium zinc oxide (IZO), but is not limited thereto. In the meantime, the anode AN extends toward the circuit area CA. A part of the anode AN extends toward the first source electrode SE1 of the circuit area CA from the emission area EA and is electrically connected to the first source electrode SE1 through a contact hole formed in the planarization layer115and the passivation layer114. Accordingly, the anode AN of the light emitting diode ED extends to the circuit area CA to be electrically connected to the first source electrode SE1 of the first transistor TR1 and the second capacitor electrode SC2 of the storage capacitor SC. In the emission area EA and the circuit area CA, the emission layer EL is disposed on the anode AN. The emission layer EL may be formed as one layer over the plurality of sub pixels SP. That is, the emission layers EL of the plurality of sub pixels SP are connected to each other to be integrally formed. The emission layer EL may be configured by one emission layer or may have a structure in which a plurality of emission layers which emits different color light is laminated. The emission layer EL may further include an organic layer such as a hole injection layer, a hole transport layer, an electron transport layer, and an electron injection layer. In the emission area EA and the circuit area CA, the cathode CT is disposed on the emission layer EL. The cathode CT supplies electrons to the emission layer EL and thus may be formed of a conductive material having a low work function. The cathode CT may be formed as one layer over the plurality of sub pixels SP. That is, the cathodes CT of the plurality of sub pixels SP are connected to be integrally formed. For example, the cathode CT may be formed of a transparent conductive material such as indium tin oxide (ITO) and indium zinc oxide (IZO) or ytterbium (Yb) alloy and may further include a metal doping layer, but is not limited thereto. Even though it is not illustrated inFIGS.4and5, the cathode CT of the light emitting diode ED is electrically connected to the low potential power line VSS to be supplied with a low potential power voltage.
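As a reading aid only, the cross-sectional order described with reference toFIGS.6A to6Cmay be restated, from the substrate upward, in the following short Python listing. The listing is a sketch that introduces no layers beyond those already described; the grouping of elements at each level simply follows the description above.

    # Bottom-emission stack of FIGS. 6A to 6C, listed from the substrate upward.
    LAYER_STACK = [
        "substrate 110",
        "first touch electrode TE1 / second touch electrode TE2 (transparent conductive material)",
        "first insulating layer 111",
        "high potential power lines VDD, data lines DL, reference lines RL, light shielding layer LS, first capacitor electrode SC1",
        "second insulating layer 112",
        "transistor active layers ACT1 to ACT4, second source electrode SE2, second capacitor electrode SC2",
        "gate insulating layer 113",
        "gate, source, and drain electrodes, gate line GL, sensing line SL, touch gate lines TG1/TG2, auxiliary lines VDDa/RLa",
        "passivation layer 114",
        "color filters CF (red, blue, and green sub pixels)",
        "planarization layer 115",
        "anode AN (with the bank 116 at its edge)",
        "emission layer EL",
        "cathode CT",
    ]
    for layer in LAYER_STACK:
        print(layer)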
The bank116is disposed between the anode AN and the emission layer EL. The bank116is disposed to overlap the active area AA and cover the edge of the anode AN. The bank116is disposed at the boundary between the sub pixels SP which are adjacent to each other to reduce the mixture of light emitted from the light emitting diode ED of each of the plurality of sub pixels SP. For example, the bank116may be formed of an insulating material such as polyimide, acryl, or benzocyclobutene (BCB) resin, but it is not limited thereto. FIG.7Ais an enlarged plan view of a portion C ofFIG.4according to one embodiment.FIG.7Billustrates a touch electrode inFIG.7Aaccording to one embodiment without the other components of the display device.FIG.7Cis a cross-sectional view taken along VIIc-VIIc′ ofFIG.7Aaccording to one embodiment.FIGS.7A to7Cillustrate an area in which the second charging transistor TC2 is disposed. Therefore, description for the same parts as those inFIGS.6A to6Cwill be omitted. Referring toFIGS.7A to7C, the second charging transistor TC2 is disposed on the second insulating layer112. The second charging transistor TC2 may be disposed in a boundary area of the plurality of sub pixels SP which is adjacent in the second direction. For example, the second charging transistor TC2 may be disposed in any one sub pixel SP in the boundary area of the plurality of sub pixels SP which is adjacent in the second direction. In the meantime, in the present disclosure, it is described that the second charging transistor TC2 is disposed in the boundary area of the red sub pixel SPR, but is not limited thereto. That is, the second charging transistor TC2 may be disposed in the boundary area of the adjacent white sub pixels SPW, the boundary area of the adjacent blue sub pixels SPB, or the boundary area of the adjacent green sub pixels SPG. The second charging transistor TC2 includes a fifth active layer ACT5, a fifth gate electrode GE5, a fifth source electrode SE5, and a fifth drain electrode DE5. The fifth active layer ACT5 is disposed on the second insulating layer112. The fifth active layer ACT5 may be formed of a semiconductor material such as an oxide semiconductor, amorphous silicon, or polysilicon, but is not limited thereto. For example, when the fifth active layer ACT5 is formed of an oxide semiconductor, the fifth active layer ACT5 is formed by a channel region, a source region, and a drain region and the source region and the drain region may be conductive regions, but are not limited thereto. The gate insulating layer113is disposed on the fifth active layer ACT5 and the fifth gate electrode GE5, the fifth source electrode SE5, and the fifth drain electrode DE5 are disposed on the gate insulating layer113. The fifth gate electrode GE5 is disposed on the gate insulating layer113so as to overlap the fifth active layer ACT5. The fifth gate electrode GE5 may be electrically connected to the first touch gate line TG1. Therefore, the second charging transistor TC2 is turned on or turned off based on the first touch gate signal transmitted to the fifth gate electrode GE5. The fifth gate electrode GE5 may be configured by a conductive material, such as copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but is not limited thereto. In the meantime, the fifth gate electrode GE5 may extend from the first touch gate line TG1. 
That is, the fifth gate electrode GE5 is integrally formed with the first touch gate line TG1 and the fifth gate electrode GE5 and the first touch gate line TG1 may be formed of the same material. For example, the first touch gate line TG1 may be configured by copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but is not limited thereto. The fifth source electrode SE5 may be electrically connected to the fifth active layer ACT5 through a contact hole formed on the gate insulating layer113. The fifth source electrode SE5 may be configured by a conductive material such as copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but is not limited thereto. The fifth source electrode SE5 is electrically connected to the second touch electrode TE2. For example, the fifth source electrode SE5 is electrically connected to the extension unit123cof the second sub electrode123. Here, the extension unit123cmay be an area extending from the second connection unit123bof the second sub electrode123. Specifically, as illustrated inFIGS.7A and7B, the second connection unit123bis formed at the left side from the second main electrode unit123adisposed in the red sub pixel SPR and the extension unit123cis formed downwardly from the second connection unit123b. The extension unit123cextends from the second connection unit123bto the boundary portion of the adjacent red sub pixel SPR in the second direction. The extension unit123cis electrically connected to the fifth source electrode SE5 through the contact hole formed in the first insulating layer111, the second insulating layer112, and the gate insulating layer113. That is, one end of the extension unit123cis integrally formed with the second connection unit123band the other end is connected to the fifth source electrode SE5. Therefore, the second touching voltage V−which is supplied to the second charging transistor TC2 may be charged in the second touch electrode TE2. In the meantime, in the present disclosure, it has been described that the fifth source electrode SE5 is connected to the extension unit123cof the second sub electrode123, but is not limited thereto. The fifth drain electrode DE5 may be electrically connected to the fifth active layer ACT5 through a contact hole formed on the gate insulating layer113. The fifth drain electrode DE5 may be configured by a conductive material such as copper (Cu), aluminum (Al), molybdenum (Mo), nickel (Ni), titanium (Ti), chrome (Cr), or an alloy thereof, but is not limited thereto. The fifth drain electrode DE5 is electrically connected to the reference line RL. Specifically, the fifth drain electrode DE5 is electrically connected to the auxiliary reference line RLa. Specifically, as illustrated inFIG.7A, the fifth drain electrode DE5 extends from the third drain electrode DE3 of the red sub pixel SPR. Therefore, the fifth drain electrode DE5 may be integrally formed with the auxiliary reference line RLa and the third drain electrode DE3 of the red sub pixel SPR. Here, the reference line RL connected to the fifth drain electrode DE5 of the second charging transistor TC2 may be different from the reference line RL connected to the fourth drain electrode DE4 of the first charging transistor TC1. Therefore, the second touching voltage V−supplied from the reference line RL may be transmitted to the fifth drain electrode DE5 through the auxiliary reference line RLa. 
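By way of illustration only, the gate, touch electrode, and reference line connections of the transistors for touching described with reference to FIGS. 6A to 7C may be summarized in the following minimal sketch; the class and field names are descriptive labels assumed for this sketch rather than terms of the present disclosure.

```python
# Illustrative sketch: connectivity of the four transistors for touching
# (first/second charging transistors TC1/TC2, first/second sensing transistors
# TS1/TS2) as described for FIGS. 6A to 7C. Names are descriptive only.
from dataclasses import dataclass

@dataclass(frozen=True)
class TouchTransistor:
    name: str
    gate: str        # touch gate line driving the transistor
    electrode: str   # touch electrode terminal (charged or sensed)
    reference: str   # reference line terminal

TOUCH_TRANSISTORS = [
    TouchTransistor("TC1 (first charging)",  "TG1", "TE1 (first main electrode unit 121a)", "reference line RL via auxiliary line RLa"),
    TouchTransistor("TC2 (second charging)", "TG1", "TE2 (extension unit 123c)",            "reference line RL via auxiliary line RLa"),
    TouchTransistor("TS1 (first sensing)",   "TG2", "TE1 (first main electrode unit 121a)", "reference line RL"),
    TouchTransistor("TS2 (second sensing)",  "TG2", "TE2 (extension unit 123c)",            "reference line RL"),
]

def driven_by(gate_line: str):
    """Transistors switched by a given touch gate line (TG1: charging, TG2: sensing)."""
    return [t.name for t in TOUCH_TRANSISTORS if t.gate == gate_line]

if __name__ == "__main__":
    print("TG1 turns on:", driven_by("TG1"))
    print("TG2 turns on:", driven_by("TG2"))
```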
Even though not illustrated inFIGS.7A to7C, the second sensing transistor TS2 and the second charging transistor TC2 may have the same structure. That is, all of the transistors for touching TC2 and TS2 that are connected to the second touch electrode TE2 may have the same structure. However, the second sensing transistor TS2 may be connected to the second touch gate line TG2, rather than the first touch gate line TG1. Specifically, a seventh gate electrode of the second sensing transistor TS2 is connected to the second touch gate line TG2, a seventh drain electrode is connected to one of the extension units123cof the second sub electrode123, and a seventh source electrode is connected to one of the plurality of reference lines RL. FIG.8is a view for explaining a touch sensing method of a display device according to an exemplary embodiment of the present disclosure. Referring toFIG.8, when a touch is applied to a specific touch electrode block of the display device100, various capacitances Cf1, Cf2, Cm1, Cm2, Cp1, and Cp2are generated between the finger, the first touch electrode TE1, the second touch electrode TE2, and the cathode CT. Here, Cf1is a capacitance between the finger and the first touch electrode TE1, Cf2is a capacitance between the finger and the second touch electrode TE2, and Cm1is a capacitance between the first touch electrode TE1 corresponding to the finger and the second touch electrode TE2. Further, Cm2is a capacitance between the first touch electrode TE1 and the second touch electrode TE2 which are adjacent to the first touch electrode TE1 and the second touch electrode TE2 corresponding to the finger. Cp1is a parasitic capacitance between the plurality of first touch electrodes TE1 and the cathode CT, and Cp2is a parasitic capacitance between the plurality of second touch electrodes TE2 and the cathode CT. Here, even though it is illustrated that Cp1and Cp2are parasitic capacitances between the touch electrodes TE1 and TE2 and the cathode CT, Cp1and Cp2may refer to a total of the parasitic capacitances between the touch electrodes TE1 and TE2 and other electrodes or wiring lines disposed between the touch electrodes TE1 and TE2 and the cathode CT. InFIG.8, for the convenience of description, only the cathode CT is illustrated as a component which generates the parasitic capacitance. When the user touches the display device with a finger, the quantity of electric charges formed in the first touch electrode TE1 corresponding to the finger and the quantity of electric charges formed in the second touch electrode TE2 may be represented as follows.

Q(TE1) = Cf1·V+ + Cm1·(V+ − V−) + Cp1·V+ [Equation 1]

Q(TE2) = Cm2·(V− − V+) + Cp2·V− [Equation 2]

Here, V+ and V− denote the first touching voltage V+ charged in the first touch electrode TE1 through the first reference line RL1 and the second touching voltage V− charged in the second touch electrode TE2 through the second reference line RL2, respectively. Further, the total quantity of sensed electric charges, which is the sum Q(TE1)+Q(TE2) of the quantities of electric charges sensed through the third reference lines RL3-1 and RL3-4, may be represented as follows.

Q(RO) = Cf1·V+ + Cp1·V+ + Cp2·V− + (Cm1 − Cm2)·(V+ − V−) [Equation 3]

Here, as an approximation, if the first touching voltage V+ and the second touching voltage V− are set to be equal in magnitude and the display device100is designed to have a relationship of "Cp1=Cp2", the total quantity of sensed electric charges which is finally sensed is as follows.
Q(RO) = (Cf1 + 2ΔCm)·V+, where Cm1 = Cm + ΔCm and Cm2 = Cm [Equation 4]

As a result, the influence of the parasitic capacitance appearing in Equation 3 is removed in Equation 4. Accordingly, only the capacitance formed in the touch electrodes TE1 and TE2 is sensed regardless of the magnitude of the parasitic capacitance. Generally, a touch technique used for a display device uses an add-on film manner or a touch on encapsulation (TOE) manner in which the touch structure is formed on an encapsulation unit. In the case of the add-on film manner, the touch panel is formed above the film so that separate material costs and processing costs are incurred. Further, the touch pattern is formed on the film so that the transmittance and the resolution of the display device are degraded. In the case of the TOE manner, there are disadvantages in that in order to form the touch electrode, at least four sheets of photo masks are necessary and separate equipment for producing the masks is necessary. The display device100according to the exemplary embodiment of the present disclosure may be an in-cell touch type display device100. That is, a structure for implementing the touch is not separately formed, but may be formed together with the other components by a continuous process in the display device100. Specifically, the touch electrodes TE1 and TE2 are formed of a transparent conductive material to be disposed between the substrate110and the light emitting diode ED. Therefore, light emitted from the light emitting diode ED may pass through the transparent touch electrodes TE1 and TE2 to be easily emitted. The plurality of transistors for touching TC1, TC2, TS1, and TS2 electrically connected to the plurality of touch electrodes TE1 and TE2 may be simultaneously formed with the plurality of transistors TR1, TR2, and TR3 in the sub pixel SP, by the same process. A plurality of touch gate lines TG1 and TG2 for driving the plurality of touch electrodes TE1 and TE2 may be simultaneously formed with the plurality of gate lines GL and the plurality of sensing lines SL by the same process. The plurality of touch electrodes TE1 and TE2 exchanges signals for touching through a reference line RL which transmits the reference voltage Vref to the plurality of sub pixels SP. Therefore, the display device100according to the exemplary embodiment of the present disclosure implements an in-cell touch structure by adding only the mask for forming the touch electrodes TE1 and TE2. Accordingly, there is an advantage in that the touch structure may be implemented by a simple process with a minimum cost. Further, in the display device100according to the exemplary embodiment of the present disclosure, only the quantity of electric charges formed in the touch electrodes TE1 and TE2 can be sensed regardless of the magnitude of the quantity of electric charges generated by the parasitic capacitance. Therefore, the accuracy of touch sensing may be improved. Further, in order to reduce the parasitic capacitance, it is not necessary to dispose a planarization layer between the touch electrodes TE1 and TE2 and the other components, so that the process may be simplified and the cost may be saved. Further, in the display device100according to the exemplary embodiment of the present disclosure, the parasitic capacitance by the touch electrodes TE1 and TE2 can be ignored and the average voltage applied to the touch electrodes TE1 and TE2 may be constantly maintained.
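The cancellation expressed by Equations 1 to 4 may be checked numerically. The following minimal sketch assumes, as stated above, that the first and second touching voltages have equal magnitudes and opposite polarities (V− = −V+) and that Cp1 = Cp2; all capacitance and voltage values are hypothetical and are not taken from the present disclosure.

```python
# Minimal numeric check of Equations 1-4 (illustrative values only).
# Assumes V_minus = -V_plus and Cp1 == Cp2, as stated above.

def q_te1(Cf1, Cm1, Cp1, Vp, Vm):
    # Equation 1: charge on the first touch electrode TE1
    return Cf1 * Vp + Cm1 * (Vp - Vm) + Cp1 * Vp

def q_te2(Cm2, Cp2, Vp, Vm):
    # Equation 2: charge on the second touch electrode TE2
    return Cm2 * (Vm - Vp) + Cp2 * Vm

def q_ro_eq4(Cf1, dCm, Vp):
    # Equation 4: (Cf1 + 2*dCm) * V+, with Cm1 = Cm + dCm and Cm2 = Cm
    return (Cf1 + 2.0 * dCm) * Vp

if __name__ == "__main__":
    # Hypothetical capacitances in farads and voltages in volts.
    Cf1, Cm, dCm = 1.0e-12, 2.0e-12, 0.3e-12
    Cp1 = Cp2 = 50.0e-12          # parasitic capacitance to the cathode CT
    Vp, Vm = 3.0, -3.0            # first/second touching voltages, V- = -V+

    total = q_te1(Cf1, Cm + dCm, Cp1, Vp, Vm) + q_te2(Cm, Cp2, Vp, Vm)  # Equation 3
    print("Q(RO) from Equations 1+2 :", total)
    print("Q(RO) from Equation 4    :", q_ro_eq4(Cf1, dCm, Vp))
    # The two values agree (up to floating-point rounding) even though Cp1/Cp2
    # are far larger than Cf1 and dCm: the parasitic capacitance drops out.
```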
Because the average voltage applied to the touch electrodes TE1 and TE2 is constantly maintained, the influence on the anode AN disposed above the touch electrodes TE1 and TE2 may be reduced. That is, even though the touch electrodes TE1 and TE2 are below the anode AN, they do not influence a current flowing through the anode AN. Accordingly, even though the display device100according to the exemplary embodiment of the present disclosure is implemented as the in-cell touch structure, a display characteristic of the display device100may be constantly maintained. FIG.9is a diagram for explaining a driving method of a display device according to another exemplary embodiment of the present disclosure.FIG.10illustrates a schematic operation timing for explaining a driving method of a display device according to another exemplary embodiment of the present disclosure. InFIG.9, for the convenience of description, among various components of the display device, only a substrate110, a gate driver GD, and a touch driver TD are illustrated. InFIG.10, for the convenience of description, signals of gate lines GL1, GL2, . . . , GLm, GLm+1, GLm+2, . . . , GL2m and touch gate lines TG1(SPB1), TG2(SPB1), TG1(SPB2), TG2(SPB2) are schematically illustrated. First, referring toFIG.9, the display device according to another exemplary embodiment of the present disclosure includes the substrate110, the gate driver GD, and the touch driver TD. The substrate110includes a plurality of sub pixel blocks SPB. Each of the plurality of sub pixel blocks SPB includes some of the plurality of sub pixels SP. That is, the plurality of sub pixels SP is divided into the plurality of sub pixel blocks SPB. For example, the plurality of sub pixel blocks SPB may be individual areas obtained by dividing the substrate110by virtual lines in the first direction. Each of the plurality of sub pixel blocks SPB includes the same number of sub pixels SP. The substrate110includes a total of n sub pixel blocks SPB1, SPB2, . . . , SPBn. Specifically, a second sub pixel block SPB2 is disposed below the first sub pixel block SPB1, a third sub pixel block is disposed below the second sub pixel block, and in this manner, a total of n sub pixel blocks SPB1, SPB2, . . . , SPBn may be included from an upper portion to a lower portion of the substrate110. Each of the plurality of sub pixel blocks SPB includes a plurality of touch electrodes TE1 and TE2. Specifically, the plurality of touch electrodes TE1 and TE2 may be disposed so as to correspond to each of the plurality of sub pixels SP. Each of the plurality of sub pixel blocks SPB includes the plurality of touch electrodes TE1 and TE2 overlapping the corresponding sub pixels SP. The gate driver GD may be electrically connected to the plurality of sub pixels SP of the substrate110through the plurality of gate lines GL. Each of the plurality of gate lines GL may be electrically connected to the plurality of sub pixels SP arranged in the first direction. Among the gate lines GL1, GL2, . . . , GLm, GLm+1, GLm+2, . . . , GL2m, . . . , GLnm, one sub pixel block SPB corresponds to a total of m gate lines. For example, the first sub pixel block SPB1 is electrically connected to a first gate line GL1, a second gate line GL2, . . . , and an m-th gate line GLm. The second sub pixel block SPB2 may be electrically connected to an (m+1)-th gate line GLm+1, an (m+2)-th gate line GLm+2, . . . , and a 2m-th gate line GL2m. The gate driver GD sequentially supplies a gate signal to the plurality of gate lines GL in response to the gate control signal supplied from the timing controller.
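By way of illustration only, the grouping of the gate lines into sub pixel blocks described above (the k-th sub pixel block corresponding to gate lines GL(k−1)m+1 to GLkm) may be sketched as follows; the values of n and m and the function names are arbitrary choices for this sketch.

```python
# Illustrative sketch: grouping of gate lines per sub pixel block, as described
# for FIG. 9 (block k corresponds to gate lines GL(k-1)m+1 ... GLkm).

def gate_lines_for_block(k: int, m: int) -> list:
    """1-based gate-line indices belonging to the k-th sub pixel block SPBk."""
    return list(range((k - 1) * m + 1, k * m + 1))

def sequential_scan(n_blocks: int, m: int):
    """Order in which the gate driver GD supplies the gate signal, block by block."""
    for k in range(1, n_blocks + 1):
        for gl in gate_lines_for_block(k, m):
            yield f"SPB{k}", f"GL{gl}"

if __name__ == "__main__":
    n, m = 3, 4                        # hypothetical: 3 blocks of 4 gate lines each
    print(gate_lines_for_block(2, m))  # -> [5, 6, 7, 8], i.e. GLm+1 ... GL2m for m = 4
    for block, gl in sequential_scan(n, m):
        print(block, gl)
```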
Therefore, the plurality of second transistors TR2 electrically connected to the plurality of gate lines GL may be sequentially driven. The gate driver GD includes a plurality of gate integrated circuits. Each of the plurality of gate integrated circuits includes a shift register, a level shifter, and an output buffer. The shift register sequentially generates a gate pulse. The level shifter shifts a swing width of the gate pulse to a predetermined level to generate a gate signal. The output buffer supplies a gate signal supplied from the level shifter to the gate line GL. The gate driver GD is attached to the non-active area NA of the substrate as a chip or is mounted in the non-active area NA of the substrate110in the gate-in-panel manner. Further, even though it is not illustrated, the timing controller to supply the gate control signal to the gate driver GD may be disposed on the printed circuit board170, but the present disclosure is not limited thereto. The touch driver TD may be disposed in the gate driver GD. The touch driver TD may be electrically connected to the plurality of touch electrodes TE1 and TE2 of the substrate110through the plurality of touch gate lines TG. In the meantime, TG(SPB1), TG(SPB2), . . . , TG(SPBn) ofFIG.9refer to all touch gate lines TG connected to the plurality of touch electrodes TE1 and TE2 of the corresponding sub pixel block SPB. For example, the plurality of touch electrodes TE1 and TE2 disposed in the first sub pixel block SPB1 may be electrically connected to the touch gate line TG(SPB1). The plurality of touch electrodes TE1 and TE2 disposed in the second sub pixel block SPB2 may be electrically connected to the touch gate line TG(SPB2). The plurality of touch electrodes TE1 and TE2 disposed in the n-th sub pixel block SPBn may be electrically connected to the touch gate line TG(SPBn). In the meantime, even though inFIG.9, for the convenience of description, it is illustrated that one sub pixel block SPB and one touch gate line TG correspond to each other, it is not limited thereto. That is, a plurality of touch gate lines TG extending from the touch driver TD to each of the plurality of sub pixel blocks SPB may be configured. The touch driver TD sequentially supplies a touch gate signal to the plurality of touch gate lines TG in response to the PWM signal. Accordingly, the plurality of transistors for touching TC1, TC2, TS1, TS2 electrically connected to each of the plurality of touch gate lines TG is sequentially driven. Here, a PWM signal is supplied by the timing controller, but is not limited thereto. The touch driver TD includes a plurality of touch integrated circuits. Each of the plurality of touch integrated circuits includes a shift register, a level shifter, an output buffer, and an inverter. The shift register sequentially generates a touch gate pulse. The level shifter shifts a swing width of the touch gate pulse to a predetermined level to generate a touch gate signal. The output buffer supplies a touch gate signal supplied from the level shifter to the touch gate line TG. The inverter inverts the generated touch gate signal to generate an inverted touch gate signal. In the meantime, the plurality of touch gate lines TG may include a first touch gate line TG1 and a second touch gate line TG2. The first touch gate line TG1 may be a wiring line connected to a plurality of charging transistors TC1 and TC2. The second touch gate line TG2 may be a wiring line connected to a plurality of sensing transistors TS1 and TS2. 
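By way of illustration only, the relationship between the first touch gate signal and the inverted second touch gate signal generated by each touch integrated circuit may be sketched as follows; the pulse pattern and helper names are sample data assumed for this sketch.

```python
# Illustrative sketch: a touch integrated circuit outputs the first touch gate
# signal on TG1 and an inverted copy on TG2, as described above. The pulse
# pattern below is arbitrary sample data, not taken from the disclosure.

LOW, HIGH = 0, 1

def invert(signal):
    """Model of the inverter stage: swap low and high levels."""
    return [HIGH if level == LOW else LOW for level in signal]

def touch_gate_outputs(first_touch_gate_signal):
    """Return (TG1 signal, TG2 signal) for one touch period."""
    return first_touch_gate_signal, invert(first_touch_gate_signal)

if __name__ == "__main__":
    tg1 = [LOW, HIGH, HIGH, LOW, HIGH, HIGH, LOW]   # sample first touch gate signal
    tg1_out, tg2_out = touch_gate_outputs(tg1)
    print("TG1:", tg1_out)
    print("TG2:", tg2_out)
    # Charging transistors (on TG1) and sensing transistors (on TG2) are never
    # driven high at the same time in this idealized model.
    assert all(not (a == HIGH and b == HIGH) for a, b in zip(tg1_out, tg2_out))
```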
Further, the touch gate signal includes a first touch gate signal and a second touch gate signal. That is, the first touch gate line TG1 supplies the first touch gate signal and the second touch gate line TG2 supplies the second touch gate signal. In one touch period, the first touch gate signal and the second touch gate signal may be inverted signals from each other. Specifically, each of the plurality of touch integrated circuits generates the first touch gate signal first and outputs the first touch gate signal to the first touch gate line TG1. Further, the second touch gate signal, which is inverted from the first touch gate signal, is generated using the inverter and output to the second touch gate line TG2. In the meantime, even though inFIG.9, the touch driver TD is disposed in the gate driver GD, it is not limited thereto. For example, the touch driver TD may be disposed on the printed circuit board170. Referring toFIG.10, one frame of the display device may include a plurality of sub frames. Here, the plurality of sub frames may correspond to each of the plurality of sub pixel blocks SPB. That is, the first sub frame is a period for driving the first sub pixel block SPB1, the second sub frame is a period for driving the second sub pixel block SPB2, and the n-th sub frame is a period for driving the n-th sub pixel block SPBn. Each of the plurality of sub frames may be time-divisionally driven in the display period and the touch period. First, in the display period of the first sub frame, the gate signal is applied to the plurality of gate lines GL corresponding to the first sub pixel block SPB1. At this time, the first sub pixel block SPB1 corresponds to the first gate line GL1, the second gate line GL2, . . . , and the m-th gate line GLm. Therefore, the gate signal may be sequentially applied to the first gate line GL1, the second gate line GL2, . . . , and the m-th gate line GLm. Therefore, the plurality of second transistors TR2 which are connected to the first gate line GL1, the second gate line GL2, . . . , and the m-th gate line GLm may be sequentially turned on. After the display period of the first sub frame, a touch period of the first sub frame may be proceeded. Specifically, the first touch gate signal may be applied to the plurality of first touch gate lines TG1(SPB1) corresponding to the first sub pixel block SPB1. Therefore, the plurality of first charging transistors TC1 and the plurality of second charging transistors TC2 connected to the plurality of first touch gate lines TG1(SPB1) are turned on. Further, the second touch gate signal may be applied to the plurality of second touch gate lines TG2(SPB1) corresponding to the first sub pixel block SPB1. Therefore, the plurality of first sensing transistors TS1 and the plurality of second sensing transistors TS2 connected to the plurality of second touch gate lines TG2(SPB1) are turned on. At this time, in the touch period, the first touch gate signal and the second touch gate signal may be inverted signals from each other. That is, when the first touch gate signal is a high level, the second touch gate signal is a low level and when the first touch gate signal is a low level, the second touch gate signal may be a high level. Therefore, when the plurality of first charging transistors TC1 and the plurality of second charging transistors TC2 are turned on, the plurality of first sensing transistors TS1 and the plurality of second sensing transistors TS2 may be turned off.
Accordingly, the first touching voltage V+and the second touching voltage V−may be charged in the plurality of first touch electrodes TE1 and the plurality of second touch electrodes TE2 of the first sub pixel block SPB1, respectively. Further, when the plurality of first charging transistors TC1 and the plurality of second charging transistors TC2 are turned off, the plurality of first sensing transistors TS1 and the plurality of second sensing transistors TS2 may be turned on. Accordingly, the touch sensing signal may be transmitted from each of the plurality of first touch electrodes TE1 and the plurality of second touch electrodes TE2 of the first sub pixel block SPB1. After the touch period of the first sub frame ends, a display period of the second sub frame may be proceeded. That is, the driving for the second sub pixel block SPB2 disposed below the first sub pixel block SPB1 of the substrate110may be proceeded. Specifically, in the display period of the second sub frame, the gate signal is sequentially applied to the plurality of gate lines GLm+1, GLm+2, . . . , GL2m corresponding to the second sub pixel block SPB2. Next, in the touch period of the second sub frame, the first touch gate signal and the second touch gate signal may be applied to the plurality of first touch gate lines TG1(SPB2) and the plurality of second touch gate lines TG2(SPB2) corresponding to the second sub pixel block SPB2. This operation may be sequentially performed to the n-th sub pixel block SPBn. Such one frame is repeated so that the display device may be driven. In the meantime, even though inFIG.10, each of the first touch gate signal and the second touch gate signal has seven peaks, the present disclosure is not limited thereto. A display device according to another exemplary embodiment of the present disclosure may be an in-cell touch type display device. Specifically, a frame of the display device may be configured by a plurality of sub frames having a display period and a touch period. Therefore, the driving for the plurality of sub pixels SP and the sensing for the touch electrodes TE1 and TE2 may be easily performed in the display device. FIG.11illustrates a schematic operation timing for explaining a driving method of a display device still according to another exemplary embodiment of the present disclosure. InFIG.11, for the convenience of description, signals of gate lines GL1, GL2, . . . GLm, GLm+1, GLm+2, . . . , GL2m and touch gate lines TG1(SPB1), TG2(SPB1), TG1(SPB2), TG2(SPB2) are schematically illustrated. Referring toFIG.11, one frame of the display device may include a plurality of sub frames. Here, the plurality of sub frames may be periods which sequentially display the plurality of sub pixel blocks SPB. That is, the first sub frame is a period for driving the plurality of sub pixels SP of the first sub pixel block SPB1, the second sub frame is a period for driving the plurality of sub pixels SP of the second sub pixel block SPB2, and the n-th sub frame is a period for driving the plurality of sub pixels SP of the n-th sub pixel block SPBn. Each of the plurality of sub frames may be time-divisionally driven in the first touch period and the second touch period. At this time, the first touch period and the second touch period may be formed for different sub pixel blocks SPB among the plurality of sub pixel blocks SPB. First, in the display period of the first sub frame, the gate signal is sequentially applied to the plurality of gate lines GL1, GL2, . . . 
, GLm corresponding to the first sub pixel block SPB1. After the display period of the first sub frame, a first touch period of the first sub frame may be proceeded. Specifically, the first touch gate signal and the second touch gate signal may be applied to each of the plurality of first touch gate lines TG1(SPB1) and the plurality of second touch gate lines TG2(SPB1) corresponding to the first sub pixel block SPB1. After the first touch period of the first sub frame, a second touch period of the first sub frame may be proceeded. Specifically, the first touch gate signal and the second touch gate signal may be applied to each of the plurality of first touch gate lines TG1(SPB2) and the plurality of second touch gate lines TG2(SPB2) corresponding to the second sub pixel block SPB2. After the second touch period of the first sub frame ends, a display period of the second sub frame may be proceeded. That is, the driving for the second sub pixel block SPB2 disposed below the first sub pixel block SPB1 of the substrate110may be proceeded. Specifically, in the display period of the second sub frame, the gate signal is sequentially applied to the plurality of gate lines GLm+1, GLm+2, . . . , GL2m corresponding to the second sub pixel block SPB2. Next, in the first touch period of the second sub frame, the first touch gate signal and the second touch gate signal may be applied to each of the plurality of first touch gate lines TG1(SPB2) and the plurality of second touch gate lines TG2(SPB2) corresponding to the second sub pixel block SPB2. Next, in the second touch period of the second sub frame, the first touch gate signal and the second touch gate signal may be applied to each of the plurality of first touch gate lines TG1(SPB3) and the plurality of second touch gate lines TG2(SPB3) corresponding to the third sub pixel block SPB3. This operation may be sequentially performed to the n-th sub pixel block SPBn. Such one frame is repeated so that the display device may be driven. In the meantime, even though inFIG.11, it is described that the display period and the first touch period of one sub frame are formed for the same sub pixel block SPB, the present disclosure is not limited thereto. Further, even though inFIG.11, it is described that the first touch period and the second touch period of one sub frame are sequentially formed for the adjacent sub pixel block SPB, the present disclosure is not limited thereto. That is, the first touch period and the second touch period of one sub frame may be formed for different arbitrary sub pixel blocks SPB among the plurality of sub pixel blocks SPB. In the display device according to still another exemplary embodiment of the present disclosure, one frame is divided into a plurality of sub frames and each of the plurality of sub frames includes a first touch period and a second touch period. At this time, the first touch period and the second touch period may be formed for different sub pixel blocks SPB. That is, in one sub frame, the touch sensing may be performed for two different sub pixel blocks SPB. Therefore, the accuracy of touch sensing may be improved. FIG.12Aillustrates a schematic operation timing for explaining a driving method of a display device still according to another exemplary embodiment of the present disclosure. InFIG.12A, for the convenience of description, signals of the first touch gate line TG1 and the second touch gate line TG2 are schematically illustrated. 
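Before the signal-level details of FIG. 12A are described, the two sub frame schedules described above may be summarized by way of illustration: in the scheme of FIG. 10 each sub frame contains a display period and a touch period for the same sub pixel block, whereas in the scheme of FIG. 11 each sub frame contains a display period, a first touch period for that block, and a second touch period for another block. The generator names, the number of blocks, and the wrap-around choice for the last block in the second scheme are assumptions of this sketch only.

```python
# Illustrative sketch of the two sub-frame schedules described above.
# Each yielded tuple is (sub frame index, period type, sub pixel block).

def schedule_fig10(n_blocks: int):
    """One frame: each sub frame drives and then touch-senses the same block."""
    for k in range(1, n_blocks + 1):
        yield k, "display period", f"SPB{k}"
        yield k, "touch period", f"SPB{k}"

def schedule_fig11(n_blocks: int):
    """One frame: each sub frame has a display period for block k, a first touch
    period for block k, and a second touch period for another block (here the
    next block, wrapping back to SPB1 after SPBn; the text only requires a
    different block, so the wrap is an assumption of this sketch)."""
    for k in range(1, n_blocks + 1):
        other = k % n_blocks + 1
        yield k, "display period", f"SPB{k}"
        yield k, "first touch period", f"SPB{k}"
        yield k, "second touch period", f"SPB{other}"

if __name__ == "__main__":
    n = 3  # hypothetical number of sub pixel blocks
    for row in schedule_fig10(n):
        print("FIG.10", row)
    for row in schedule_fig11(n):
        print("FIG.11", row)
```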
Referring toFIG.12A, the first touch gate signal and the second touch gate signal may be applied to the first touch gate line TG1 and the second touch gate line TG2 of the display device according to another exemplary embodiment of the present disclosure, respectively. When the first touch gate signal is a high level, the plurality of first charging transistors TC1 and the plurality of second charging transistors TC2 connected to the first touch gate line TG1 may be turned on. When the second touch gate signal is a high level, the plurality of first sensing transistors TS1 and the plurality of second sensing transistors TS2 connected to the second touch gate line TG2 may be turned on. At this time, the first touch gate signal and the second touch gate signal may be inverted signals from each other in the same touch period. Further, a period in which the first touch gate signal is a high level and a period in which the second touch gate signal is a high level do not overlap. Further, heights H1 and H2 of peaks of the first touch gate signal and the second touch gate signal may be equal to each other. Specifically, when the level of the first touch gate signal is reduced to be a first voltage (V1) or lower, a level of the second touch gate signal may be increased from the low level to the high level. That is, when the level of the first touch gate signal is higher than the first voltage (V1), the second touch gate signal may be the low level. Therefore, when the plurality of charging transistors TC1 and TC2 is turned on, the plurality of sensing transistors TS1 and TS2 may be turned off. At a timing when the level of the first touch gate signal is reduced below the first voltage (V1), a level of the second touch gate signal may be increased from the low level. When the second gate signal is completely the high level, the first touch gate signal may be the low level. Therefore, when the plurality of sensing transistors TS1 and TS2 is turned on, the plurality of charging transistors TC1 and TC2 may be turned off. Here, the first voltage V1 may be a voltage higher than the low level voltage between the low level voltage and the high level voltage. For example, the first voltage V1 may refer to a voltage corresponding to a threshold voltage of the plurality of charging transistors TC1 and TC2. Further, when the level of the second touch gate signal is reduced to be the first voltage (V1) or lower, a level of the first touch gate signal may be increased from the low level to the high level. That is, when the level of the second touch gate signal is higher than the first voltage (V1), the first touch gate signal may be the low level. Therefore, when the plurality of sensing transistors TS1 and TS2 is turned on, the plurality of charging transistors TC1 and TC2 may be turned off. At a timing when the level of the second touch gate signal is reduced below the first voltage (V1), a level of the first touch gate signal may be increased from the low level. When the first gate signal is completely the high level, the second touch gate signal may be the low level. Therefore, when the plurality of charging transistors TC1 and TC2 is turned on, the plurality of sensing transistors TS1 and TS2 may be turned off. Here, the first voltage V1 may be a voltage higher than the low level voltage between the low level voltage and the high level voltage. 
For example, the first voltage V1 may refer to a voltage corresponding to a threshold voltage of the plurality of sensing transistors TS1 and TS2. In the display device according to still another exemplary embodiment of the present disclosure, when the level of the first touch gate signal is reduced to be the first voltage or lower, a level of the second touch gate signal may be increased from the low level to the high level. Further, when the level of the second touch gate signal is reduced to be the first voltage or lower, a level of the first touch gate signal may be increased from the low level to the high level. Specifically, the first voltage may refer to a threshold voltage of the plurality of charging transistors TC1 and TC2 and the plurality of sensing transistors TS1 and TS2. Therefore, the plurality of charging transistors TC1 and TC2 connected to the first touch gate line TG1 and the plurality of sensing transistors TS1 and TS2 connected to the second touch gate line TG2 are prevented from being simultaneously turned on. Therefore, the accuracy of touch sensing may be improved. FIG.12Billustrates a schematic operation timing for explaining a driving method of a display device according to still another exemplary embodiment of the present disclosure. InFIG.12B, for the convenience of description, signals of the first touch gate line TG1 and the second touch gate line TG2 are schematically illustrated.FIG.12Bis the same asFIG.12Aexcept for a height H2 of the peak of the second touch gate signal, so that a redundant description will be omitted. Referring toFIG.12B, a height H2 of the peak of the second touch gate signal may be higher than a height H1 of the peak of the first touch gate signal. Accordingly, even though a period in which the second touch gate signal is a high level is relatively short, the height H2 of the peak is increased to increase a quantity of electric charges sensed by the plurality of sensing transistors TS1 and TS2. Here, the height of the peak refers to a difference between the low level and the high level. Specifically, a timing when the first touch gate signal and the second touch gate signal are increased from the low level to the high level is changed so that the timings when the plurality of charging transistors TC1 and TC2 and the plurality of sensing transistors TS1 and TS2 are turned on do not overlap. At this time, a high level period of the second touch gate signal may be shorter than a high level period of the first touch gate signal. Therefore, the height H2 of the peak of the second touch gate signal is increased to be higher than the height H1 of the peak of the first touch gate signal so that the shortened high level period of the second touch gate signal may be compensated. In the display device according to still another exemplary embodiment of the present disclosure, a difference between the low level and the high level of the second touch gate signal may be larger than a difference between the low level and the high level of the first touch gate signal. Accordingly, a quantity of electric charges sensed by the plurality of sensing transistors TS1 and TS2 is increased to improve the accuracy of touch sensing.
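By way of illustration only, the hand-off rule of FIGS. 12A and 12B may be modeled as follows: a touch gate signal is allowed to rise from the low level only after the other touch gate signal has fallen to the first voltage V1 or lower, and the peak height H2 of the second touch gate signal may exceed the peak height H1 of the first touch gate signal. All voltage levels and timings in the sketch are hypothetical values, not values of the present disclosure.

```python
# Illustrative sketch of the hand-off rule of FIGS. 12A/12B with hypothetical
# numbers: one touch gate signal may only rise from its low level after the
# other has fallen to the first voltage V1 or lower.

V_LOW, V1 = 0.0, 1.0      # low level and first (threshold) voltage, assumed values
H1, H2 = 10.0, 14.0       # peak heights; H2 > H1 mirrors the FIG. 12B variant

def pulse(t, t_on, t_off, high):
    """Idealized rectangular pulse (the real waveforms ramp between levels)."""
    return high if t_on <= t < t_off else V_LOW

def tg1(t):  # first touch gate signal: longer high level period
    return pulse(t, 1.0, 6.0, H1)

def tg2(t):  # second touch gate signal: shorter but taller high level period
    return pulse(t, 7.0, 9.0, H2)

def handoff_ok(sig_a, sig_b, times):
    """True if sig_b never leaves its low level while sig_a is still above V1."""
    return all(not (sig_a(t) > V1 and sig_b(t) > V_LOW) for t in times)

if __name__ == "__main__":
    samples = [0.5 * i for i in range(25)]            # sample times 0.0 .. 12.0
    print("TG2 waits for TG1:", handoff_ok(tg1, tg2, samples))
    print("TG1 waits for TG2:", handoff_ok(tg2, tg1, samples))
    # The shorter high level period of TG2 is compensated by its larger peak
    # height H2, as described for FIG. 12B.
```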
The exemplary embodiments of the present disclosure can also be described as follows: According to an aspect of the present disclosure, a display device includes a substrate including a plurality of sub pixels; a first touch electrode on the substrate and overlapping each of the plurality of sub pixels; a second touch electrode which is disposed on the substrate to be spaced apart from the first touch electrode and overlap each of the plurality of sub pixels; an insulating layer covering the first touch electrode and the second touch electrode; a plurality of charging transistors on the insulating layer and electrically connected to one of the first touch electrode and the second touch electrode; a plurality of sensing transistors on the insulating layer and electrically connected to one of the first touch electrode and the second touch electrode; a planarization layer covering the plurality of charging transistors and the plurality of sensing transistors; and a light emitting diode on the planarization layer. The plurality of sub pixels may include an emission area and a circuit area, the first touch electrode and the second touch electrode may be disposed to overlap an anode of the light emitting diode in the emission area, and the first touch electrode and the second touch electrode may be formed of a transparent conductive material. The display device may further include a plurality of reference lines electrically connected to the plurality of sub pixels. One of a source electrode and a drain electrode of the plurality of charging transistors may be electrically connected to the plurality of reference lines. The other one of the source electrode and the drain electrode of the plurality of charging transistors may be electrically connected to the first touch electrode or the second touch electrode. One of a source electrode and a drain electrode of the plurality of sensing transistors may be electrically connected to the plurality of reference lines. And the other one of the source electrode and the drain electrode of the plurality of sensing transistors may be electrically connected to the first touch electrode or the second touch electrode. The plurality of reference lines may be configured to apply a reference voltage to the plurality of sub pixels during a display period and may be configured to exchange signals for touching with the first touch electrode and the second touch electrode during a touch period. The plurality of reference lines may include: a first reference line applying a first touching voltage to the first touch electrode during the touch period; a second reference line applying a second touching voltage to the second touch electrode during the touch period; and a plurality of third reference lines transmitting a touch sensing signal from the first touch electrode and the second touch electrode during the touch period. 
The plurality of charging transistors may include: a first charging transistor to apply the first touching voltage to the first touch electrode through the first reference line; and a second charging transistor to apply the second touching voltage to the second touch electrode through the second reference line; and the plurality of sensing transistors may include: a first sensing transistor transmitting the touch sensing signal from the first touch electrode through one of the plurality of third reference lines; and a second sensing transistor which transmitting the touch sensing signal from the second touch electrode through the other one of the plurality of third reference lines. The first touching voltage may be a sum of the reference voltage and a predetermined voltage and the second touching voltage may be a difference of the reference voltage and the predetermined voltage. The display device may further include: a plurality of gate lines electrically connected to the plurality of sub pixels; a plurality of first touch gate lines extending in the same direction as the plurality of gate lines and electrically connected to the gate electrodes of the plurality of charging transistors; and a plurality of second touch gate lines extending in the same direction as the plurality of gate lines and electrically connected to the gate electrodes of the plurality of sensing transistors. One frame may include: a display period in which a gate signal is applied to the plurality of gate lines; and a touch period in which a first touch gate signal and a second touch gate signal are applied to each of the plurality of first touch gate lines and the plurality of second touch gate lines, after the display period. The first touch gate signal and the second touch gate signal may be inverted signals in the touch period. When the level of the first touch gate signal is reduced to be a first voltage or lower, a level of the second touch gate signal may rise from a low level to a high level, when the level of the second touch gate signal is reduced to be the first voltage or lower, a level of the first touch gate signal may rise from a low level to a high level, and the first voltage may be a voltage higher than the voltage of the low level of the first touch gate signal and the second touch gate signal. A difference of the low level and the high level of the second touch gate signal may be larger than a difference of the low level and the high level of the first touch gate signal. The substrate may include a plurality of sub pixel blocks including some of the plurality of sub pixels, one frame may include a plurality of sub frames which sequentially drives the plurality of sub pixel blocks, each of the plurality of sub frames may include: a display period in which a gate signal is applied to the plurality of gate lines of one sub pixel block among the plurality of sub pixel blocks; and a touch period in which a first touch gate signal and a second touch gate signal are applied to each of the plurality of first touch gate lines and the plurality of second touch gate lines of the one sub pixel block among the plurality of sub pixel blocks, after the display period. The plurality of sub pixel blocks may include a first sub pixel block and a second sub pixel block below the first sub pixel block and when the touch period in the sub frame for the first sub pixel block ends, the display period in the sub frame for the second sub pixel block may start. 
The substrate may include a plurality of sub pixel blocks including some of the plurality of sub pixels, one frame may include a plurality of sub frames which sequentially drives the plurality of sub pixel blocks, each of the plurality of sub frames may include: a display period in which a gate signal is applied to the plurality of gate lines of one sub pixel block among the plurality of sub pixel blocks; a first touch period in which a first touch gate signal and a second touch gate signal are applied to each of the plurality of first touch gate lines and the plurality of second touch gate lines of the one sub pixel block among the plurality of sub pixel blocks, after the display period, and a second touch period in which a first touch gate signal and a second touch gate signal are applied to each of the plurality of first touch gate lines and the plurality of second touch gate lines of the other one sub pixel block among the plurality of sub pixel blocks, after the first touch period. The display device may further include: a touch driver electrically connected to the plurality of charging transistors and the plurality of sensing transistors; a gate driver electrically connected to the plurality of sub pixels; and a printed circuit board electrically connected to the substrate at the outside of the substrate. The touch driver may be disposed in the gate driver. The gate driver may be mounted in a non-active area of the substrate in a gate in panel (GIP) manner. The gate driver may be attached to a non-active area of the substrate. The touch driver may be disposed on the printed circuit board. Although the exemplary embodiments of the present disclosure have been described in detail with reference to the accompanying drawings, the present disclosure is not limited thereto and may be embodied in many different forms without departing from the technical concept of the present disclosure. Therefore, the exemplary embodiments of the present disclosure are provided for illustrative purposes only but not intended to limit the technical concept of the present disclosure. The scope of the technical concept of the present disclosure is not limited thereto. Therefore, it should be understood that the above-described exemplary embodiments are illustrative in all aspects and do not limit the present disclosure. The protective scope of the present disclosure should be construed based on the following claims, and all the technical concepts in the equivalent scope thereof should be construed as falling within the scope of the present disclosure. | 126,242 |
11861092 | DETAILED DESCRIPTION Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings, in which like reference numerals refer to like elements throughout. In this regard, the exemplary embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein. Accordingly, the exemplary embodiments are merely described below, by referring to the figures, to explain exemplary embodiments of the description. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. As the invention allows for various changes and numerous exemplary embodiments, exemplary embodiments will be illustrated in the drawings and described in detail in the written description. However, this is not intended to limit the invention to particular modes of practice, and it is to be appreciated that all changes, equivalents, and substitutes that do not depart from the spirit and technical scope of the invention are encompassed in the invention. In the description of the invention, certain detailed explanations of the related art are omitted when it is deemed that they may unnecessarily obscure the essence of the invention. It will be understood that when an element is referred to as being “on” another element, it can be directly on the other element or intervening elements may be therebetween. In contrast, when an element is referred to as being “directly on” another element, there are no intervening elements present. While such terms as “first,” “second,” etc., may be used to describe various components, such components must not be limited to the above terms. The above terms are used only to distinguish one component from another. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms, including “at least one,” unless the content clearly indicates otherwise. “Or” means “and/or.” As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” or “includes” and/or “including” when used in this specification, specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, regions, integers, steps, operations, elements, components, and/or groups thereof. Furthermore, relative terms, such as “lower” or “bottom” and “upper” or “top,” may be used herein to describe one element's relationship to another element as illustrated in the Figures. It will be understood that relative terms are intended to encompass different orientations of the device in addition to the orientation depicted in the Figures. In an exemplary embodiment, when the device in one of the figures is turned over, elements described as being on the “lower” side of other elements would then be oriented on “upper” sides of the other elements. The exemplary term “lower,” can therefore, encompasses both an orientation of “lower” and “upper,” depending on the particular orientation of the figure. 
Similarly, when the device in one of the figures is turned over, elements described as “below” or “beneath” other elements would then be oriented “above” the other elements. The exemplary terms “below” or “beneath” can, therefore, encompass both an orientation of above and below. “About” or “approximately” as used herein is inclusive of the stated value and means within an acceptable range of deviation for the particular value as determined by one of ordinary skill in the art, considering the measurement in question and the error associated with measurement of the particular quantity (i.e., the limitations of the measurement system). For example, “about” can mean within one or more standard deviations, or within ±30%, 20%, 10%, 5% of the stated value. The terms used in the specification are merely used to describe exemplary embodiments, and are not intended to limit the invention. An expression used in the singular encompasses the expression of the plural, unless it has a clearly different meaning in the context. In the specification, it is to be understood that the terms such as “including,” “having,” and “comprising” are intended to indicate the existence of the features, numbers, steps, actions, components, parts, or combinations thereof disclosed in the specification, and are not intended to preclude the possibility that one or more other features, numbers, steps, actions, components, parts, or combinations thereof may exist or may be added. The invention will now be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. Like reference numerals in the drawings denote like elements, and thus their repetitive description will be omitted. FIG.1is a perspective view of a display device100according to an exemplary embodiment. Referring toFIG.1, the display device100may include a display panel110. The display panel110may include an active area AA displaying images and an inactive area IAA extending outwardly from the active area AA. According to an exemplary embodiment, the inactive area IAA may surround the active area AA. According to an exemplary embodiment, the active area AA may extend in a longitudinal direction of the display panel110, but the invention is not limited thereto. According to an exemplary embodiment, the display panel110may be a rigid or flexible panel. The display device100may include a touch screen panel (“TSP”) which recognizes a location where a user contacts the display device100. FIG.2is a partially exploded perspective view of the TSP200according to an exemplary embodiment. Referring toFIG.2, the TSP200may include a plurality of touch sensor electrodes230. The plurality of touch sensor electrodes230may include a plurality of first touch electrodes210and a plurality of second touch electrodes220. According to an exemplary embodiment, the TSP200is an on-cell TSP in which the plurality of touch sensor electrodes230is arranged in the display panel110inFIG.1. However, a structure of the TSP is not limited thereto. According to another exemplary embodiment, the TSP200may have an in-cell TSP structure in which the plurality of touch sensor electrodes230are arranged inside the display panel110inFIG.1or a hybrid TSP structure, which is a combination of the on-cell TSP and the in-cell TSP structures, for example. Thus, the structure of the TSP is not limited to only one type. 
The plurality of first touch electrodes210and the plurality of second touch electrodes220may be alternately arranged relative to each other on a substrate201. The substrate201may be an encapsulated substrate arranged in the display panel110inFIG.1. The plurality of first touch electrodes210and the plurality of second touch electrodes220may be arranged in directions crossing each other. Each of the first touch electrodes210may be a transmission electrode and each of the second touch electrodes220may be a reception electrode. According to an exemplary embodiment, the TSP200may have a mutual-capacitance sensing structure, in which capacitance changes are measured at points where the plurality of touch sensor electrodes230cross each other. However, the structure of the TSP200is not limited thereto. According to another exemplary embodiment, the structure of the TSP200may be a self-capacitance sensing structure, in which the capacitance change is measured at each pixel for touch recognition via a single touch sensor electrode230. Thus, the structure of the TSP200is not limited to one type only. The plurality of first touch electrodes210and the plurality of second touch electrodes220may be arranged on the same layer on the substrate201. In another exemplary embodiment, the plurality of first touch electrodes210and the plurality of second touch electrodes220may be separated from each other on different layers. A pair of the first touch electrodes210arranged adjacent to each other on the substrate201may be electrically connected to each other via a first touch connector211. A pair of the second touch electrodes220arranged adjacent to each other on the substrate201may be electrically connected to each other via a second touch connector221. A pair of the second touch electrodes220may be connected to the second touch connector221arranged on a different layer via a contact hole241, to avoid interference with the first touch electrode210. An insulating layer250covering the plurality of first touch electrodes210and the plurality of second touch electrodes220may be arranged on the substrate201. When an input tool such as a user's finger or a pen comes close to or contacts the substrate201, the TSP200may detect a touch location by measuring a capacitance change between the first touch electrode210and the second touch electrode220. Research and development on a display device including a TSP with a plurality of micro-LEDs has been conducted. FIG.3is a plan view of an arrangement of a LED325and a touch sensor electrode326of a display device300according to an exemplary embodiment.FIG.4is a cross-sectional view of a single sub-pixel of the display device300ofFIG.3. According to an exemplary embodiment, each of sub-pixels of the display device300may include at least one thin film transistor (“TFT”) and at least one micro-LED. However, the TFT is not necessarily feasible only in the structure ofFIG.3, and its number and structure may be variously changeable. Referring toFIGS.3and4, the display device300may include a plurality of sub-pixel areas301. The sub-pixel areas301may be separated from each other in the X-axis and the Y-axis directions on a display substrate311. The sub-pixel areas301may be separated from each other by at least one layer of a bank323. The sub-pixel area301may include a first area302in which the LED325is disposed (e.g., placed) and a second area303in which a touch sensor electrode326is placed. 
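By way of illustration only, the mutual-capacitance sensing described above, in which a capacitance change is measured at each crossing of the first touch electrodes210and the second touch electrodes220, may be sketched as a generic scan loop; the baseline values, threshold, and function names are assumptions of this sketch and do not describe the drive scheme of the present disclosure.

```python
# Generic mutual-capacitance scan sketch (illustrative only): drive each first
# touch electrode (transmission) in turn, read every second touch electrode
# (reception), and report crossings whose capacitance dropped below a threshold.

def scan(measure, n_tx, n_rx):
    """Return a capacitance map indexed by (tx, rx) crossing."""
    return {(tx, rx): measure(tx, rx) for tx in range(n_tx) for rx in range(n_rx)}

def touched_crossings(cap_map, baseline, threshold):
    """Crossings where the mutual capacitance fell by more than `threshold`."""
    return [node for node, c in cap_map.items() if baseline[node] - c > threshold]

if __name__ == "__main__":
    N_TX, N_RX = 4, 3
    BASELINE = {(tx, rx): 2.0e-12 for tx in range(N_TX) for rx in range(N_RX)}

    def fake_measure(tx, rx):
        # Hypothetical measurement: a finger over crossing (2, 1) diverts field
        # lines and lowers the mutual capacitance there.
        return 1.4e-12 if (tx, rx) == (2, 1) else 2.0e-12

    cap_map = scan(fake_measure, N_TX, N_RX)
    print(touched_crossings(cap_map, BASELINE, threshold=0.3e-12))  # -> [(2, 1)]
```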
In an exemplary embodiment, the display substrate311may be any one of a rigid glass substrate, a flexible glass substrate, and a flexible polymer substrate, for example. In an exemplary embodiment, the display substrate311may be transparent, semi-transparent, or opaque, for example. A buffer layer312may be arranged on the display substrate311. The buffer layer312may totally cover a top surface of the display substrate311. The buffer layer312may include an inorganic layer or an organic layer. The buffer layer312may be a single layer or a multi-layer. The TFT may be arranged on the buffer layer311. The TFT may include a semiconductor activating layer313, a gate electrode318, a source electrode320, and a drain electrode321. According to an exemplary embodiment, the TFT may be a top gate type. However, the invention is not limited thereto, and the TFT may be other types such as a bottom gate type. The semiconductor activating layer313may be arranged on the buffer layer312. The semiconductor activating layer313may include a source area314and a drain area315which are positioned by doping n-type impurity ions or p-type impurity ions. An area between the source area314and the drain area315may be a channel area316in which impurities are not doped. In an exemplary embodiment, the semiconductor activating layer313may be an organic semiconductor, an inorganic semiconductor, or amorphous silicon, for example. In another exemplary embodiment, the semiconductor activating layer313may be an oxide semiconductor, for example. A gate insulating layer317may be arranged on the semiconductor activating layer313. The gate insulating layer317may include an inorganic layer. The gate insulating layer317may be a single layer or a multi-layer. The gate electrode318may be arranged on the gate insulating layer317. The gate electrode318may include a material with good conductivity. The gate electrode318may be a single layer or a multi-layer. An interlayer insulating layer319may be arranged on the gate electrode318. The interlayer insulating layer319may include an inorganic layer or an organic layer. The source electrode320and the drain electrode321may be arranged on the interlayer insulating layer319. In detail, a contact hole may be defined by removing a portion of the gate insulating layer317and a portion of the interlayer insulating layer319. Then, the source electrode320may be electrically connected to the source area314via the contact hole and the drain electrode321may be electrically connected to the drain area315via the contact hole. A planarization layer322may be arranged on the source electrode320and the drain electrode321. The planarization layer322may include an inorganic layer or an organic layer. At least one layer of the bank323separating the sub-pixel areas301may be arranged on the planarization layer322. The bank323may include an inorganic layer or an organic layer. The bank323may be transparent or opaque. The bank323may include a light absorbing material, a light reflecting material, or a light scattering material. The bank323may function as a light blocking layer having low light transmissivity. An opening324may be defined above the TFT by removing a portion of the bank323. A first electrode333may be arranged on the planarization layer322which is exposed by removing the portion of the bank323. The first electrode333may be electrically connected to the drain electrode321via the contact hole defined in the planarization layer322. 
The first electrode333may include a transparent electrode or a metal electrode. The first electrode333may have various patterns. In an exemplary embodiment the first electrode333may be patterned in an island shape, for example. The sub-pixel area301may include the first area302in which the LED325is placed and the second area303in which the touch sensor electrode326is placed. The LED325and the touch sensor electrode326may be arranged in the sub-pixel area301. In detail, each of the sub-pixel areas301may be positioned in the opening324surrounded by the bank323. The first area302in which the LED325is placed and the second area303in which the touch sensor electrode326is placed may be arranged adjacent to each other in each of the sub-pixel areas301. According to an exemplary embodiment, a size of the second area303may be larger than that of the first area302. The LED325may emit light in a certain wavelength band covering a range from ultraviolet (“UV”) rays to visible light. In an exemplary embodiment, the LED325may be a micro-LED, for example. According to an exemplary embodiment, the LED325may be a red-color LED, a green-color LED, a blue-color LED, a white-color LED, or a UV LED, for example. The LED325may include a first contact electrode328, a second contact electrode329, and a p-n diode327arranged between the first contact electrode328and the second contact electrode329. The p-n diode327may include a p-doped layer330on a bottom portion, an n-doped layer331on a top portion, and at least one of quantum well layer332arranged between the p-doped layer330and the n-doped layer331. In another exemplary embodiment, the doped layer331on the top portion may be the p-doped layer and the doped layer330on the bottom portion may be the n-doped layer. The first contact electrode328may be arranged on the p-doped layer330on the bottom portion. The first contact electrode328may be electrically connected to the first electrode333. The second contact electrode329may be arranged on the n-doped layer331on the top portion. The second contact electrode329may be electrically connected to a second electrode334. The second electrode334may include a transparent electrode or a metal electrode. The second electrode334may include various shapes of patterns. According to an exemplary embodiment, the second electrode334may be a common electrode. The touch sensor electrode326may be arranged in the second area303of each of the sub-pixel areas301. The touch sensor electrode326may include a metal layer. In a case of a transparent display device, the touch sensor electrode326may include a transparent conductive layer such as an indium tin oxide (“ITO”) layer. The touch sensor electrode326may be the same as the touch sensor electrode230inFIG.2. The touch sensor electrode326may extend to an adjacent sub-pixel area301. According to an exemplary embodiment, the touch sensor electrode326may be a portion of the first touch electrode210or the second touch electrode220inFIG.2. The touch sensor electrode326may be arranged on a same layer as that on which the first electrode333is disposed. The touch sensor electrode326may include the same material and be obtained via the same process as that of the first electrode333. In other exemplary embodiment, the touch sensor electrode326may be patterned with another metallic material. According to an exemplary embodiment, the touch sensor electrode326may be electrically connected to the first electrode333. 
In another exemplary embodiment, the touch sensor electrode326may apply a separate electrical signal. The touch sensor electrode326may be driven via a mutual-capacitance method or a self-capacitance method, depending on a connection method. In the case of the mutual-capacitance method, the touch sensor electrode326may be an electrode which senses a capacitance change between the plurality of touch electrodes such as the first touch electrode210and the second touch electrode220inFIG.2. The plurality of touch electrodes may be arranged on the same layer above the display substrate311. In another exemplary embodiment, the plurality of touch electrodes may be separated on different layers above the display substrate311. In the case of the self-capacitance method, the touch sensor electrode326may be an electrode which senses a capacitance change of a single touch electrode by using a single touch electrode. According to an exemplary embodiment, a ground wiring (not illustrated) may be further arranged in the second area303of the sub-pixel area301. A filling layer335may be filled in the opening324. The LED325and the touch sensor electrode326may be embedded in the filling layer335. In an exemplary embodiment, the filling layer335may include an organic material, but the invention is not limited thereto. Likewise, the LED325may be arranged in the first area302of the sub-pixel area301. The touch sensor electrode326may be arranged in the second area303of the sub-pixel area301. Below, like reference numbers in illustrated drawings above may denote like members performing like functions. Thus, duplicate descriptions will be omitted and only major particular portions of each exemplary embodiment will be selectively described. FIG.5is a plan view of a display device500according to another exemplary embodiment. Referring toFIG.5, the sub-pixel area301may include the first area302in which the LED325is placed and the second area303in which a touch sensor electrode526is placed. The filling layer335may be filled in the opening324from which a portion of the bank323is removed. The LED325may be embedded in the filling layer335. The touch sensor electrode526may be arranged on the filling layer335. The touch sensor electrode526and the second electrode334may be arranged on the same layer. The touch sensor electrode526may include the same material and be obtained via the same process as that of the second electrode334. The touch sensor electrode526may be driven via a mutual-capacitance method or a self-capacitance method, depending on a connection method. According to an exemplary embodiment, a ground wiring536may be further arranged in the second area303of the sub-pixel area301. The ground wiring536and the touch sensor electrode526may be arranged on the same layer. The ground wiring536may eliminate noise caused by pixel-driving. When the ground wiring536is arranged, a capacitance may be reduced, and subsequently, noise may be reduced. The ground wiring536may receive an electrical signal from a power line, through which a constant voltage flows. In another exemplary embodiment, a ground voltage may be applied to the ground wiring536. In another exemplary embodiment, the ground wiring536may be in a floating state. FIG.6is a plan view of an arrangement of the LED325and a touch sensor electrode626according to an exemplary embodiment. Referring toFIG.6, the LED325and the touch sensor electrode626of the display device600may be arranged in each of sub-pixel areas601. 
Each LED325may be arranged in respective sub-pixel areas601and the touch sensor electrode626may extend to the adjacent sub-pixel area601. Each of sub-pixel areas601may include a first area602in which the LED325is placed and a second area603in which a touch sensor electrode626is placed. An area of the touch sensor electrode626may be expanded by enlarging a gap g between a pair of LEDs325which are involved in different light-emitting and adjacent to each other in the X-axis direction. When the area of the touch sensor electrode626is increased, touch sensitivity may be enhanced. According to an exemplary embodiment, the LED325arranged in each sub-pixel may include at least one of the red-color LED, the green-color LED, the blue-color LED, the white-color LED, and the UV LED. In another exemplary embodiment, a color filter layer having color hue corresponding to respective LEDs325may be further arranged above the LED325. FIG.7is a plan view of an arrangement of an LED and a touch sensor electrode according to another exemplary embodiment. Referring toFIG.7, the display device700may include a plurality of sub-pixel areas701. The sub-pixel areas701may be separated from each other in X-axis and Y-axis directions. The sub-pixel area701may include a first area702in which the LED325is placed and a second area703in which a touch sensor electrode726is placed. The touch sensor electrode726may extend to an adjacent sub-pixel area701. According to an exemplary embodiment, the touch sensor electrode726may have a self-capacitance sensing structure. The touch sensor electrode726may be an electrode which senses a capacitance change in a single touch electrode. A touch sensor wiring727may be arranged on the touch sensor electrode726. In an exemplary embodiment a plurality of touch sensor wirings727may be arranged in the sub-pixel areas701which are arranged along the Y-axis direction, for example. The touch sensor wiring727may extend along the Y-axis direction and may be electrically connected to respective touch sensor electrodes726which are continuously arranged in the Y-axis direction. The touch sensor wiring727may be electrically connected to the touch sensor electrode726and an external device (not illustrated). A changed capacitance may be transferred from the single touch sensor electrode726to the external device via the touch sensor wiring727, and a sensor voltage generated by the external device may be transferred to the touch sensor electrode726. According to an exemplary embodiment, the touch sensor wiring727may be directly or indirectly connected to the touch sensor electrode726on the touch sensor electrode726. Referring toFIG.8, the sub-pixel area301of the display device800may include the first area302in which the LED325is placed and the second area303in which a touch sensor electrode826is placed. The filling layer335may be filled in the opening324which is defined by removing the portion of the bank323. The touch sensor electrode826and the first electrode333may be arranged on the same layer. A touch sensor wiring827may be arranged on the filling layer335. The touch sensor wiring827may be arranged in the second area303. The touch sensor wiring827may be electrically connected to the touch sensor electrode826via a contact hole828defined in the filling layer335. According to an exemplary embodiment, the touch sensor wiring827and at least one of the gate electrode318, the source electrode320, and the drain electrode321may be included on the same layer. 
However, the current exemplary embodiment is not limited to a single location. Referring toFIG.9, the sub-pixel area301of the display device900may include the first area302in which the LED325is placed and the second area303in which a touch sensor electrode926is placed. The filling layer335may be filled in the opening324which is defined by removing the portion of the bank323. The touch sensor electrode926may be arranged on the filling layer335. The touch sensor electrode926and the second electrode334may be arranged on the same layer. A touch sensor wiring927may be arranged on the touch sensor electrode926. The touch sensor wiring927may be directly connected to the touch sensor electrode926. FIG.10is a plan view of an arrangement of the LED325and a touch sensor electrode1026according to another exemplary embodiment. Referring toFIG.10, a display device1000may include a plurality of sub-pixel areas1001. The sub-pixel areas1001may be separated from each other in X-axis and the Y-axis directions. The sub-pixel area1001may include a first area1002in which the LED325is placed and a second area1003in which the touch sensor electrode1026is placed. The touch sensor electrode1026may extend to an adjacent sub-pixel area1001. According to an exemplary embodiment, the touch sensor electrode1026may have a mutual-capacitance sensing structure. The touch sensor electrode1026may be an electrode which senses the capacitance change generated between a first touch electrode1028and a second touch electrode1029. The first touch electrode1028and the second touch electrode1029may be arranged on the same layer. The first touch electrode1028and the second touch electrode1029may be alternately arranged in the X-axis direction. The pair of first touch electrodes1028, which are separated from each other with the second touch electrode1029therebetween in the X-axis direction, may be electrically connected to each other via a connecting wiring1027which is arranged on a different layer with respect to the first touch electrode1028. According to an exemplary embodiment, the connecting wiring1027may include the same material as that of at least one of the gate electrode, the source electrode, and the drain electrode. In another exemplary embodiment, the connecting wiring1027may be electrically connected to any one of the gate electrode, the source electrode, and the drain electrode. The second touch electrodes1029may be respectively connected to separate touch sensor wirings, or may be directly connected to a touch integrated circuit (“IC”) on one edge of the display device1000. FIG.11is a plan view of an arrangement of an LED and a touch sensor electrode according to another exemplary embodiment, andFIG.12is a cross-sectional view of a single sub-pixel inFIG.11. Referring toFIGS.11and12, the display device1100may include a plurality of sub-pixel areas1101. The sub-pixel areas1101may be separated from each other in the X-axis and the Y-axis directions. The sub-pixel area1101may include a first area1102in which the LED325is placed and a second area1103in which a touch sensor electrode1126is placed. The touch sensor electrode1126may extend to an adjacent sub-pixel area1101. According to an exemplary embodiment, the touch sensor electrode1126may have the mutual-capacitance sensing structure. The touch sensor electrode1126may include a first touch electrode1128and a second touch electrode1129. The first touch electrode1128and the second touch electrode1129may be alternately arranged in the X-axis direction. 
Unlike as illustrated inFIG.10, the first touch electrode1128and the second touch electrode1129may be separated from each other on different layers. The first touch electrode1128, which may be a metal electrode, and a single electrode arranged on the TFT may be a metal electrode arranged on the same layer. In an exemplary embodiment, the first touch electrode1128and the source electrode320or the drain electrode321may be arranged on the same layer, for example. The first touch electrode1128may include the same material in the same process as that of the source electrode320or the drain electrode321. In another exemplary embodiment, the first touch electrode1128, which may be the metal electrode, and the gate electrode318may be arranged on the same layer. The second touch electrode1129and the first electrode333may be arranged on the same layer. The second touch electrode1129may include the same material in the same process as that of the first electrode333. In another exemplary embodiment, the second touch electrode1129and the second electrode334may be arranged on the same layer and may include the same material in the same process. In another exemplary embodiment, the first touch electrode1128may include a separate metal layer, for example, a transparent conductive layer, and the second touch electrode1129and at least one of the gate electrode318, the source electrode320, the drain electrode321, the first electrode333, and the second electrode334may be arranged on the same layer. FIG.13is a cross-sectional view of a display device1300according to another exemplary embodiment. Referring toFIG.13, a first function layer1301encapsulating the sub-pixel area301may be arranged on the display substrate311. According to an exemplary embodiment, the first function layer1301may be an encapsulating layer, but it is not limited thereto. A second function layer1303may be arranged above the first function layer1301with a medium layer1302therebetween. The second function layer1303may be a window cover, but it is not limited thereto. The medium layer1302may receive pressure caused by a user's contact and may be a material with a cushion function. According to an exemplary embodiment, the medium layer1302may be an air layer, but it is not limited thereto. A spacer1304maintaining a cell gap may be arranged between the first function layer1301and the second function layer1303. A sealant1305may be applied to edges where the first function layer1301and the second function layer1303may face. The sub-pixel area301may include the first area302in which the LED325is placed and the second area303in which the touch sensor electrode326is placed. The touch sensor electrode326and the first electrode333may be arranged on the same layer. In another exemplary embodiment, the touch sensor electrode326and the second electrode334may be arranged on the same layer. The touch sensor electrode326may correspond to an electrode which senses the capacitance change in the X-axis and the Y-axis crossing the X-axis. A force sensing electrode1306, which forms capacitance with the touch sensor electrode326in a Z-axis perpendicular to the X-axis and the Y-axis and senses pressure in accordance with the capacitance change, may be further arranged on the display substrate311. According to an exemplary embodiment, the force sensing electrode1306may be arranged on one surface of the second function layer1303facing the first function layer1301. 
Since capacitance is generated between the touch sensor electrode326and the force sensing electrode1306, sensing the capacitance change in the Z-axis may be possible. In detail, when an input tool is pressed, an applied force may be sensed depending on the capacitance change between the touch sensor electrode326and the force sensor electrode1306. The force sensor electrode1306may patterned with a particular pattern on the second function layer1303. In detail, the force sensor electrode1306may be patterned with a plurality of patterns having different areas from each other, to sense each location in accordance with the force applied to the display device1300. In another exemplary embodiment, the force sensor electrode1306may be entirely disposed on the second function layer1303, to sense the force only. In an exemplary embodiment, the force sensor electrode1306may include conductive polymer materials such as poly 3,4-ethylenedioxy thiophene (“PEDOT”), polyacetylene, and polypyrrole. Since the force sensor electrode1306has resistance per unit area more than 100 times larger than that of the transparent conductive layer such as an ITO layer, while maintaining electrical conductivity, the force sensor electrode1306may be applicable to an electrode of the TSP. A connector1307which is electrically connected to the force sensor electrode1306may be arranged between the first function layer1301and the second function layer1303. The connector1307may be connected to a pad on the display substrate311via the contact hole1308defined in the first function layer1301and may transfer a force sensor signal to the external device. Likewise, the structure, in which locations in the X-axis and the Y-axis are sensed by using the touch sensor electrode326and a force in the Z-axis is sensed by using the force sensor electrode1306, may be applied to various display devices. Referring toFIG.14, a display device1400may be an organic light-emitting display device. In detail, a display unit1402including the TFT and an organic LED may be arranged on a display substrate1401, a touch sensor electrode1403may be arranged on the display unit1402, and a first function layer1404corresponding to encapsulation may be arranged on the touch sensor electrode1403. A second function layer1406, which corresponds to the window cover, may be arranged above the first function layer1404with a medium layer1405therebetween. A force sensor electrode1407may be arranged on one surface of the second function layer1406facing the first function layer1404. The touch sensor electrode1403may be connected to a pad on the display substrate1401via a contact hole1408defined in the display unit1402and may transfer the sensor signal to the external device. In addition, the force sensor electrode1407may be connected to a connector1409and may transfer the force sensor signal to the external device. The structure described above may sense the force applied in accordance with the capacitance change between the touch sensor electrode1403and the force sensor electrode1407. Referring toFIG.15, a display device1500may be an organic light-emitting display device. In detail, a display unit1502including the TFT and the organic LED may be arranged on the display substrate1501. Unlike as illustrated inFIG.14, a first function layer1504corresponding to encapsulation may be arranged on the display unit1502. A touch sensor electrode1503may be arranged on the first function layer1504. 
A second function layer1506, which corresponds to the window cover, may be arranged above the touch sensor electrode1503with a medium layer1505therebetween. A force sensor electrode1507may be arranged on one surface of the second function layer1506. The force sensor electrode1507may be connected to a connector1509and may transfer the force sensor signal to the external device. The structure described above may sense the force applied in accordance with the capacitance change between the touch sensor electrode1503and the force sensor electrode1507. Referring toFIG.16, a display device1600may be a liquid crystal display device. In detail, a display unit1602including a crystal display element may be arranged on the display substrate1601, and a touch sensor electrode1603may be arranged on the display unit1602. A function layer1604, which corresponds to a color filter substrate, may be arranged on the touch sensor electrode1603, and a force sensor electrode1605may be arranged on the function layer1604. The force sensor electrode1605may be connected to a connector1606and may transfer the force sensor signal to the external device. The structure described above may sense the applied force via the capacitance change between the touch sensor electrode1603and the force sensor electrode1605. Referring toFIG.17, a display device1700may be a liquid crystal display device. In detail, a display unit1702including a crystal display element may be arranged on a display substrate1701. Unlike as illustrated inFIG.16, a first function layer1704, which corresponds to a color filter substrate, may be arranged on the display unit1702and a touch sensor electrode1703may be arranged on the first function layer1704. A second function layer1708such as the window cover may be arranged above the touch sensor electrode1703with a medium layer1707therebetween. A force sensor electrode1705may be arranged on one surface of the second function layer1708. The force sensor electrode1705may be connected to a connector1706and may transfer the force sensor signal to the external device. The structure described above may sense the applied force via the capacitance change between the touch sensor electrode1703and the force sensor electrode1705. Referring toFIG.18, a display device1800may include a display substrate1801. A display unit1802may be arranged on the display substrate1801. The display unit1802may include a micro-LED. A touch sensor electrode1803may be arranged on the display unit1802. In another exemplary embodiment, the display unit1802may include a LCD element, or an organic LED. In another exemplary embodiment, a function layer such as a color filter substrate or an encapsulation layer may be arranged on the display1802. A function layer1809, which corresponds to the window cover, may be arranged above the touch sensor electrode1803with a medium layer1810therebetween. A force sensor electrode1805may be arranged on a function layer1809facing the touch sensor electrode1803. According to an exemplary embodiment, the medium layer1810may include an air layer. A refractive index matching layer (“RIML”)1806may be arranged on at least one surface of the force sensor electrode1805to improve reflective index affected by the air layer. The RIML1806may include a first RIML1807arranged on one surface of the force sensor electrode1805facing the touch sensor electrode1803and a second RIML1808arranged on the other surface of the force sensor electrode1805facing the function layer1809. 
In an exemplary embodiment, the RIML1806may include silicon oxide (SiO2) or silicon nitride (SiNx), for example. The refractive index of the first RIML1807and that of the second RIML1808may be less than that of the force sensor electrode1805. The RIML1806with a low refractive index and the force sensor electrode1805with a high refractive index are alternately arranged on the medium layer1810. Thus, the refractive index of the display device1800may be improved and poor show-through of the touch sensor electrode1803may be improved. FIG.19is a cross-sectional view of a single sub-pixel according to another exemplary embodiment. Referring toFIG.19, the display device1900may include the plurality of sub-pixel areas301. The sub-pixel area301may include the first area302in which the LED325is placed and the second area303in which the touch sensor electrode1926is placed. The touch sensor electrode1926may extend to an adjacent sub-pixel area301. According to an exemplary embodiment, the touch sensor electrode1926may include a first touch electrode1927and a second touch electrode1928. The first touch electrode1927and the first electrode333may be arranged on the same layer. In another exemplary embodiment, the first touch electrode1927and at least one of the gate electrode318, the source electrode320, and the drain electrode321, which are arranged on the TFT, may be arranged on the same layer. A first bank1903may be arranged on the circumference of the sub-pixel area301. The LED325and the first touch electrode1927may be arranged in the opening324with a portion of the first bank1903removed therefrom. A second bank1904may be further arranged on the first bank1903. The second bank1904may embed the LED325and the first touch electrode1927. The second bank1904may planarize the top surface of the LED325. The second electrode334may be electrically connected to the LED325on the second bank1904. The second touch electrode1928and the second electrode333may be arranged on the same layer. The touch sensor electrode1926may correspond to an electrode which senses the capacitance change between the first touch electrode1927and the second touch electrode1928. A color filter1901may be arranged over the LED325. The color filter1901may transform light emitted from the LED325or increase color purity. A black matrix1902may be arranged on the circumference of the color filter1901. The black matrix1902may surround the circumference of the LED325. In another exemplary embodiment, the black matrix1902may be arranged between adjacent sub-pixel areas301. Since the color filter1901and the black matrix1902are arranged, a polarization plate is not needed and reflection of external light may be improved. An encapsulation layer1905may be arranged on the outermost circumference of the display substrate311to protect each element arranged on the display substrate311. The encapsulation layer1905may include a lamination of at least one of inorganic materials and at least one of organic materials. In another exemplary embodiment, the encapsulation layer1905may include an inorganic material. FIG.20is a cross-sectional view of a single sub-pixel according to another exemplary embodiment. Referring toFIG.20, the display device2000may include the plurality of sub-pixel areas301. The sub-pixel area301may include the first area302in which the LED325is placed and the second area303in which a touch sensor electrode2026is placed. The touch sensor electrode2026may extend to an adjacent sub-pixel area301. 
According to an exemplary embodiment, the touch sensor electrode2026may include a first touch electrode2027and a second touch electrode2028. The first touch electrode2027and the first electrode333may be arranged on the same layer. In another exemplary embodiment, the first touch electrode2027and at least one of the gate electrode318, the source electrode320, and the drain electrode321, which are provided in the TFT, may be arranged on the same layer. A first bank2003may be arranged on the circumference of the sub-pixel area301. The first bank2003may extend to an adjacent sub-pixel area301. The LED325and the first touch electrode2027may be arranged in the opening324which is provided by removing the portion of the first bank2003. A second bank2004may be further arranged on the first bank2003. The second bank2004may embed the LED325and the first touch electrode2027. The second bank2004may be independently arranged on respective sub-pixel areas301such that the LEDs325are embedded. According to an exemplary embodiment, the second bank2004may extend from the first area302in which the LED325is placed to the second area302in which the touch sensor electrode2026is placed. In another exemplary embodiment, the second bank2004may be arranged on only the first area302including the LED325. According to an exemplary embodiment, the second bank2004may include a scattering material or a color conversion material. The second electrode334may be electrically connected to the LED325on the second bank2004. A second touch electrode2028and the second electrode334may be arranged on the same layer. The second touch electrode2028may correspond to an electrode which senses the capacitance change between the first touch electrode2027and the second touch electrode2028. A color filter2001may be arranged above the LED325. The color filter2001may transform light emitted from the LED325or increase color purity. A black matrix2002may be arranged on the circumference of the color filter2001. According to an exemplary embodiment, the black matrix2002may surround the circumference of the LED325. In another exemplary embodiment, the black matrix2002may be arranged between adjacent sub-pixel areas301. An encapsulation layer2005may be arranged on the outermost circumference of the display substrate311to protect each element arranged on the display substrate311. The encapsulation layer2005may include a lamination of at least one of inorganic materials and at least one of organic materials. It should be understood that the exemplary embodiments described herein should be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or exemplary embodiments within each exemplary embodiment should typically be considered as available for other similar features or exemplary embodiments in other exemplary embodiments. While one or more exemplary embodiments have been described with reference to the figures, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope as defined by the following claims. | 44,967 |
11861093 | DETAILED DESCRIPTION The following description sets forth numerous specific details in order to provide a more thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention can be practiced without one or more of these specific details. In other instances, well-known technical features have not been described in order to avoid unnecessary obscuring of the invention. It is to be understood that the invention may be embodied in many different forms and should not be construed as being limited to the embodiments set forth below. Rather, these embodiments are provided so that this disclosure is thorough and conveys the scope of the invention to those skilled in the art. In the drawings, like reference numerals refer to like elements throughout. It will be understood that when an element is referred to as being “connected to” or “coupled to” another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. In contrast, when an element is referred to as being “directly connected to” another element, there are no intervening elements. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the term “comprising” specifies the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of the associated listed items. As described in the Background section, a conventional charge source usually provides an amount of charge according to Qdc=Vcom*Cin. For example, in organic light-emitting diode (OLED) touch panel applications, touch information is obtained by detecting a variation of a capacitance Cfingerof a finger or another object that is approaching or has touched a screen of an OLED touch panel (which is a variable ranging from tens of fF to several pF). The detected capacitance Cfingeris often converted using an analog-to-digital converter (ADC) and adds to a panel capacitance Cpannel(which is a fixed value in the range of from several pF to hundreds of pF). Currently, in order to prevent detection of both the finger capacitance Cfingerand the panel capacitance Cpannel, the sum of which may exceeds an input range of the ADC, reference subtraction is usually adopted to enable an integrator11in the ADC to subtract an amount of charge Qdc induced by the panel capacitance Cpannel. This can be accomplished by transferring (including injecting or drawing) the amount of charge Qdc into or out from an input electrode of the OLED touch panel by an associated charge source circuit10(here, acting as a charge collector). Specifically, referring toFIG.1, a conventional analog-to-digital converter for use in an OLED touch panel may include a charge source circuit10and an integrator11. The charge source circuit10is typically implemented as an array of n+1 capacitors configured for reference subtraction (DC subtraction) through drawing or injecting an amount of charge Qdc (output from the charge source circuit10) from or to sensed charge on an input electrode (not shown) of the OLED touch panel. 
For example, when n=6, the charge source circuit10may include capacitors C0-C6(making up the capacitor array) and switches SB0-SB6, SA0-SA6, S0-S1. One end of each of the capacitors C0-C6is coupled to one end of the switch S0, and they are coupled together to an inverting input of the integrator A0, thereby enabling the provision of the amount of charge Qdc for reference subtraction to the integrator A0. The other end of the capacitor C0is coupled to one end of each of the switches SA0and SB0. The other end of the capacitor C1is coupled to one end of each of the switches SA1and SB1. The other end of the capacitor C2is coupled to one end of each of the switches SA2and SB2. The other end of the capacitor C3is coupled to one end of each of the switches SA3and SB3. The other end of the capacitor C4is coupled to one end of each of the switches SA4and SB4. The other end of the capacitor C5is coupled to one end of each of the switches SA5and SB5. The other end of the capacitor C6is coupled to one end of each of the switches SA6and SB6. The other ends of all the switches SA0-SA6and S0are coupled to a first voltage line Vcom, and the other ends of all the switches SB0-SB6are coupled to a second voltage line. One end of the second voltage line is coupled to a fixed contact of the switch S1, and a movable contact of the switch S1is selectively coupled to a second voltage of 2*Vcom or grounded. Control terminals of the switches SB0-SB6are coupled to respective incoming switching signals V_S<0>-V_S<6>, and control terminals of the switches SA0-SA6are coupled to respective incoming signals V_L<0>-V_L<6>. A control terminal of the switch S0is coupled to an incoming clock signal CLK. Capacitances of the capacitors C0-C6are binary-weighted. For example, the capacitance of the capacitor C0is 1*150 fF, the capacitance of the capacitor C1is 2*150 fF, the capacitance of the capacitor C2is 4*150 fF, the capacitance of the capacitor C3is 8*150 fF, the capacitance of the capacitor C4is 16*150 fF, the capacitance of the capacitor C5is 32*150 fF, and the capacitance of the capacitor C6is 64*150 fF. In other words, the capacitance of C1is twice the capacitance of C0, the capacitance of C2is twice the capacitance of C1, the capacitance of C3is twice the capacitance of C2, the capacitance of C4is twice the capacitance of C3, the capacitance of C5is twice the capacitance of C4, and the capacitance of C6is twice the capacitance of C5. In this circuit, a digital signal output from the analog-to-digital converter (ADC) represents a value Data<6:0> corresponding to the 7-bit switching signals V_S<6:0> and V_L<6:0> of the capacitor array for controlling a value of Cin. As the capacitances of the capacitors C0-C6are binary-weighted, i.e., (64, 32, 16, 8, 4, 2, 1)*150 fF, if the values of the switching signals V_S<6:0> and V_L<6:0> are taken from the range of 0000000 to 1111111, an overall capacitance Cin can be expressed as N*150 fF, where N is a natural number in the range of 0 to 127. With this design, Vcom and Cin are required to provide the desired resolution which enables the provision of different amounts of charge Qdc for reference subtraction (e.g., from hundreds of pF to tens of fF) represented by various numbers of bits. In order to provide, based on Vcom, precise voltages represented by different numbers of bits, the use of a voltage division technique and a drive operational amplifier is necessary. This, however, will lead to increased circuit complexity and cost.
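The arithmetic of the conventional capacitor-array charge source discussed above can be written out explicitly: the 7-bit code selects N of the 150 fF unit capacitors, and the subtracted charge is Qdc=Vcom*Cin=Vcom*N*150 fF. The short Python sketch below is only a numerical restatement of that relation; the function name and the example values of Vcom and the code are hypothetical.

# Minimal sketch of the conventional reference-subtraction charge Qdc = Vcom * Cin,
# where Cin = N * 150 fF is selected by a 7-bit capacitor-array code (N = 0..127).
UNIT_CAP_F = 150e-15  # 150 fF unit capacitor (C0)

def qdc_capacitor_array(code_bits, vcom):
    # code_bits: 7 bits, most significant first (C6 .. C0); returns charge in coulombs
    n = 0
    for bit in code_bits:
        n = (n << 1) | (bit & 1)
    return vcom * n * UNIT_CAP_F

# Hypothetical example: Vcom = 1.0 V, code 1111111 -> N = 127, Qdc = 127 * 150 fF * 1 V
print(qdc_capacitor_array([1, 1, 1, 1, 1, 1, 1], 1.0))  # -> 1.905e-11 C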
On the other hand, if such high resolution of many bits is to be provided by Cin, the addition of larger capacitances would be necessary. Since adding a capacitance will lead to a proportional increase in layout area, this will obviously lead to significant layout area increases. In view of this, the present invention provides a novel charge source circuit capable of providing an amount of charge in resolution of 13 or more bits with a very small area and a very small voltage. Moreover, the amount of charge can be adjusted within a range from tens to hundreds of pF, without using too many capacitors which could lead to the problem of an increased circuit layout area. By using this charge source circuit, an analog-to-digital converter can have low power consumption, a small circuit layout area, a wide input range, a high operating speed and high precision. When this charge source circuit is used in an OLED touch panel for reference subtraction (DC subtraction) through drawing or injecting an amount of charge from or to sensed charge on an input electrode of the OLED touch panel, better reference subtraction (DC subtraction) can be achieved for the OLED touch panel. The present invention will be described in greater detail below with reference to particular embodiments and the accompanying drawings. From the following description, advantages and features of the invention will become more apparent. Note that the figures are provided in a very simplified form not necessarily drawn to exact scale and for the only purpose of facilitating easy and clear description of the embodiments. Referring toFIG.2, one embodiment of the present invention provides a charge source circuit10including a reference current generation block101, a current minor block102and a charge output block103, which are connected sequentially in this order. The reference current generation block101is configured to provide a reference current, and the current mirror block102is configured to mirror the current from the reference current generation block101. The charge output block103is configured to convert the mirrored current from the current mirror block102into a corresponding amount of charge Qdc and output it. The output amount of charge can be expressed as Qdc=Idc*t, where Qdc is the amount of charge output from the charge output block, Idc is a current provided by the charge output block, and t is a period of time in which the charge output block provides the current. The period t is divided into at least two consecutive intervals. Lengths of these time intervals form a sequence with ratios equal to a power of 2, thus enabling continuous adjustability of the amount of charge Qdc. In this embodiment, the charge output block103provides binary-weighted currents Idc in the respective time intervals. Moreover, the period t is divided into n time intervals, depending on the number of bits of a desired range of adjustment of the amount of charge Qdc and a maximum possible number of bits of the current in each time interval (i.e., a maximum possible range of adjustment of the current in each time interval). Specifically, it is assumed that the maximum possible range of adjustment of the current Idc in each time interval provided by the charge output block corresponds to k bits (i.e., the maximum possible number of bits of the current in each time interval is k) and that the desired range of adjustment of the amount of charge Qdc corresponds to M bits, where M≥k+1. 
If M is divisible by k, then the number of time intervals n of the period t satisfies n=M/k. In this way, a fixed number of time intervals (n) are set for the desired M-bit amount of charge Qdc, and the current Idc in each time interval is adjustable within a k-bit range. Thus, an amount of charge Qdc in each time interval can also be adjusted within a k-bit range. If M is indivisible by k, then the number of time intervals n of the period t satisfies n=int(M/k)+1. In this way, a fixed number of time intervals (n) are also set for the desired M-bit amount of charge Qdc. In this case, an amount of charge Qdc in each of the first n-1 time intervals is adjustable within a k-bit range, while an amount of charge Qdc in the last time interval can be adjusted within an (M-(n-1)*k)-bit range. Here, int( ) denotes the floor function, which rounds a number down to the nearest integer rather than rounding it to the nearest integer. It is to be noted that, in order to achieve continuous adjustability of the amount of charge Qdc, between any two consecutive time intervals of the n time intervals, the difference between a minimum possible amount of charge Qdc in the earlier time interval and a maximum possible amount of charge Qdc in the later time interval is equal to a minimum resolvable amount of charge Qdc. In addition, in order to shorten the total period t, the n time intervals form a chronological sequence and their lengths form a progression with a common ratio equal to a negative power of 2. Between any two consecutive time intervals of the n time intervals, the length of the earlier time interval is p times (p=2^q, i.e., the q-th power of 2) the length of the later time interval. Moreover, a minimum possible value of the current provided in the earlier time interval is greater than a maximum possible value of the current Idc provided in the later time interval. Thus, a more significant bit of the amount of charge Qdc can be adjusted in the earlier time interval, and a less significant bit of the amount of charge Qdc can be adjusted in the later time interval. In this embodiment, for the purpose of circuit layout area savings, with additional reference toFIG.5, both the current mirror block102and the charge output block103are circuits not containing a capacitor. Instead, they are constructed essentially from transistors such as MOS transistors. The charge output block103includes k current source branches, where k≥2. In order to enable bidirectional output of charge Qdc from the charge output block103, each of the current source branches includes an upper current source, an upper control switch SBW, a lower control switch SAW and a lower current source, which are connected sequentially in this order. The upper current sources and the upper control switches control current drawing of the k current source branches, while the lower current sources and the lower control switches control current injection of the k current source branches. Control terminals of the upper and lower current sources are coupled to the current mirror block102. Upper terminals of the upper current sources are coupled to an operating voltage AVDD, and lower terminals of them are grounded AVSS. The charge output block103controls an output of each of the k current source branches through turning on or off the corresponding upper control switch SBW or lower control switch SAW, thereby enabling binary weighting of output currents Idc in different time intervals.
The upper current sources are input nodes where the current Idc flows into an integrator A2in the next stage. Under the control of a k-bit switching signal SBW<k:1>, a current Idc=Nb*I0 can be produced, where Nb is a k-bit binary value ranging from 0 to (2^k-1), and I0 is the mirrored current from the current mirror block102(i.e., a unit current). The lower current sources are input nodes where the current Idc flows out of the integrator A2in the next stage. Under the control of a k-bit switching signal SAW<k:1>, a current Idc=Na*I0 can be produced, where Na is also a k-bit binary value ranging from 0 to (2^k-1). As an example, referring toFIG.3, if a maximum possible number of bits of the current Idc provided in each time interval is k and the number of bits M of the desired amount of charge Qdc satisfies M=3k, then the period t can be divided into 3 consecutive time intervals (i.e., n=3) for the charge output block103: Interval1, Interval2and Interval3. In this case, in each of the time intervals, the charge output block103outputs an amount of charge Qdc=Idc*t, and Idc can be adjusted within a k-bit range in each time interval of Intervals1-3. Specifically, the upper control switches SBW are controlled by the k-bit switching signal SBW<k:1>, and the lower control switches SAW are controlled by the k-bit switching signal SAW<k:1>. When the length t3 of Interval3is configured to equal that of a unit interval of the system, i.e., the reciprocal of the system frequency, 1/f, it can be used to control the least significant k bits of the amount of charge Qdc. The length of Interval2can be configured to be 2^k times that of Interval3. Thus, when k=6, it is equal to 64/f, and this interval can be used to control the intermediate significant k bits of the amount of charge Qdc. The length of Interval1can be configured to be 2^k times that of Interval2. Thus, when k=6, it is equal to 64*64/f, and this interval can be used to control the most significant k bits of the amount of charge Qdc. In this way, the amount of charge Qdc can be controlled with a total of M=3k bits. Moreover, the difference between a minimum possible amount of charge Qdc provided in Interval1(as a minimum possible value of Idc in this interval is I0, the minimum possible amount of charge Qdc provided in Interval1is I0*t1) and a maximum possible amount of charge Qdc provided in Interval2(e.g., if k=6, then a maximum possible value of Idc in this interval is (2^5+2^4+2^3+2^2+2^1+2^0)*I0=63*I0, and the maximum possible amount of charge Qdc provided in Interval2is 63*I0*t2) is equal to a minimum resolvable amount of charge Qdc for Interval2, i.e., I0*t2. Thus, when the length of Interval1is configured to be 64 times that of Interval2, i.e., t1=64*t2, continuous adjustability of the amount of charge Qdc can be achieved throughout the period of time consisting of Intervals1and2. Further, the difference between a minimum possible amount of charge Qdc provided in Interval2(as a minimum possible value of Idc in this interval is I0, the minimum possible amount of charge Qdc provided in Interval2is I0*t2) and a maximum possible amount of charge Qdc provided in Interval3(e.g., if k=6, then similarly, a maximum possible value of Idc in this interval is 63*I0, and the maximum possible amount of charge Qdc provided in Interval3is 63*I0*t3) is equal to a minimum resolvable amount of charge Qdc for Interval3, i.e., I0*t3.
Thus, if k=6, when the length of Interval2is configured to be 64 times that of Interval3, i.e., t2=64*t3, continuous adjustability of the amount of charge Qdc can be achieved throughout the period of time consisting of Intervals2and3. As noted above, t3 is configured to be equal to the system's unit interval. As an example, referring toFIG.4, if a maximum possible number of bits of the current Idc provided in each time interval is k and the number of bits M of the desired amount of charge Qdc satisfies 2k<M<3k, then the period t can be divided into 3 consecutive time intervals (i.e., n=3) for the charge output block103: Interval1, Interval2and Interval3. In this case, in each of the time intervals, the charge output block103outputs an amount of charge Qdc=Idc*t. Moreover, the current Idc can be adjusted in a k-bit range in both Intervals1and2but in an (M-2k)-bit range in Interval3. When the length t3 of Interval3is configured to equal that of the unit interval of the system, i.e., the reciprocal of the system frequency, 1/f, it can be used to control the least significant (M-2k) bits of the amount of charge Qdc. The length of Interval2can be configured to be 2^(M-2k) times that of Interval3. Thus, when k=6 and M=14, for example, it is equal to 4/f, and this interval can be used to control the intermediate significant k bits of the amount of charge Qdc. The length of Interval1can be configured to be 2^k times that of Interval2. Thus, in the same example, it is equal to 64*4/f, and this interval can be used to control the most significant k bits of the amount of charge Qdc. In this way, the amount of charge Qdc can be controlled with a total of M bits. Additionally, the difference between a minimum possible amount of charge Qdc provided in Interval1and a maximum possible amount of charge Qdc provided in Interval2is equal to a minimum resolvable amount of charge Qdc for Interval2, i.e., I0*t2. Further, the difference between a minimum possible amount of charge Qdc provided in Interval2and a maximum possible amount of charge Qdc provided in Interval3is equal to a minimum resolvable amount of charge Qdc for Interval3, i.e., I0*t3. In this way, continuous adjustability of the amount of charge Qdc can be achieved throughout the period of time consisting of Intervals1through3. In some applications, only unidirectional output of charge Qdc from the charge output block103may be needed (e.g., to be drawn or injected from or to sensed charge on the input electrode of the OLED touch panel for reference subtraction). When it is only needed to draw the amount of charge from the sensed charge, only the lower current sources and the lower control switches will be necessary, and the upper current sources and the upper control switches may be omitted. On the contrary, when it is only needed to inject the amount of charge to the sensed charge, only the upper current sources and the upper control switches will be necessary, and the lower current sources and the lower control switches may be omitted. It is to be noted that, in this embodiment, the lengths of the time intervals (e.g., Interval1, Interval2and Interval3, as described above) are fixed and not involved in the adjustment of the amount of charge Qdc. They may be determined in advance according to the desired range and resolution of adjustment of the amount of charge Qdc. Therefore, in this embodiment, different amounts of charge Qdc represented by various numbers of bits can be provided through adjusting the values of Na and Nb in the various time intervals.
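As a worked illustration of the timing scheme described above, the following Python sketch computes the number of time intervals n from M and k, derives the interval lengths (with the last interval equal to the system unit interval 1/f and each earlier interval longer by a factor of two raised to the bit width of the interval that follows it), and sums Idc*t over the intervals to obtain the total amount of charge Qdc for a given set of per-interval codes Na or Nb. The function names and the example values of M, k, f, I0 and the codes are hypothetical and merely reproduce the arithmetic of this embodiment.

# Minimal sketch of the interval-based charge source: Qdc = sum over intervals of Idc_i * t_i.
def interval_bit_widths(M, k):
    # bits of Qdc controlled in each interval, earliest interval first
    n = M // k if M % k == 0 else M // k + 1
    return [k] * (n - 1) + [M - (n - 1) * k]

def interval_lengths(widths, f):
    # interval lengths in seconds, earliest first; the last interval equals 1/f
    lengths = [1.0 / f]
    for w in reversed(widths[1:]):      # each earlier interval is 2**w_later times longer
        lengths.insert(0, lengths[0] * (2 ** w))
    return lengths

def qdc_total(codes, widths, f, I0):
    # codes: per-interval values Na or Nb, earliest first, each in 0 .. 2**width - 1
    return sum(c * I0 * t for c, t in zip(codes, interval_lengths(widths, f)))

# Hypothetical example: k = 6, M = 18, f = 1 MHz, I0 = 1 uA, per-interval codes (5, 20, 63)
w = interval_bit_widths(18, 6)               # -> [6, 6, 6]
print(interval_lengths(w, 1e6))              # -> [0.004096, 6.4e-05, 1e-06] seconds
print(qdc_total((5, 20, 63), w, 1e6, 1e-6))  # -> (I0/f) * (5*4096 + 20*64 + 63) = 2.1823e-08 C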
Moreover, the number of time intervals and the lengths thereof are so configured that an amount of charge Qdc is first output over a longer period of time determined by the most significant bits and another amount of charge Qdc is then output over a shorter period of time determined by the remaining less significant bits. In this way, a desired total amount of charge Qdc can be output within an overall short period of time at an increased output speed. In particular, when the charge source circuit of this embodiment is used in an analog-to-digital converter, the analog-to-digital converter can have a faster processing speed, and when it is used in an OLED touch panel, the OLED touch panel can be capable of faster touch detection. Additionally, it is to be understood that the time intervals that form a sequence with ratios equal to a power of 2 and the binary-weighted currents are merely examples of the present invention, which do not limit the scope of the invention in any way. In other embodiments of the present invention, sequences with ratios that are not powers of 2 and non-binary-weighted currents Idc are also possible. Further, the reference current generation block101, the current mirror block102and the charge output block103may be implemented as any suitable circuit designs, as long as they can perform the functions of the charge source circuit described herein. As an example, the reference current generation block101may be a constant-current source capable of outputting a constant current. As another example, referring toFIG.5, the reference current generation block101may include an operational amplifier A1, a first switching transistor PMb, a second switching transistor PMa and a resistor R. Both the first switching transistor PMb and the second switching transistor PMa may be PMOS transistors. A source of the first switching transistor PMb is coupled to the operating voltage AVDD, and a drain of the first switching transistor PMb is coupled to a source of the second switching transistor PMa. A drain of the second switching transistor PMa is coupled to one end of the resistor R and a non-inverting (+) input of the operational amplifier A1, and the other end of the resistor R is grounded. An inverting (−) input of the operational amplifier A1receives a reference voltage Vref, and a gate of the first switching transistor PMb is coupled to an output of the operational amplifier A1. A first bias voltage signal Vbias1is applied to a gate of the second switching transistor PMa. When the first switching transistor PMb and the second switching transistor PMa are both turned on, the reference current generation block101can produce a constant current I0=Vref/R through the resistor R. The current mirror block102may include an upper primary mirror transistor PMb1, an upper cascaded transistor PMa1, a lower cascaded transistor NMa1and a lower primary mirror transistor NMb1. A source of the upper primary mirror transistor PMb1is coupled to the operating voltage AVDD, and a drain of the upper primary mirror transistor PMb1is coupled to a source of the upper cascaded transistor PMa1. A drain of the upper cascaded transistor PMa1is coupled to a drain of the lower cascaded transistor NMa1. A source of the lower cascaded transistor NMa1is coupled to a drain of the lower primary mirror transistor NMb1, and a source of the lower primary mirror transistor NMb1is grounded AVSS.
A gate of the upper primary mirror transistor PMb1is coupled to the gate of the first switching transistor PMb and the output of the operational amplifier A1, and the first bias voltage signal Vbias1is provided at the gate of the upper cascaded transistor PMa1. A second bias voltage signal Vbias2is provided at a gate of the lower cascaded transistor NMa1, and a gate of the lower primary mirror transistor NMb1is coupled to a node where the upper cascaded transistor PMa1is coupled to the lower cascaded transistor NMa1and to the current source branches in the charge output block103(e.g., to gates of second lower transistors therein). The upper primary mirror transistor PMb1has a size defined as M=1 and can mirror the current I0 through the first switching transistor PMb at a ratio of 1:1. The charge output block103can bidirectionally output an amount of charge Qdc and may include 6 current source branches, i.e., k=6. Each current source branch may include an upper current source, an upper control switch SBW, a lower control switch SAW and a lower current source, which are connected sequentially in this order. Moreover, the upper current source may include a first upper transistor and a second upper transistor, and the lower current source may include a first lower transistor and a second lower transistor. Specifically, the first current source branch may include a first upper transistor PM1b, a second upper transistor PM1a, an upper control switch SBW under the control of a switching signal SBW<1>, a lower control switch SAW under the control of a switching signal SAW<1>, a first lower transistor NM1aand a second lower transistor NM1b. The second current source branch may include a first upper transistor PM2b, a second upper transistor PM2a, an upper control switch SBW under the control of a switching signal SBW<2>, a lower control switch SAW under the control of a switching signal SAW<2>, a first lower transistor NM2aand a second lower transistor NM2b. The third current source branch may include a first upper transistor PM3b, a second upper transistor PM3a, an upper control switch SBW under the control of a switching signal SBW<3>, a lower control switch SAW under the control of a switching signal SAW<3>, a first lower transistor NM3aand a second lower transistor NM3b. The fourth current source branch may include a first upper transistor PM4b, a second upper transistor PM4a, an upper control switch SBW under the control of a switching signal SBW<4>, a lower control switch SAW under the control of a switching signal SAW<4>, a first lower transistor NM4aand a second lower transistor NM4b. The fifth current source branch may include a first upper transistor PM5b, a second upper transistor PM5a, an upper control switch SBW under the control of a switching signal SBW<5>, a lower control switch SAW under the control of a switching signal SAW<5>, a first lower transistor NM5aand a second lower transistor NM5b. The sixth current source branch may include a first upper transistor PM6b, a second upper transistor PM6a, an upper control switch SBW under the control of a switching signal SBW<6>, a lower control switch SAW under the control of a switching signal SAW<6>, a first lower transistor NM6aand a second lower transistor NM6b. In the first current source branch, a drain of the first upper transistor PM1bis coupled to a source of the second upper transistor PM1a, and a drain of the second upper transistor PM1ais coupled to one end of the upper control switch in the branch.
A drain of the first lower transistor NM1ais coupled to one end of the lower control switch in the branch, and a source of the first lower transistor NM1ais coupled to a drain of the second lower transistor NM1b. A source of the second lower transistor NM1bis grounded. The transistors and control switches in each of the second to sixth current source branches are wired in the same manner as those in the first current source branch, and detailed description thereof is omitted herein. Further, gates of the first upper transistors PM1b-PM6bmay be coupled together and to both the gate of the upper primary mirror transistor PMb1and the output of the operational amplifier A1. Gates of the second upper transistors PM1a-PM6amay be coupled together and to the first bias voltage signal Vbias1. Gates of the first lower transistors NM1a-NM6amay be coupled together and to the second bias voltage signal Vbias2. Gates of the second lower transistors NM1b-NM6bmay be coupled together and to the gate of the lower primary mirror transistor NMb1. It is to be noted that sizes of the first upper transistors PM1b-PM6bare binary-weighted. That is, the size M of the first upper transistor PM1bis 1, the same as the size of the upper primary mirror transistor PMb1. The size M of the first upper transistor PM2bis 2, twice the size of the first upper transistor PM1b. The size M of the first upper transistor PM3bis 4, twice the size of the first upper transistor PM2b. The size M of the first upper transistor PM4bis 8, twice the size of the first upper transistor PM3b. The size M of the first upper transistor PM5bis 16, twice the size of the first upper transistor PM4b. The size M of the first upper transistor PM6bis 32, twice the size of the first upper transistor PM5b. Sizes of the second lower transistors NM1b-NM6bare also binary-weighted. That is, the size M of the second lower transistor NM1bis 1, the same as the size of the lower primary mirror transistor NMb1. The size M of the second lower transistor NM2bis 2, twice the size of the second lower transistor NM1b. The size M of the second lower transistor NM3bis 4, twice the size of the second lower transistor NM2b. The size M of the second lower transistor NM4bis 8, twice the size of the second lower transistor NM3b. The size M of the second lower transistor NM5bis 16, twice the size of the second lower transistor NM4b. The size M of the second lower transistor NM6bis 32, twice the size of the second lower transistor NM5b. It is to be noted that, apart from the binary-weighted sizes of the first upper transistors PM1b-PM6band the second lower transistors NM1b-NM6b, binary weighting of mirrored currents from the 6 current source branches can also be accomplished by parallel transistors (e.g., for each type of transistor, the M=2 branch may include two parallel replicas of the transistor in the M=1 branch). Further, the second switching transistor PMa, the first switching transistor PMb, the first upper transistors, the second upper transistors, the upper primary mirror transistor PMb1and the upper cascaded transistor PMa1may all be PMOS transistors, while the lower cascaded transistor NMa1, the lower primary mirror transistor NMb1, the first lower transistors and the second lower transistors may all be NMOS transistors. Of course, in other embodiments of the present invention, these MOS transistors may be replaced with bipolar transistors, triodes or other suitable switching elements.
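The binary weighting described above can be summarized in a short sketch. The following Python snippet is illustrative only (the function name, the example value of I0 and the bit ordering of SBW<6:1> are assumptions, not part of the disclosure): with branch sizes of 1, 2, 4, 8, 16 and 32, the switching word selects a mirrored output current Idc=Nb*I0 with Nb between 0 and 63.

```python
# Illustrative sketch: the 6-bit word SBW<6:1> enables binary-weighted branches
# (relative sizes 1, 2, 4, 8, 16, 32), so the mirrored current is Idc = Nb * I0.
def nb_from_sbw(sbw_bits):
    """sbw_bits[i] is 1 if branch i+1 (relative size 2**i) is switched on."""
    return sum(bit << i for i, bit in enumerate(sbw_bits))

I0 = 1e-6                        # assumed reference current, for illustration only
sbw_all_on = [1, 1, 1, 1, 1, 1]  # SBW<6:1> = 111111
Nb = nb_from_sbw(sbw_all_on)     # 32 + 16 + 8 + 4 + 2 + 1 = 63
Idc = Nb * I0                    # maximum output current, 63 * I0
```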
Furthermore, the second upper transistors are cascaded to the respective first upper transistors, and the first lower transistors are cascaded to the respective second lower transistors. The second switching transistor PMa is cascaded to the first switching transistor PMb, and the upper cascaded transistor PMa1is cascaded to the upper primary mirror transistor PMb1. The lower cascaded transistor NMa1is cascaded to the lower primary mirror transistor NMb1. These cascaded transistors are provided to enhance load-carrying and current output capabilities, reduce output impedance, avoid compromised current or voltage gains and prevent output distortion of the branches. In this example, in each current source branch in the charge output block103, the cascaded first and second upper transistors make up an upper current mirror (in other embodiments, each upper current mirror may also be made up of more cascaded PMOS transistors), which provides a current that flows to the inverting input of the integrator A2and charges the integrator A2. Thus, when all the first and second lower transistors are turned off by the switching signal SAW<6:1>, the number Nb of upper current mirrors that provide currents to the integrator A2can be adjusted by turning on or off the individual upper control switches under the control of the switching signal SBW<6:1>. The overall current Idc that flows to the inverting input of the integrator A2is Nb*I0 and can inject to the integrator A2an amount of charge Qdc that is equal to Idc*t=Nb*I0*t. Nb is a 6-bit binary value. When the switching signal SBW<6:1> is 111111, Nb takes a maximum value of 32+16+8+4+2+1=2^6−1=63. When the switching signal SBW<6:1> is 000000, Nb takes a minimum value of 0. Therefore, the value of Nb ranges from 0 to 63. Notably, in the case where the charge source circuit10is to be used for reference subtraction in an OLED touch panel, since a capacitance Cpanel of the touch panel typically has been determined before delivery, an amount of charge Qdc=Idc*t required to be drawn or injected for reference subtraction can be determined based on the panel capacitance Cpanel. The period t may be controlled by a value stored in a control register in a processor (e.g., an MCU, not shown), and the current Idc can be determined by the processor through configuring the values of Na and Nb by turning on or off the individual upper and lower control switches SBW, SAW. In this example, assuming a range of adjustment of Qdc with a resolution of 18 bits (i.e., M=18) is desired, with additional reference toFIG.6, with all the lower current mirrors being turned off (i.e., no current is output from them) under the control of the switching signal SAW<6:1>, the period t in which the integrator A2is charged by currents from the individual upper current mirrors under the control of the switching signal SBW<6:1> is divided into n=M/k=18/6=3 consecutive time intervals: Interval1(with a length of time of t1), Interval2(with a length of t2) and Interval3(with a length of t3). In each of the time intervals, a current Idc provided by the charge output block103can be controlled with 6 bits. Here, the lowest current in Interval3is denoted as Imin and the length of the interval is configured to be equal to a length of the system's unit interval. That is, t3 is equal to the reciprocal of the system frequency f, i.e., 1/f. Moreover, the length of Interval2is configured to be 64 times that of Interval3(i.e., t2=64/f) and the length of Interval1is configured to be 64 times that of Interval2(i.e., t1=64*64/f).
The length of Interval3determines a minimum resolvable amount of charge to be charged to the integrator A2for reference subtraction, i.e., Qdc=I0*t3=Imin*1/f. An equivalent minimum capacitance CEM is (Imin*1/f)/Vin, where Vin represents an input voltage. Therefore, a total amount of charge charged into the integrator A2by the individual upper current mirrors under the control of the switching signal SBW<6:1> can be expressed as: Qdc=Qdc1(Interval 1)+Qdc2(Interval 2)+Qdc3(Interval 3)=Nb1(Interval 1)*I0*t1+Nb2(Interval 2)*I0*t2+Nb3(Interval 3)*I0*t3={64*64*Nb1(Interval 1)+64*Nb2(Interval 2)+Nb3(Interval 3)}*I0*t3. Nb1(Interval 1)represents a Nb value indicated by the SBW<6:1> in Interval1. Nb1(Interval 1)is a 6-bit binary value taken from the range of 0-63. In Interval1, the most significant 6 bits of the amount of charge Qdc can be controlled by Nb1(Interval 1). Nb2(Interval 2)represents a Nb value indicated by the SBW<6:1> in Interval2. Nb2(Interval 2)is also a 6-bit binary value taken from the range of 0-63. In Interval2, the intermediate significant 6 bits of the amount of charge Qdc can be controlled by Nb2(Interval 2). Nb3(Interval 3)represents a Nb value indicated by the SBW<6:1> in Interval3. Nb3(Interval 3)is also a 6-bit binary value taken from the range of 0-63. In Interval3, the least significant 6 bits of the amount of charge Qdc can be controlled by Nb3(Interval 3). In this way, an overall Qdc amount over the three time intervals can be controlled with 18 bits, and an equivalent capacitance CE adjustable within the range of 0 to CEM*2^18 can be provided. When CEM lies between fifty and sixty fF, the equivalent capacitance CE corresponding to the 18-bit Qdc amount can address needs ranging from several pF to hundreds of pF. Moreover, the time intervals and currents are designed to enable Qdc in the three time intervals to be controlled with a continuous sequence of bits. Specifically, Qdc3 in Interval3can be adjusted by adjusting Nb3 that is represented by the least significant 6 bits and ranges from 0 to 63. Therefore, Qdc3 in this interval has a minimum value of 0 (when Nb3 is at its minimum value that is 0) and a maximum value of 63*I0*t3 (when Nb3 is at its maximum value that is 63). Qdc2 in Interval2can be adjusted by adjusting Nb2 that is represented by the intermediate significant 6 bits and ranges from 0 to 63. Therefore, Qdc2 in this interval has a minimum value of 64*I0*t3 when Nb2 is 1. The difference between the minimum Qdc2 value in Interval2and the maximum Qdc3 value in Interval3is just equal to the minimum resolvable Qdc value for Interval3that is I0*t3. This enables Qdc in Intervals2and3to be overall controlled with a continuous sequence of bits. Qdc2 in Interval2has a maximum value of 64*63*I0*t3 when Nb2 is 63. Qdc1 in Interval1can be adjusted by adjusting Nb1 that is represented by the most significant 6 bits and ranges from 0 to 63. Qdc1 in this interval has a minimum value of 64*64*I0*t3 when Nb1 is 1. The difference between the minimum Qdc1 value in Interval1and the maximum Qdc2 value in Interval2is just equal to the minimum resolvable Qdc value for Interval2that is 64*I0*t3. This enables Qdc in Intervals1and2to be overall controlled with a continuous sequence of bits. In practical applications, a resolution of 14 bits (i.e., M=14) may suffice, and a shorter total time taken to provide a given amount of charge would be desirable (i.e., a shorter reference subtraction time of an ADC).
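As an illustration of the 18-bit control just described, the short Python sketch below (not from the disclosure; the helper name and the normalization to units of I0*t3 are assumptions) splits an 18-bit code into Nb1, Nb2 and Nb3, evaluates Qdc={64*64*Nb1+64*Nb2+Nb3}*I0*t3, and checks that stepping the code by one always changes Qdc by exactly one LSB, i.e., I0*t3.

```python
# Sketch of the 18-bit charge control (M = 18, k = 6), in units of I0*t3.
def qdc_18bit(code):
    assert 0 <= code < 2 ** 18
    Nb1 = (code >> 12) & 0x3F       # most significant 6 bits, applied in Interval 1
    Nb2 = (code >> 6) & 0x3F        # intermediate 6 bits, applied in Interval 2
    Nb3 = code & 0x3F               # least significant 6 bits, applied in Interval 3
    return 64 * 64 * Nb1 + 64 * Nb2 + Nb3

# Continuity check: every one-code step changes Qdc by exactly one LSB (I0*t3).
assert all(qdc_18bit(c + 1) - qdc_18bit(c) == 1 for c in range(2 ** 18 - 1))
```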
In this case, the charge source circuit design ofFIG.5can still be used with the assumption that k=6. With additional reference toFIG.7, in such applications, with all the lower current mirrors being turned off (i.e., there is no current flowing into or out of them) under the control of the switching signal SAW<6:1>, the period t in which the integrator A2is charged by currents from the individual upper current mirrors under the control of the switching signal SBW<6:1> is divided into n=int(M/k)+1=int(14/6)+1=2+1=3 consecutive time intervals: Interval1(with a length of t1), Interval2(with a length of t2) and Interval3(with a length of t3). The length of Interval3is configured to be equal to a length of the system's unit interval (i.e., t3 is equal to the reciprocal of the system frequency f, i.e., 1/f). Moreover, the length of Interval2is configured to be 4 times that of Interval3(i.e., t2=4/f), and the length of Interval1is configured to be 64 times that of Interval2(i.e., t1=64*4/f). Further, in Interval3, the switching signal SBW<2:1> is used to control the individual upper current mirrors to charge the integrator A2, while in both Intervals1and2, the switching signal SBW<6:1> is used to control the individual upper current mirrors to charge the integrator A2. A total amount of charge charged in the period can be expressed as: Qdc=Qdc1(Interval 1)+Qdc2(Interval 2)+Qdc3(Interval 3)=Nb1(Interval 1)*I0*t1+Nb2(Interval 2)*I0*t2+Nb3(Interval 3)*I0*t3={64*4*Nb1(Interval 1)+4*Nb2(Interval 2)+Nb3(Interval 3)}*I0*t3. Nb1(Interval 1)represents a Nb value indicated by the SBW<6:1> in Interval1. Nb1(Interval 1)is a 6-bit binary value taken from the range of 0-63. In Interval1, the most significant 6 bits of the amount of charge Qdc can be controlled by Nb1(Interval 1). Nb2(Interval 2)represents a Nb value indicated by the SBW<6:1> in Interval2. Nb2(Interval 2)is also a 6-bit binary value taken from the range of 0-63. In Interval2, the intermediate significant 6 bits of the amount of charge Qdc can be controlled by Nb2(Interval 2). Nb3(Interval 3)represents a Nb value indicated by the SBW<2:1> in Interval3. Nb3(Interval 3)is a 2-bit binary value taken from the range of 0-3. In Interval3, the least significant 2 bits of the amount of charge Qdc can be controlled by Nb3(Interval 3). That is, in both Intervals1and2, an amount of charge Qdc adjustable in a 6-bit (k=6) range can be provided, while in Interval3, an amount of charge Qdc adjustable in a 2-bit (M−(n−1)*k=14−2*6=2) range can be provided. In this way, an overall Qdc amount over the three time intervals can be controlled with 14 bits. Moreover, when Nb3 takes the maximum value of 3, a maximum Qdc3 value is achieved in Interval3, which is equal to 3*I0*t3. When Nb2 takes 1, a minimum Qdc2 value is provided in Interval2, which is equal to 4*I0*t3. The difference between the minimum Qdc2 value provided in Interval2and the maximum Qdc3 value provided in Interval3is just equal to the minimum resolvable Qdc value for Interval3, i.e., I0*t3. This enables Qdc in Intervals2and3to be overall controlled with a continuous sequence of bits. When Nb2 takes 63, the intermediate significant bits in Interval2provide a maximum Qdc2 value that is equal to 4*63*I0*t3, and when Nb1 takes 1, a minimum Qdc1 value equal to 64*4*I0*t3 is provided. The difference between the minimum Qdc1 value provided in Interval1and the maximum Qdc2 value provided in Interval2is just equal to the minimum resolvable Qdc value for Interval2, i.e., 4*I0*t3.
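The same splitting can be written down for an arbitrary M that is not a multiple of k. The Python sketch below is illustrative only (the helper names are assumptions, not part of the disclosure): it divides an M-bit code into n=int(M/k)+1 fields, the last of which carries the remaining M−(n−1)*k bits, and reproduces the per-field weights {64*4, 4, 1} used above for M=14, k=6.

```python
# Sketch of the general field split: (n-1) intervals of k bits plus a final
# interval of M - (n-1)*k bits, with the MSB interval listed first.
def split_code(code, M, k):
    n = M // k if M % k == 0 else M // k + 1
    widths = [k] * (n - 1) + [M - (n - 1) * k]
    fields, shift = [], M
    for w in widths:
        shift -= w
        fields.append((code >> shift) & ((1 << w) - 1))
    return fields, widths

def qdc_units(code, M, k):
    """Total charge in units of I0*t3 for the given M-bit code."""
    fields, widths = split_code(code, M, k)
    weights = [1] * len(widths)
    for i in range(len(widths) - 2, -1, -1):
        weights[i] = weights[i + 1] * (1 << widths[i + 1])
    return sum(f * w for f, w in zip(fields, weights))

# For M = 14, k = 6 the per-field weights are 64*4, 4 and 1, matching the text:
# the all-ones code yields the full-scale charge of (2^14 - 1) * I0 * t3.
assert qdc_units(0b11111111111111, 14, 6) == 2 ** 14 - 1
```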
This enables Qdc in Intervals1and2to be overall controlled with a continuous sequence of bits. Likewise, in this example, in each current source branch of the charge output block103, the cascaded first and second lower transistors make up a lower current mirror, which provides a current that flows out of the inverting input of the integrator A2and discharges the integrator A2. Thus, when all the first and second upper transistors are turned off by the switching signal SBW<6:1>, the number Na of lower current mirrors that provide currents from the integrator A2can be adjusted by turning on or off the individual lower control switches under the control of the switching signal SAW<6:1>. The overall current Idc that flows from the inverting input of the integrator A2is Na*I0 and can draw from the integrator A2an amount of charge Qdc that is equal to Idc*t=Na*I0*t. Na is a 6-bit binary value. When the switching signal SAW<6:1> is 111111, Na takes a maximum value of 32+16+8+4+2+1=2^6−1=63. When the switching signal SAW<6:1> is 000000, Na takes a minimum value of 0. Therefore, the value of Na ranges from 0 to 63. In this example, assuming a range of adjustment of Qdc with a resolution of 18 bits (i.e., M=18) is desired, with additional reference toFIG.6, with all the upper current mirrors being turned off (i.e., there is no current flowing into or out of them) under the control of the switching signal SBW<6:1>, the period t in which the integrator A2is discharged by currents from the individual lower current mirrors under the control of the switching signal SAW<6:1> is also divided into 3 (i.e., n=M/k=18/6=3) consecutive time intervals: Interval1(with a length of t1), Interval2(with a length of t2) and Interval3(with a length of t3). In each of the time intervals, a current Idc provided by the charge output block103can be controlled with 6 bits. Here, the lowest current in Interval3is denoted as Imin and the length of the interval is configured to be equal to a length of the system's unit interval. That is, t3 is equal to the reciprocal of the system frequency f, i.e., 1/f. Moreover, the length of Interval2is configured to be 64 times that of Interval3(i.e., t2=64/f) and the length of Interval1is configured to be 64 times that of Interval2(i.e., t1=64*64/f). The length of Interval3determines a minimum resolvable amount of charge to be discharged from the integrator A2for reference subtraction, i.e., Qdc=I0*t3=Imin*1/f. An equivalent minimum capacitance is (Imin*1/f)/Vin, where Vin represents an input voltage. Therefore, a total amount of charge discharged from the integrator A2by the individual lower current mirrors under the control of the switching signal SAW<6:1> can be expressed as: Qdc=Qdc1(Interval 1)+Qdc2(Interval 2)+Qdc3(Interval 3)=Na1(Interval 1)*I0*t1+Na2(Interval 2)*I0*t2+Na3(Interval 3)*I0*t3={64*64*Na1(Interval 1)+64*Na2(Interval 2)+Na3(Interval 3)}*I0*t3. Na1(Interval 1)represents a Na value indicated by the SAW<6:1> in Interval1. Na1(Interval 1)is a 6-bit binary value taken from the range of 0-63. In Interval1, the most significant 6 bits of the amount of charge Qdc can be controlled by Na1(Interval 1). Na2(Interval 2)represents a Na value indicated by the SAW<6:1> in Interval2. Na2(Interval 2)is also a 6-bit binary value taken from the range of 0-63. In Interval2, the intermediate significant 6 bits of the amount of charge Qdc can be controlled by Na2(Interval 2). Na3(Interval 3)represents a Na value indicated by the SAW<6:1> in Interval3.
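For the bidirectional operation described above, a minimal behavioral sketch is given below. It is illustrative only and assumes, as the text implies, that the upper (injecting, SBW-controlled) and lower (drawing, SAW-controlled) sources are not enabled at the same time; the function name and example values are not from the disclosure.

```python
# Sketch of the bidirectional output of the charge output block: in a given
# interval either Nb upper mirrors inject Nb*I0 into the integrator input or
# Na lower mirrors draw Na*I0 from it.
def interval_charge(I0, t, Nb=0, Na=0):
    """Signed charge delivered to the integrator input over one interval."""
    assert Nb == 0 or Na == 0, "upper and lower sources are not enabled together"
    return (Nb - Na) * I0 * t        # positive: injected, negative: drawn

q_injected = interval_charge(I0=1e-6, t=1e-6, Nb=63)   # charge the integrator
q_drawn = interval_charge(I0=1e-6, t=1e-6, Na=63)      # discharge the integrator
```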
Na3(Interval 3)is a 6-bit binary value taken from the range of 0-63. In Interval3, the least significant 6 bits of the amount of charge Qdc can be controlled by Na3(Interval 3). In this way, an overall Qdc amount over the three time intervals can be adjusted with 18 bits in a continuous manner. It is to be noted that although the output from the charge output block103has been described in the foregoing embodiments as being bidirectional to inject or draw charge to or from the inverting input of the integrator11(i.e., provide a current flowing into or out of the integrator11), the present invention is not so limited because in other embodiments, the output of the charge output block103may also be unidirectional. In these cases, for example, the upper circuits in the current source branches ofFIG.5(including the first upper transistors, the second upper transistors and the upper control switches therein) may be omitted so that the charge output block103can only provide a current flowing out of the integrator11. Alternatively, the lower circuits in the current source branches ofFIG.5(including the first lower transistors, the second lower transistors and the lower control switches therein) may be omitted so that the charge output block103can only provide a current flowing into the integrator11. Moreover, in these implementations with unidirectional output of the charge output block103, the circuits of the current mirror block102, the reference current generation block101and other blocks may be adapted to remove circuit parts providing unwanted signals to the charge output block103. It is also to be noted that although the foregoing embodiments have been described as including the cascaded transistors and other elements, the present invention is not so limited because the cascaded transistors may be omitted in other embodiments of the present invention. Based on the same inventive concept, referring toFIG.5, the present invention also provides an analog-to-digital converter (ADC) including an integrator11and the charge source circuit10described herein. The charge source circuit10is coupled at an output thereof to an inverting input of the integrator11and configured to provide an amount of charge Qdc to the integrator11. The integrator11is configured for reference subtraction based on the amount of charge Qdc and to output a corresponding digital signal. Based on the same inventive concept, referring toFIG.5, the present invention also provides an organic light-emitting diode (OLED) touch panel including an input electrode (not shown) and the above ADC. The charge source circuit10in the ADC is coupled to the input electrode and configured to draw or inject an amount of charge Qdc from or to sensed charge on the input electrode. The ADC is configured to output a digital signal based on an amount of sensed charge remaining from the drawing or injection of the amount of charge Qdc by the charge source circuit10. In summary, in the charge source circuit, ADC and OLED touch panel of the present invention, an amount of charge is provided according to Qdc=Idc*t, rather than Qdc=Vcom*Cin as conventionally done. This dispenses with the use of a voltage division technique, a drive operational amplifier and a large number of capacitors, thus resulting in circuit simplicity, reduced cost and circuit layout area savings.
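To make the reference subtraction role of the charge source circuit concrete, the following Python sketch (illustrative only; the panel and ADC numbers, the ideal quantizer and the function names are assumptions, not values from the disclosure) removes a baseline charge Qdc=Idc*t from the sensed charge and digitizes only the residue.

```python
# Illustrative sketch of reference subtraction (DC subtraction) before
# digitization: the known baseline charge Qdc = Idc*t is drawn from the
# sensed charge, and only the remaining (touch-induced) charge is converted.
def reference_subtract(q_sensed, Idc, t):
    """Residual charge after the charge source circuit draws Qdc = Idc*t."""
    return q_sensed - Idc * t

def digitize(q_residual, q_lsb, n_bits=12):
    """Ideal quantizer standing in for the integrator/ADC back end."""
    code = int(round(q_residual / q_lsb))
    return max(0, min(code, 2 ** n_bits - 1))

q_baseline = 100e-12     # assumed baseline charge on the input electrode (C)
q_touch = 0.8e-12        # assumed touch-induced change in sensed charge (C)
residual = reference_subtract(q_baseline + q_touch, Idc=100e-6, t=1e-6)
print(digitize(residual, q_lsb=0.05e-12))   # only the touch residue is converted
```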
More importantly, the current Idc and the time interval sequences may be designed to enable, with a very small area and a very small voltage, continuous adjustability of the amount of charge Qdc within a desired range at a higher resolution. Correspondingly, an equivalent capacitance C corresponding to the amount of charge Qdc can be adjusted within a range from tens of fF to hundreds of pF. By employing the charge source circuit, the ADC does not need to have a wide input range. The charge source circuit employed in the OLED touch panel for reference subtraction (DC subtraction) can draw or inject such an amount of charge from or to sensed charge on an input electrode of the OLED touch panel, thereby enhancing reference subtraction performance of the OLED touch panel and increasing touch detection accuracy thereof. The description presented above is merely that of a few preferred embodiments of the present invention and is not intended to limit the scope thereof in any sense. Any and all changes and modifications made by those of ordinary skill in the art based on the above teachings fall within the scope as defined in the appended claims.
11861094 | DETAILED DESCRIPTION Advantages and features of the present disclosure, and implementation methods thereof will be clarified through the following embodiments described with reference to the accompanying drawings. The present disclosure may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the present disclosure to those skilled in the art. Further, the present disclosure is only defined by the scope of the claims. A shape, a size, a ratio, an angle, and a number disclosed in the drawings for describing embodiments of the present disclosure are merely an example, and thus, the present disclosure is not limited to the illustrated details. Like reference numerals refer to like elements throughout the specification. In the following description, when the detailed description of the relevant known function or configuration is determined to unnecessarily obscure the important point of the present disclosure, the detailed description will be omitted. In a case where 'comprise', 'have', and 'include' described in the present disclosure are used, another part may be added unless 'only˜' is used. The terms of a singular form may include plural forms unless referred to the contrary. In construing an element, the element is construed as including an error range although there is no explicit description. In describing a position relationship, for example, when the position relationship is described as 'upon˜', 'above˜', 'below˜', and 'next to˜', one or more portions may be disposed between two other portions unless 'just' or 'direct' is used. When an element or layer is referred to as being "on" another element or layer, one element or layer may be directly on another element or layer, or the other element or layer may be interposed between one element or layer and another element or layer. It will be understood that, although the terms "first", "second", etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure. In the drawings, the same or similar elements are denoted by the same reference numerals even though they are depicted in different drawings. The area and thickness of each component shown in the drawings are illustrated for convenience of description and are not necessarily limited to the area and thickness of the configuration shown in the present specification. Features of various embodiments of the present disclosure may be partially or overall coupled to or combined with each other, and may be variously inter-operated with each other and driven technically as those skilled in the art can sufficiently understand. The embodiments of the present disclosure may be carried out independently from each other, or may be carried out together in co-dependent relationship. Hereinafter, embodiments of the present disclosure will be described with reference to the drawings. FIG.1is a system configuration diagram of a display device according to one embodiment of the present disclosure.
Referring toFIG.1, a display device100according to one embodiment of the present disclosure may provide both an image display function for displaying an image and a touch sensing function for sensing a touch and/or touch coordinates with respect to a touch operation by a touch object such as a user's finger and/or a pen et al. In order to provide the image display function, the display device100according to one embodiment of the present disclosure may comprise a display panel110in which a plurality of data lines and a plurality of gate lines are disposed, and a plurality of subpixels defined by the plurality of data lines and the plurality of gate lines are arranged, a data driving circuit120configured to provide a data signal to the plurality of data lines, a gate driving circuit130configured to provide a gate signal to the plurality of gate lines, and a display controller140configured to control operations of the data driving circuit120and the gate driving circuit130. Each of the data driving circuit120, the gate driving circuit130, and the display controller140may be implemented as one or more individual components (e.g., circuits). In some cases, two or more of the data driving circuit120, the gate driving circuit130, and the display controller140may be implemented while being integrated into one component. For example, the data driving circuit120and the display controller140may be implemented as one integrated circuit (IC) chip. In order to provide the touch sensing function, the display device according to one embodiment of the present disclosure may include a touch panel TP including a touch sensor, and a touch sensing circuit150which supplies a touch driving signal to the touch panel TP, detects a touch sensing signal from the touch panel TP, and senses whether a user touches or does not touch the touch panel TP, and also senses a touch position (touch coordinates) on the touch panel TP based on the detected touch sensing signal. The touch sensing circuit150may include a touch driving circuit153which supplies the touch driving signal to the touch panel TP and detects the touch sensing signal from the touch panel TP, and a touch controller155which senses whether there is a user's touch and/or senses the touch position based on the touch sensing signal detected by the touch driving circuit153. The touch driving circuit153and the touch controller155may be implemented as separate components or may be integrated into one component, if needed. Each of the data driving circuit120, the gate driving circuit130, and the touch sensing circuit150may be implemented as one or more integrated circuits, and may be implemented as a chip on glass COG type, a chip on film COF type, or a tape carrier package TCP type in terms of electrical connection with the display panel110, and the gate driving circuit130may be implemented as a gate in panel GIP type. Each of the circuit configurations120,130, and140for the display driving and the circuit configurations153and155for the touch sensing may be implemented as one or more individual components. If needed, one or more of the circuit configurations120,130, and140for the display driving and one or more of the circuit configurations153and155for the touch sensing may be functionally integrated and implemented in one or more components. For example, the data driving circuit120and the touch driving circuit153may be integrated into one or more integrated circuit chips. 
When the data driving circuit120and the touch driving circuit153are integrated into two or more integrated circuit chips, each of the two or more integrated circuit chips may have the data driving function and the touch driving function. Meanwhile, the display device100according to one embodiment of the present disclosure may be various types such as an organic light emitting display device, a micro LED display device, a quantum dot display device, and the like. Hereinafter, for convenience of description, an example of the display device100corresponding to the organic light emitting display device will be described. The touch panel TP may include the touch sensor to which the touch driving signal may be applied or the touch sensing signal may be detected, and may further include touch routing wirings for electrically connecting the touch sensor and the touch driving circuit153. The touch sensor may include touch electrode lines. Each of the touch electrode lines may be a bar type of one electrode or a type in which touch electrodes are connected to each other. When each touch electrode line is the type in which the touch electrodes are connected to each other, each touch electrode line may include the plurality of touch electrodes and bridge patterns for connecting the touch electrodes. In this case, the touch panel TP may be integrally formed with the display panel110. When the display panel110is manufactured, the touch panel TP may be formed directly on the display panel110. In addition, some of the components for the touch including the touch panel TP may be formed together with signal lines and the electrodes for the display driving. FIG.2schematically illustrates a display panel110according to one embodiment of the present disclosure. Referring toFIG.2, the display panel110may include a display area AA in which an image is displayed and a non-display area NA which is an outer area of a boundary line BL of the display area AA. The boundary line BL is the boundary line for dividing the display area AA and the non-display area NA. A plurality of subpixels for displaying an image are arranged in the display area AA, and various electrodes or signal lines for driving the display are disposed. In addition, a touch sensor for touch sensing and a plurality of touch routing wirings electrically connected to the touch sensor may be disposed in the display area AA. Accordingly, the display area AA may be referred to as a touch sensing area capable of touch sensing. The non-display area NA may include link lines extending from various signal lines disposed in the display area AA or electrically connected to the signal lines, and display pads electrically connected to the link lines. The display pads disposed in the non-display area NA may be directly bonded or electrically connected to the display driving circuits120and130. For example, the display pads disposed in the non-display area NA may include data pads to which data link lines, in which data lines are extended or electrically connected, are connected. Also, the touch routing wirings electrically connected to the touch sensor disposed in the display area AA and the touch pads electrically connected to the touch routing wirings may be disposed in the non-display area NA. The touch pads disposed in the non-display area NA may be bonded or electrically connected to the touch driving circuit153. 
A portion of the plurality of touch electrode lines disposed in the display area AA may extend to the non-display area NA, and one or more electrodes of the same material as the plurality of touch electrode lines disposed in the display area AA may be further disposed in the non-display area NA. A portion of the outermost touch electrode among the plurality of touch electrodes included in each of the plurality of touch electrode lines disposed in the display area AA may extend to the non-display area NA, and one or more electrodes of the same material as the plurality of touch electrodes included in each of the plurality of touch electrode lines disposed in the display area AA may be further disposed. The touch sensor may exist in the display area AA, most of the touch sensor may exist in the display area AA, and a portion of the touch sensor may exist in the non-display area NA or may exist over the display area AA and the non-display area NA. Referring toFIG.2, the display panel110according to one embodiment of the present disclosure may include a dam area DA in which at least one dam is disposed. The dam area DA prevents or at least reduces an organic layer of an encapsulation layer disposed in the display area AA from flowing out to the outside of the display panel110. The dam area DA may exist close to the boundary portion or may exist in the boundary portion between the display area AA and the non-display area NA. For example, the dam area DA may refer to a peripheral area of a point where a slope of the encapsulation layer becomes gentle in a direction down along an inclined surface of the encapsulation layer. At least one dam disposed in the dam area DA may be disposed to surround the display area AA or may not be disposed in a portion of the display area AA. For example, when the display panel110has a quadrangular shape, the dam may be disposed to surround all four directions or may be disposed only in one to three directions. In addition, at least one dam disposed in the dam area DA may be a single pattern or two or more disconnected patterns. For example, when two dams are disposed in the dam area DA, the dam closest to the display area AA may be referred to as a first dam, and the other dam may be referred to as a second dam. In the dam area DA, there may be the first dam in one direction without the second dam in the one direction and both the first dam and the second dam in another direction. FIG.3exemplarily illustrates a structure in which a touch panel is embedded in a display panel according to one embodiment of the present disclosure. Referring toFIG.3, a plurality of subpixels SP are arranged in a display area AA on a substrate111of a display panel110. Each of the subpixels SP may include a light emitting element ED, a first transistor T1for applying a driving current to the light emitting element ED, a second transistor T2for transferring a data voltage VDATA to a first node N1of the first transistor T1, and a capacitor Cst for maintaining a predetermined voltage for one frame. The first transistor T1may include the first node N1to which the data voltage VDATA is applied, a second node N2electrically connected to the light emitting element ED, and a third node N3to which a driving voltage VDD is applied from a driving voltage line DVL. The first node N1may be a gate node, the second node N2may be a source node or a drain node, and the third node N3may be a drain node or a source node. 
The first transistor T1may be also referred to as a driving transistor for driving the light emitting element ED. The light emitting element ED may include an anode electrode, a light emitting layer, and a cathode electrode. The anode electrode may be applied with the data voltage VDATA corresponding to a different pixel voltage for each subpixel SP, and may be electrically connected to the second node N2of the first transistor T1. The cathode electrode may be applied with a base voltage VSS corresponding to a common voltage commonly applied to all the subpixels SP. The light emitting element ED may be a light emitting element ED using an organic material or a light emitting element ED using an inorganic material. In the light emitting element ED using an organic material, a light emitting layer may include an organic light emitting layer including an organic material. In this case, the light emitting element ED may be referred to as an organic light emitting diode. The second transistor T2is turned-on and turned-off by a scan signal SCAN applied through the gate line GL, and may be electrically connected between the data line DL and the first node N1of the first transistor T1. The second transistor T2may be also referred to as a switching transistor. When the second transistor T2is turned-on by the scan signal SCAN, the second transistor T2transfers the data voltage VDATA supplied from the data line DL to the first node N1of the first transistor T1. The capacitor Cst may be electrically connected between the first node N1and the second node N2of the first transistor T1. Each of the subpixels SP may have a structure including the two transistors T1and T2and one capacitor Cst, as shown inFIG.3, but not limited thereto. Each of the subpixels SP may have a structure including three or more transistors and one or more capacitors. Each of the first transistor T1and the second transistor T2may be an N-type transistor or a P-type transistor. Meanwhile, the light emitting element ED and a pixel circuit including two or more transistors T1and T2for applying the driving current to the light emitting element ED and one or more capacitor Cst may be disposed on the display panel110. Since the light emitting element ED and the pixel circuit are vulnerable to external moisture or oxygen, the encapsulation layer114for preventing or at least reducing external moisture or oxygen from penetrating into the light emitting element ED and the pixel circuit may be disposed on the display panel110. The encapsulation layer114may be formed of one layer or a plurality of layers. For example, when the encapsulation layer114is composed of a plurality of layers, the encapsulation layer114may include one or more inorganic encapsulation layers and one or more organic encapsulation layers. For example, the encapsulation layer114may include a first inorganic encapsulation layer, an organic encapsulation layer, and a second inorganic encapsulation layer. In this case, the organic encapsulation layer may be positioned between the first inorganic encapsulation layer and the second inorganic encapsulation layer. In addition, the organic encapsulation layer may be formed onto the dam area DA and may not be formed outside the dam area DA. The first inorganic encapsulation layer may be formed on the cathode electrode while being closest to the light emitting element ED. 
For example, the first inorganic encapsulation layer may be formed of an inorganic insulating material capable of being deposited at a low temperature, such as silicon nitride SiNx, silicon oxide SiOx, silicon oxynitride SiON, or aluminum oxide Al2O3. Accordingly, since the first inorganic encapsulation layer is deposited in a low temperature atmosphere, it is possible to prevent or at least reduce the light emitting layer, which is vulnerable to a high temperature atmosphere, from being damaged for a deposition process of the first inorganic encapsulation layer. The organic encapsulation layer may be formed to have an area smaller than that of the first inorganic encapsulation layer, and may be formed to expose both ends of the first inorganic encapsulation layer. The organic encapsulation layer serves as a buffer for mitigating stress between the respective layers caused by the bending of the display device100, and enhances a planarization performance. For example, the organic encapsulation layer may be formed of an organic insulating material such as acrylic resin, epoxy resin, polyimide, polyethylene, or silicon oxycarbon SiOC. The second inorganic encapsulation layer is provided on the organic encapsulation layer and is configured to cover an upper surface and a side surface of each of the organic encapsulation layer and the first inorganic encapsulation layer. Accordingly, the second inorganic encapsulation layer may prevent or at least reduce external moisture or oxygen from penetrating into the first inorganic encapsulation layer and the organic encapsulation layer. For example, the second inorganic encapsulation layer may include an inorganic insulating material such as silicon nitride SiNx, silicon oxide SiOx, silicon oxynitride SiON, or aluminum oxide Al2O3. In the display device100according to one embodiment of the present disclosure, the touch panel TP may be disposed on the encapsulation layer114. Specifically, the touch sensor included in the touch panel TP may be disposed on the encapsulation layer114. The touch panel TP may include a touch sensor, a touch pad, a touch routing wiring, and the like. A method capable of sensing whether there is a touch and/or obtaining touch coordinates by use of the touch sensor may be a method for detecting a change in self-capacitance or a method for detecting a change in mutual capacitance. The display device100according to one embodiment of the present disclosure describes a case of sensing a touch on the basis of a mutual capacitance. FIG.4schematically illustrates an arrangement structure of the touch sensor in the display panel according to one embodiment of the present disclosure. Referring toFIG.4, the touch sensor includes a plurality of first touch electrode lines10and a plurality of second touch electrode lines20disposed on the encapsulation layer114in the display area AA. As shown in the drawings, for example, the plurality of first touch electrode lines10are disposed in a first direction (horizontal direction), and the plurality of second touch electrode lines20are disposed in a second direction (vertical direction). The plurality of first touch electrode lines10may include a plurality of first touch sensing electrodes and a plurality of first connection electrodes for connecting the plurality of first touch sensing electrodes. 
In addition, the second touch electrode line20may include a plurality of second touch sensing electrodes and a plurality of second connection electrodes for connecting the plurality of second touch sensing electrodes. The first connection electrode is disposed on a different layer from the first touch sensing electrode. However, the second connection electrode is disposed in a spaced area between the two adjacent first touch sensing electrodes to connect the two second touch sensing electrodes adjacent to each other in the vertical direction. In this case, the second connection electrode is disposed on the same layer as the second touch sensing electrode and is formed of the same material as the second touch sensing electrode. The second touch sensing electrode is extended to form the second connection electrode. That is, the second touch electrode line20is an integrated electrode line having two different widths. Each of the plurality of first touch sensing electrodes and the plurality of second touch sensing electrodes has a rectangular shape, but is not limited thereto. Each of the plurality of first touch sensing electrodes and the plurality of second touch sensing electrodes may have a rhombus shape, or a triangle shape, or the shapes of the first touch sensing electrodes and second touch sensing electrodes may be mixed with rhombus and triangle shapes. The detailed shape and arrangement of the first touch electrode line10and the second touch electrode line20will be described in detail in the following drawings. The plurality of first touch electrode lines10and the plurality of second touch electrode lines20may be disposed inside the outer boundary line BL of the display area AA, but not limited thereto. A portion of the first touch sensing electrode or the second touch sensing electrode adjacent to the outer boundary line BL of the display area AA may be disposed in the non-display area NA beyond the outer boundary line BL of the display area AA. The display panel110according to one embodiment of the present disclosure has a rectangular shape with rounded corners as an example, but not limited thereto. The first touch sensing electrode or the second touch sensing electrode disposed at the edge of the display panel110may be disposed while being provided in the same shape as the shape of the edge of the display panel110. The display panel110includes a first display panel area110ain which the plurality of first touch electrode lines10and the plurality of second touch electrode lines20are disposed to enable the touch sensing, and the plurality of subpixels SP are disposed to display an image, and a second display panel area110bincluding a bending area BA and extended from the first display panel area110ato be disposed on the rear surface of the first display panel area110a. The second display panel area110bmay include the display pad, the touch pad, and the data driving circuit120. The second display panel area110bis included in the non-display area NA. The data driving circuit120is disposed in the center of the second display panel area110b, and the touch pad may be disposed on the left/right side with respect to the data driving circuit120in one embodiment. In the second display panel area110b, a first touch pad area LPA is disposed on the left side of the data driving circuit120, and a second touch pad area RPA is disposed on the right side of the data driving circuit120. 
The touch routing wirings30and40include a plurality of first touch routing wirings30and a plurality of second touch routing wirings40. The plurality of first touch routing wirings30are lowered along the inclined surface of the encapsulation layer114and are configured to electrically connect the plurality of first touch electrode lines10to the plurality of touch pads disposed in the first touch pad area LPA or the second touch pad area RPA. The plurality of second touch routing wirings40are lowered along the inclined surface of the encapsulation layer114and are configured to electrically connect the plurality of second touch electrode lines20to the plurality of touch pads disposed in the first touch pad area LPA or the second touch pad area RPA. The display device100according to one embodiment of the present disclosure may detect the change in mutual capacitance between the first touch electrode line10and the second touch electrode line20, to thereby sense a finger touch or a pen touch on the basis of the change. Each of the plurality of first touch electrode lines10may be disposed in a first direction, and each of the plurality of second touch electrode lines20may be disposed in a second direction which is different from the first direction. The first direction and the second direction may be perpendicular to each other but may not be perpendicular to each other. The plurality of first touch routing wirings30include an external routing wiring33which is connected to one side or the other side of the plurality of first touch electrode lines10, disposed along the non-display area NA of the display panel110, and connected to the plurality of touch pads disposed in the first touch pad area LPA or the second touch pad area RPA. In addition, the plurality of first touch routing wirings30include an internal routing wiring31which is connected to the first touch sensing electrode rather than one side and the other side of the first touch electrode line10, disposed in a second direction in the display area AA of the display panel110, and connected to the plurality of touch pads disposed in the first touch pad area LPA or the second touch pad area RPA. The first touch routing wiring30includes at least one internal routing wiring31. According as the number of internal routing wirings31is increased, the number of external routing wirings33is reduced so that it is possible to decrease the width of the non-display area NA. In case of the plurality of second touch electrode lines20, the touch pad area LPA and RPA may be disposed in the second direction and may be provided on one side of the plurality of second touch electrode lines20. The plurality of second touch routing wirings40may be connected to one side of the plurality of second touch electrode lines20and may be connected to the plurality of touch pads disposed in the first touch pad area LPA and the second touch pad area RPA. InFIG.4, only one side of the second touch electrode line20is connected to the second touch routing wiring40, but not limited thereto. The second touch routing wiring40may be connected to the other side of the second touch electrode line20and then may be connected to the touch pad. Also, in order to improve touch sensitivity, the second touch routing wiring40may include both the routing wiring connected to one side of the second touch electrode line20and connected to the touch pad, and the routing wiring connected to the other side of the second touch electrode line20and connected to the touch pad. 
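Since, as described above, the device senses a finger or pen touch from the change in mutual capacitance between the first and second touch electrode lines, a minimal detection sketch is given below. It is illustrative only: the baseline matrix, the threshold, the function name, and the choice of which line group is driven and which is sensed are assumptions, not part of the disclosure.

```python
# Sketch of mutual-capacitance touch detection: a touch reduces the mutual
# capacitance between a driven first touch electrode line and a sensed second
# touch electrode line, so nodes whose measured value drops below the baseline
# by more than a threshold are reported as touched.
def find_touches(measured, baseline, threshold):
    """Return (first_line_index, second_line_index) pairs where Cm dropped."""
    touches = []
    for i, (m_row, b_row) in enumerate(zip(measured, baseline)):
        for j, (m, b) in enumerate(zip(m_row, b_row)):
            if b - m > threshold:
                touches.append((i, j))
    return touches

baseline = [[1.00, 1.00], [1.00, 1.00]]   # assumed per-node Cm baseline (pF)
measured = [[1.00, 0.82], [0.99, 1.00]]   # assumed scan result (pF)
print(find_touches(measured, baseline, threshold=0.10))   # -> [(0, 1)]
```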
The first touch routing wiring30and the second touch routing wiring40may be formed of the same material as the first touch electrode line10and the second touch electrode line20, and may be provided on the same layer as the first touch electrode line10and the second touch electrode line20. The first touch routing wiring30and the second touch routing wiring40may be jumped in the upper/lower portions of the bending area BA adjacent to the bending area BA to pass through the bending area BA and then may be connected to the wiring in the other layer. Meanwhile, the internal routing wiring31is disposed in the spaced area between the two adjacent second touch electrode lines20. Also, the internal routing wiring31passes through the plurality of first touch electrode lines10between the first touch electrode line10to be connected and the pad area LPA and RPA. As described above, since the plurality of first touch electrode lines10include the plurality of first touch sensing electrodes, the internal routing wiring31is disposed in the spaced area between the two adjacent first touch sensing electrodes. Therefore, the internal routing wiring31, the first touch sensing electrodes, and the second touch electrode line20may be disposed on the same layer. In the display panel110according to one embodiment of the present disclosure, the internal routing wiring31is disposed inside the display area AA in which touch sensing electrodes are disposed, so that it is possible to reduce the size of the non-display area NA. In addition, as shown inFIG.4, in case of the first touch electrode line10disposed adjacent to the touch pad area LPA and RPA, the first touch electrode line10and the touch pad may be connected through the internal routing wiring31. In case of the first touch electrode line10disposed relatively far away from the touch pad area LPA and RPA, the first touch electrode line10and the touch pad may be connected through the external routing wiring33. The width of the internal routing wiring31is smaller than the width of the external routing wiring33. For example, the internal routing wiring31is 3 μm and the external routing wiring33is 10 μm. Since the width of the internal routing wiring31is smaller (less) than the width of the external routing wiring33, the resistance of the internal routing wiring31is greater than the resistance of the external routing wiring33. Therefore, the first touch electrode line10disposed closer to the touch pad area LPA and RPA is connected to the internal routing wiring31, and the first touch electrode line10disposed farther away from the touch pad area LPA and RPA is connected to the external routing wiring33, so that it is possible to reduce the difference in resistance of the first touch routing wiring30, which varies according to the position of the first touch electrode line10. In addition, due to the resistance difference of the first touch routing wiring30, which occurs even though the internal routing wiring31is disposed, an isoresistance design area may be additionally provided in the second display panel area110badjacent to the touch pad area LPA and RPA. The widths of the first touch routing wirings30disposed in the isoresistance design area may be different from each other. The second touch routing wirings40may also be disposed in the isoresistance design area. A first ground wiring50and a second ground wiring55may be disposed along the circumference of the display area AA on the encapsulation layer114of the non-display area NA. 
The first ground wiring50quickly discharges or blocks static electricity so as not to be affected by static electricity during the touch sensing, thereby preventing or at least reducing a touch sensing error or degradation of sensing sensitivity. Therefore, the first ground wiring50is disposed outside the second ground wiring55with respect to the display panel110, thereby improving an electrostatic blocking effect. Also, '0V' corresponding to a ground voltage may be applied to the first ground wiring50. Herein, one side of the first ground wiring50is connected to the first touch pad in the first touch pad area LPA, and the other side of the first ground wiring50is connected to the second touch pad in the second touch pad area RPA. The second ground wiring55may be disposed along the circumference of the display area AA on the encapsulation layer114of the non-display area NA. The second ground wiring55may be disposed in parallel with the first ground wiring50. InFIG.4, the second ground wiring55is disposed closer to the display area AA than the first ground wiring50, but not limited thereto. Since the second ground wiring55is disposed closer to the display area AA than the first ground wiring50, the second ground wiring55may be disposed closer to the first touch electrode line10and the second touch electrode line20, thereby reducing touch noise between the first touch electrode line10and the second touch electrode line20and improving touch performance. The same voltage as a voltage range applied to the second touch electrode line20may be applied to the second ground wiring55. Herein, one side of the second ground wiring55may be connected to the first touch pad in the first touch pad area LPA, and the other side of the second ground wiring55may be connected to the second touch pad in the second touch pad area RPA. As shown inFIG.4, the first ground wiring50and the second ground wiring55are formed by cutting the wiring once without forming a closed loop. Accordingly, it is possible to prevent or at least reduce the first ground wiring50and the second ground wiring55from bursting (being damaged) by external noise. In addition, the disconnection positions of the first ground wiring50and the second ground wiring55are located toward the center of the upper end of the display panel110while being far from the touch pad area LPA and RPA. Thus, the resistances of the left and right wirings of each of the first ground wiring50and the second ground wiring55may be similarly adjusted with respect to the disconnection location. Also, the disconnection positions of the first ground wiring50and the second ground wiring55may be different from each other. When the disconnection positions of the first ground wiring50and the second ground wiring55are the same, static electricity or noise may flow into the first ground wiring50or the second ground wiring55. Therefore, the first ground wiring50and the second ground wiring55have the different disconnection positions, thereby effectively preventing or at least reducing static electricity and noise. As described above, an enlarged view of 'A1' area and 'A2' area will be shown to describe specific shapes and arrangement of the first touch electrode line10and the second touch electrode line20. FIG.5is an enlarged view of 'A1' area ofFIG.4according to one embodiment. Referring toFIG.5, 'A1' area corresponds to the area in which the internal routing wiring31passes through the first touch electrode line10.
When any one of the plurality of first touch electrode lines10is referred to as an eleventh (11th) touch electrode line11, four of 11th touch sensing electrodes among a plurality of eleventh touch sensing electrodes included in the 11th touch electrode line11are included in the ‘A1’ area. The four of 11th touch sensing electrodes are spaced apart from each other and are referred to as a 111th sub-touch sensing electrode11a, a 112th sub-touch sensing electrode11b, a 113th sub-touch sensing electrode11c, and a 114th sub-touch sensing electrode11d, respectively. The distance between the 111th sub-touch sensing electrode11aand the 112th sub-touch sensing electrode11b, the distance between the 112th sub-touch sensing electrode11band the 113th sub-touch sensing electrode11c, and the distance between the 113th sub-touch sensing electrode11cand the 114th sub-touch sensing electrode11dare the same, but not limited thereto. The 11th touch electrode line11includes a plurality of first connection electrodes15ceconfigured to connect the 111th sub-touch sensing electrode11aand the 112th sub-touch sensing electrode11bto each other, configured to connect the 112th sub-touch sensing electrode11band the 113th sub-touch sensing electrode11cto each other, and configured to connect the 113th sub-touch sensing electrode11cand the 114th sub-touch sensing electrode11dto each other. The plurality of first connection electrodes15ceare disposed on a layer different from the 111th sub-touch sensing electrode11a, the 112th sub-touch sensing electrode11b, the 113th sub-touch sensing electrode11c, and the 114th sub-touch sensing electrode11d. Specifically, each of the plurality of first connection electrodes15ceincludes a connection wiring15ceaand a contact electrode15ceb. In the drawings, it shows two first connection electrodes15cefor connecting the adjacent sub-touch sensing electrodes, but not limited thereto. It is possible to provide a single first connection electrode15ceor three or more of the first connection electrodes15ce. In addition, the ‘A1’ area includes two of the second touch electrode lines adjacent to each other among the plurality of second touch electrode lines20. The two of second touch electrode lines are spaced apart from each other and are referred to as a 21st touch electrode line21and a 22nd touch electrode line22, respectively. The 21st touch electrode line21includes a 211th sub-touch sensing electrode21a, a 212th sub-touch sensing electrode21b, and a 21st connection electrode21cefor connecting together the 211th sub-touch sensing electrode21aand the 212th sub-touch sensing electrode21b. The 21st touch electrode line21is spaced apart from the 111th sub-touch sensing electrode11aand the 112th sub-touch sensing electrode11b. The 22nd touch electrode line22includes a 221st sub-touch sensing electrode22a, a 222nd sub-touch sensing electrode22b, and a 22nd connection electrode22cefor connecting together the 221st sub-touch sensing electrode22aand the 222nd sub-touch sensing electrode22b. The 22nd touch electrode line22is spaced apart from the 113th sub-touch sensing electrode11cand the 114th sub-touch sensing electrode11d. As shown inFIG.5, the 11th touch electrode line11, the 21st touch electrode line21, and the 22nd touch electrode line22are adjacent to each other, and the edges of the touch sensing electrodes included in each of the touch electrode lines may include a plurality of vertices in a zigzag form, but not limited thereto. 
For example, the edge of the touch sensing electrodes may be a straight line or a curved line. Visibility of the touch electrode lines may be reduced by forming the edges of the touch sensing electrodes in a zigzag shape. Meanwhile, the internal routing wiring31is disposed in the spaced area between the 112th sub-touch sensing electrode11band the 113th sub-touch sensing electrode11c. The internal routing wiring31is spaced apart from the 112th sub-touch sensing electrode11band the 113th sub-touch sensing electrode11c. The internal routing wiring31may be disposed on the same layer as the sub-touch sensing electrodes and may be formed of the same material as the sub-touch sensing electrodes. A third ground wiring51may be further disposed in the spaced area between the 112th sub-touch sensing electrode11band the 113th sub-touch sensing electrode11c. The third ground wiring51may be disposed in the spaced area between the 112th sub-touch sensing electrode11band the internal routing wiring31and the spaced area between the internal routing wiring31and the 113th sub-touch sensing electrode11c. The third ground wiring51is spaced apart from the 112th sub-touch sensing electrode11b, the 113th sub-touch sensing electrode11c, and the internal routing wiring31. The third ground wiring51may be disposed on the same layer as the sub-touch sensing electrodes and may be formed of the same material as the sub-touch sensing electrodes. The third ground wiring51may reduce noise of the signal transmitted through the internal routing wiring31by blocking the signal transmitted from the area adjacent to the internal routing wiring31. Specifically, since the internal routing wiring31is connected to any one of the first touch electrode lines10, the third ground wiring51prevents unnecessary capacitance formation between the touch signal transmitted through the second touch electrode line20adjacent to the first touch electrode line and the touch signal transmitted through the internal routing wiring31. Since both the internal routing wiring31and the third ground wiring51are disposed adjacent to the sub-touch sensing electrodes, the internal routing wiring31and the third ground wiring51are formed to be the same as the shape of the sub-touch sensing electrodes. Accordingly, the internal routing wiring31and the third ground wiring51may have a zigzag shape, but not limited thereto. For example, the internal routing wiring31and the third ground wiring51may be in the form of a straight line or a curved line. Accordingly, as the third ground wiring51is connected to the first ground wiring50, the third ground wiring51may receive a ground voltage through the first ground wiring50, but not limited thereto. For example, the third ground wiring51may be floated, and thus may serve as a dummy electrode. FIG.6is an enlarged view of ‘A2’ area ofFIG.4according to one embodiment. Referring toFIG.6, the ‘A2’ area is the area in which the internal routing wiring31is electrically connected to the first touch electrode line10. When any one of the plurality of first touch electrode lines10is referred to as a twelfth (12th) touch electrode line12, four of 12th touch sensing electrodes among a plurality of 12th touch sensing electrodes included in the 12th touch electrode line12are included in the ‘A2’ area. 
The four of 12th touch sensing electrodes are spaced apart from each other, and are referred to as a 121st sub-touch sensing electrode12a, a 122nd sub-touch sensing electrode12b, a 123rd sub-touch sensing electrode12c, and a 124th sub-touch sensing electrode12d, respectively. The distance between the 121st sub-touch sensing electrode12aand the 122nd sub-touch sensing electrode12b, the distance between the 122nd sub-touch sensing electrode12band the 123rd sub-touch sensing electrode12c, and the distance between the 123rd sub-touch sensing electrode12cand the 124th sub-touch sensing electrode12dare the same, but not limited thereto. The 12th touch electrode line12includes a plurality of first connection electrodes15ceconfigured to connect the 121st sub-touch sensing electrode12aand the 122nd sub-touch sensing electrode12bto each other, configured to connect the 122nd sub-touch sensing electrode12band the 123rd sub-touch sensing electrode12cto each other, and configured to connect the 123rd sub-touch sensing electrode12cand the 124th sub-touch sensing electrode12dto each other. The plurality of first connection electrodes15ceare disposed on a layer different from the 121st sub-touch sensing electrode12a, the 122nd sub-touch sensing electrode12b, the 123rd sub-touch sensing electrode12c, and the 124th sub-touch sensing electrode12d. As described above, each of the plurality of first connection electrodes15ceincludes a connection wiring15ceaand a contact electrode15ceb. In the drawings, it shows two first connection electrodes15cefor connecting the adjacent sub-touch sensing electrodes, but not limited thereto. It is possible to provide a single first connection electrode15ceor three or more of the first connection electrodes15ce. In addition, the ‘A2’ area includes two of the second touch electrode lines adjacent to each other among the plurality of second touch electrode lines20. The two of second touch electrode lines are spaced apart from each other and are referred to as a 21st touch electrode line21and a 22nd touch electrode line22, respectively. The 21st touch electrode line21includes a 211th sub-touch sensing electrode21a, a 212th sub-touch sensing electrode21b, and a 21st connection electrode21cefor connecting the 211th sub-touch sensing electrode21aand the 212th sub-touch sensing electrode21b. The 21st touch electrode line21is spaced apart from the 121st sub-touch sensing electrode12aand the 122nd sub-touch sensing electrode12b. The 22nd touch electrode line22includes a 221st sub-touch sensing electrode22a, a 222nd sub-touch sensing electrode22b, and a 22nd connection electrode22cefor connecting the 221st sub-touch sensing electrode22aand the 222nd sub-touch sensing electrode22b. The 22nd touch electrode line22is spaced apart from the 123rd sub-touch sensing electrode12cand the 124th sub-touch sensing electrode12d. As shown inFIG.6, the 12th touch electrode line12, the 21st touch electrode line21, and the 22nd touch electrode line22are adjacent to each other, and the edges of the touch sensing electrodes included in each of the touch electrode lines may include a plurality of vertices in a zigzag form, but not limited thereto. For example, the edge of the touch sensing electrodes may be a straight line or a curved line. Visibility of the touch electrode lines may be reduced by forming the edges of the touch sensing electrodes in a zigzag shape. 
Meanwhile, the internal routing wiring31is disposed in the spaced area between the 122nd sub-touch sensing electrode12band the 123rd sub-touch sensing electrode12c. The internal routing wiring31is spaced apart from the 122nd sub-touch sensing electrode12band the 123rd sub-touch sensing electrode12c. The internal routing wiring31may be disposed on the same layer as the sub-touch sensing electrodes and may be formed of the same material as the sub-touch sensing electrodes. A third ground wiring51may be further disposed in the spaced area between the 122nd sub-touch sensing electrode12band the 123rd sub-touch sensing electrode12c. The third ground wiring51may be disposed in the spaced area between the 122nd sub-touch sensing electrode12band the internal routing wiring31and the spaced area between the internal routing wiring31and the 123rd sub-touch sensing electrode12c. The third ground wiring51is spaced apart from the 122nd sub-touch sensing electrode12b, the 123rd sub-touch sensing electrode12c, and the internal routing wiring31. The third ground wiring51may be disposed on the same layer as the sub-touch sensing electrodes and may be formed of the same material as the sub-touch sensing electrodes. The third ground wiring51may reduce noise of the signal transmitted through the internal routing wiring31by blocking the signal transmitted from the area adjacent to the internal routing wiring31. Specifically, since the internal routing wiring31is connected to the 12th touch electrode line12, the third ground wiring51prevents unnecessary capacitance formation between the touch signal transmitted through the 21st touch electrode line21and the 22nd touch electrode line22adjacent to the 12th touch electrode line12and the touch signal transmitted through the internal routing wiring31. Accordingly, as an open area OA is formed in the third ground wiring51disposed between the internal routing wiring31and the 123rd sub-touch sensing electrode12c, the internal routing wiring31may be electrically connected to the 12th touch electrode line12. The internal routing wiring31may be electrically connected to the 12th touch electrode line12when the open area OA is formed in either one or both of the third ground wirings51disposed between the internal routing wiring31and the adjacent sub-touch sensing electrode. Since both the internal routing wiring31and the third ground wiring51are disposed adjacent to the sub-touch sensing electrodes, the internal routing wiring31and the third ground wiring51are formed to have the same shape as the sub-touch sensing electrodes. Accordingly, the internal routing wiring31and the third ground wiring51may have a zigzag shape, but not limited thereto. For example, the internal routing wiring31and the third ground wiring51may be in the form of a straight line or a curved line. FIG.7is a cross sectional view along V-V′ ofFIG.5according to one embodiment. Specifically,FIG.7is a cross sectional view illustrating the contact electrode15cebconnected to the 112th sub-touch sensing electrode11b, the third ground wiring51, the internal routing wiring31, and the contact electrode15cebconnected to the 113th sub-touch sensing electrode11cin the ‘A1’ area ofFIG.5. Referring toFIG.7, an encapsulation layer114is disposed on a TFT substrate111′. The TFT substrate111′ includes a substrate111, a thin film transistor TFT disposed on the substrate111, and a light emitting element ED disposed on the thin film transistor TFT (not shown inFIG.7). 
As described above, the encapsulation layer114may include a first inorganic encapsulation layer114a, an organic encapsulation layer114b, and a second inorganic encapsulation layer114c. A touch buffer layer115may be disposed on the encapsulation layer114. The touch buffer layer115may be disposed under the first touch electrode line10and the second touch electrode line20. The touch buffer layer115is disposed between the touch sensor and the light emitting element ED so that a separation distance between the touch sensor and the cathode electrode of the light emitting element ED may be designed to maintain a predetermined minimum separation distance. Accordingly, parasitic capacitance between the touch sensor and the cathode electrode may be reduced, and touch sensitivity degradation caused by parasitic capacitance may be prevented. In the same manner, the touch buffer layer115may also be disposed under the touch routing wiring. The touch buffer layer115may be formed of an organic insulating material which may be formed at a low temperature of a predetermined temperature (for example, 100° C.) or less and may have a low dielectric constant (for example, 1-3) to prevent or at least reduce damage to a light emitting layer including an organic material vulnerable to a high temperature. For example, the touch buffer layer115may be formed of an acryl-based material, an epoxy-based material, or a siloxane-based material such as silicon nitride material SiNx. Also, the touch buffer layer115having planarization performance with an organic insulating material may prevent or at least reduce damage to the encapsulation layer114due to bending of the organic light emitting display device and breakage of the touch sensor disposed on the touch buffer layer115. The contact electrode15cebof the first connection electrode15ceis disposed on the touch buffer layer115. A touch insulating layer117is disposed on the contact electrode15ceb. The touch insulating layer117may include an inorganic insulating material such as silicon nitride SiNx, silicon oxide SiOx, silicon oxynitride SiON, or aluminum oxide Al2O3. The touch insulating layer117includes a plurality of contact holes formed in the area corresponding to the contact electrode15ceb. The 112th sub-touch sensing electrode11b, the third ground wiring51, the internal routing wiring31, and the 113th sub-touch sensing electrode11care disposed on the touch insulating layer117. The 112th sub-touch sensing electrode11bis connected to the contact electrode15cebthrough the contact hole formed in the touch insulating layer117, and the 113th sub-touch sensing electrode11cis connected to the contact electrode15cebthrough the contact hole formed in the touch insulating layer117. InFIG.7, the 112th sub-touch sensing electrode11band the 113th sub-touch sensing electrode11care illustrated as a mesh type having a plurality of openings OP, but not limited thereto. For example, the 112th sub-touch sensing electrode11band the 113th sub-touch sensing electrode11cmay be plate-shaped electrode metal having no openings. In other words, the first touch electrode line10and the second touch electrode line20may be a mesh type having a plurality of openings OP or a plate-shaped electrode metal without an opening. 
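Returning briefly to the parasitic-capacitance point made earlier in this passage, the benefit of the touch buffer layer115can be summarized with the usual parallel-plate approximation. This is an illustrative simplification rather than a formula given in the disclosure; A denotes an assumed overlap area between a touch electrode and the cathode electrode, d the buffer-layer thickness, and the relative permittivity is that of the buffer layer.

% Parallel-plate approximation of the parasitic capacitance between a touch
% electrode and the cathode electrode (illustrative simplification).
\[
  C_{\text{parasitic}} \approx \frac{\varepsilon_0 \, \varepsilon_r \, A}{d}
\]
% A thicker buffer layer (larger d) and a low dielectric constant (the relative
% permittivity of roughly 1 to 3 mentioned above) both reduce the parasitic
% capacitance, which is why the buffer layer helps preserve touch sensitivity.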
When the first touch electrode line10and the second touch electrode line20are in the mesh type having the plurality of openings OP, the touch electrode line may be a single layer or a plurality of layers formed of a conductive material such as aluminum Al, titanium Ti, silver Ag, and copper Cu. When the first touch electrode line10and the second touch electrode line20are in the plate-shaped electrode metal without the opening, the touch electrode line may be an electrode metal made of a transparent electrode material so that light emitted from the TFT substrate111′ may be transmitted upward. The contact electrode15ceb, the 11th touch electrode line11, the third ground wiring51, and the internal routing wiring31may be formed of the same material. A touch protection layer118is disposed on the 11th touch electrode line11, the third ground wiring51, and the internal routing wiring31. The touch protection layer118covers the 11th touch electrode line11, the third ground wiring51, and the internal routing wiring31, thereby preventing or at least reducing wirings from being corroded by external moisture. The touch protection layer118may be formed of an organic insulating material such as an acrylic resin, an epoxy resin, polyimide, or the like. A top module119such as a cover glass is attached to the touch protection layer118by an optical adhesive member such as optically clear adhesive OCA or optically clear resin OCR. In this case, a finger or a pen for touch is brought into contact with the cover glass. FIG.8is a cross sectional view along VI-VI′ ofFIG.6according to one embodiment. Specifically,FIG.8is a cross sectional view illustrating the 122nd sub-touch sensing electrode12b, the third ground wiring51, the internal routing wiring31, the open area OA of the third ground wiring51, and the 123rd sub-touch sensing electrode12cin the ‘A2’ area ofFIG.6. Referring toFIG.8, it shows the cross section in which the internal routing wiring31is electrically connected to the 12th touch electrode line12. An encapsulation layer114, a touch buffer layer115, and a touch insulating layer117are disposed on a TFT substrate111′. The 122nd sub-touch sensing electrode12b, the 211th sub-touch sensing electrode21a, the third ground wiring51, the internal routing wiring31, and the 123rd sub-touch sensing electrode12care disposed on the touch insulating layer117. InFIG.8, the 122nd sub-touch sensing electrode12b, the 211th sub-touch sensing electrode21a, and the 123rd sub-touch sensing electrode12care illustrated as a mesh type having a plurality of openings OP, but not limited thereto. For example, the 122nd sub-touch sensing electrode12b, the 211th sub-touch sensing electrode21a, and the 123rd sub-touch sensing electrode12cmay be plate-shaped electrode metal having no openings. The 122nd sub-touch sensing electrode12b, the 211th sub-touch sensing electrode21a, the third ground wiring51, the internal routing wiring31, and the 123rd sub-touch sensing electrode12cmay be formed of the same material. Referring toFIG.6, the internal routing wiring31is connected to the 123rd sub-touch sensing electrode12cthrough the open area OA. As shown inFIG.8, in the open area OA, the internal routing wiring31is directly connected to the 123rd sub-touch sensing electrode12c. A touch protection layer118is disposed on the 12th touch electrode line12, the 21st touch electrode line21, the third ground wiring51, and the internal routing wiring31. 
A top module119such as a cover glass is attached to the touch protection layer118by an optical adhesive member such as optically clear adhesive OCA or optically clear resin OCR. In this case, a finger or a pen for touch is brought into contact with the cover glass. FIG.9is a cross sectional view along IV-IV′ ofFIG.4according to one embodiment.FIG.9shows the display area AA of the display panel110, the non-display area NA of the display panel110, and the boundary area between the display area AA and the non-display area NA. Repetitive explanation of the components described above will be omitted. A thin film transistor90and a light emitting element ED are disposed on a substrate111in the display area AA. The substrate111may be formed of a flexible material such as glass or polyimide. A buffer layer may be additionally provided between the substrate111and the thin film transistor90. The buffer layer may reduce penetration of moisture or impurities through the substrate111. In this case, the thin film transistor90represents the first transistor T1described above. InFIG.9, the thin film transistor90is illustrated as a coplanar structure of a top gate, but not limited thereto. The structure of the thin film transistor90may be variously formed. An active layer91is disposed on the substrate111, and a gate insulating film81is disposed to cover the active layer91. A gate electrode92is disposed on the gate insulating film81while being overlapped with the active layer91, and a passivation layer83is disposed on the gate electrode92to cover the gate electrode92. A source electrode93and a drain electrode94are disposed on the passivation layer83. The source electrode93and the drain electrode94are connected to the active layer91through a contact hole formed in the passivation layer83and the gate insulating film81. Both the gate insulating film81and the passivation layer83may be composed of a single layer or multiple layers of an inorganic insulating material such as silicon oxide SiOx or silicon nitride SiNx, but not limited thereto. The gate electrode92, the source electrode93, and the drain electrode94may include a conductive material, for example, copper Cu, aluminum Al, molybdenum Mo, nickel Ni, titanium Ti, chromium Cr, or an alloy thereof, but not limited thereto. The active layer91may be formed of a semiconductor material such as oxide semiconductor, amorphous silicon, or polysilicon, but not limited thereto. A first planarization layer112for covering the thin film transistor90is disposed on the thin film transistor90. An intermediate electrode IE is disposed on the first planarization layer112, and the intermediate electrode IE is connected to the source electrode93through a contact hole formed in the first planarization layer112. The intermediate electrode IE may be made of a conductive material, for example, copper Cu, aluminum Al, molybdenum Mo, nickel Ni, titanium Ti, chromium Cr, or an alloy thereof, but not limited thereto. A second planarization layer113for covering the intermediate electrode IE is disposed on the intermediate electrode IE. An anode electrode AN is disposed on the second planarization layer113, and the anode electrode AN is connected to the intermediate electrode IE through a contact hole formed in the second planarization layer113. The anode electrode AN may be formed of a transparent conductive material, for example, indium tin oxide ITO or indium zinc oxide IZO, but not limited thereto. 
Each of the first planarization layer112and the second planarization layer113is a layer for reducing a step difference thereunder and may be formed of an organic insulating material. For example, the first planarization layer112and the second planarization layer113may be formed of photo acryl, polyimide, benzocyclobutene-based resin, or acrylate-based resin. A bank layer BN is disposed on the anode electrode AN and is configured to cover a portion of the anode electrode AN. The bank layer BN covers the edge portion of the anode electrode AN and covers a contact hole area of the second planarization layer113on which the anode electrode AN is disposed. The bank layer BN may be disposed at the boundary of the subpixels. A light emitting layer EM is disposed between the bank layers BN adjacent to each other on the bank layer BN. A cathode electrode CA is disposed on the light emitting layer EM and is configured to cover the light emitting layer EM. An encapsulation layer114for protecting the thin film transistor90and the light emitting element ED is disposed on the cathode electrode CA, and a touch buffer layer115is disposed on the encapsulation layer114. A sensing electrode SE is disposed on the touch buffer layer115, and a touch protection layer118is disposed on the sensing electrode SE to cover the sensing electrode SE. The sensing electrode SE includes the first touch electrode line10and the second touch electrode line20. In addition, a portion of the sensing electrode SE is disposed while being overlapped with the contact electrode15ceb. Meanwhile, in the non-display area NA, a dam DM may be disposed on the same layer as the second planarization layer113. The dam DM may prevent the encapsulation layer114, more particularly, an organic encapsulation layer114b, from overflowing to the outside of the substrate111. Accordingly, two dams DM may be disposed side by side as shown in the drawing, but not limited thereto. For example, three dams DM may be disposed. The dam DM may be formed of a single layer or a plurality of layers. For example, the dam DM may be formed in a double-layered structure in which the second planarization layer113and the bank layer BN are stacked, or a three-layered structure in which the first planarization layer112, the second planarization layer113, and the bank layer BN are stacked. A first inorganic encapsulation layer114aand a second inorganic encapsulation layer114cmay be disposed onto the edge portion of the substrate111beyond the two dams DM. The organic encapsulation layer114bdoes not exceed the dam DM disposed in the periphery. The touch buffer layer115and the touch insulating layer117may also extend to the non-display area NA. The external routing wiring33, the second ground wiring55, and the first ground wiring50may be disposed in the non-display area NA and may be formed in a double layer structure. In detail, the external routing wiring33, the second ground wiring55, and the first ground wiring50may be implemented in the form of a double layer in which conductive layers are disposed above and below the touch insulating layer117and the conductive layers formed on the upper and lower portions are connected through contact holes formed in the touch insulating layer117. The conductive layer disposed under the touch insulating layer117in the non-display area NA may be formed of the same material on the same layer as the contact electrode15ceband may be referred to as a lower conductive layer BC. 
Accordingly, it is possible to reduce the resistance of the external routing wiring33, the second ground wiring55, and the first ground wiring50. The touch protection layer118may protect the wirings by covering not only the sensing electrode SE but also the external routing wiring33, the second ground wiring55, and the first ground wiring50disposed in the non-display area NA. FIG.10is a cross sectional view along B-B′ ofFIG.4according to one embodiment. The non-display area NA, a jumping area JA, and the bending area BA of the display panel110are shown inFIG.10. Repetitive explanation of the components described above will be omitted. The jumping area JA is the area where a wiring contacts (jumps to) another wiring passing through the bending area BA before the wirings pass through the bending area BA. The bending area BA corresponds to the boundary between the first display panel area110aand the second display panel area110b, and the second display panel area110bmay be bent and overlap the rear surface of the first display panel area110aowing to the bending area BA. A first conductive layer C1is disposed on a support substrate111″ of the non-display area NA. The support substrate111″ is a component including a substrate111, a gate insulating film81, and a passivation layer83, and is briefly represented as one layer for convenience. The first conductive layer C1may be formed of the same material as a source electrode93and a drain electrode94and may be provided on the same layer as the source electrode93and the drain electrode94. The first conductive layer C1may be connected to the source electrode93or the drain electrode94while being disposed in the non-display area NA and may also be referred to as a link wiring. In addition, the first conductive layer C1may be additionally disposed in the jumping area JA and the bending area BA. A first planarization layer112is disposed on the first conductive layer C1, and a second conductive layer C2is disposed on the first planarization layer112. The first planarization layer112is disposed in the non-display area NA except for the jumping area JA, to thereby prevent an electrical connection between the first conductive layer C1and the second conductive layer C2. The second conductive layer C2may be disposed on the same layer as the intermediate electrode IE in the non-display area NA and may be formed of the same material as the intermediate electrode IE. The second conductive layer C2contacts the first conductive layer C1in the jumping area JA. The second conductive layer C2may be disposed in the bending area BA together with the first conductive layer C1, but not limited thereto. For example, any one of the first conductive layer C1and the second conductive layer C2may be disposed in the bending area BA, or both the first conductive layer C1and the second conductive layer C2may be disposed in the bending area BA while not being overlapped with each other. An insulating layer85may be additionally disposed between the first conductive layer C1and the first planarization layer112. The insulating layer85may be disposed in the non-display area NA except for the jumping area JA and the bending area BA, and may also be disposed on the thin film transistor90in the display area AA. The insulating layer85may be formed of an inorganic insulating material, such as silicon nitride SiNx, silicon oxide SiOx, silicon oxynitride SiON, and the like. A second planarization layer113is disposed on the second conductive layer C2. 
The second planarization layer113is disposed in the non-display area NA except for the jumping area JA. A third conductive layer C3is disposed on the second planarization layer113of the non-display area NA, a touch insulating layer117is disposed on the third conductive layer C3, and a fourth conductive layer C4is disposed on the touch insulating layer117. The third conductive layer C3may be formed of the same material as the lower conductive layer and disposed on the same layer as the lower conductive layer BC, and the fourth conductive layer C4may be formed of the same material as the sensing electrode SE and disposed on the same layer as the sensing electrode SE. The third conductive layer C3and the fourth conductive layer C4are electrodes constituting the touch routing wirings30and40and are connected to the vicinity of the bending area BA. The third conductive layer C3and the fourth conductive layer C4contact each other in the non-display area NA through the open area of the touch insulating layer117, and the fourth conductive layer C4is disposed while being connected to the jumping area JA. The touch insulating layer117is opened in the jumping area JA, and the second conductive layer C2and the fourth conductive layer C4contact each other through the open portion of the touch insulating layer117. In the jumping area JA, the first conductive layer C1, the second conductive layer C2, and the fourth conductive layer C4are electrically connected. The touch signals provided through the touch routing wirings30and40are transmitted to the first conductive layer C1and the second conductive layer C2in the jumping area JA through the fourth conductive layer C4, and are provided to the touch pad in the first touch pad area LPA or the second touch pad area RPA through the first conductive layer C1and/or the second conductive layer C2disposed in the bending area BA. Referring toFIG.4, the above-mentioned jumping area JA may be disposed on the bending area BA. Additionally, the jumping area JA may be disposed under the bending area BA. Only the organic insulating materials of the first planarization layer112, the second planarization layer113, and the touch protection layer118may be disposed in the bending area BA, to thereby prevent or at least reduce a crack of the display panel110and disconnection of the first conductive layer C1and/or the second conductive layer C2. The display device according to the various embodiments of the present disclosure may be described as follows. 
The display device according to one embodiment of the present disclosure may include a substrate including a display area in which a plurality of subpixels are disposed and a non-display area excluding the display area, an encapsulation layer for covering the plurality of subpixels, a first touch electrode line including a plurality of first touch sensing electrodes disposed in a first direction on the encapsulation layer and spaced apart from each other in the first direction, a second touch electrode line provided in the same plane as the first touch electrode line and disposed in a second direction crossing the first direction, a plurality of pads disposed on one side of the substrate, a first touch routing wiring configured to connect some of the first touch electrode line and some among the plurality of pads and disposed between the plurality of first touch sensing electrodes in the second direction, and a second touch routing wiring configured to connect the second touch electrode line to some other pads among the plurality of pads. According to another feature of the present disclosure, a connection line configured to connect the first touch sensing electrodes adjacent to each other in the first direction among the plurality of first touch sensing electrodes may be further included. The connection line may be disposed on a different layer from the first touch electrode line and the second touch electrode line. According to another feature of the present disclosure, the connection line may be disposed on a different layer from the first touch electrode line and the second touch electrode line. According to another feature of the present disclosure, the first touch electrode line and the second touch electrode line may have a rectangular shape, a rhombus shape, or a triangular shape. According to another feature of the present disclosure, the edge of each of the first touch electrode line and the second touch electrode line may have a zigzag shape. According to another feature of the present disclosure, a portion of the first touch routing wiring may be disposed in the display area. According to another feature of the present disclosure, the first touch routing wiring may be disposed on the same layer as the first touch electrode line. According to another feature of the present disclosure, the second touch routing wiring may be disposed in the non-display area and be a double wiring in which two electrodes are overlapped. According to another feature of the present disclosure, one of the two electrodes, which is disposed on a lower portion, may be disposed on the same layer as the connection wiring and is formed of the same material as the connection wiring. According to another feature of the present disclosure, the substrate may include a first area in which the first touch electrode line and the second touch electrode line are disposed, a second area protruding from the first area, and a bending area disposed between the first area and the second area and configured to bend the second area to a rear surface of the first area. According to another feature of the present disclosure, the plurality of subpixels may include a pixel circuit including a plurality of thin film transistors, and a light emitting element, a plurality of wiring disposed on the same layer as some of the electrodes constituting the plurality of thin film transistors are disposed in the bending area, and the first touch electrode line and the second touch electrode line are connected to the plurality of wirings. 
According to another feature of the present disclosure, the second area may include a jumping area configured to connect the first touch electrode line and the plurality of wirings, and the second touch electrode line and the plurality of wirings in an area adjacent to the bending area. According to another feature of the present disclosure, the display device may further include a ground wiring disposed between the first touch routing wiring and the plurality of first touch sensing electrodes in the display area. According to another feature of the present disclosure, the ground wiring may be disposed on the same layer as the plurality of first touch sensing electrodes and the first touch routing wiring and be spaced apart from the plurality of first touch sensing electrodes and the first touch routing wiring. According to another feature of the present disclosure, the ground electrode may be disposed along the circumference of the substrate in the non-display area, and a ground voltage may be applied to the ground electrode. According to another feature of the present disclosure, in some touch routing wirings of the first touch routing wiring and the second touch routing wiring, a constant resistance structure for adjusting resistance in the first touch routing wiring and the second touch routing wiring may be disposed close to the pad area. According to another feature of the present disclosure, A display device may include a substrate including a display area in which a plurality of subpixels are disposed and a non-display area excluding the display area, a dam disposed close to the boundary between the display area and the non-display area on the substrate, an encapsulation layer for covering the plurality of subpixels and the dam, a first touch electrode line including a plurality of first touch sensing electrodes disposed in a first direction on the encapsulation layer and spaced apart from each other in the first direction, a second touch electrode line provided in the same plane as the first touch electrode line and disposed in a second direction crossing the first direction, a first touch routing wiring connected to the first touch electrode line in the display area and disposed in the second direction between the plurality of first touch sensing electrodes, and a second touch routing wiring connected to the second touch electrode line in the non-display area. According to another feature of the present disclosure, the display device may further include a connection line configured to connect the two adjacent first touch sensing electrodes among the plurality of first touch sensing electrodes under the first touch electrode line. According to another feature of the present disclosure, the display device may further include a ground wiring disposed between the first touch routing wiring and the plurality of first touch sensing electrodes in the display area. According to another feature of the present disclosure, the substrate may include a data driving circuit disposed on one surface of the substrate, and pads disposed on left and right sides of the data driving circuit and connected to the first touch routing wiring and the second touch routing wiring, wherein one end of the ground wiring may be connected to one of the pads disposed on the left side and the other end of the ground wiring may be connected to one of the pads disposed on the right side. 
According to the embodiment of the present disclosure, the touch electrode line and the touch routing wiring are disposed on the same layer in the display area of the display panel so that it is possible to reduce the size of the non-display area. According to the embodiment of the present disclosure, since the ground wiring is additionally disposed between the touch sensing electrode and the touch routing wiring disposed in the display area, static electricity may be quickly discharged through the ground wiring so as not to be affected by static electricity during touch sensing, thereby preventing the touch sensing error or the degradation of sensing sensitivity. It will be apparent to those skilled in the art that various substitutions, modifications, and variations are possible within the scope of the present disclosure without departing from the spirit and scope of the present disclosure. Therefore, the scope of the present disclosure is represented by the following claims, and all changes or modifications derived from the meaning, range and equivalent concept of the claims should be interpreted as being included in the scope of the present disclosure.
11861095 | DETAILED DESCRIPTION The following detailed description is merely illustrative in nature and is not intended to limit the embodiments of the subject matter or the application and uses of such embodiments. As used herein, the word “exemplary” means “serving as an example, instance, or illustration.” Any implementation described herein as exemplary is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description. Techniques and technologies may be described herein in terms of functional and/or logical block components, and with reference to symbolic representations of operations, processing tasks, and functions that may be performed by various computing components or devices. Such operations, tasks, and functions are sometimes referred to as being computer-executed, computerized, software-implemented, or computer-implemented. It should be appreciated that the various block components shown in the figures may be realized by any number of hardware, software, and/or firmware components configured to perform the specified functions. For example, an embodiment of a system or a component may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, logic elements, look-up tables, or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. When implemented in software, or the like, various elements of the systems and devices described herein are essentially the code segments or instructions that cause one or more processor devices to perform the various tasks. In certain embodiments, the program or code segments are stored in a tangible processor-readable medium, which may include any medium that can store or transfer information. Examples of a non-transitory and processor-readable medium include an electronic circuit, a semiconductor memory device, a ROM, a flash memory, an erasable ROM (EROM), a floppy diskette, a CD-ROM, an optical disk, a hard disk, or the like. The subject matter presented here relates to certain features of a media player application that can be rendered on a touchscreen display of an electronic device. More specifically, the disclosed subject matter relates to a touchscreen locking feature that disables at least some touchscreen functionality of a media player during playback of media content. The media player described herein can support the playback of audio content, video-only content, video content that includes audio (i.e., traditional video content), a slideshow of still images, or the like. For ease of description and simplicity, the following description refers to the presentation of video content in the context of an exemplary video player embodiment. A media player of the type described herein can be rendered and displayed on any suitably configured touchscreen display. The touchscreen display can be integrated with a host electronic device, or it can be a distinct component that communicates and cooperates with an electronic device. In certain embodiments, a touchscreen display can be realized as a removable peripheral component that is compatible with a host electronic device. 
In yet other embodiments, the touchscreen display can be implemented with a more complex system, tool, or instrument (such as a vehicle, a piece of manufacturing equipment, an appliance, or the like). In this regard, an electronic device having a touchscreen display can be realized as any of the following devices, systems, or components, without limitation: a mobile telephone; a personal computer (in any form factor, including a desktop, a laptop, a handheld, etc.); a tablet computing device; a wearable computing device; a video game device or console; a digital media player device; a household appliance; a piece of home entertainment equipment; a medical device; a navigation device; an electronic toy or game; a vehicle instrument or instrument panel; a control panel of a piece of machinery, a tool, or the like; a digital camera or video camera; a musical instrument; or a remote control device. It should be appreciated that this list is not exhaustive, and it is not intended to limit the scope or application of the embodiments described herein. Turning now to the drawings,FIG.1is a simplified block diagram representation of an exemplary embodiment of a video delivery system100that is suitably configured to support the techniques and methodologies described in more detail below. The system100(which has been simplified for purposes of illustration) generally includes, without limitation: at least one media content source102(referred to in the singular form herein for the sake of convenience); and an electronic device (e.g., a media player device104or other form of customer equipment that is capable of receiving, processing, and rendering media content). In certain embodiments, the media player device104communicates with the media content source102using a data communication network106. For the sake of brevity, conventional techniques related to satellite, cable, and Internet-based communication systems, video broadcasting systems, data transmission, signaling, network control, and other functional aspects of the systems (and the individual operating components of the systems) may not be described in detail herein. The data communication network106is any digital or other communications network capable of transmitting messages between senders (e.g., the media content source102) and receivers (e.g., the media player device104). In various embodiments, the network106includes any number of public or private data connections, links or networks supporting any number of communications protocols. The network106may include the Internet, for example, or any other network based upon TCP/IP or other conventional protocols. In various embodiments, the network106also incorporates a wireless and/or wired telephone network, such as a cellular communications network for communicating with mobile phones, personal digital assistants, and/or the like. The network106may also incorporate any sort of wireless or wired local area networks, such as one or more IEEE 802.3 and/or IEEE 802.11 networks. The media content source102may be deployed as a head end facility and/or a satellite uplink facility for the system100. In some embodiments, the media content source102may include or cooperate with one or more web-based content delivery applications, services, or providers. The media content source102generally functions to control content, signaling data, programming information, and other data sent to any number of receiving components. 
The media content source102includes one or more data processing systems or architectures that are capable of producing signals that are transmitted to customer premise equipment, mobile devices, computer systems, or the like. In various embodiments, the media content source102represents a satellite, cable, cloud-based, or other content distribution center having suitably configured and deployed control system(s) for obtaining, accessing, managing, and/or communicating content, signaling information, blackout information, programming information, and other data. The media player device104may be implemented as a computer-based or processor-based electronic device having an appropriate media player application installed thereon. The media player application supports the playback of streaming media content, which can be provided by the media content source102. Alternatively or additionally, the media player application supports the playback of stored media content108, which can be locally stored at the media player device104. FIG.2is a simplified block diagram representation of an exemplary embodiment of a computer-based media player device104having a touchscreen display120that supports the presentation of media content. The device104generally includes, without limitation: at least one processor122; at least one memory storage device or element124; the touchscreen display120; at least one communication (network) interface126; and input and output (I/O) devices128. In practice, the device104can include additional components, elements, and functionality that may be conventional in nature or unrelated to the particular media playback functionality described here. A processor122may be, for example, a central processing unit (CPU), a field programmable gate array (FPGA), a microcontroller, an application specific integrated circuit (ASIC), or any other logic device or combination thereof. One or more memory elements124are communicatively coupled to the at least one processor122, and can be implemented with any combination of volatile and non-volatile memory. The memory element124has non-transitory machine-readable and computer-executable instructions (program code)130stored thereon, wherein the instructions130are configurable to be executed by the at least one processor122as needed. When executed by the at least one processor122, the instructions130cause the at least one processor122to perform the associated tasks, processes, and operations defined by the instructions130. Of course, the memory element124may also include instructions associated with a file system of the host device104and instructions associated with other applications or programs. Moreover, the memory element124can serve as a data storage unit for the host device104. For example, the memory element124can provide a storage buffer for images (e.g., video frame thumbnails, selected screenshots, or the like) and/or for streaming media content that is presented by the device104. In certain embodiments, the memory element124is used to maintain stored media content108that can be presented by the device104. The touchscreen display120may be integrated with the device104or communicatively coupled to the device104as a peripheral or accessory component. The shape, size, resolution, and technology of the touchscreen display120will be appropriate to the particular implementation of the device104. 
The touchscreen display120can be realized as a monitor, screen, or another conventional electronic display that is capable of graphically presenting data and/or information provided by the device104. The touchscreen display120is communicatively coupled to the at least one processor122, and it can leverage existing technology to detect touch gestures and contact with a user's finger (or fingers), a stylus, or the like. The communication interface126represents the hardware, software, and processing logic that enables the device104to support data communication with other devices. In practice, the communication interface126can be suitably configured to support wireless and/or wired data communication protocols as appropriate to the particular embodiment. For example, if the device104is a smartphone, then the communication interface126can be designed to support a cellular communication protocol, a short-range wireless protocol (such as the BLUETOOTH communication protocol), and a WLAN protocol. As another example, if the device104is a desktop or laptop computer, then the communication interface can be designed to support the BLUETOOTH communication protocol, a WLAN protocol, and a LAN communication protocol (e.g., Ethernet). In practice, the communication interface126enables the device104to receive media content for presentation on the touchscreen display120, wherein the media content can be downloaded, streamed, or otherwise provided for real-time (or near real-time) playback or for storage at the device104. The I/O devices128enable the user of the device104to interact with the device104as needed. In practice, the I/O devices128may include, without limitation: a speaker, an audio transducer, or other audio feedback component; a haptic feedback device; a microphone; a mouse or other pointing device; a touchscreen or touchpad device; a keyboard; a joystick; a biometric sensor or reader (such as a fingerprint reader, a retina or iris scanner, a palm print or palm vein reader, etc.); a camera; or any conventional peripheral device. In this context, the touchscreen display120can be categorized as an I/O device128. Moreover, the touchscreen display120may incorporate or be controlled to function as a fingerprint or palm print scanner. A haptic feedback device can be controlled to generate a variable amount of tactile or physical feedback, such as vibrations, a force, knock, or bump sensation, a detectable movement, or the like. Haptic feedback devices and related control schemes are well known and, therefore, will not be described in detail here. This description assumes that an electronic device of the type described above can be operated to present media content to a user. The source, format, and resolution of the media content are unimportant for purposes of this description. Indeed, the data that conveys the media content can be locally stored at the electronic device, or it can be provided in an on-demand streaming media format from a content source, a service provider, a cloud-based entity, or the like. The following description assumes that the device104and its installed media player application can successfully and compatibly process, render, and display the desired media (video) content in an appropriate manner. FIG.3is a flow chart that illustrates an exemplary embodiment of a process300for controlling the touch sensitivity of a touchscreen display of an electronic device during playback of media content. 
In accordance with the embodiment described here, the process300temporarily disables or locks at least some of the touchscreen functionality of a displayed media player during playback of media content, to reduce or eliminate playback interruptions caused by accidental contact with the touchscreen. The various tasks performed in connection with the process300may be performed by software, hardware, firmware, or any combination thereof. For illustrative purposes, the following description of the process300may refer to elements mentioned above in connection withFIG.1andFIG.2. It should be appreciated that the process300may include any number of additional or alternative tasks, the tasks shown inFIG.3need not be performed in the illustrated order, and the process300may be incorporated into a more comprehensive procedure or process having additional functionality not described in detail herein. Moreover, one or more of the tasks shown inFIG.3could be omitted from an embodiment of the process300as long as the intended overall functionality remains intact. The process300begins by controlling the display of a media player on the touchscreen display of an electronic device (task302). Task302may involve the launching or opening of a media player application, a media player software component, or the like. This description assumes that the process300controls the playback of media content with the media player (task304). In the case of video content, task304involves the display or presentation of the video content on the touchscreen display. For example, task302may be performed to open the media player with selected media content ready to be played, such that playback of the selected media content (task304) begins when the user presses a graphical representation of a Play button. As another example, task302may be performed to open the media player and initiate automatic playback of the selected media content (task304). In this regard,FIG.4is a screen shot of an exemplary media player400, as captured during playback of video content. The illustrated embodiment of the media player400includes a primary window402for the presentation of media content. The primary window402inFIG.4can be defined by the entire rectangular perimeter (e.g., full-screen mode).FIG.4depicts a state where only the intended video content is displayed, and where common user interface elements, operating system elements, and media player controls are hidden, obscured, disabled, or deactivated. In accordance with certain embodiments, hidden, obscured, disabled, or deactivated items or elements can be momentarily displayed in the primary window402in response to detected user interaction (touch or contact) with the touchscreen display. For example, the media player controls, a playback progress bar, status indicators, and/or other elements can be displayed in an active manner for a few seconds when the user touches the touchscreen (anywhere on the screen or in designated areas or zones of the screen). This example assumes that some type of detectable event, user interaction, command, or state/status of the host device triggers the display of an interactive lock element on the touchscreen display. Accordingly, the process300detects the occurrence of a “display lock” trigger event (task306) and, in response to that trigger event, controls the display of the interactive lock element (task308). 
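A framework-agnostic sketch of how tasks 306 and 308 might fit together is given below. It is not the patented implementation and uses no particular UI toolkit: the class and method names are hypothetical, and the five-second auto-hide delay is simply one of the example durations mentioned in this description. The specific trigger types are enumerated in the paragraphs that follow.

# Minimal, framework-agnostic sketch of the "display lock" trigger handling
# (tasks 306/308). Names and the 5-second auto-hide delay are hypothetical.
import time

class MediaPlayerOverlay:
    """Models the temporarily displayed overlay: the interactive lock element,
    the media player controls, the progress bar, and the play head."""

    AUTO_HIDE_SECONDS = 5.0  # one of the example durations; an assumption of this sketch

    def __init__(self) -> None:
        self.overlay_visible = False
        self._hide_deadline = 0.0

    def on_display_lock_trigger(self) -> None:
        # Tasks 306/308: a qualifying interaction (e.g., a touch anywhere on the
        # touchscreen) reveals the interactive lock element and player controls
        # while playback continues.
        self.overlay_visible = True
        self._hide_deadline = time.monotonic() + self.AUTO_HIDE_SECONDS

    def tick(self) -> None:
        # Called periodically by a hypothetical render loop; hides the overlay once
        # the auto-hide deadline passes with no further interaction, returning the
        # player to the full-screen mode of FIG. 4.
        if self.overlay_visible and time.monotonic() >= self._hide_deadline:
            self.overlay_visible = False

# Example: a touch anywhere on the screen reveals the overlay during playback.
overlay = MediaPlayerOverlay()
overlay.on_display_lock_trigger()
print(overlay.overlay_visible)  # True until the auto-hide deadline elapses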
In certain embodiments, the “display lock” trigger event corresponds to some type of user interaction with the host device, including, without limitation: physical contact with the touchscreen display (a simple touch, a designated tapping pattern, a designated swipe pattern, etc.); a voice command; movement of the host device (such as a designated type of shaking or motion); a detectable facial appearance; a detectable eye blinking pattern; and/or a biometric scan (such as a fingerprint scan, a retina or iris scan, a palm print or palm vein scan, etc.). In accordance with the exemplary embodiment described here, the “display lock” trigger event corresponds to interaction with the touchscreen display. Thus, if the user touches or taps anywhere on the touchscreen display (using a finger, a stylus, or any object that can serve as a touchscreen input device), the process300will respond by initiating the display of the interactive lock element (task308). FIG.5is a screen shot of the media player400shown inFIG.4, as captured with an interactive lock element406displayed in an unlocked state.FIG.5shows the state of the media player400after detection of a “display lock” trigger event and after the interactive lock element406has been rendered. In certain embodiments, processing of the “display lock” trigger event also causes the display of additional elements, such as user interface elements, media player controls, operating system buttons, or the like. In this regard,FIG.5depicts the media player400with the following displayed elements: media player controls408(e.g., Stop button, Back 10 Seconds button, Pause button, and Forward 30 Seconds button); a progress bar410; and a play head412associated with the progress bar. It should be appreciated that additional information and/or graphical elements can be displayed in response to detection of the “display lock” trigger event. The interactive lock element406is a graphical user interface (GUI) element that serves as a user control item. For the state depicted inFIG.5, the interactive lock element406is displayed with an unlocked appearance to indicate that the touchscreen display is currently unlocked, enabled, and active (i.e., the touchscreen remains unmodified with its normal intended functionality). When the touchscreen display is locked, disabled, or inactive, the interactive lock element406is displayed with a locked appearance (seeFIG.6). In certain embodiments, the interactive lock element406, the media player controls408, the progress bar410, and the play head412are only temporarily displayed while the media content continues to play. For example, these items may automatically disappear after being displayed for a few seconds, 5 seconds, 10 seconds, or the like, unless one or more of the items are manipulated or touched. Accordingly, these items can appear when the touchscreen display is touched to enable the user to interact with one or more of them. However, if the user does not interact with any of these items, then they are removed from the display and the media content continues to play. After these items are removed, the media player400reverts to the full screen display mode (seeFIG.4). This example assumes that some type of detectable event, user interaction, command, or state/status of the host device triggers the activation of the interactive lock element406. 
Accordingly, the process300detects the occurrence of an “activate lock” trigger event (task310) and, in response to detecting the occurrence of that trigger event, locks or disables at least some touchscreen functionality of the media player during playback of media content (task312). In certain embodiments, the “activate lock” trigger event corresponds to some type of user interaction with the host device, including, without limitation: selecting the interactive lock element406; physical contact with the touchscreen display (a simple touch, a designated tapping pattern, a designated swipe pattern, touching a designated area or zone of the touchscreen display, etc.); a voice command; movement of the host device (such as a designated type of shaking or motion); a detectable facial appearance; a detectable eye blinking pattern; and/or a biometric scan (such as a fingerprint scan, a retina or iris scan, a palm print or palm vein scan, etc.). In accordance with certain embodiments, the “activate lock” trigger event corresponds to interaction with the interactive lock element406. Thus, if the user touches or taps on or near the displayed interactive lock element406(using a finger, a stylus, or any object that can serve as a touchscreen input device), the process300will respond by locking, disabling, or deactivating at least some of the touchscreen functionality (task312). In accordance with the exemplary embodiment described here, the user engages the interactive lock element406to change the functionality of the touchscreen display (locked/disabled versus unlocked/enabled). Thus, task310can be associated with the detection of a touch selection of the interactive lock element406displayed on the touchscreen display. In addition, task310can be associated with the detection of a registered fingerprint (any finger, including a thumb) on the touchscreen display or on a fingerprint scanner of the host device. In certain embodiments, the presence of the registered fingerprint is detected overlying the interactive lock element406displayed on the touchscreen display. In such embodiments, the user may be required to press and hold a finger overlying the displayed interactive lock element406for a short period of time to allow the host device to read and validate the user's fingerprint. This safeguard is desirable to ensure that only authorized users can lock/unlock the touchscreen. In addition to locking/disabling the touchscreen functionality, the host device may take further actions in response to the “activate lock” trigger event. For example, the process300may hide certain non-essential, unimportant, or irrelevant user interface items in response to detecting the occurrence of the “activate lock” trigger event (task314). Thus, the media content continues playing in an uninterrupted manner, and the media player controls408, the progress bar410, and the play head412can be hidden while the touchscreen remains locked. As another example, the process300may change the appearance or status of the interactive lock element406in response to detecting the occurrence of the “activate lock” trigger event (task316). In this regard, the appearance of the interactive lock element406can be updated to indicate the locked status. 
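One way to picture tasks 310 through 316 (locking the touchscreen, hiding non-essential items, updating the lock element's appearance, and optionally validating a registered fingerprint) is the standalone sketch below. The fingerprint check, the 5-second auto-hide timeout, and all class and attribute names are illustrative assumptions; the specification only states that transient items disappear after a few seconds unless touched.

```python
import time

class TouchscreenLockController:
    """Illustrative handler for an 'activate lock' trigger (tasks 310-316)."""

    AUTO_HIDE_SECONDS = 5  # assumed; the text says "a few seconds, 5 seconds, 10 seconds, or the like"

    def __init__(self, registered_fingerprints=()):
        self.registered_fingerprints = set(registered_fingerprints)
        self.touchscreen_locked = False
        self.controls_visible = True
        self.lock_element_state = "unlocked"
        self._shown_at = None

    def on_lock_element_touched(self, fingerprint=None):
        """Task 310: user touches the interactive lock element."""
        if self.registered_fingerprints and fingerprint not in self.registered_fingerprints:
            return                            # only an authorized user may lock the screen
        self.touchscreen_locked = True        # task 312: disable media player touch functionality
        self.controls_visible = False         # task 314: hide non-essential user interface items
        self.lock_element_state = "locked"    # task 316: switch to the locked appearance
        self._shown_at = time.monotonic()

    def tick(self):
        """Remove the transient lock element once the auto-hide timeout elapses."""
        if self._shown_at and time.monotonic() - self._shown_at > self.AUTO_HIDE_SECONDS:
            self.lock_element_state = "hidden"
            self._shown_at = None


controller = TouchscreenLockController(registered_fingerprints={"thumb-01"})
controller.on_lock_element_touched(fingerprint="thumb-01")
assert controller.touchscreen_locked and not controller.controls_visible
```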
FIG.6is a screen shot of the media player400, as captured with the interactive lock element406displayed in a locked state.FIG.6shows the state of the media player after detection of an “activate lock” event, which results in the removal of non-essential graphical elements, locking/disabling of the touchscreen display, and display of the locked version of the interactive lock element406. In certain embodiments, the interactive lock element406automatically disappears after a short period of time, e.g., a few seconds, five seconds, or the like. Thereafter, playback of the media content continues in a full screen mode with the touchscreen display locked or disabled. The locked status of the touchscreen display ensures that the media content plays without interruption or any distractions that might otherwise be caused by inadvertent contact with the touchscreen display. Unlocking of the touchscreen display is achieved in a similar manner. To this end, the process300may continue until detection of another “display lock” trigger event (task318), which causes the process300to control the display of the interactive lock element406once again (task320). For this example, user interaction with the touchscreen display serves as the “display lock” trigger event, which results in the display of the interactive lock element406in its locked state. Playback of the media content continues, and non-essential elements remain hidden or deactivated. This example assumes that some type of detectable event, user interaction, command, or state/status of the host device triggers the next activation of the interactive lock element406. Accordingly, the process300detects the occurrence of an “unlock” trigger event (task322) and, in response to detecting the occurrence of that trigger event, unlocks or enables the touchscreen functionality of the media player during playback of the media content (task324). In certain embodiments, the “unlock” trigger event corresponds to some type of user interaction with the host device, including, without limitation: selecting the interactive lock element406; physical contact with the touchscreen display (a simple touch, a designated tapping pattern, a designated swipe pattern, touching a designated area or zone of the touchscreen display, etc.); a voice command; movement of the host device (such as a designated type of shaking or motion); a detectable facial appearance; a detectable eye blinking pattern; and/or a biometric scan (such as a fingerprint scan, a retina or iris scan, a palm print or palm vein scan, etc.). In accordance with certain embodiments, the “unlock” trigger event corresponds to interaction with the displayed interactive lock element406. Thus, if the user touches or taps on or near the displayed interactive lock element406(using a finger, a stylus, or any object that can serve as a touchscreen input device), the process300will respond by unlocking, enabling, or activating the touchscreen functionality (task324). In accordance with the exemplary embodiment described here, the user engages the interactive lock element406to change the functionality of the touchscreen display (locked/disabled versus unlocked/enabled). Thus, task322can be associated with the detection of a touch selection of the interactive lock element406while it is displayed in its locked state. In addition, task322can be associated with the detection of a registered fingerprint (any finger, including a thumb) on the touchscreen display or on a fingerprint scanner of the host device. 
In certain embodiments, the presence of the registered fingerprint is detected overlying the interactive lock element406displayed on the touchscreen display. In such embodiments, the user may be required to press and hold a finger overlying the displayed interactive lock element406for a short period of time to allow the host device to read and validate the user's fingerprint. In addition to unlocking/enabling the touchscreen functionality, the host device may take further actions in response to the “unlock” trigger event. For example, the process300may unhide the previously hidden non-essential, unimportant, or irrelevant user interface items in response to detecting the occurrence of the “unlock” trigger event (task326). Thus, the media content continues playing in an uninterrupted manner, and the media player controls408, the progress bar410, and the play head412can be displayed (temporarily or persistently) while the touchscreen remains unlocked. As another example, the process300may change the appearance or status of the interactive lock element406in response to detecting the occurrence of the “unlock” trigger event (task328). In this regard, the appearance of the interactive lock element406can be updated to indicate the unlocked status (as depicted inFIG.5). The above description refers to certain trigger events that cause the display of the interactive lock element406. In some implementations, the interactive lock element406(in its unlocked state) is automatically displayed by default whenever playback of media content begins, or whenever the Play button is activated. In such implementations, the interactive lock element406may be displayed for only a short period of time before it automatically disappears. In accordance with certain embodiments, the interactive lock element406can be displayed and activated to lock the touchscreen display when media content playback is paused, when media content playback is stopped, and/or before media content playback begins. In such embodiments, the user can initiate locking of the touchscreen display at a time when the media content is not playing, but the locking or disabling of the touchscreen display is delayed until after playback actually begins. In accordance with some embodiments, the touchscreen display is automatically unlocked (without any user involvement or interaction) in response to various events, conditions, or device status. For example, the touchscreen display can be automatically unlocked when playback of the media content ends. As another example, the touchscreen display can be automatically unlocked if a commercial break, an advertisement, or other type of interstitial content is detected during playback of the media content. As another example, the touchscreen display can be automatically unlocked if the host device receives an incoming call, if one or more designated applications generates a notification or message, or the like. The touchscreen locking methodology described here is not limited or restricted to media player applications. Indeed, touchscreen locking methodology can also be utilized with other applications, software components, and devices if so desired. For example, touchscreen locking can be implemented with any of the following applications, without limitation: a music player; a geographical navigation system; a presentation (slideshow) application; a photo or video editing application; a video game system. 
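The automatic unlock behavior mentioned earlier in this passage (end of playback, a detected commercial break or other interstitial content, an incoming call, or a notification from a designated application) reduces to a simple predicate. The sketch below is only an illustration; the argument names are not taken from the specification.

```python
def should_auto_unlock(playback_ended, interstitial_detected,
                       incoming_call, designated_app_notification):
    """Return True when the touchscreen should unlock without user interaction,
    mirroring the example conditions given in the text."""
    return any((playback_ended, interstitial_detected,
                incoming_call, designated_app_notification))


assert should_auto_unlock(False, True, False, False)        # commercial break detected
assert not should_auto_unlock(False, False, False, False)   # keep the screen locked
```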
While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or embodiments described herein are not intended to limit the scope, applicability, or configuration of the claimed subject matter in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the described embodiment or embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope defined by the claims, which includes known equivalents and foreseeable equivalents at the time of filing this patent application.
11861096

DETAILED DESCRIPTION OF THE INVENTION The embodiments are described more fully hereinafter with reference to the accompanying drawings, in which embodiments of the inventive concept are shown. In the drawings, the size and relative sizes of layers and regions may be exaggerated for clarity. Like numbers refer to like elements throughout. The embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the inventive concept to those skilled in the art. The scope of the embodiments is therefore defined by the appended claims. The detailed description that follows is written from the point of view of a control systems company, so it is to be understood that generally the concepts discussed herein are applicable to various subsystems and not limited to only a particular controlled device or class of devices. Reference throughout the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with an embodiment is included in at least one embodiment of the embodiments. Thus, the appearance of the phrases "in one embodiment" or "in an embodiment" in various places throughout the specification is not necessarily referring to the same embodiment. Further, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

LIST OF REFERENCE NUMBERS FOR THE ELEMENTS IN THE DRAWINGS IN NUMERICAL ORDER The following is a list of the major elements in the drawings in numerical order.
100 Control Device
101 Housing
102 Buttons
103 Front Surface
106 Faceplate
108 Opening
110 Indicia
202 Vertical Side Walls
203 Horizontal Top Wall
204 Horizontal Bottom Wall
209 Trim Plate
211 Mounting Holes
212 Screws
213 Screws
217 Opening
218 Lens
301 Front Housing Portion
302 Rear Housing Portion
303a-c Printed Circuit Board(s) (PCB)
304a-e Tactile Switches
305a-e Touch Sensors
306 Side Walls
307 Screws
308 Front Wall
309 Openings
310 Openings
311 Light Sources/Light Emitting Diodes (LEDs)
314 Side Edges
315 Light Bars
316 Orifices
317 Light Sensor
318 Orifices
319 Shoulders
415a-e Button Zones
705a Left Touch Sensor
705b Right Touch Sensor
801 Front Wall
802 Side Walls
803 Rear Surface
805 Arms
806 Projection
807 Posts
810 Abutments
813 Horizontal Pivot Axis
814 Vertical Pivot Axis
900a-b Three-zone Height Button(s)
901 Front Wall
902 Side Walls
903 Rear Surface
905 Arms
906 Projections
907 Posts
910 Abutment
913 Horizontal Pivot Axes
914 Vertical Pivot Axes
1002 Two-zone Height Button
1003 Three-zone Height Button
1004 Four-zone Height Button
1005 Five-zone Height Button
1100 Block Diagram of a Control Device
1101 Controller
1102 Memory
1103 Communication Interface
1104 User Interface
1111 Power Supply
1112 Switch
1113 Dimmer
1200 Button Tree

LIST OF ACRONYMS USED IN THE SPECIFICATION IN ALPHABETICAL ORDER The following is a list of the acronyms used in the specification in alphabetical order.
AC Alternating Current
ASIC Application Specific Integrated Circuit
AV Audiovisual
DC Direct Current
HVAC Heating, Ventilation and Air Conditioning
IR Infrared
LED Light Emitting Diode
PCB Printed Circuit Board
PoE Power-over-Ethernet
RAM Random-Access Memory
RF Radio Frequency
RGB Red-Green-Blue
RISC Reduced Instruction Set Computer
ROM Read-Only Memory

MODE(S) FOR CARRYING OUT THE INVENTION For 40 years Crestron Electronics, Inc.
has been the world's leading manufacturer of advanced control and automation systems, innovating technology to simplify and enhance modern lifestyles and businesses. Crestron designs, manufactures, and offers for sale integrated solutions to control audio, video, computer, and environmental systems. In addition, the devices and systems offered by Crestron streamlines technology, improving the quality of life in commercial buildings, universities, hotels, hospitals, and homes, among other locations. Accordingly, the systems, methods, and modes of the aspects of the embodiments described herein can be manufactured by Crestron Electronics, Inc., located in Rockleigh, N.J. The different aspects of the embodiments described herein pertain to the context of wall mounted control devices, but are not limited thereto, except as may be set forth expressly in the appended claims. Particularly, the aspects of the embodiments are related to an apparatus, system, and method for a wall mounted control device with interchangeable buttons or button trees that are accomplished through a combination of tactile switches and touch sensors to increase button configurations. This allows the control device to accommodate various button configurations, such as those shown inFIG.10, without the need for a large number of tactile switches. Also, the present embodiments obviate the need for multiple button designs to accomplish different functions, such as a push button, a side to side rocker button, or an up and down rocker button. The benefit is compounded further when using fixed button tree assemblies, obviating the need for large quantities of configurations of different button types. The present combination of tactile switches with touch sensors allows for a single button design or button tree design that accommodates various functions through programming and engraving, including but not limited to a push button function, a side to side rocker function, an up-down rocker function, or any combinations thereof, such as a side to side rocker with center push function or an up-down rocker with center push function. As such, the number of button types or button trees required is reduced while the number of allowable configurations is increased. Referring toFIG.1, there is shown a perspective front view of an illustrative wall mounted control device100according to an illustrative embodiment. The control device100may serve as a user interface to associated loads or load controllers in a space. According to an embodiment, the control device100may be configured as a keypad comprising a plurality of buttons, such as five single height buttons102. However, other button configuration may be used as will be described below. For example, the control device100may be configured as a lighting switch having a single button that may be used to control an on/off status of the load. Alternatively, or in addition, the single button can be used to control a dimming setting of the load. Each button102may be associated with a particular load and/or to a particular operation of a load, such as different lighting scenes. In an illustrative embodiment, the control device100may be configured to receive control commands directly from a user via buttons102, and either directly or through a control processor transmit the control command to a load (such as a light, fan, window blinds, etc.) or to a load controller (not shown) electrically connected to the load to control an operation of the load based on the control commands. 
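As a rough illustration of the control flow just described, in which a button press is translated into a control command that is delivered either directly to a load or to a load controller, possibly by way of a control processor, consider the sketch below. The transport, the send() interface, and the command format are all hypothetical.

```python
def dispatch_command(command, load=None, load_controller=None, control_processor=None):
    """Route a control command as described in the text: through a control
    processor when one is present, otherwise to a load controller wired to the
    load, otherwise directly to the load itself."""
    target = control_processor or load_controller or load
    if target is None:
        raise ValueError("no control target configured for this button")
    target.send(command)


class _PrintingTarget:          # stand-in for a load, load controller, or processor
    def send(self, command):
        print("sent:", command)


dispatch_command({"button": 102, "action": "toggle"}, load=_PrintingTarget())
```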
In various aspects of the embodiments, the control device100may control various types of electronic devices or loads. The control device100may comprise one or more control ports for interfacing with various types of electronic devices or loads, including, but not limited to audiovisual (AV) equipment, lighting, shades, screens, computers, laptops, heating, ventilation and air conditioning (HVAC), security, appliances, and other room devices. The control device100may be used in residential load control, or in commercial settings, such as classrooms or meeting rooms. Each button102may comprise indicia110disposed thereon to provide designation of each button's function. Each button102may be backlit, for example via light emitting diodes (LEDs), for visibility and/or to provide status indication of the button102. For example, buttons102may be backlit by white, blue, or another color LEDs. Different buttons102may be backlit via different colors to distinguish between buttons, load types (e.g., emergency load), or the load state (e.g., on, off, or selected scene), AV state (e.g., selected station or selected channel), or button backlight colors may be chosen to complement the surroundings or to give a pleasing visual effect. Buttons102may comprise opaque material while the indicia110may be transparent or translucent allowing light from the LEDs to pass through the indicia110and be perceived from the front surface103of the button102. The indicia110may be formed by engraving, tinting, printing, applying a film, etching, and/or similar processes. Reference is now made toFIGS.1and2, whereFIG.2shows the control device100with the faceplate106removed. The control device100may comprise a housing101adapted to house various electrical components of the control device100, such as the power supply and an electrical printed circuit board (PCB)303a(FIG.3). The housing101is further adapted to carry the buttons102thereon. The buttons102may be removably attached to the sides of the housing101such that they appear to float on the housing101. Although in other embodiments, the buttons102may not be removable or replaceable. Other button design and attachment (e.g., non-floating buttons) may also be used with the current embodiments. The housing101may comprise mounting holes211for mounting the control device100to a standard electrical box via screws212. According to another embodiment, control device100may be mounted to other surfaces using a dedicated enclosure. According yet to another embodiment, the control device100may be configured to sit freestanding on a surface, such as a table, via a table top enclosure. Once mounted to a wall or an enclosure, the housing101may be covered using a faceplate106. The faceplate106may comprise an opening108sized and shaped for receiving the buttons102and/or at least a front portion of the housing101therein. The faceplate106may be secured to the housing101using screws213. According to an embodiment, the faceplate106may comprise a pair of vertical side walls202interconnected at their top by a horizontal top wall203and at their bottom by a horizontal bottom wall204. Horizontal top and bottom walls203and204are each adapted to receive a decorative trim plate209thereon that covers the screws213. The trim plates209may be removably attached to the top and bottom horizontal walls203and204using magnets (not shown). However, other types of faceplates may be used. 
A plurality of control devices100may also be ganged next to each other and covered using a multi-gang faceplate as is known in the art. Reference is now made toFIG.3, which illustrates an exploded view of the control device100. Housing101of control device100may comprise a front housing portion301and a rear housing portion302adapted to fit within a standard electrical or junction box. Housing101contains various electrical components, for example disposed on a printed circuit board (PCB)303aor a plurality of PCBs, configured for providing various functionality to the control device100, including for receiving commands and transmitting commands wirelessly to a load or a load controlling device.FIG.11is an illustrative block diagram1100of the electrical components of the control device100. Control device100may comprise a power supply1111that may be housed in the rear housing portion302for providing power to the various circuit components of the control device100. The control device100may be powered by an electric alternating current (AC) power signal from an AC mains power source or via DC voltage. Such a control device100may comprise leads or terminals suitable for making line voltage connections. In yet another embodiment, the control device100may be powered using Power-over-Ethernet (PoE) or via a Cresnet® port. Cresnet® provides a network wiring solution for Crestron® keypads, lighting controls, thermostats, and other devices. The Cresnet® bus offers wiring and configuration, carrying bidirectional communication and 24 VDC power to each device over a simple 4-conductor cable. However, other types of connections or ports may be utilized. The control device100may further include a controller1101comprising one or more microprocessors, such as "general purpose" microprocessors, a combination of general and special purpose microprocessors, or application specific integrated circuits (ASICs). Additionally, or alternatively, the controller1101can include one or more reduced instruction set computer (RISC) processors, video processors, or related chip sets. The controller1101can provide processing capability to execute an operating system, run various applications, and/or provide processing for one or more of the techniques and functions described herein. The control device100can further include a memory1102communicably coupled to the controller1101and storing data and executable code. The memory1102can represent volatile memory such as random-access memory (RAM), but can also include nonvolatile memory, such as read-only memory (ROM) or Flash memory. In buffering or caching data related to operations of the controller1101, memory1102can store data associated with applications running on the controller1101. The control device100can further comprise one or more communication interfaces1103, such as a wired or a wireless communication interface, configured for transmitting control commands to various connected loads or electrical devices, and receiving feedback. A wireless interface may be configured for bidirectional wireless communication with other electronic devices over a wireless network. In various embodiments, the wireless interface can comprise a radio frequency (RF) transceiver, an infrared (IR) transceiver, or other communication technologies known to those skilled in the art. In one embodiment, the wireless interface communicates using the infiNET EX® protocol from Crestron Electronics, Inc. of Rockleigh, N.J.
infiNET EX® is an extremely reliable and affordable protocol that employs steadfast two-way RF communications throughout a residential or commercial structure without the need for physical control wiring. In another embodiment, communication is employed using the ZigBee® protocol from ZigBee Alliance. In yet another embodiment, the wireless communication interface may communicate via Bluetooth transmission. A wired communication interface may be configured for bidirectional communication with other devices over a wired network. The wired interface can represent, for example, an Ethernet or a Cresnet® port. In various aspects of the embodiments, control device100can both receive the electric power signal and output control commands through the PoE interface. The control device100may further comprise a user interface1104. As shown inFIG.3, the front surface of the PCB303amay comprise a plurality of micro-switches or tactile switches304a-eand a plurality of touch sensors305a-e. For example, the PCB303amay contain a single column of five tactile switches304a-eand fifteen touch sensors305a-earranged in a three columns and five rows to accommodate various number of button configurations. However, other number of switches and touch sensors and their respective layouts may be utilized to accommodate other button configurations. The tactile switches304a-eand touch sensors305a-eare adapted to be activated via buttons102to receive user input as further discussed below. Referring back toFIG.11, the control device100may also comprise a switch1112configured for switching a connected load on or off in response to an actuation of a button102. According to one embodiment, switch1112may comprise of one or more electromechanical relays, which may use an electromagnet to mechanically operate a switch. In another embodiment, a solid-state relay (SSR) may be used comprising semiconductor devices, such as thyristors (e.g., TRIAC) and transistors, to switch currents up or down. In addition, the control device100may comprise of one or more dimmers1113configured for providing a dimmed voltage output to a connected load, such as a lighting load, in response to user input. The dimmer1113may comprise a solid-state dimmer for dimming different types of lighting loads, including incandescent, fluorescent, LED, or the like. According to an embodiment, the dimmer1113may comprise a 0-10V DC dimmer to provide a dimmed voltage output to an LED lighting load, a fluorescent lighting load, or the like. The dimmer1113of the control device100may also reduce its output based on light levels reported by the light sensor317. The control device100may further comprise a plurality of light sources311configured for providing backlighting to corresponding buttons102. Each light source311may comprise a multicolored light emitting diode (LED), such as a red-green-blue LED (RGB LED), comprising of red, green, and blue LED emitters in a single package. Although a white LED emitter or LED emitters of other colors can be used instead or additionally included. Each red, green, and blue LED emitter can be independently controlled at a different intensity to selectively produce a plurality of different colors. The plurality of LEDs311may be powered using one or more LED drivers located on PCB303a. According to an embodiment, a pair of LEDs311may be located on two opposite sides of each row of tactile switches304a-e. The control device100may further comprise a light sensor317configured for detecting and measuring ambient light. 
According to an embodiment, light sensor317can comprise at least one photosensor having an internal photocell with 0-65535 lux (0-6089 foot-candles) light sensing output to measure light intensity from natural daylight and ambient light sources. Light sensor317may be used to control the intensity of the load that is being controlled by the control device100. In addition, light sensor317may be used to control the intensity levels of LEDs311based on the measured ambient light levels. According to an embodiment, light sensor317may impact the intensity levels of LEDs311to stay at the same perceived brightness with respect to the measured ambient light levels. A dimming curve may be used to adjust the brightness of LEDs311based on measured ambient light levels by the light sensor317. According to another embodiment, ambient light sensor threshold values may be used to adjust the LED intensity or behavior. According to yet another embodiment, light sensor317may impact the color of the LEDs311based on the measured ambient light levels. Referring toFIG.2, the faceplate106may comprise an opening217adapted to contain a lens218. Lens218may direct ambient light from a bottom edge of the faceplate106toward the light sensor317. The lens218may be hidden from view by the trim plate209. The PCB303amay comprise other types of sensors, such as motion or proximity sensors. Referring back toFIG.3, the control device100may further comprise a plurality of horizontally disposed rectangular light pipes or light bars315each adapted to be positioned adjacent a respective row of tactile switches304a-eand touch sensors305a-eand between a respective pair of light sources311. According to one embodiment, the light bars315may be individually attached to the front surface of the PCB303a, for example, using an adhesive. According to another embodiment, the light bars315may be interconnected into a single tree structure and adapted to be attached within the housing101via screws307. Light bars315may be fabricated from optical fiber or transparent plastic material such as acrylic, polycarbonate, or the like. Each pair of oppositely disposed light sources311may extend from the front surface of the PCB303ato direct light to opposite side edges314of a respective light bar315. Each light bar315in turn will distribute and diffuse light from the respective pair of light sources311and direct the light through the indicia110of the respective button102. The front housing portion301is adapted to be secured to the rear housing portion302using screws307such that the PCB303aand light bars315are disposed therebetween. The front housing portion301comprises a front wall308with a substantially flat front surface. The front wall308may comprise a plurality of openings309extending traversely therethrough aligned with and adapted to provide access to the tactile switches304a-eas shown inFIG.4. Front wall308may further comprise rectangular horizontal openings310extending traversely therethrough that are aligned with and sized to surround at least a front portion of a respective light bar315. The front housing portion301may comprise an opaque material, such as a black colored plastic or the like, that impedes light transmission through the front wall308to prevent light bleeding from one set of light bar315and corresponding light sources311to another set. 
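The ambient-light-driven backlight adjustment described earlier in this passage, a dimming curve or threshold table that maps the reading of light sensor 317 to an intensity for LEDs 311, can be pictured with the short sketch below. The lux breakpoints and intensity values are assumed for illustration and are not taken from the specification.

```python
# Illustrative dimming curve: ambient light (lux) from light sensor 317
# mapped to a backlight intensity (percent) for LEDs 311. Assumed values.
DIMMING_CURVE = [
    (10, 20),      # very dark room   -> dim backlight
    (200, 50),     # typical interior -> medium backlight
    (1000, 80),    # bright interior  -> brighter backlight
    (65535, 100),  # sensor maximum   -> full backlight
]

def backlight_intensity(ambient_lux):
    """Return the intensity for the first breakpoint the reading falls under."""
    for threshold_lux, intensity in DIMMING_CURVE:
        if ambient_lux <= threshold_lux:
            return intensity
    return 100


assert backlight_intensity(5) == 20
assert backlight_intensity(500) == 80
```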
In addition, the front wall308may further comprise a plurality of orifices316, and the PCB303amay also comprise a plurality of orifices318at corresponding locations, for providing alignment points for the buttons102as described below. The front housing portion301may comprise a pair of side walls306orthogonally and rearwardly extending from side edges of the front wall308. Each side wall306may comprise one or a plurality of recessed shoulders319for buttons to clip to. For example, to accommodate five buttons, six recessed shoulders319may be provided. Referring toFIGS.4and5, there is shown a perspective view of the control device100with the buttons102removed and the PCB303a, respectively. The control device100defines a plurality of button zones415a-eadapted to receive a plurality of rows of different height buttons. Particularly, each button zone415a-emay be configured to receive a single height button102. For example, the control device100is shown containing five button zones415a-eadapted to receive five single height buttons, but it may comprise any other number of button zones. Each button zone415a-emay comprise one or more tactile switches304a-eand one or more touch sensors305a-e, and optionally, one or more button alignment orifices316, a light bar315, and one or more corresponding light sources311. According to an embodiment, as shown inFIG.5, each button zone415a-emay comprise a single tactile switch304a-e, although additional tactile switches per button zone may be utilized. Tactile switches304a-eare mechanical switches that provide tactile or haptic feedback via mechanical components by which the user can feel and perceive that a key press has been registered. The feedback may be provided via a spring, a metal snap dome, a rubber dome, a membrane, a leaf spring, a tactile actuator, or other mechanical mechanism known in the art. For example, for a five button zone415a-econfiguration, the PCB303amay comprise five tactile switches304a-earranged in a single column, although a different arrangement can also be used. As such, each button102, no matter of the size or type, is still provided with at least one tactile switch304a-eto provide tactile feedback. Each button zone415a-eis further associated with an array of a plurality of touch sensors305a-e. Each touch sensor305a-emay comprise a capacitive touch sensor comprising at least one conductor pad, such as a copper pad, disposed on the PCB303aand connected to capacitive sensing controller, which may be separate from or integrated into controller1101. The conductor pad acts as a capacitor plate that is exposed to an increase in capacitance when a finger comes near or in contact with the pad. The capacitive sensing controller measures changes in the capacitance compared to the environment to detect presence of a finger on or near the conductive pad. Each capacitive touch sensor305a-ecan be used to detect a touch of a user's finger through an overlay, in this case the front wall801(FIG.8) of the button102when it is attached to the control device101. According to other embodiments, touch sensors305a-emay comprise other touch sensing technologies known in the art, such as but not limited to inductive touch sensors, infrared touch sensors, surface acoustic wave touch sensors, or the like. The array of touch sensors305a-ein each button zone415a-emay detect the location where a person is pushing the button, e.g., whether a person is pushing the left, right, and/or center part of the button. 
This can be accomplished with as many as three touch sensing points (e.g.,FIG.5) or as few as two (e.g.,FIG.6) per zone, although other number of touch sensors may be utilized per zone.FIG.5illustrates three touch sensors305a-earranged in a row per each button zone415a-e, respectively, resulting in fifteen touch sensing points. Each center touch sensor in the array305a-emay be in proximity to the respective tactile switch304a-ein the respective button zone415a-e. The other two touch sensors in the array305a-emay be arranged on two opposite sides of the respective tactile switch304a-ein each button zone415a-e. As such, each button zone415a-ecomprises a group of three touch sensing points—for center, left, and right detection. Referring toFIG.10, different button configurations can be implemented by combining different height buttons. In addition to a single height button that spans a single button zone (e.g.,102), two or more button zones415a-emay be combined to receive a multi-zone height button, such as a two-zone height button that spans two button zones (e.g.,1002), a three-zone height button that spans three button zones (e.g.,1003), a four-zone height button that spans four button zones (e.g.,1004), or a five-zone height button that spans five button zones (e.g.,1005). The various button configurations beneficially share the same circuit board layout shown inFIG.5by utilizing one or more of the tactile switches304a-eand touch sensors305a-e. Depending on which tactile switches304a-eand touch sensors305a-eare exposed by a button, each of the various single or multi-zone button height buttons may be configured to operate as a push button (e.g., button1002), a side to side rocker button with or without a center push (e.g., button1006), an up and down rocker button with or without a center push (e.g., button1005), or other types of buttons, as further discussed below. As such, the control device100of the present embodiments may interchangeably receive various single or multi-zone height buttons to provide a vast number of possible configurations and operations, as required by an application, some of which are shown inFIG.10. Other button assembly configurations are also contemplated by the present embodiments. The wall-mounted control device100can be configured in the field, such as by an installation technician, in order to accommodate many site-specific requirements. Field configuration can include selection and installation of an appropriate button configuration, and assignment of button functions, based on the type of load, the available settings for the load, etc. Advantageously, such field configurability allows an installation technician to adapt the electrical device to changing field requirements (or design specifications). The buttons can be field replaceable without removing the device from the wall. After securing the buttons102on the control device100, the installer may program the button configuration through a setup application or by tapping on the installed buttons through a setup sequence. The configured buttons can then be assigned to a particular load or a load function. Referring toFIGS.8A-8D, there is shown an exemplary single press single height button102, whereFIG.8Ashows a front perspective view of the single height button102,FIG.8Bshows a rear perspective view of the single height button102,FIG.8Cshows a top view of the single height button102, andFIG.8Dshows a side view of the single height button102. 
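Before turning to the construction of the single height button 102 in FIGS. 8A-8D, the arrangement described above (five button zones 415a-e, one tactile switch 304a-e per zone, and a left/center/right group of touch sensors 305a-e per zone), together with the kind of programming data an installer supplies, can be pictured with the following sketch. The data structures, identifiers, and the example configuration are hypothetical.

```python
# Hypothetical model of the FIG. 5 layout: one tactile switch and a
# left/center/right touch-sensor group per button zone.
ZONES = {
    zone: {
        "tactile_switch": f"304{zone}",
        "touch_sensors": {side: f"305{zone}-{side}" for side in ("left", "center", "right")},
    }
    for zone in "abcde"
}

# Example programming data an installer might supply after snapping buttons on:
# (top zone of the button, number of zones spanned, programmed function).
INSTALLED_BUTTONS = [
    ("a", 1, "push"),                 # single-height push button
    ("b", 1, "side_to_side_rocker"),  # single-height rocker
    ("c", 3, "up_down_rocker"),       # three-zone height button
]

def zones_spanned(top_zone, height):
    """Zones covered by a button of the given height starting at top_zone."""
    order = "abcde"
    start = order.index(top_zone)
    return list(order[start:start + height])


assert zones_spanned("c", 3) == ["c", "d", "e"]
```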
Button102may comprise a front wall801comprising the front surface103and a rear surface803. A pair of side walls802may laterally and rearwardly extend from the side edges of the front wall801. Each side wall802may comprise a pair of arms805transversely and inwardly extending from a terminal end of the side wall802. The button102may further comprise a pair of alignment posts807transversely extending from the rear surface803of the front wall801. The posts807may be received in orifice316in the front wall308of the front housing portion301and in orifice318in the PCB303ato align the button102with the housing101. In addition, the button102may comprise one or more abutments810transversely extending from its rear surface803to provide one or more pivot points or axes, such as a horizontal pivot axis813, and one or two vertical pivot axes814. Although other button designs are contemplated where posts807and/or abutments810are not included. The button102may also comprise a switch actuator in the form of a projection or a hammer806transversely extending from the horizontal center of the rear surface803of the button102. The projection806is adapted to depress or strike at least one tactile switch304a-elocated on the PCB303awhen the button102is pressed by a user. Referring toFIGS.4-5, and8A-8D, the single-zone height button102comprises a height substantially equal to a height of a single button zone (e.g.,415a) such that it may be attached to the front housing portion301at any one of the button zones415a-e. For example, the single-zone height button102may be attached to the front housing portion301at zone415aby being snapped onto the front housing portion301. Particularly, each pair of arms805on each side wall802of button102are caused to engage a pair of adjacent shoulders319in a respective side wall306of the front housing portion301aligned with zone415a, such that the button102hugs side walls306of the front housing portion301, as shown inFIG.2. Although the button102may be attached to the housing101in a different manner. The pair of posts807are inserted through the pair of alignment orifices316of front housing portion301and into respective alignment orifices318on the PCB303aat zone415a. This prevents vertical and horizontal displacement of the button102out of the zone415a. Abutments810may abut against the front housing portion301to provide pivoting points or axes. The switch actuating projection806will rest against the tactile switch304awithout depressing it. A single-zone height button102will expose switch304aand the array of touch sensors305ain zone415a. The location of where the button102is pressed, such as center, left side, or right side, can be determined via one of the touch sensors305alocated underneath the button102. Button102can be programmed as a push button or a side to side rocker button with or without center push. When programmed as a push button, the tactile switch304amay be activated while touch sensors305ain button zone415amay be deactivated. According to another embodiment, the center touch sensor in array305amay be activated to detect whether the button102is pressed in its center. In use, the button102may be depressed by a user at proximity to its center, for example to provide an on/off operation, causing the button102to pivot about pivot axis813in a downward direction and the projection806to depress the tactile switch304aof zone415a. 
In response, the controller1101, upon detecting a press of switch304aand/or proximity of the user's finger to the center touch sensor in array305a, may execute an assigned command. Button102can also be programmed as a side to side rocker with or without center push, for example to provide a shade raise or lower operation with an optional center push to toggle the shade to fully open or close. When programmed as a side to side rocker, the controller1101may activate tactile switch304aand either all of the touch sensors305aor the two touch sensors305alocated on two opposite sides of the tactile switch304a. In use, the button102may be pressed by the user and the controller1101detects whether the button102is pressed in its center, on its left side, or on its right side—depending on the proximity of the user's fingers to the touch sensors305ain zone415a. If the button102is pressed on the center, the controller1101may ignore the press or execute a command associated with the center press, if any. If the button102is pressed on its left side, the projection806will depress the tactile switch304aof zone415agiving the user the tactile feedback and the button102will pivot left with respect to the vertical axis814. Similarly, if the button102is pressed on its right side, the projection806will depress the tactile switch304aof zone415agiving the user the tactile feedback and the button102will pivot right with respect to the vertical axis814. The controller1101will detect whether the button102was pressed on its left or right side using the respective touch sensor305aand will execute the assigned command associated with that input. Referring toFIGS.9A and9B, there is shown an exemplary multi-zone height button, for example a three-zone height button900a, whereFIG.9Ashows a front perspective view of button900aandFIG.9Bshows a rear perspective view of button900a. Button900amay comprise a similar configuration as button102comprising a front wall901, side walls902, arms905, as well as a pair of alignment posts907and abutments910extending from its rear surface903. Button900amay further comprise a pair of switch actuators in the form of projections or hammers906transversely extending from the rear surface903of the front wall901along its horizontal center. A first projection906may be adjacent the top edge of the front wall901and a second projection906may be adjacent the bottom edge of the front wall901. Although according to another embodiment, a single projection906may be implemented in multi-zone height buttons at a location adapted to engage one of the tactile switches304a-eas further discussed below. Referring toFIGS.4-5and9A-9B, the three-zone height button900amay be attached to the front housing portion301over any combination of three adjacent button zones415a-e. For example, the three-zone height button900amay be attached to the front housing portion301over zones415c,415d, and415eby being snapped onto the front housing portion301by engaging arms905with shoulders319of the front housing portion301while abutments910abut against the front wall308of the front housing portion301. The three-zone height button900awith two switch actuating projections906will expose two tactile switches304cand304eand arrays305c,305d, and305eof nine touch sensors in zones415c-e, although additional projections906may be provided to expose the third tactile switch304d.
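The single-zone press handling described above, in which a press of tactile switch 304a is qualified by the left/center/right touch sensors in array 305a and interpreted according to the programmed button function, might be sketched as follows. The capacitance representation, the classification rule, and the returned labels are assumptions for illustration.

```python
def classify_single_zone_press(switch_pressed, touch_readings, mode):
    """Classify a press of a single-zone button such as button 102.

    switch_pressed : True when the zone's tactile switch reports a press.
    touch_readings : dict of relative capacitance for the "left", "center",
                     and "right" touch sensors in the zone (illustrative).
    mode           : "push" or "side_to_side_rocker" (the programmed function).
    Returns a label for the detected input, or None to ignore the press.
    """
    if not switch_pressed:
        return None
    if mode == "push":
        return "press"                  # touch sensors may be deactivated in this mode
    side = max(touch_readings, key=touch_readings.get)
    if side == "center":
        return "center_press"           # may be ignored or mapped to a toggle command
    return side + "_press"              # e.g. shade raise (left) or lower (right)


assert classify_single_zone_press(True, {"left": 0.9, "center": 0.2, "right": 0.1},
                                  "side_to_side_rocker") == "left_press"
assert classify_single_zone_press(True, {}, "push") == "press"
```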
Button900amay be programmed as a push button, a side to side rocker, an up and down rocker, or any combinations thereof, such as a side to side rocker with or without a center push, an up and down rocker with or without a center push, or an up-down and side to side rocker with or without a center push. The location of where the button900ais pressed, such as center, top side, bottom side, left side, or right side, can be determined via one of the touch sensors305c-elocated underneath button900a. As such, a separate button cap for each button type is not necessary to achieve different button types. Instead, the same button cap can be used for each button type while the keypad can be programmed to the desired button type and the desired function. The installer may provide input or programming data to the controller1101comprising the installed button size, the installed button zone location415a-e, the desired function for the button, or the like. In response, the controller1101can determine which combination of the tactile button switches304a-eand/or touch sensors305a-eto activate or receive input from, determine various input combinations from tactile button switches304a-eand/or touch sensors305a-e, and associate each input combination with one or more control commands. The controller1101may store the programming data in its memory1102. Accordingly, for a five button zone keypad, five different button sizes can be provided to achieve a large number of button configurations and actions. As an example, when programmed as a push button, the controller1101can turn off touch sensors305c-ein button zones415c-eand detect a button press when either of the two tactile switches304cand304eis pressed via button900a. Although the controller1101can keep one or more of the touch sensors305c-eturned on to detect whether button900awas pressed at its center. When programmed as a side to side rocker button, the controller1101can use the touch sensors305c-elocated underneath the button900ato detect whether the button900ais pressed on its left side or on its right side, as discussed above. Button900acan pivot about abutments910along vertical axes914when it is pressed on its left or right side. Tactile switches304cand304ecan be depressed via projections906to give the user tactile feedback. The controller1101may ignore button presses if it detects that the button900awas pressed on its center, on its top side, or on its bottom side. A multi-zone button, such as the three-zone height button900a, can also be programmed as an up and down rocker, for example to provide an on and off operation. In use, the controller1101receives signals from the touch sensors305c-elocated underneath the button900ato detect whether the button900ais pressed on its upper side or on its lower side—depending on the proximity of the user's fingers to the touch sensors305c-ein zones415c-e. If the button900ais pressed on its upper side, the upper projection906will depress the tactile switch304cof zone415cgiving the user the tactile feedback, the button900awill pivot up with respect to the horizontal axis913, and the controller1101will detect that the button900awas pressed on its upper side via touch sensors305c. Similarly, if the button900ais pressed on its lower side, the lower projection906will depress the tactile switch304eof zone415egiving the user the tactile feedback, the button900awill pivot down with respect to the horizontal axis913, and the controller1101will detect that the button900awas pressed on its lower side via touch sensors305e.
The controller1101may ignore button presses if it detects that the button900ais pressed on its center, on its left side, or on its right side. According to another embodiment, another function may be assigned to a center press of the up and down rocker. FIG.9Cillustrates a rear perspective view of yet another embodiment of a multi-zone height button, such as three-zone height button900b, that can be attached over a combination of three button zones, such as zones415c,415d, and415e. Instead of two projections, button900bcan comprise a single switch actuating projection906proximate to its center to expose a single tactile switch304dand arrays305c,305d, and305eof nine touch sensors in zones415c-e. Button900bmay be programmed as a push button, a side to side rocker, an up and down rocker, or any combinations thereof. Whenever the button900bis pressed, irrespective of the location where it is pressed, the tactile switch304dis depressed using projection906. The location of where the button900bis pressed, whether in its center, top side, bottom side, left side, or right side, can be determined via one of the touch sensors305c-elocated underneath button900b. When programmed as a push button, the controller1101can turn off touch sensors305c-ein button zones415c-eand detect a button press using the tactile switch304d. Although the controller1101can keep one or more of the touch sensors305c-eturned on to detect whether button900bwas pressed at its center. When programmed as a side to side rocker button, the controller1101can use the touch sensors305c-elocated underneath the button900bto detect whether the button900bis pressed on its left side or on its right side, as discussed above. When programmed as an up and down rocker, the controller1101can use the touch sensors305c-elocated underneath the button900bto detect whether the button900bis pressed on its upper side or on its lower side. When programmed as an up-down and side to side rocker, the controller1101can use the touch sensors305c-elocated underneath the button900bto detect whether the button900bis pressed on its upper side, lower side, left side, or right side. Other multi-zone button configurations may comprise similar configurations to buttons900a-b, including the two1002, four1004, and five1005zone height button configurations shown inFIG.10. The other height buttons sizes can be similarly configured and programmed as discussed above with reference to buttons900a-b. While the above embodiments are described using five button zones, it should be apparent that a different number of button zones can be utilized with a different number of button height sizes without departing from the scope of the present embodiments. In addition, although separate button caps are illustrated, the buttons for each configuration type shown inFIG.11can be interconnected to form a button tree comprising a plurality of interconnected buttons, for example button tree1200shown inFIG.12. According to another embodiment, as shown inFIG.6, the center touch sensor in each button zone415a-eof the PCB303bcan be eliminated, resulting in ten touch sensing points. In such a configuration, center presses can be detected using the center tactile switches304a-eand/or detecting a close to equal capacitance from both the left and right sensors.FIG.7shows yet another embodiment of a PCB303ccomprising a first touch sensor strip705aand a second touch sensor strip705b. 
The first touch sensor strip705acan longitudinally extend on one side of the column of tactile switches304a-eacross all the button zones415a-e. The second touch sensor strip705bcan longitudinally extend on the opposite side of the column of tactile switches304a-eacross all the button zones415a-e. Using such a combination, the controller1101can determine which of the tactile switches304a-ewas pressed in which zone415a-eand also which side of the button was pressed in a similar manner as discussed above. A third touch sensor strip can also be placed proximate to the column of tactile switches304a-efor more accurate center detection. INDUSTRIAL APPLICABILITY The disclosed embodiments provide an apparatus, system, and method for a wall mounted control device with interchangeable buttons that is accomplished through a combination of tactile switches and touch sensors to increase button configurations. It should be understood that this description is not intended to limit the embodiments. On the contrary, the embodiments are intended to cover alternatives, modifications, and equivalents, which are included in the spirit and scope of the embodiments as defined by the appended claims. Further, in the detailed description of the embodiments, numerous specific details are set forth to provide a comprehensive understanding of the claimed embodiments. However, one skilled in the art would understand that various embodiments may be practiced without such specific details. Although the features and elements of aspects of the embodiments are described as being in particular combinations, each feature or element can be used alone, without the other features and elements of the embodiments, or in various combinations with or without other features and elements disclosed herein. This written description uses examples of the subject matter disclosed to enable any person skilled in the art to practice the same, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the subject matter is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims. The above-described embodiments are intended to be illustrative in all respects, rather than restrictive, of the embodiments. Thus the embodiments are capable of many variations in detailed implementation that can be derived from the description contained herein by a person skilled in the art. No element, act, or instruction used in the description of the present application should be construed as critical or essential to the embodiments unless explicitly described as such. Also, as used herein, the article "a" is intended to include one or more items. Additionally, the various methods described above are not meant to limit the aspects of the embodiments, or to suggest that the aspects of the embodiments should be implemented following the described methods. The purpose of the described methods is to facilitate the understanding of one or more aspects of the embodiments and to provide the reader with one or many possible implementations of the processes discussed herein. The steps performed during the described methods are not intended to completely describe the entire process but only to illustrate some of the aspects discussed above. It should be understood by one of ordinary skill in the art that the steps may be performed in a different order and that some steps may be eliminated or substituted.
All United States patents and applications, foreign patents, and publications discussed above are hereby incorporated herein by reference in their entireties. Alternate Embodiments Alternate embodiments may be devised without departing from the spirit or the scope of the different aspects of the embodiments. | 45,399 |
11861097 | DETAILED DESCRIPTION For clearer descriptions of the principles, technical solutions, and advantages in the present disclosure, the implementation of the present disclosure is described in detail below in combination with the accompanying drawings. FIG.1is a schematic partial structural diagram of a display device according to an embodiment of the present disclosure. As shown inFIG.1, the display device01includes a touch panel011, a polarizer012and a TFPC013. The touch panel011and the polarizer012are superimposed. The TFPC013includes a first portion0131and a connecting portion0132that are connected to each other. The first portion0131is inserted between the touch panel011and the polarizer012and electrically connected with the touch panel011. The connecting portion0132is bent to a side, distal from the polarizer012, of the touch panel011. Optionally, as shown inFIG.1, the display device01may further include a display panel014, a cover plate (not shown inFIG.1), an optically clear adhesive (OCA for short)015, and the like. The display panel014is disposed between the touch panel011and the connecting portion0132, the OCA015is disposed between the display panel014and the touch panel011, and the cover plate is disposed on a side, distal from the touch panel011, of the polarizer012. It should be noted that, since the first portion0131of the TFPC013is inserted between the touch panel011and the polarizer012, a height difference is present between a portion, which is in contact with the TFPC013, of the touch panel011and other portions of the touch panel011(such as a portion, which is in contact with the polarizer012, of the touch panel011). Therefore, the touch panel011is more likely to break under the action of the height difference. An embodiment of the present disclosure provides another display device. The touch panel011in the display device is less likely to break. For example,FIG.2is a schematic partial structural diagram of another display device according to an embodiment of the present disclosure. As shown inFIG.2, the display device02includes a display panel01, a touch panel02, a polarizer03, a cushion layer07, a TFPC04, a cover plate05, and a first adhesive layer06. Among them, the touch panel02is disposed on a display side of the display panel01, and the polarizer03is disposed on a side, distal from the display panel01, of the touch panel02. The TFPC04includes a first portion041, a second portion042, and a bending portion043connecting the first portion041and the second portion042. The first portion041is disposed on a side, distal from the display panel01, of the touch panel02, and the second portion042is disposed on a non-display side of the display panel01(that is, a lower side of the display panel01inFIG.2). The first portion041is provided with a first connection terminal, and the touch panel02is provided with a second connection terminal. The first connection terminal is electrically connected to the second connection terminal. The cover plate05is disposed on a side, distal from the touch panel02, of the polarizer03. The first adhesive layer06is disposed between the cover plate05and the polarizer03. The cushion layer07is disposed between the first portion041and the first adhesive layer06. An orthographic projection of the polarizer03on a reference plane V falls outside an orthographic projection of the first portion041on the reference plane V. The reference plane V is a plane in which a touch surface of the touch panel02is disposed.
In summary, in the display device according to the embodiment of the present disclosure, the polarizer and the first portion of the TFPC are arranged separately, which may avoid the superposition of the TFPC and the polarizer and ensure that there is no height difference between a portion, which is in contact with the TFPC, of the touch panel and other portions of the touch panel. Accordingly, the risk of breakage of the touch panel is reduced. Moreover, under the action of the cushion layer, the TFPC may be stuck between the cushion layer and the touch panel, enhancing the stability of the TFPC and further reducing the probability of the breakage of the TFPC when the TFPC is bent. Optionally, a surface, distal from the touch panel02, of the cushion layer07may be substantially flush with a surface, distal from the touch panel02, of the polarizer03. As such, the flatness of one side, distal from the touch panel02, of the polarizer03is higher. The description that the two surfaces are substantially flush means that the two surfaces are completely flush, or the two surfaces are not completely flush, but a distance between the two surfaces is smaller (for example, less than a distance threshold). For example, the range of the distance threshold may be 10 microns to 100 microns, or 10 microns to 20 microns, or the like. InFIG.2, it is exemplified that a surface, distal from the touch panel02, of the cushion layer07is completely flush with a surface, distal from the touch panel02, of the polarizer03. Optionally, an edge of the touch panel02is substantially flush with an edge of the cushion layer07. Optionally, an orthographic projection of the cushion layer07on the reference plane V may substantially coincide with an orthographic projection of the first portion041on the reference plane V. The description that the two orthographic projections substantially coincide means that the two orthographic projections completely coincide, or a deviation is present with respect to centers of the two orthographic projections, wherein the deviation is less than a deviation threshold. InFIG.2, it is exemplified that the orthographic projection of the cushion layer07on the reference plane V completely coincides with the orthographic projection of the first portion041on the reference plane V. Optionally,FIG.3is a schematic structural diagram of a cushion layer according to an embodiment of the present disclosure. As shown inFIG.3, the cushion layer07may include a polyethylene terephthalate (PET) substrate071, and a first adhesive072and a second adhesive073that are stacked on both sides of the PET substrate071. Under the action of the first adhesive072and the second adhesive073, the cushion layer07is sticky. Therefore, the cushion layer07may be firmly fixed to the first portion041of the TFPC04, and thus the cushion layer07may be prevented from falling off and the stability of the TFPC04is enhanced. Optionally, the first adhesive072and the second adhesive073may be made of a conductive material or an insulating material, which is not limited in the embodiment of the present disclosure. It should be noted that in the embodiment of the present disclosure, the polarizer03and the first portion041of the TFPC04are arranged separately. As such, the polarizer03and the first portion041may have various shapes. The two shapes of the polarizer03and the first portion041are described as examples hereinafter. 
Further,FIG.4is a schematic diagram showing shapes of a polarizer and a TFPC according to an embodiment of the present disclosure. A touch panel02is not shown inFIG.4, and a bending portion043in the TFPC inFIG.4is not in a bent state. Still referring toFIG.2andFIG.4, an orthographic projection of the polarizer03on the reference plane V inFIG.2may include a notch031, and an orthographic projection of the first portion041on the reference plane V inFIG.2falls within the notch031. In this case, the polarizer03half-encloses the first portion041of the TFPC04, and the area of the polarizer03is greater. In addition, inFIG.4, the notch031is exemplarily rectangular. Optionally, the notch031may further be circular, semicircular, elliptical, irregular, or the like, which is not limited in the embodiment of the present disclosure. In addition,FIG.5is a schematic diagram showing shapes of another polarizer and a first portion according to an embodiment of the present disclosure. A touch panel02is not shown inFIG.5. Referring toFIG.2andFIG.5, an orthographic projection of the polarizer03on the touch panel02includes no notch, and no overlap is present between the orthographic projection of the first portion041on the touch panel02and the orthographic projection of the polarizer03on the touch panel02. Further, regardless of the shapes of the polarizer03and the first portion041, in a direction parallel to the reference plane V inFIG.2, a minimum spacing between the polarizer03and the first portion041may be greater than or equal to 0.23 mm (for example, the minimum spacing is 0.3 mm, 0.4 mm, and the like). The direction parallel to the reference plane V inFIG.2may include a direction of the polarizer towards the first portion, for example, a direction F inFIG.4andFIG.5. The direction parallel to the reference plane V inFIG.2may further include another direction different from the direction of the polarizer towards the first portion (for example, a direction perpendicular to the direction F inFIG.4andFIG.5), which is not limited in the embodiment of the present disclosure. Optionally, in the direction parallel to the reference plane V inFIG.2, the minimum spacing between the polarizer03and the first portion041may be greater than or equal to 0.3 mm. In the direction parallel to the reference plane V inFIG.2, the minimum spacing between the polarizer03and the first portion041may be further greater than or equal to 0.24 mm or 0.4 mm, or the like, which is not limited in the embodiment of the present disclosure. It should be noted that a size error W1of the polarizer03is about ±0.15 mm; an error W2when the polarizer03is disposed on one side of the touch panel02is about ±0.1 mm; an error W3in the first direction when the first portion041of the TFPC04is disposed on one side of the touch panel02is about ±0.1 mm; an error W4in the second direction when the first portion041of the TFPC04is disposed on one side of the touch panel02is about ±0.1 mm; and the first direction is perpendicular to the second direction. Considering the existence of W1, W2, W3, and W4, the minimum spacing between the polarizer03and the first portion041needs to be at least √(W1²+W2²+W3²+W4²)≈0.23 mm. In this way, it is possible to prevent the polarizer03and the first portion041from being arranged to be proximal to each other, and further prevent the polarizer03and the first portion041from being superimposed due to one or more of the size error and an operation error.
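For reference, the root-sum-square combination of W1 through W4 stated above can be checked numerically with the following short Python sketch, which simply uses the error values given in the text.

# Numerical check, illustrative only, of the root-sum-square tolerance combination
# stated above, using the error values given in the text (in mm).
import math

W1, W2, W3, W4 = 0.15, 0.10, 0.10, 0.10
min_spacing = math.sqrt(W1**2 + W2**2 + W3**2 + W4**2)
print(round(min_spacing, 3))  # 0.229, i.e. approximately 0.23 mm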
Optionally,FIG.6is a schematic structural diagram of a TFPC according to an embodiment of the present disclosure. Referring toFIG.2andFIG.6, as shown inFIG.2, the TFPC04may include a first insulating layer, a first circuit layer, and a second insulating layer that are sequentially stacked in a direction distal from the touch panel02. The first insulating layer includes a first insulating portion4011and a second insulating portion4012. The first circuit layer includes a first circuit portion4021, a second circuit portion4022, and a third circuit portion4023. The second insulating layer includes a third insulating portion4031, a fourth insulating portion4032and a fifth insulating portion4033. The first portion041in the TFPC04includes the first circuit portion4021and the third insulating portion4031. The second portion042in the TFPC04includes the second insulating portion4012, the third circuit portion4023, and the fifth insulating portion4033. The bending portion043in the TFPC04includes the first insulating portion4011, the second circuit portion4022, and the fourth insulating portion4032. Optionally, the TFPC04further includes a second circuit layer404and a third insulating layer405. The second portion042of the TFPC04includes a first thickness region0421and a second thickness region0422. A thickness of the first thickness region0421is greater than that of the second thickness region0422. Moreover, orthographic projections of boundary lines of the first thickness region0421and the second thickness region0422on the reference plane V inFIG.2fall outside an orthographic projection of a display region Q of the display panel01inFIG.2on the reference plane V. The first thickness region0421includes a second insulating portion4012, a third circuit portion4023, a fifth insulating portion4033, a second circuit layer404, and a third insulating layer405that are sequentially stacked in a direction distal from the display panel01. The first insulating portion4011is made of a thermosetting ink, wherein the thermosetting ink is a heat-set ink. The first circuit portion4021is provided with a first connection terminal. For example, the thermosetting ink includes a green thermosetting ink. The flexibility of the thermosetting ink is greater than that of a composite material of polyimide and a pressure sensitive adhesive. Optionally, the thermosetting ink may be a black thermosetting ink or a yellow thermosetting ink, or the thermosetting ink may not include a heat-set ink. For instance, the thermosetting ink includes a light-sensitive ink or the like, which is not limited in the embodiment of the present disclosure. It should be noted that, in general, the first insulating portion4011of the TFPC is made of a composite material of polyimide and a pressure-sensitive adhesive. However, in the embodiment of the present disclosure, the first insulating portion4011is made of a thermosetting ink, the flexibility of which is greater than that of the composite material. Therefore, the flexibility of the first insulating portion4011in the embodiment of the present disclosure is relatively high. In this way, the stress generated when the TFPC04is bent may be reduced, and the probability of the breakage of the touch panel02under the action of the stress may be lowered. The first insulating portion4011may be only made of a thermosetting ink. At this time, a structure of the first insulating portion4011may be as shown inFIG.6andFIG.7.
Among them,FIG.7is a schematic diagram of binding of a TFPC to a touch panel according to an embodiment of the present disclosure. Moreover, the bending portion043inFIG.7is not in a bent state. Optionally, the first insulating portion4011may include not only a thermosetting ink but also other materials (such as a composite material of polyimide and a pressure-sensitive adhesive). In this case, a structure of the first insulating layer041may be implemented in various ways. In addition, referring toFIG.7, the touch panel02may include a panel body0211, a touch circuit layer0212, and a touch insulating layer0213that are arranged in sequence, and the touch circuit layer0212is provided with a second connection terminal02121. The second connection terminal02121is electrically connected to the first connection terminal40211of the first circuit portion4021in the TFPC04. For example, the first connection terminal40211may be electrically connected to the second connection terminal02121by a conductive adhesive Y. For example,FIG.8is a schematic diagram showing a shape of a first insulating layer according to an embodiment of the present disclosure.FIG.9is a schematic diagram of binding of another TFPC to a touch panel according to an embodiment of the present disclosure. Referring toFIG.8andFIG.9, the first insulating portion4011includes a target insulating pattern10and a composite insulating pattern11, and the second insulating portion4012and the composite insulating pattern11are of an integral structure. The composite insulating pattern11includes a hollow-out portion, and the target insulating pattern10is disposed within the hollow-out portion of the composite insulating pattern11. The target insulating pattern10is made of a thermosetting ink, and the composite insulating pattern11is made of a composite material of polyimide and a pressure sensitive adhesive. InFIG.9, it is exemplified that a thickness of the target insulating pattern10is less than that of the composite insulating pattern11; alternatively, the thickness of the target insulating pattern10may be greater than or equal to that of the composite insulating pattern11, which is not limited in the embodiment of the present disclosure. Still optionally, the hollow-out portions in the composite insulating pattern11may be arranged in an array. In this case, the target insulating pattern10includes a plurality of insulating blocks101spaced apart. For example, as shown inFIG.8, these insulating blocks101may be arranged in sequence along a direction U1of a bending axis of the bending portion043. Optionally, these insulating blocks101may be further arranged in sequence along a bending direction U2of the bending portion043. Such an arrangement is not shown in the drawing of the embodiment of the present disclosure. Certainly, when the first insulating layer041is made of not only a thermosetting ink but also a composite material of polyimide and a pressure-sensitive adhesive, a structure of the first insulating layer041may be different from that of the insulating layer041shown inFIG.8andFIG.9, which is not limited in the embodiment of the present disclosure. Still referring toFIG.2, the display device may further include a touch drive circuit08, wherein the touch drive circuit08may be disposed on a side, proximal to the display panel01, of the second portion042, and electrically connected to the second portion042.
FIG.10is a schematic diagram showing a structure of a cover plate according to an embodiment of the present disclosure, andFIG.2shows a structure of a cross section B inFIG.10. Referring toFIG.2andFIG.10, the cover plate05may include a center region051and an edge region. A thickness of the edge region gradually decreases in a direction distal from the center region051. The edge region includes a first subregion0521and a second subregion0522that are opposite to each other, and a third subregion0523and a fourth subregion0524that are opposite to each other. The center region051is disposed between the first subregion0521and the second subregion0522, and between the third subregion0523and the fourth subregion0524. An orthographic projection of a boundary line C in which the first subregion0521is thinned on the reference plane V falls outside an orthographic projection of a display region Q of the display panel01on the reference plane V. Orthographic projections of a boundary line D in which the second subregion0522is thinned, a boundary line E in which the third subregion0523is thinned, and a boundary line G in which the fourth subregion0524is thinned on the reference plane V fall within the orthographic projection of the display region Q on the reference plane V, and are proximal to the orthographic projection of an edge of the display region Q on the reference plane V. Still referring toFIG.2, the display device may further include a second adhesive layer09, wherein the second adhesive layer09is disposed between the display panel01and the touch panel02. Optionally, the first adhesive layer06and the second adhesive layer09may be both made of an OCA, and the OCA in the first adhesive layer06is an ultraviolet light type OCA, and the OCA in the second adhesive layer09is a non-ultraviolet light type OCA. FIG.11is a schematic structural diagram of a display panel according to an embodiment of the present disclosure. Referring toFIG.2andFIG.11, the display panel01includes a PET tape011and a cushion foam012; wherein edges, proximal to the bending portion043, of the second adhesive layer09, the PET tape011and the cushion foam012are substantially flush, and orthographic projections of the edges of the second adhesive layer09, the PET tape011and the cushion foam012on the reference plane V fall within an orthographic projection of the cushion layer07on the reference plane V. Optionally, still referring toFIG.2andFIG.11, the display panel01further includes a first panel portion013, a bending panel portion014, and a second panel portion015connected in sequence; wherein thicknesses of regions, proximal to the bending panel portion014, of the first panel portion013and the second panel portion015are the same as a thickness of the bending panel portion014, and thicknesses of portions, distal from the bending panel portion014, of the first panel portion013and the second panel portion015are greater than the thickness of the bending panel portion014. Optionally, a boundary, proximal to the first portion041, of the bending panel portion014is more proximal to the center of the display region Q of the display panel01than a boundary, proximal to the first portion041, of the bending portion043, and a boundary, proximal to the second portion042, of the bending panel portion014is more proximal to the center of the display region Q than a boundary, proximal to the second portion042, of the bending portion043.
Optionally, still referring toFIGS.2and11, the display device further includes a coating layer010, wherein the coating layer010covers a surface, proximal to the touch flexible printed circuit board04, of the bending panel portion014, and surfaces of portions, proximal to the bending panel portion014, of the first panel portion013and the second panel portion015. Optionally, the coating layer010may be made of an organic silicon rubber or a photoresist and the like. For example, edges of the PET tape011and the cushion foam012are disposed on thicker portions of the first panel portion013and the second panel portion015, and orthographic projections of the edges of the PET tape011and the cushion foam012on the reference plane fall within an orthographic projection of the coating layer010on the reference plane. Optionally, the display device inFIG.2further includes an insulating protective film020, wherein the insulating protective film020is disposed on a side, distal from the display panel01, of the second portion042of the touch flexible printed circuit board04. Optionally, the display device inFIG.2further includes an ink layer030, wherein the ink layer030is disposed between an edge region of the cover plate05and the first adhesive layer06. It should be noted that in the embodiment of the present disclosure, it is exemplified that the display device includes a cushion layer. Optionally, the display device may not include the cushion layer, which is not limited in the embodiment of the present disclosure. In summary, in the display device according to the embodiment of the present disclosure, the polarizer and the first portion of the TFPC are arranged separately, which may avoid the superposition of the TFPC and the polarizer and ensure that there is no height difference between a portion, which is in contact with the TFPC, of the touch panel and other portions of the touch panel. Accordingly, the risk of the breakage of the touch panel is reduced. The touch panel and the display panel in the embodiment of the present disclosure are independent of each other. Therefore, the touch panel may be referred to as an external touch panel. The display device according to the embodiment of the present disclosure may be any product or component with a display function, for example, an electronic paper, a mobile phone, a tablet computer, a television, a display, a notebook computer, a digital photo frame, and a navigator. It should be noted that in the drawings, the size of layers and regions may be exaggerated for clarity of illustration. Moreover, it should be understood that when an element or layer is referred to as being “on” another element or layer, the element or layer may be directly on the other element or an intervening layer may be present. In addition, it should be understood that when an element or layer is referred to as being “under” another element or layer, the element or layer may be directly under the other element, or more than one intervening layer or element may be present. In addition, it should be further understood that when a layer or element is referred to as being “between” two layers or two elements, the layer or element may be a unique layer between the two layers or two elements, or more than one intervening layer or element may be present. Similar reference signs indicate similar elements throughout the whole text.
In the present disclosure, the terms “first”, “second”, “third” and “fourth” are for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term “a plurality of” refers to two or more, unless otherwise specifically defined. Described above are merely exemplary embodiments of the present disclosure, and are not intended to limit the present disclosure. Within the spirit and principles of the disclosure, any modifications, equivalent substitutions, improvements, or the like are within the protection scope of the present disclosure. | 24,658
11861098 | DETAILED DESCRIPTION OF EMBODIMENTS Technical solutions in the embodiments of the present disclosure will be clearly and completely described below in conjunction with drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only a part of embodiments of the present disclosure, rather than all the embodiments. Based on the embodiments in the present disclosure, all other embodiments obtained by those skilled in the art without creative work fall within the protection scope of the present disclosure. As shown inFIG.1, which is a schematic cross-sectional view of a touch display panel according to an embodiment of the present disclosure, the touch display panel100comprises a display panel20and a touch component30, and the touch component30is located on a light-emitting side of the display panel20. In this embodiment, the display panel20is an organic light-emitting diode display panel. As shown inFIG.2, which is a schematic cross-sectional view of the display panel shown inFIG.1, the display panel20comprises a substrate201, a thin film transistor array layer202, an organic light-emitting diode array layer203, and a thin film encapsulation layer204. The thin film transistor array layer202is disposed on the substrate201, and the thin film transistor array layer202comprises a plurality of thin film transistors arranged in an array. The organic light-emitting diode array layer203is disposed on the thin film transistor array layer202, and the organic light-emitting diode array layer203comprises a plurality of organic light-emitting diodes arranged in an array. The plurality of organic light-emitting diodes comprise red organic light-emitting diodes, blue organic light-emitting diodes, and green organic light-emitting diodes. The red organic light-emitting diodes are red sub-pixels R, the blue organic light-emitting diodes are blue sub-pixels B, and green organic light-emitting diodes are green sub-pixels G. The thin film encapsulation layer204is disposed on the organic light-emitting diode array layer203. The thin film encapsulation layer204comprises a first inorganic layer, a second inorganic layer, and an organic layer located between the first inorganic layer and the second inorganic layer. The first inorganic layer is disposed on the organic light-emitting diode array layer203. As shown inFIG.3, which is a first schematic plan view of the touch display panel according to an embodiment of the present disclosure, the display panel20comprises a display area200a, a first sector area200b, a bending area200c, a second sector area200d, and a pad area200e. The display area200a, the first sector area200b, the bending area200c, the second sector area200d, and the pad area200eare arranged in sequence. As shown inFIG.4andFIG.5, the red sub-pixels R, the blue sub-pixels B, and the green sub-pixels G are all disposed in the display area200a, and the display area200ais defined by an area where the red sub-pixels R, the blue sub-pixels B, and the green sub-pixels G are disposed. The display area200afurther comprises a plurality of power lines (not shown) and a plurality of data lines (not shown). The plurality of power lines are divided into multiple groups. Each group of power lines comprises a plurality of adjacent power lines, and extends to the first sector area200b, the bending area200c, the second sector area200d, and the pad area200ein sequence. 
The plurality of data lines are divided into multiple groups, and each group of data lines comprises a plurality of adjacent data lines, and extends to the first sector area200b, the bending area200c, the second sector area200d, and the pad area200ein sequence. The pad area200eis provided with a plurality of input pads (not shown) and a plurality of output pads (not shown), and the plurality of input pads are configured to input power, communication, and control signals required for operation of a touch display driver integrated chip (TDDI)205. The output pads comprise first output pads that are connected to the data lines and output display data signals, and the output pads further comprise second output pads that are electrically connected to second touch electrode lines and output touch signals. The touch display driver integrated chip205is connected to the first output pads and the second output pads. In this embodiment, the touch component30is disposed on the thin film encapsulation layer204of the display panel20. It is understandable that the touch component30can also be disposed on an independent substrate, and the touch component30is disposed on the display panel20through an adhesive layer. Please continue to refer toFIG.3, the touch component30comprises a first area300aand a second area300b. The first area300aand the second area300bare arranged adjacent to each other. The second area300bis greater than the first area300a, and the first area300aand the second area300bare both set corresponding to the display area200aof the display panel20, that is, both the first area300aand the second area300boverlap the display area200aof the display panel20. The touch component30comprises a plurality of touch electrodes301, a plurality of first touch electrode lines302, and a plurality of second touch electrode lines303, and the touch electrodes301, the first touch electrode lines302, and the second touch electrodes line303are set on a same layer. The plurality of touch electrodes301and the plurality of second touch electrode lines303are all disposed in the second area300b, and the plurality of first touch electrode lines302are disposed in the first area300a. One touch electrode301and one second touch electrode line303are connected one-to-one, one second touch electrode line303and one first touch electrode line302are connected one-to-one, and the second touch electrode line303is connected between the first touch electrode line302and the touch electrode301. In this embodiment, the plurality of touch electrodes301are self-capacitive touch electrodes, and the plurality of touch electrodes301are arranged in an array along a first direction and a second direction. A shape of the plurality of touch electrodes301is rectangular, and the plurality of touch electrodes301may also have other shapes. The first direction is a direction in which the second area300bpoints to the first area300a, and the second direction is perpendicular to the first direction. As shown inFIG.4, each touch electrode301is composed of a metal grid pattern, and the metal grid pattern is composed of arc-shaped metal lines. Each touch electrode301comprises a plurality of elliptical ring patterns3011, a plurality of first connection portions3012, and a plurality of second connection portions3013. 
In each touch electrode301, each of the first connection portions3012connects two adjacent elliptical ring patterns3011in the first direction, and each of the second connection portions3013connects two adjacent elliptical ring patterns3011in the second direction. Portions of four adjacent elliptical ring patterns3011, the first connection portions3012, and the second connection portions3013are connected to form an octagonal ring pattern3014. Each elliptical ring pattern3011surrounds the green sub-pixel G, and each octagonal ring pattern3014surrounds the blue sub-pixel B or the red sub-pixel R. As shown inFIG.5, the touch electrode inFIG.5is basically similar to the touch electrode shown inFIG.4, and the touch electrodes301are both composed of metal grid patterns. The difference lies in that the metal grid pattern inFIG.5is composed of linear metal lines, and the metal grid pattern inFIG.5is comprised of quadrilateral ring patterns, and each of the quadrilateral ring patterns surrounds the green sub-pixel G, the red sub-pixel R, or the blue sub-pixel B. In this embodiment, as shown inFIG.3, the plurality of touch electrodes301comprise first type touch electrodes301aand second type touch electrodes301b. In the first direction (a column direction), the first type touch electrodes301aand the second type touch electrodes301bare arranged side by side, and a size of the first type touch electrodes301ais different from a size of the second type touch electrodes301b. Compared with same-size touch electrodes in a traditional technology, the size difference design of the touch electrodes in the first direction of the present disclosure makes a portion of the touch component30corresponding to the display area200aof the display panel20have extra space. The extra space is used to lay the plurality of first touch electrode lines302. The plurality of first touch electrode lines302are divided into several groups, and each group of first touch electrode lines302are arranged together. A minimum distance between two adjacent first touch electrode lines302in each group of first touch electrode lines302is less than a distance between two adjacent second touch electrode lines303. In the first area300a, a distance between two adjacent first touch electrode lines302shows a decreasing trend in a direction from the second area300bto the first area300a, that is, the touch wires that would be arranged in a touch sector area in the traditional technology are instead disposed corresponding to the display area of the display panel. Compared to the traditional technology in which a size of the touch sector area is greater than a size of a first sector area of a display panel, resulting in a wider lower frame of a touch display panel, the touch component30in the embodiment of the present disclosure does not require a separate touch sector area in a non-display area, so that a frame of the touch display panel is narrowed. In this embodiment, in the first direction, the second type touch electrodes301bare disposed in a same column as the first type touch electrodes301a, and each column of touch electrodes301comprises at least one first type touch electrode301aand at least one second type touch electrode301b.
In the second direction, a plurality of first type touch electrodes301aare arranged in a same row, and a plurality of second type touch electrodes301bare arranged in a same row, that is, each row of touch electrodes301comprises a plurality of first type touch electrodes301aside by side or a plurality of second type touch electrodes301bside by side, an interval between any two adjacent columns of touch electrodes301in the first direction is the same, and an interval between any two adjacent rows of touch electrodes301is the same. In the first direction, the size of the second type touch electrodes301bis less than the size of the first type touch electrodes301a, and the second type touch electrodes301bare disposed close to a first edge2011and/or a second edge2012, and the first edge2011is an edge of a portion of the display panel20corresponding to the display area200aclose to the first touch electrode lines302, and the second edge2012is an edge of the portion of the display panel20corresponding to the display area200aopposite to the first edge2011. Since the touch performance requirement of the touch component30corresponding to edges of the display panel20is lower than the touch performance requirement of the touch component30corresponding to other areas (such as a middle area) of the display panel20, the second type touch electrodes301bare disposed close to the edge of the portion of the display panel20corresponding to the display area200ain the first direction, which helps to prevent the size difference design of the touch electrodes from affecting the touch performance of the main touch area of the touch component. Specifically, the second type touch electrodes301bare disposed close to the first edge2011, and the first type touch electrodes301aare located at a side of the second type touch electrodes301baway from the first edge2011, so that the second type touch electrodes301bare all disposed close to the first area300a, which simplifies manufacturing of the touch component30. The plurality of touch electrodes301inFIG.3only comprises one row of second type touch electrodes301b, and the others are all first type touch electrodes301a. FIG.6is a second schematic plan view of the touch display panel according to an embodiment of the present disclosure. The touch display panel shown inFIG.6is basically similar to the touch display panel shown inFIG.3, except that the plurality of touch electrodes301inFIG.6comprise a plurality of rows of adjacent second type touch electrodes301b. The plurality of rows of adjacent second type touch electrodes301bare disposed close to the first area300a. Specifically, two adjacent rows of second type touch electrodes301bare disposed close to the first area300a. FIG.7is a third schematic plan view of the touch display panel according to an embodiment of the present disclosure. The touch display panel shown inFIG.7is basically similar to the touch display panel shown inFIG.3, except that a row of second type touch electrodes301bis disposed between two adjacent rows of first type touch electrodes301a, and another row of second type touch electrodes301bis disposed close to the first area300a. In this embodiment, in the first direction, the first area300ahas a first size D1, and the first size D1is equal to an interval between the touch electrodes301close to the first touch electrode line302and the first edge2011.
In the second direction, an interval between two adjacent first type touch electrodes301ais equal to a second size D2, and the first size D1is less than or equal to the second size D2, so that a touch control algorithm can be optimized to ensure the touch performance of the touch display panel, which prevents the touch performance of the lower edge of the touch display panel from decreasing. Specifically, the second size D2is less than or equal to 1.2 mm, for example, a value of the first size D1is 1 mm, 0.8 mm, 0.6 mm, 0.4 mm, or 0.2 mm, and a value of the second size D2is 1.2 mm, 1 mm, or 0.8 mm. In this embodiment, in the first direction, a ratio of the size of the second type touch electrodes301bto the size of the first type touch electrodes301ais greater than or equal to ½ to ensure that the size of the second type touch electrodes301bcan guarantee basic touch performance. Specifically, the ratio of the size of the second type touch electrodes301bin the first direction to the size of the first type touch electrodes301ain the first direction is ¾ or ⅔. In this embodiment, as shown inFIG.3,FIG.6, andFIG.7, in the first direction, the first type touch electrodes301ahave a first height L1, and the second type touch electrodes301bhave a second height L2. The first area300ahas the first size D1, and a number of the second type touch electrodes301barranged in a same column with the at least one first type touch electrode301ais N, and N is an integer greater than or equal to 1, wherein the first height L1, the second height L2, the first size D1, and N satisfy a formula L2=L1−D1/N, so as to make full use of extra space by reducing the height of some of the touch electrodes. Specifically, a value of the first height L1of the first type touch electrodes301aranges from 3 mm to 5 mm, and a value of the second height L2of the second type touch electrodes301branges from 2.5 mm to 4.5 mm. For example, the first height L1of the first type touch electrodes301ais 4 mm, and the second height L2of the second type touch electrodes301bis 3 mm. In this embodiment, the plurality of first touch electrode lines302are disposed in a non-luminous area between adjacent sub-pixels, so as to prevent the plurality of first touch electrode lines302from blocking light emitted by the sub-pixels of the display panel20. Specifically, as shown inFIG.8, which is a first partial enlarged schematic view of the touch display panel shown inFIG.3, both the first touch electrode lines302and the second touch electrode lines303are comprised of metal grid patterns. The second touch electrode lines303extend linearly in the first direction. The second touch electrode lines303comprise the elliptical ring patterns3011and the first connection portions3012. A part of the first touch electrode lines302extend linearly in the first direction, and the part of the first touch electrode lines302are parallel to the second touch electrode lines303. A part of the first touch electrode lines302extend in the first direction in a shape of a broken line, and the part of the first touch electrode lines302comprise first wires3021parallel to the second touch electrode lines303and second wires3022perpendicular to the second touch electrode lines303; the part of the first touch electrode lines302are L-shaped or Z-shaped, and the part of the first touch electrode lines302comprise the elliptical ring patterns3011, the first connection portions3012, and the second connection portions3013.
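For reference, the relation L2=L1−D1/N stated above can be checked with the example values given in the text; in the following short Python sketch, the N=2 case is an assumed illustration corresponding to the two rows of second type touch electrodes301bshown inFIG.6and is not a value stated in the text.

# Check, illustrative only, of the relation L2 = L1 - D1/N stated above, using the
# example values from the text (L1 = 4 mm, D1 = 1 mm). The N = 2 case is an assumed
# illustration corresponding to two rows of second type touch electrodes.
def second_type_height(L1, D1, N):
    # the height D1 freed for the first area is shared by the N second type
    # touch electrodes in each column
    return L1 - D1 / N

print(second_type_height(4.0, 1.0, 1))  # 3.0 mm, matching the stated example
print(second_type_height(4.0, 1.0, 2))  # 3.5 mm for the assumed two-row case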
As shown inFIG.9, which is a second partial enlarged schematic view of the touch display panel shown inFIG.3, the touch display panel shown inFIG.9is basically similar to the touch display panel shown inFIG.8, except that the first touch electrode lines302and the second touch electrode lines303are both composed of straight metal lines, and the first touch electrode lines302comprise third wires3023, and the third wires3023are in a diagonal shape, and an angle between a line where the third wires3023are located and a line where the second touch electrode lines303are located is an acute angle. The description of the above embodiments is only used to help understand the technical solutions and core ideas of the present disclosure; those of ordinary skill in the art should understand that it is still possible to modify the technical solutions recorded in the foregoing embodiments, or equivalently replace some of the technical features, and these modifications or replacements do not cause the essence of the corresponding technical solutions to deviate from the scope of the technical solutions of the embodiments of the present disclosure. | 18,031
11861099 | DETAILED DESCRIPTION OF THE EMBODIMENTS Reference will now be made in detail to embodiments of the present disclosure, examples of which may be illustrated in the accompanying drawings. In the following description, when a detailed description of well-known functions or configurations related to this document is determined to unnecessarily cloud a gist of the inventive concept, the detailed description thereof will be omitted. The progression of processing steps and/or operations described is an example; however, the sequence of steps and/or operations is not limited to that set forth herein and can be changed as is known in the art, with the exception of steps and/or operations necessarily occurring in a particular order. Like reference numerals designate like elements throughout. Names of the respective elements used in the following explanations are selected only for convenience of writing the specification and can be thus different from those used in actual products. Advantages and features of the present disclosure, and implementation methods thereof will be clarified through following embodiments described with reference to the accompanying drawings. The present disclosure can, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the present disclosure to those skilled in the art. Further, the present disclosure is only defined by scopes of claims. The shapes, sizes, ratios, angles, numbers, and the like illustrated in the accompanying drawings for describing the example embodiments of the present disclosure are merely examples, and the present disclosure is not limited thereto. Like reference numerals generally denote like elements throughout the specification. Further, in the following description of the present disclosure, a detailed explanation of known related technologies may be omitted to avoid unnecessarily obscuring the subject matter of the present disclosure. The terms such as “including,” “having,” and “comprising” used herein are generally intended to allow other components to be added unless the terms are used with the term “only.” Any references to singular can include plural unless expressly stated otherwise. Components are interpreted to include an ordinary error range even if not expressly stated. When the position relation between two parts is described using the terms such as “over,” “on,” “above,” “below,” and “next,” one or more parts can be positioned between the two parts unless the terms are used with the term “immediately” or “directly.” Further, the phrases such as ‘disposed on’, ‘disposed over’, and ‘disposed above’ can be interchangeably used. When an element or layer is disposed “on” another element or layer, another layer or another element can be interposed directly on the other element or therebetween. Although the terms “first,” “second,” and the like are used for describing various components, these components are not confined by these terms. These terms are merely used for distinguishing one component from the other components and may not define order. Therefore, a first component to be mentioned below can be a second component in a technical concept of the present disclosure. Like reference numerals generally denote like elements throughout the specification. 
A size and a thickness of each component illustrated in the drawing are illustrated for convenience of description, and the present disclosure is not limited to the size and the thickness of the component illustrated. The features of various embodiments of the present disclosure can be partially or entirely adhered to or combined with each other and can be interlocked and operated in technically various ways, and the embodiments can be carried out independently of or in association with each other. Hereinafter, a light emitting display apparatus according to example embodiments of the present disclosure will be described in detail with reference to accompanying drawings. FIG.1is a plan view of a light emitting display apparatus according to one embodiment of the present disclosure.FIG.2is an enlarged view of area A ofFIG.1. All the components of each light emitting display apparatus according to all embodiments of the present disclosure are operatively coupled and configured. InFIGS.1and2, a substrate110, pixels PX, a power supply wiring part VDD, power wirings VDDL, and a pad part PAD are illustrated, among various components of the light emitting display apparatus100, for convenience of description. The substrate110can be a substrate110configured to support and protect the various components of the light emitting display apparatus100. The substrate110can be formed of glass or a plastic material having flexibility (e.g., a flexible substrate). When the substrate110is formed of a plastic material, the plastic material can be, for example, polyimide PI, but embodiments of the present disclosure are not limited thereto. The substrate110can include a display area AA and a non-display area NA surrounding the display area AA. The display area AA can be an area in which an image is displayed in the light emitting display apparatus100. In the display area AA, a display element and various driving elements configured to drive the display element can be disposed. For example, the display element can be a light emitting diode including an anode, a light emitting layer, and a cathode, but embodiments of the present disclosure are not limited thereto. The display element can be a liquid crystal display element. In addition, various driving elements for driving the display element, such as a thin film transistor, a capacitor, and a wiring, can be disposed at the display area AA. The display area AA will be described in more detail below with reference toFIG.4 A plurality of pixels PX are disposed at the display area AA. The plurality of pixels PX can include a plurality of sub-pixels, respectively. For example, the plurality of pixels PX can include a red sub-pixel, a green sub-pixel, and a blue sub-pixel and can be minimum units for emitting light. In addition, the plurality of pixels PX can further include a white sub-pixel. With reference toFIG.2, the plurality of pixels PX can be connected to the power wirings VDDL. Also, each of the plurality of pixels PX at the display area AA can be connected to a gate wiring and a data wiring. The non-display area NA can be an area in which an image is not displayed, and can be defined as an area surrounding the display area AA. In the non-display area NA, various components configured to drive the plurality of pixels PX can be disposed. The non-display area NA can include a power supply wiring area VLA. The power supply wiring area VLA can be an area in which wirings for supplying power to the light emitting diodes are disposed. 
The power supply wiring area VLA can be disposed adjacent to one side of the display area AA. That is, the power supply wiring area VLA can be an area positioned between the pad part PAD, to which a flexible printed circuit board FPCB is bonded, and the display area AA, such that power wirings are disposed therein to transmit power from the flexible printed circuit board FPCB to the light emitting diodes of the display area AA. InFIG.2, the power supply wiring part VDD configured to supply a high-potential voltage is illustrated, among various wirings, for convenience of description, but the arrangement of the power wirings is not limited thereto. With reference toFIG.2, the power supply wiring part VDD can be disposed at the power supply wiring area VLA, which is a partial portion of the non-display area NA. The power supply wiring part VDD can be disposed adjacent to an upper end (or an upper portion) of the display area AA. The power supply wiring part VDD, which is a wiring part configured to supply a high-potential voltage to each of the pixels PX in the display area AA, can be connected to each of the plurality of power wirings VDDL. The power supply wiring part VDD can include one or more power supply wirings. The power supply wirings constituting the power supply wiring part VDD will be described below with reference toFIG.4. The power supply wiring part VDD can extend in the same direction as the direction in which the gate wirings disposed in the display area AA extend. Also, the power supply wiring part VDD can be connected to the plurality of power wirings VDDL through connection wirings from the non-display area NA. In this situation, the connection wiring can have a smaller width than the power supply wiring part VDD, but embodiments of the present disclosure are not limited thereto. With reference toFIG.2, the power supply wiring part VDD can be connected to the plurality of power wirings VDDL. The plurality of power wirings VDDL can be disposed at the display area AA to supply a high-potential voltage to the plurality of pixels PX. Each of the plurality of power wirings VDDL can be connected to the power supply wiring part VDD. Accordingly, each of the plurality of power wirings VDDL can receive the same high-potential voltage from the power supply wiring part VDD at the same time. FIG.3is an enlarged plan view of the light emitting display apparatus according to one embodiment of the present disclosure. InFIG.3, the anode121is illustrated among various components of the light emitting diode120. With reference toFIG.3, a plurality of sub-pixels SP can be individual units configured to emit light, and the light emitting diode120can be disposed at each of the plurality of sub-pixels SP. The plurality of sub-pixels SP can include a first sub-pixel SP1, a second sub-pixel SP2, and a third sub-pixel SP3that emit light in different colors from each other. For example, the first sub-pixel SP1can be a blue sub-pixel, the second sub-pixel SP2can be a green sub-pixel, and the third sub-pixel SP3can be a red sub-pixel. The plurality of sub-pixels SP can be disposed in a pentile structure. For example, a plurality of first sub-pixels SP1and a plurality of third sub-pixels SP3can be alternately disposed in the same columns and in the same rows. For example, the first sub-pixels SP1and the third sub-pixels SP3can be alternately disposed in the same columns, and the first sub-pixels SP1and the third sub-pixels SP3can be alternately disposed in the same rows. 
The plurality of second sub-pixels SP2can be disposed in different columns and in different rows from the plurality of first sub-pixels SP1and the plurality of third sub-pixels SP3. For example, the plurality of second sub-pixels SP2can be disposed in one row, and the plurality of first sub-pixels SP1and the plurality of third sub-pixels SP3can be alternately disposed in another row adjacent to the row in which the plurality of second sub-pixels SP2is disposed. The plurality of second sub-pixels SP2can be disposed in one column, and the plurality of first sub-pixels SP1and the plurality of third sub-pixels SP3can be alternately disposed in another column adjacent to the column in which the plurality of second sub-pixels SP2is disposed. The plurality of first sub-pixels SP1and the plurality of second sub-pixels SP2can face each other in a diagonal direction, and the plurality of third sub-pixels SP3and the plurality of second sub-pixels SP2can also face each other in the diagonal direction. Accordingly, the plurality of sub-pixels SP can be disposed in a lattice shape. Although it is illustrated inFIG.3that the plurality of first sub-pixels SP1and the plurality of third sub-pixels SP3can be disposed in the same columns and in the same rows, and the plurality of second sub-pixels SP2can be disposed in different columns and in different rows from the plurality of first sub-pixels SP1and the plurality of third sub-pixels SP3, the disposition of the plurality of sub-pixels SP is not limited thereto. In addition, although it is described in the present disclosure that the plurality of sub-pixels SP includes first sub-pixels SP1, second sub-pixels SP2, and third sub-pixels SP3, the arrangement, the number, and the color combination of the plurality of sub-pixels SP can be variously modified according to design, but embodiments of the present disclosure are not limited thereto. FIG.4is a cross-sectional view of the light emitting display apparatus taken along line IV-IV′ ofFIG.2. With reference toFIG.4, the light emitting display apparatus100according to an embodiment of the present disclosure includes a substrate110, a buffer layer111, a gate insulating layer112, an interlayer insulating layer113, a passivation layer114, a first over-coating layer115, a second over-coating layer116, a bank117, a driving transistor TR, a light emitting diode120, an encapsulation part130, a touch part140, and a power supply wiring part VDD. With reference toFIG.4, the substrate110is a support member configured to support the other components of the light emitting display apparatus100, and can be formed of an insulating material. For example, the substrate110can be formed of glass, resin, or the like. Alternatively, the substrate110can be formed of a polymer or plastic such as polyimide PI, or can be formed of a material having flexibility. The buffer layer111is disposed on the display area AA and the non-display area NA of the substrate110. The buffer layer111can reduce or prevent permeation of moisture or impurities through the substrate110. The buffer layer111can be formed as a single-layer or a multi-layer of, for example, silicon oxide SiOx or silicon nitride SiNx, but embodiments of the present disclosure are not limited thereto. Meanwhile, the buffer layer111can be omitted according to the type of substrate110or the type of transistor, but embodiments of the present disclosure are not limited thereto. The driving transistor TR is disposed on the buffer layer111. 
The driving transistor TR includes an active layer ACT, a gate electrode GE, a source electrode SE, and a drain electrode DE. The active layer ACT is disposed on the buffer layer111. The active layer ACT can be formed of a semiconductor material such as an oxide semiconductor, amorphous silicon, or polysilicon, but embodiments of the present disclosure are not limited thereto. For example, when the active layer ACT is formed of an oxide semiconductor, the active layer ACT can include a channel region, a source region, and a drain region, the source region and the drain region being conductive regions, but embodiments of the present disclosure are not limited thereto. The gate insulating layer112can be disposed on the active layer ACT. The gate insulating layer112can be an insulating layer for insulating the gate electrode GE from the active layer ACT, and can be formed as a single-layer or a multi-layer of silicon oxide SiOx or silicon nitride SiNx, but embodiments of the present disclosure are not limited thereto. The gate electrode GE can be disposed on the gate insulating layer112. The gate electrode GE can be formed of a conductive material, for example, copper Cu, aluminum Al, molybdenum Mo, nickel Ni, titanium Ti, chromium Cr, or an alloy thereof, but embodiments of the present disclosure are not limited thereto. The interlayer insulating layer113can be disposed on the gate electrode GE. A contact hole can be formed at the interlayer insulating layer113to connect each of the source electrode SE and the drain electrode DE to the active layer ACT. The interlayer insulating layer113can be formed as a single-layer or a multi-layer of silicon oxide SiOx or silicon nitride SiNx, but embodiments of the present disclosure are not limited thereto. The source electrode SE and the drain electrode DE can be disposed on the interlayer insulating layer113at the display area AA. The source electrode SE and the drain electrode DE disposed to be spaced apart from each other can be electrically connected to the active layer ACT. The source electrode SE and the drain electrode DE can be formed of a conductive material, for example, copper Cu, aluminum Al, molybdenum Mo, nickel Ni, titanium Ti, chromium Cr, or an alloy thereof, but embodiments of the present disclosure are not limited thereto. The passivation layer114can be disposed at the display area AA and the non-display area NA and on the source electrode SE and the drain electrode DE. The passivation layer114can be an insulating layer for protecting the components disposed lower than the passivation layer114. For example, the passivation layer114can be formed as a single-layer or a multi-layer of silicon oxide SiOx or silicon nitride SiNx, but embodiments of the present disclosure are not limited thereto. In addition, the passivation layer114can be omitted in some embodiments of the present disclosure. The first over-coating layer115can be disposed on the passivation layer114. The first over-coating layer115can be an insulating layer for planarization over the substrate110. The first over-coating layer115can be formed of an organic material, and can be formed as a single-layer or a multi-layer of, for example, polyimide or photo acryl, but embodiments of the present disclosure are not limited thereto. An intermediate electrode118can be disposed on the first over-coating layer115at the display area AA. The intermediate electrode118can be electrically connected to the drain electrode DE through a contact hole. 
The intermediate electrode118can be formed of the same material as either the source electrode SE or the drain electrode DE, but embodiments of the present disclosure are not limited thereto. The second over-coating layer116can be disposed in the display area AA and on the intermediate electrode118. The second over-coating layer116can be provided for planarization over the intermediate electrode118. The second over-coating layer116can be formed of the same material as the first over-coating layer115, but embodiments of the present disclosure are not limited thereto. With reference toFIGS.3and4together, the plurality of the light emitting diodes120can be disposed at the plurality of sub-pixels SP, respectively, on the second over-coating layer116at the display area AA. The light emitting diode120can include an anode121, a light emitting layer122, and a cathode123. The anode121can be disposed on the second over-coating layer116. The anode121can be electrically connected to a transistor of a pixel circuit, for example, the driving transistor TR, to receive a driving current supplied therefrom. The anode121, which supplies holes to the light emitting layer122, can be formed of a conductive material having a high work function. The anode121can be formed of, for example, a transparent conductive material such as indium tin oxide ITO or indium zinc oxide IZO, but embodiments of the present disclosure are not limited thereto. In addition, the light emitting display apparatus100can be implemented in a top emission type or in a bottom emission type. When the light emitting display apparatus100is in the top emission type, a reflective layer can be additionally included under the anode121, the reflective layer being formed of a metal material having excellent reflection efficiency, such as aluminum (Al) or silver (Ag), so that light emitted from the light emitting layer122is reflected by the anode121and then directed upwardly, for example, toward the cathode123. On the other hand, when the light emitting display apparatus100is in the bottom emission type, the anode121can be formed of the transparent conductive material alone. Hereinafter, it is assumed that the light emitting display apparatus100according to an embodiment of the present disclosure is in the top emission type. The bank117can be disposed on the anode121and the second over-coating layer116. The bank117can be an insulating layer disposed between the plurality of sub-pixels SP to distinguish the plurality of sub-pixels SP from each other. The bank117can include an opening for exposing a partial portion of the anode121. The bank117can be an organic insulating material disposed to cover an edge or a perimeter or a periphery portion of the anode121. The bank117can be formed of, for example, polyimide-based, acrylic-based, or benzocyclobutene (BCB)-based resin, but embodiments of the present disclosure are not limited thereto. The light emitting layer122can be disposed on the anode121and over the bank117. The light emitting layer122can be a layer for emitting light in a specific color. Different light emitting layers122can be disposed in the first sub-pixel SP1, the second sub-pixel SP2, and the third sub-pixel SP3, respectively, or identical light emitting layers122can be disposed in all of the plurality of sub-pixels SP. 
For example, when different light emitting layers122are disposed in the plurality of sub-pixels SP, respectively, a blue light emitting layer can be disposed in the first sub-pixel SP1, a green light emitting layer can be disposed in the second sub-pixel SP2, and a red light emitting layer can be disposed in the third sub-pixel SP3. Alternatively, the light emitting layers122of the plurality of sub-pixels SP can be connected to each other to form a single layer over the plurality of sub-pixels SP. For example, a light emitting layer122can be disposed on all of the plurality of sub-pixels SP, and light from the light emitting layer122can be converted into light with various colors through a light conversion layer, a color filter, or the like, which is separately provided. The cathode123can be disposed on the light emitting layer122. The cathode123, which supplies electrons to the light emitting layer122, can be formed of a conductive material having a low work function. The cathode123can be formed as a single layer over the plurality of sub-pixels SP. For example, the respective cathodes123of the plurality of sub-pixels SP can be connected to each other to be integrally formed. The cathode123can be formed of, for example, a transparent conductive material such as indium tin oxide ITO or indium zinc oxide IZO, a metal alloy such as MgAg, or a ytterbium Yb alloy, and can further include a metal-doped layer, but embodiments of the present disclosure are not limited thereto. Also, the cathode123can be electrically connected to a low-potential power supply wiring to receive a low-potential power supply signal supplied therefrom. With reference toFIG.4, the encapsulation part130can be disposed on the light emitting diode120. For example, the encapsulation part130can be disposed on the cathode123to cover the light emitting diode120. The encapsulation part130can protect the light emitting diode120from moisture or the like permeating from the outside of the light emitting display apparatus100. The encapsulation part130can include a first encapsulation layer131, a foreign material cover layer132, and a second encapsulation layer133. The first encapsulation layer131can be disposed on the cathode123to suppress or prevent permeation of moisture or oxygen. The first encapsulation layer131can be formed of an inorganic material such as silicon nitride SiNx, silicon oxynitride SiNxOy, or aluminum oxide AlyOz, but embodiments of the present disclosure are not limited thereto. The foreign material cover layer132can be disposed on the first encapsulation layer131to planarize a surface. In addition, the foreign material cover layer132can cover foreign materials or particles that can occur in the manufacturing process. The foreign material cover layer132can be formed of an organic material such as silicon oxycarbon (SiOxCz) or acrylic-based or epoxy-based resin, but embodiments of the present disclosure are not limited thereto. The second encapsulation layer133can be disposed on the foreign material cover layer132. The second encapsulation layer133can be disposed to cover a top surface (or an upper surface) and a side surface of the foreign material cover layer132, a side surface of the bank117, and a side surface of the second over-coating layer116. Like the first encapsulation layer131, the second encapsulation layer133can suppress or prevent permeation of moisture or oxygen. 
The second encapsulation layer133can be formed of an inorganic material such as silicon nitride SiNx, silicon oxynitride SiNxOy, silicon oxide SiOx, or aluminum oxide AlyOz, but embodiments of the present disclosure are not limited thereto. The second encapsulation layer133can be formed of the same material as the first encapsulation layer131, or can be formed of a different material than the first encapsulation layer131. A first power supply wiring VDD1can be disposed on the interlayer insulating layer113at the non-display area NA. The first power supply wiring VDD1can be electrically connected to the driving transistor TR. Accordingly, the first power supply wiring VDD1supplies a voltage to the driving transistor TR to operate the driving transistor TR. The first power supply wiring VDD1can be formed of the same material as the drain electrode DE, but embodiments of the present disclosure are not limited thereto. A second power supply wiring VDD2can be disposed on the first over-coating layer115at the non-display area NA. The second power supply wiring VDD2disposed on the first power supply wiring VDD1can be electrically connected in parallel to the first power supply wiring VDD1through a contact hole. The second power supply wiring VDD2can be formed of the same material as the intermediate electrode118, but embodiments of the present disclosure are not limited thereto. With reference toFIG.4, the touch part140can be disposed on the second encapsulation layer133at the display area AA. The touch part140can be disposed on the first over-coating layer115and the second power supply wiring VDD2at the non-display area NA. The touch part140can include a first inorganic insulating layer141, a first touch part144, a second inorganic insulating layer142, a second touch part145, and an organic insulating layer143. The first inorganic insulating layer141can be disposed on the second encapsulation layer133at the display area AA and on the first over-coating layer115and the second power supply wiring VDD2at the non-display area NA. The first inorganic insulating layer141can be disposed to contact a top surface (or an upper surface) and a side surface of the second encapsulation layer133at the display area AA and cover the first over-coating layer115and the second power supply wiring VDD2at the non-display area NA. The first inorganic insulating layer141can be formed of an inorganic material, such as silicon nitride SiNx, silicon oxide SiOx, or silicon oxynitride SiON, but embodiments of the present disclosure are not limited thereto. The first touch part144can be disposed on the first inorganic insulating layer141. The first touch part144can be disposed in the display area AA on the first inorganic insulating layer141. The first touch part144can include a plurality of patterns disposed to be spaced apart from each other in an X-axis direction and a plurality of patterns disposed to be spaced apart from each other in a Y-axis direction. The first touch part144supplies a touch driving signal for driving the touch part140. In addition, the first touch part144can transmit touch information sensed by the touch part140to a driving IC. The first touch part144can be formed in a mesh shape, but embodiments of the present disclosure are not limited thereto. The first touch part144can be formed of the same material as the source electrode SE or the drain electrode DE of the driving transistor TR, but embodiments of the present disclosure are not limited thereto. 
The second inorganic insulating layer142can be disposed on the first touch part144and the first inorganic insulating layer141. The second inorganic insulating layer142can suppress a short circuit of the first touch part144disposed adjacent thereto. The second inorganic insulating layer142can be formed of an inorganic material, such as silicon nitride SiNx, silicon oxide SiOx, or silicon oxynitride SiON, but embodiments of the present disclosure are not limited thereto. The second touch part145can be disposed on the second inorganic insulating layer142. The second touch part145can connect the plurality of patterns disposed to be spaced apart from each other in the X-axis direction to each other, or can connect the plurality of patterns disposed to be spaced apart from each other in the Y-axis direction to each other. Since the plurality of patterns disposed in the X-axis direction and the plurality of patterns disposed in the Y-axis direction included in the first touch part144are disposed on the same plane, the plurality of patterns disposed in the X-axis direction or the plurality of patterns disposed in the Y-axis direction are spaced apart and separated from each other at sections where the plurality of patterns disposed in the X-axis direction and the plurality of patterns disposed in the Y-axis direction intersect each other. Accordingly, the second touch part145can connect the plurality of patterns disposed to be spaced apart from each other in the X-axis direction to each other, or can connect the plurality of patterns disposed to be spaced apart from each other in the Y-axis direction to each other. The organic insulating layer143can be disposed on the first inorganic insulating layer141, the second touch part145, and the second inorganic insulating layer142. The organic insulating layer143can be provided for planarization over the second touch part145and protect the components disposed lower than the organic insulating layer143. The organic insulating layer143can be formed of an epoxy-based or acrylic-based polymer, but embodiments of the present disclosure are not limited thereto. In addition, a polarizing plate can be further disposed on the touch part140. The polarizing plate can be disposed on the touch part140to reduce reflection of external light incident on the light emitting display apparatus100. In addition, various optical films or protective films can be further disposed on the touch part140. In the light emitting display apparatus100according to an embodiment of the present disclosure, a voltage drop can be reduced by using the first power supply wiring VDD1and the second power supply wiring VDD2included in the power supply wiring part VDD. For example, the first power supply wiring VDD1and the second power supply wiring VDD2can be disposed in the power supply wiring area VLA of the non-display area NA and electrically connected in parallel to each other through a contact hole. Accordingly, the power supply wiring part VDD can more stably supply a high-potential voltage to the plurality of power wirings VDDL, thereby reducing a voltage drop in the plurality of power wirings VDDL and improving a brightness uniformity of the light emitting display apparatus100. FIGS.5and6are enlarged views of a light emitting display apparatus according to another embodiment of the present disclosure.FIG.7is a cross-sectional view of the light emitting display apparatus according to another embodiment of the present disclosure. 
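Before the embodiment of FIGS. 5 to 7 is described, the effect of connecting power supply wirings in parallel, and of making an added wiring thick rather than anode-thin, can be illustrated with a short numerical sketch. The sketch below is purely illustrative: the resistivity, wiring dimensions, and load current are hypothetical assumptions, not values from the present disclosure; it only works through the general relationships R = ρ·L/(w·t) and V_drop = I·R.

# Illustrative sketch only: all numeric values are hypothetical assumptions.
# It shows why stacking power supply wirings in parallel (e.g., VDD1 and VDD2
# connected through contact holes) lowers the effective wiring resistance and
# therefore the IR drop, and why an added wiring helps more when it is thick.

def wire_resistance(resistivity_ohm_m, length_m, width_m, thickness_m):
    """Resistance of a straight wiring segment: R = rho * L / (w * t)."""
    return resistivity_ohm_m * length_m / (width_m * thickness_m)

def parallel(*resistances):
    """Equivalent resistance of wirings electrically connected in parallel."""
    return 1.0 / sum(1.0 / r for r in resistances)

RHO_AL = 2.8e-8      # aluminum resistivity [ohm*m], typical bulk value
LENGTH = 0.05        # 50 mm run along the power supply wiring area (hypothetical)
WIDTH = 2e-3         # 2 mm wiring width (hypothetical)
T_BUS = 1e-6         # 1 um bus wiring thickness (hypothetical)
T_ANODE_LIKE = 1e-7  # 0.1 um, an anode-like thin layer (hypothetical)
I_LOAD = 0.05        # panel load current [A] (hypothetical)

r_vdd1 = wire_resistance(RHO_AL, LENGTH, WIDTH, T_BUS)         # first power supply wiring
r_vdd2 = wire_resistance(RHO_AL, LENGTH, WIDTH, T_BUS)         # second power supply wiring
r_thin = wire_resistance(RHO_AL, LENGTH, WIDTH, T_ANODE_LIKE)  # added wiring, anode-thin
r_thick = wire_resistance(RHO_AL, LENGTH, WIDTH, T_BUS)        # added wiring, full thickness

for label, r_total in [
    ("VDD1 only", r_vdd1),
    ("VDD1 || VDD2", parallel(r_vdd1, r_vdd2)),
    ("VDD1 || VDD2 || thin third wiring", parallel(r_vdd1, r_vdd2, r_thin)),
    ("VDD1 || VDD2 || thick third wiring", parallel(r_vdd1, r_vdd2, r_thick)),
]:
    print(f"{label:36s} R = {r_total:5.3f} ohm, IR drop = {I_LOAD * r_total * 1e3:5.1f} mV")

With these assumed numbers, the anode-thin added wiring barely changes the drop, whereas an added wiring of full bus thickness lowers it substantially; this is the same qualitative point made below for the third power supply wiring VDD3.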
As compared with the light emitting display apparatus100ofFIGS.1to4, the light emitting display apparatus200ofFIGS.5to7substantially has the same configuration, while being different only in a power supply wiring part VDD and a touch part240. Thus, the overlapping description for the same components will be omitted.FIG.5is a diagram for explaining a first power supply wiring VDD1and a second power supply wiring VDD2of the power supply wiring part VDD, andFIG.6is a diagram for explaining a third power supply wiring VDD3of the power supply wiring unit VDD. For convenience of description, components other than the power supply wiring part VDD are schematically illustrated or omitted inFIGS.5and6. With reference toFIG.5, the first power supply wiring VDD1and the second power supply wiring VDD2of the power supply wiring part VDD can be disposed in the power supply wiring area VLA, which is a partial portion of the non-display area NA. The first power supply wiring VDD1and the second power supply wiring VDD2can be disposed adjacent to an upper end of the display area AA. The first power supply wiring VDD1and the second power supply wiring VDD2, which are wirings for supplying a high-potential voltage to each of the pixels PX in the display area AA, can be connected to each of the plurality of power wirings VDDL. The first power supply wiring VDD1and the second power supply wiring VDD2can include a first portion directly connected to the pad part PAD, a second portion directly connected to the plurality of power wirings VDDL, and a third portion connecting the first portion and the second portion to each other. As illustrated inFIG.5, the first portion (e.g., three vertical branches inFIG.5) can include a plurality of branch-shaped portions directly connected to the pad part PAD and a straight line-shaped portion (e.g., horizontal bar shaped portion inFIG.5) to which the plurality of branch-shaped portions are connected (e.g., together forming a trident shape or a letter “E” shape rotated 270 degrees). Also, the pad part PAD and the first portion of the first power supply wiring VDD1and the second power supply wiring VDD2can form a letter “B” shape rotated 270 degrees. As illustrated inFIG.5, the second portion can have a straight line shape (e.g., the lower horizontal bar shaped portion inFIG.5). In addition, the third portion (e.g., the fifteen vertical wire segments inFIG.5) can include a plurality of wirings connecting the first portion and the second portion to each other. Also, the straight line-shaped portion of the first portion can be shorter than the straight line shape of the second portion, and the straight line shape of the second portion can be thicker than the straight line-shaped portion of the first portion. However, the shapes of the first portion, the second portion, and the third portion are not limited thereto. With reference toFIG.6, the third power supply wiring VDD3of the power supply wiring part VDD can be disposed at the power supply wiring area VLA, which is a partial portion of the non-display area NA. The third power supply wiring VDD3can be disposed adjacent to the upper end of the display area AA. The third power supply wiring VDD3can be disposed to overlap the first portion of the first power supply wiring VDD1and the second power supply wiring VDD2. In addition, the third power supply wiring VDD3can have the same shape as the first portion of the first power supply wiring VDD1and the second power supply wiring VDD2. 
For example, taken together, the third power supply wiring VDD3and the first portion of the first power supply wiring VDD1and the second power supply wiring VDD2can form a triple layered trident head shape. The third power supply wiring VDD3can be electrically connected in parallel to the first power supply wiring VDD1and the second power supply wiring VDD2. This will be described in more detail with reference toFIG.7. With reference toFIG.7, the third power supply wiring VDD3can be disposed on the first inorganic insulating layer141. The third power supply wiring VDD3can be disposed at the non-display area NA on the first inorganic insulating layer141. The third power supply wiring VDD3can be electrically connected to the second power supply wiring VDD2through a contact hole. Accordingly, the third power supply wiring VDD3can be electrically connected in parallel to the first power supply wiring VDD1and the second power supply wiring VDD2. The third power supply wiring VDD3can be formed of the same material on the same layer as the first touch part144, but embodiments of the present disclosure are not limited thereto. The third power supply wiring VDD3can have a larger thickness than the anode121of the light emitting diode120. For example, the anode has a relatively smaller thickness as compared with the third power supply wiring VDD3. If the third power supply wiring VDD3is formed of the same material at the same thickness as the anode121, the third power supply wiring VDD3can have an insufficient effect in reducing a total resistance of the power supply wiring part VDD. Thus, in the light emitting display apparatus200according to another embodiment of the present disclosure, the total resistance of the power supply wiring part VDD can be effectively reduced by forming the third power supply wiring VDD3to have a larger thickness than the anode121of the light emitting diode120. A second inorganic insulating layer242of the touch part240can be disposed on the first touch part144, the first inorganic insulating layer141, and the third power supply wiring VDD3. The second inorganic insulating layer242can suppress or prevent a short circuit of the first touch part144disposed adjacent thereto. In addition, the second inorganic insulating layer242can be disposed to cover the third power supply wiring VDD3. The second inorganic insulating layer242can be formed of an inorganic material. For example, the second inorganic insulating layer242can be formed of an inorganic material such as silicon nitride SiNx, silicon oxide SiOx, or silicon oxynitride SiON, but embodiments of the present disclosure are not limited thereto. Hereinafter, the effects and advantages of the light emitting display apparatus200according to another embodiment of the present disclosure will be described in more detail with reference toFIGS.8and9together. FIG.8is a diagram showing a voltage drop and a brightness uniformity in a light emitting display apparatus according to an experimental example, andFIG.9is a diagram showing a voltage drop and a brightness uniformity in the light emitting display apparatus according to another embodiment of the present disclosure. First, referring toFIG.8and Table 1 below, it was confirmed in the light emitting display apparatus according to the experimental example that the voltage drop (IR Drop) occurred by 0.021 V at the top ({circle around (1)}), by 0.042 V at the middle ({circle around (2)}), and by 0.048 V at the bottom ({circle around (3)}). 
In addition, it was confirmed in the light emitting display apparatus according to the experimental example that the brightness uniformity was 95.4%.

TABLE 1

                                 IR Drop (V)    Uniformity
Top ({circle around (1)})           0.021         95.4%
Middle ({circle around (2)})        0.042
Bottom ({circle around (3)})        0.048

In contrast, with reference toFIG.9and Table 2 below, it was confirmed in the light emitting display apparatus200according to another embodiment of the present disclosure that the voltage drop (IR Drop) occurred by 0.020 V at the top ({circle around (1)}), by 0.040 V at the middle ({circle around (2)}), and by 0.044 V at the bottom ({circle around (3)}). In addition, it was confirmed in the light emitting display apparatus200according to another embodiment of the present disclosure that the brightness uniformity was 97.7%. That is, it was confirmed that the light emitting display apparatus200according to another embodiment of the present disclosure had a smaller voltage drop while having a larger brightness uniformity by about 2%, as compared to the organic light emitting display apparatus according to the experimental example.

TABLE 2

                                 IR Drop (V)    Uniformity
Top ({circle around (1)})           0.020         97.7%
Middle ({circle around (2)})        0.040
Bottom ({circle around (3)})        0.044

In the light emitting display apparatus according to the experimental example, wirings constituting a power supply wiring part having a two-stack or three-stack structure are additionally disposed in the display area. In this situation, the wirings can be formed in the same process as the anode electrodes of the light emitting diodes. However, the wirings are formed to have a relatively smaller thickness than the anode electrodes, resulting in a problem that the effect thereof is not sufficient in reducing a total resistance of the power supply wiring part. In addition, in accordance with the high resolution of the light emitting display apparatus, a pentile-structure pixel arrangement method has been introduced. However, in the pentile-structure pixel arrangement, when designing an additional wiring in the display area, the additional wiring needs to avoid power wirings and anodes, resulting in a problem that the length of the additional wiring may increase, thereby causing an increase in resistance of the additional wiring. Further, there has been another problem that no process margin may remain between the additional wiring and the power wirings and anodes, and as a result, there may be a region in which another additional wiring cannot be designed. In order to solve or address the problems, in the light emitting display apparatus200according to another embodiment of the present disclosure, a space in the power supply wiring area VLA of the non-display area NA of the touch part240allowing a wiring to be designed therein is utilized to reduce a voltage drop of the light emitting display apparatus200. For example, the voltage drop of the light emitting display apparatus200can be reduced by disposing the third power supply wiring VDD3formed of the same material as the first touch part144of the touch part240in the power supply wiring part VDD. The third power supply wiring VDD3can be disposed at the non-display area NA of the touch part140and electrically connected in parallel to the first power supply wiring VDD1and the second power supply wiring VDD2through the contact hole.
Accordingly, the power supply wiring part VDD can more stably supply a high-potential voltage to the plurality of power wirings VDDL, thereby reducing voltage drops in the plurality of power wirings VDDL. Therefore, the brightness uniformity of the light emitting display apparatus200can be improved according to the reduction in voltage drop of the plurality of power wirings VDDL, by disposing the third power supply wiring VDD3in the power supply wiring part VDD. A light emitting display apparatus according to one or more embodiments of the present disclosure will be described as follows. A light emitting display apparatus according to an embodiment of the present disclosure comprises a substrate including a display area and a non-display area positioned outside the display area and having a power supply wiring part, a first power supply wiring disposed in the power supply wiring part, a second power supply wiring disposed on the first power supply wiring, and a wiring electrically connected to the second power supply wiring. According to some embodiments of the present disclosure, the light emitting display apparatus further includes a thin film transistor disposed at the display area, and a touch part disposed on the thin film transistor to overlap the display area and the non-display area. The touch part can include a first inorganic insulating layer, a first touch part disposed on the first inorganic insulating layer, a second inorganic insulating layer disposed on the first touch part, and a second touch part disposed on the second inorganic insulating layer. The wiring can be disposed on the first inorganic insulating layer at the non-display area. According to some embodiments of the present disclosure, the wiring can be disposed at the power supply wiring part. According to some embodiments of the present disclosure, the thin film transistor can include a source electrode and a drain electrode, and the first touch part and the second touch part can be formed of the same material as the source electrode and the drain electrode. According to some embodiments of the present disclosure, the second inorganic insulating layer can be disposed to cover the wiring. According to some embodiments of the present disclosure, the thin film transistor can include a source electrode and a drain electrode, and the first power supply wiring and the second power supply wiring can be formed of the same material as the source electrode and the drain electrode. According to some embodiments of the present disclosure, the light emitting display apparatus can further include an anode disposed on the thin film transistor. The thickness of the wiring is larger than the thickness of the anode. According to some embodiments of the present disclosure, the light emitting display apparatus can further include a plurality of sub-pixels disposed at the display area. The plurality of sub-pixels can be configured to have a pentile structure. 
According to another embodiment of the present disclosure, a light emitting display apparatus comprises a substrate including a display area and a power supply wiring part disposed outside the display area, a thin film transistor disposed at the display area and including a source electrode and a drain electrode, a touch part disposed on the thin film transistor, a first power supply wiring disposed at the power supply wiring part, a second power supply wiring disposed on the first power supply wiring, and a wiring disposed on the second power supply wiring and electrically connected to the second power supply wiring. According to some embodiments of the present disclosure, the touch part can include a first inorganic insulating layer, a first touch part disposed on the first inorganic insulating layer, a second inorganic insulating layer disposed on the first touch part, and a second touch part disposed on the second inorganic insulating layer. The wiring can be formed of the same material on the same layer as the first touch part. According to some embodiments of the present disclosure, the first power supply wiring and the second power supply wiring can be formed of the same material as the source electrode and the drain electrode. According to some embodiments of the present disclosure, the light emitting display apparatus can further include an anode disposed on the thin film transistor. The wiring can have a larger thickness than the anode. It will be apparent to those skilled in the art that various modifications and variations can be made in the present disclosure without departing from the technical idea or scope of the disclosures. Thus, it may be intended that embodiments of the present disclosure cover the modifications and variations of the disclosure provided they come within the scope of the appended claims and their equivalents. | 46,585 |
11861100 | DETAILED DESCRIPTION OF THE EMBODIMENT The advantages and features of the present disclosure, and methods for accomplishing the same will be more clearly understood from exemplary embodiments described below with reference to the accompanying drawings. However, the present disclosure is not limited to the following exemplary embodiments but may be implemented in various different forms. The exemplary embodiments are provided only to complete disclosure of the present disclosure and to fully provide a person with ordinary skill in the art to which the present disclosure pertains with the category of the present disclosure, and the present disclosure will be defined by the appended claims. The shapes, dimensions, ratios, angles, numbers, and the like illustrated in the accompanying drawings for describing the exemplary embodiments of the present disclosure are merely examples, and the present disclosure is not limited thereto. Like reference numerals generally denote like elements throughout the specification. Further, in the following description of the present disclosure, a detailed explanation of known related technologies may be omitted to avoid unnecessarily obscuring the subject matter of the present disclosure. The terms such as “including,” “having,” and “consist of” used herein are generally intended to allow other components to be added unless the terms are used with the term “only”. Any references to singular may include plural unless expressly stated otherwise. Components are interpreted to include an ordinary error range even if not expressly stated. When the position relation between two parts is described using the terms such as “on”, “above”, “below”, and “next”, one or more parts may be positioned between the two parts unless the terms are used with the term “immediately” or “directly”. When an element or layer is referred to as being “on” another element or layer, it may be directly on the other element or layer, or intervening elements or layers may be present. Although the terms “first”, “second”, and the like are used for describing various components, these components are not confined by these terms. These terms are merely used for distinguishing one component from the other components. Therefore, a first component to be mentioned below may be a second component in a technical concept of the present disclosure. Throughout the whole specification, the same reference numerals denote the same elements. Since the dimensions and thickness of each component illustrated in the drawings are represented for convenience in explanation, the present disclosure is not necessarily limited to the illustrated dimensions and thickness of each component. The features of various embodiments of the present disclosure can be partially or entirely coupled to or combined with each other and can be interlocked and operated in technically various ways, and the embodiments can be carried out independently of or in association with each other. Hereinafter, various exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. FIG.1is a diagram illustrating a schematic configuration of a display device according to an exemplary embodiment of the present disclosure.FIG.2is a view illustrating a display device according to an exemplary embodiment of the present disclosure.FIG.1is a system diagram of a display device. 
Referring toFIG.1, a display device includes a display panel DISP in which a plurality of data lines and a plurality of gate lines are disposed and a plurality of sub pixels defined by the plurality of data lines and the plurality of gate lines are disposed, a data driving circuit DDC which drives the plurality of data lines, a gate driving circuit GDC which drives the plurality of gate lines, and a display controller DCTR which controls operations of the data driving circuit DDC and the gate driving circuit GDC. Each of the data driving circuit DDC, the gate driving circuit GDC, and the display controller DCTR may be implemented by one or more individual components. In some cases, two or more of the data driving circuit DDC, the gate driving circuit GDC, and the display controller DCTR may be implemented to be combined as one component. For example, the data driving circuit DDC and the display controller DCTR may be implemented as one integrated chip (IC chip). In order to provide a touch sensing function, the display device according to exemplary embodiments of the present disclosure may include a touch panel TSP and a touch sensing circuit TSC. The touch panel TSP includes a plurality of touch electrodes. The touch sensing circuit TSC supplies a touch driving signal to the touch panel TSP and detects a touch sensing signal from the touch panel TSP to sense the presence of a touch of a user or a touch position (touch coordinate) in the touch panel TSP based on the detected touch sensing signal. For example, the touch sensing circuit TSC may include a touch driving circuit TDC and a touch controller TCTR. The touch driving circuit TDC supplies a touch driving signal to the touch panel TSP and detects a touch sensing signal from the touch panel TSP. The touch controller TCTR senses the presence of a touch of a user and/or a touch position in the touch panel TSP based on the touch sensing signal detected by the touch driving circuit TDC. The touch driving circuit TDC may include a first circuit part which supplies the touch driving signal to the touch panel TSP and a second circuit part which detects the touch sensing signal from the touch panel TSP. The touch driving circuit TDC and the touch controller TCTR may be implemented by separate components or in some cases, may be implemented to be combined as one component. In the meantime, each of the data driving circuit DDC, the gate driving circuit GDC, and the touch driving circuit TDC may be implemented by one or more integrated circuits. From the viewpoint of electrical connection with the display panel DISP, the circuits may be implemented by a chip on glass (COG) type, a chip on film (COF) type, a tape carrier package (TCP) type, or the like. Further, the gate driving circuit GDC may also be implemented by a gate in panel (GIP) type. In the meantime, each of circuit configurations DDC, GDC, and DCTR for driving the display panel DISP and circuit configurations TDC and TCTR for touch sensing may be implemented by one or more individual components. In some cases, one or more of circuit configurations DDC, GDC, and DCTR for display driving and one or more of circuit configurations TDC and TCTR for touch sensing are functionally integrated to be implemented by one or more components. For example, the data driving circuit DDC and the touch driving circuit TDC may be implemented to be integrated in one or two or more integrated circuit chips. 
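As a purely schematic illustration of the component relationships described above, the composition can be sketched as follows. The class names and fields are hypothetical and do not correspond to any real driver API; the sketch only mirrors the roles of the display driving blocks (DDC, GDC, DCTR) and the touch sensing blocks (TDC, TCTR), and the possibility of merging some of them into one integrated circuit chip.

# Schematic model only; names and fields are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataDrivingCircuit:      # DDC: drives the plurality of data lines
    data_lines: int

@dataclass
class GateDrivingCircuit:      # GDC: drives the plurality of gate lines
    gate_lines: int

@dataclass
class TouchDrivingCircuit:     # TDC: supplies touch driving signals, detects touch sensing signals
    touch_channels: int

@dataclass
class DisplayController:       # DCTR: controls the operations of the DDC and the GDC
    ddc: DataDrivingCircuit
    gdc: GateDrivingCircuit

@dataclass
class TouchController:         # TCTR: derives touch presence / coordinates from TDC readings
    tdc: TouchDrivingCircuit

@dataclass
class IntegratedChip:          # one IC carrying several functional blocks, e.g. DDC + TDC
    blocks: List[object] = field(default_factory=list)

ddc = DataDrivingCircuit(data_lines=1080 * 3)
tdc = TouchDrivingCircuit(touch_channels=32)
combined_ic = IntegratedChip(blocks=[ddc, tdc])      # DDC and TDC integrated in one chip
dctr = DisplayController(ddc=ddc, gdc=GateDrivingCircuit(gate_lines=2400))
tctr = TouchController(tdc=tdc)

# The controllers and the integrated chip refer to the same functional blocks.
print(dctr.ddc is combined_ic.blocks[0], tctr.tdc is combined_ic.blocks[1])  # True True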
When the data driving circuit DDC and the touch driving circuit TDC are implemented to be integrated in two or more integrated circuit chips, each of the two or more integrated circuit chips may have a data driving function and a touch driving function. In the meantime, the display device according to the exemplary embodiments of the present disclosure may be various types such as an organic light emitting display device or a liquid crystal display device. In the following description, for the convenience of description, it will be described that the display device with an integrated touch screen is an organic light emitting display device as an example. That is, even though the display panel DISP may be various types such as an organic light emitting display panel or a liquid crystal display panel, in the following description, for the convenience of description, it will be described that the display panel DISP is an organic light emitting display panel as an example. The touch panel TSP may include a plurality of touch electrodes which are applied with a touch driving signal or detect a touch sensing signal therefrom and a plurality of touch routing lines which connect the plurality of touch electrodes to the touch driving circuit TDC. The touch panel TSP may be provided at the outside of the display panel DISP. For example, the touch panel TSP and the display panel DISP may be separately manufactured to be combined. Such a touch panel TSP is called an external type or an add-on type, but is not limited to this terminology. The touch panel TSP may be embedded in the display panel DISP. For example, when the display panel DISP is manufactured, a touch sensor structure such as a plurality of touch electrodes and a plurality of touch routing lines which configure a touch panel TSP may be formed together with electrodes and signal lines for driving the display device. Such a touch panel TSP is called an embedded type, but is not limited to this terminology. Hereinafter, for the convenience of description, it is assumed that the touch panel TSP is an embedded type, but the touch panel TSP is not limited thereto. Referring toFIG.2, the display panel DISP includes a camera area CA in which the plurality of sub pixels SP are not disposed, but a camera is disposed, a display area AA which surrounds the camera area CA and has a plurality of sub pixels SP disposed therein, and a non-display area NA which is disposed in the outer periphery of the display area AA. In the display area AA, a plurality of sub pixels SP for image displaying is disposed and various electrodes or signal lines for display driving may be disposed. Further, in the display area AA, a plurality of touch electrodes for touch sensing and a plurality of touch routing lines electrically connected thereto may be disposed. Accordingly, the display area AA may also be referred to as a touch sensing area which is capable of sensing the touch. In the non-display area NA, no image is displayed and wiring lines and circuit units may be formed. For example, in the non-display area NA, a plurality of pads may be disposed and the pads may be connected to the plurality of sub pixels SP of the display areas AA, respectively. 
Further, in the non-display area NA of the display panel DISP, link lines extending from a plurality of touch routing lines disposed in the display area AA or link lines which are electrically connected to a plurality of touch routing lines disposed in the display area AA, and pads which are electrically connected to the link lines may be disposed. The pads disposed in the non-display area NA may be bonded or electrically connected with the touch driving circuit TDC. In the non-display area NA, a part of an outermost touch electrode among a plurality of touch electrodes disposed in the display area AA expands or one or more electrodes (touch electrodes) formed of the same material as the plurality of touch electrodes disposed in the display area AA may be further disposed. For example, all the plurality of touch electrodes disposed in the display panel DISP may be disposed in the display area AA or some (for example, an outermost touch electrode) among the plurality of touch electrodes disposed in the display panel DISP may be disposed in the non-display area NA. Some (for example, an outermost touch electrode) among the plurality of touch electrodes disposed in the display panel DISP may be disposed in both the display area AA and the non-display area NA. FIG.3is an exemplary view of a touch panel in a display device according to an exemplary embodiment of the present disclosure. InFIG.3, for the convenience of description, among various configurations of the display device100, only an encapsulation unit ENCAP, a touch electrode TE, a routing line TL, and a touch pad TP are illustrated. On the encapsulation unit ENCAP, a plurality of first touch electrode lines X-TEL and a plurality of second touch electrode lines Y-TEL are disposed. Each of the plurality of first touch electrode lines X-TEL is disposed in a first direction X and each of the plurality of second touch electrode lines Y-TEL may be disposed in a second direction Y intersecting the first direction X. Each of the plurality of first touch electrode lines X-TEL includes a plurality of first touch electrodes X-TE which are electrically connected, and each of the plurality of second touch electrode lines Y-TEL includes a plurality of second touch electrodes Y-TE which are electrically connected. The plurality of first touch electrodes X-TE which configure the plurality of first touch electrode lines X-TEL may be driving touch electrodes and the plurality of second touch electrodes Y-TE which configure the plurality of second touch electrode lines Y-TEL may be sensing touch electrodes and vice versa. The plurality of touch routing lines TL may include at least one first-touch routing line X-TL connected to each of the plurality of first touch electrode lines X-TEL and at least one second touch routing line Y-TL connected to each of the plurality of second touch electrode lines Y-TEL. Referring toFIG.3, each of the plurality of first touch electrode lines X-TEL may include a plurality of first touch electrodes X-TE disposed in the same row (or column) and at least one first touch connection electrode X-CL which electrically connects the plurality of first touch electrodes. Each of the plurality of second touch electrode lines Y-TEL may include a plurality of second touch electrodes Y-TE disposed in the same column (or row) and at least one second touch connection electrode Y-CL which electrically connects the plurality of second touch electrodes. 
InFIG.3, the first touch connection electrode X-CL which connects adjacent two first touch electrodes X-TE is metal which is connected to two adjacent first touch electrodes X-TE through a contact hole. Further, the second touch connection electrode Y-CL which connects adjacent two second touch electrodes Y-TE is metal which is integrated with two adjacent second touch electrodes Y-TE, but it is not limited thereto. In a region (a touch electrode line intersecting region) where the first touch electrode line X-TEL and the second touch electrode line Y-TEL intersect, the first touch connection electrode X-CL and the second touch connection electrode Y-CL may intersect. When the first touch connection electrode X-CL and the second touch connection electrode Y-CL intersect in the touch electrode line intersecting region, the first touch connection electrode X-CL and the second touch connection electrode Y-CL need to be located on different layers. Each of the plurality of first touch electrode lines X-TEL is electrically connected to a corresponding first touch pad X-TP by means of one or more first touch routing lines X-TL. For example, a first touch electrode X-TE which is disposed at the outermost side, among the plurality of first touch electrodes X-TE included in one first touch electrode line X-TEL, is electrically connected to a corresponding first touch pad X-TP by means of the first touch routing line X-TL. Each of the plurality of second touch electrode lines Y-TEL is electrically connected to a corresponding second touch pad Y-TP by means of one or more second touch routing lines Y-TL. For example, a second touch electrode Y-TE which is disposed at the outermost side, among the plurality of second touch electrodes Y-TE included in one second touch electrode line Y-TEL, is electrically connected to a corresponding second touch pad Y-TP by means of the second touch routing line Y-TL. Each of the plurality of first touch routing lines X-TL which is electrically connected to the plurality of first touch electrode lines X-TEL may extend to a portion where the encapsulation unit ENCAP is not provided while being disposed on the encapsulation unit ENCAP to be electrically connected to the plurality of first touch pads X-TP. Each of the plurality of second touch routing lines Y-TL which is electrically connected to the plurality of second touch electrode lines Y-TEL may extend to a portion where the encapsulation unit ENCAP is not provided while being disposed on the encapsulation unit ENCAP to be electrically connected to the plurality of second touch pads Y-TP. Here, the encapsulation unit ENCAP may be located in the display area AA and in some cases, may extend to the non-display area NA. FIG.4is a cross-sectional view taken along IV-IV′ ofFIG.3. Referring toFIG.4, when the touch panel TSP is embedded in the display panel DISP and the display panel DISP is implemented as an organic light emitting display panel, the touch panel TSP may be located on the encapsulation unit ENCAP in the display panel DISP. In other words, the plurality of touch electrodes TE may be located on the encapsulation layer ENCAP in the display panel DISP. The first transistor T1which is a driving transistor in each sub pixel SP in the display area AA is disposed on the substrate SUB. 
The first transistor T1includes a first node electrode NE1corresponding to a gate electrode, a second node electrode NE2corresponding to a source electrode or a drain electrode, a third node electrode NE3corresponding to a drain electrode or a source electrode, and a semiconductor layer SEMI. The first node electrode NE1and the semiconductor layer SEMI may overlap with a gate insulating layer GI therebetween. The second node electrode NE2is formed on an insulating layer INS to be in contact with one side of the semiconductor layer SEMI and the third node electrode NE3is formed on the insulating layer INS to be in contact with the other side of the semiconductor layer SEMI. The light emitting diode ED may include a first electrode E1corresponding to an anode electrode (or a cathode electrode), an emission layer EL formed on the first electrode E1, and a second electrode E2corresponding to a cathode electrode (or an anode electrode) formed on the emission layer EL. The first electrode E1is electrically connected to the second node electrode NE2of the first transistor T1which is exposed through a pixel contact hole which passes through the planarization layer PLN. The emission layer EL is formed on the first electrode EL of the emission area provided by the bank BANK. The emission layer EL may be formed by laminating a hole related layer, an emission layer, and an electron related layer on the first electrode EL in this order or in a reverse order. The second electrode E2is disposed to be opposite to the first electrode E1with the emission layer EL therebetween. The encapsulation unit ENCAP blocks moisture or oxygen from the outside from permeating into the light emitting diode ED which is vulnerable to the moisture or oxygen from the outside. Such an encapsulation unit ENCAP may be formed as one layer or as illustrated inFIG.9, may be formed by a plurality of layers PAS1, PCL, and PAS2. For example, when the encapsulation unit ENCAP is formed of a plurality of layers PAS1, PCL, and PAS2, the encapsulation unit ENCAP may include one or more inorganic encapsulation layers PAS1and PAS2and one or more organic encapsulation layer PCL. As a specific example, the encapsulation unit ENCAP may have a structure in which a first inorganic encapsulation layer PAS1, an organic encapsulation layer PCL, and a second inorganic encapsulation layer PAS2are sequentially laminated. Here, the organic encapsulation layer PCL may further include at least one organic encapsulation layer or at least one inorganic encapsulation layer. The first inorganic encapsulation layer PAS1is formed on the substrate SUB on which the second electrode E2corresponding to the cathode electrode is formed so as to be most adjacent to the light emitting diode ED. The first inorganic encapsulation layer PAS1is formed of an inorganic insulating material on which low-temperature deposition is allowed, such as silicon nitride SiNx, silicon oxide SiOx, silicon oxynitride SiON, or aluminum oxide Al2O3, but it is not limited thereto. Since the first inorganic encapsulation layer PAS1is deposited in a low temperature atmosphere, the first inorganic encapsulation layer PAS1may suppress the damage of the emission layer EL including an organic material which is vulnerable to a high temperature atmosphere during a deposition process. 
The organic encapsulation layer PCL may be formed to have a smaller area than the first inorganic encapsulation layer PAS1and in this case, the organic encapsulation layer PCL may be formed to expose both ends of the first inorganic encapsulation layer PAS1. The organic encapsulation layer PCL may serve as a buffer which relieves a stress between layers due to the bending of the display device with an integrated touch screen which is an organic light emitting display device and also serve to enhance a planarization performance. The organic encapsulation layer PCL may be formed of an organic insulating material, such as acrylic resin, epoxy resin, polyimide, polyethylene, or silicon oxy carbon (SiOC). The organic encapsulation layer PCL may be formed by an inkjet method. When the organic encapsulation layer PCL is formed by the inkjet method, at least one dam DAM may be formed in a boundary area of the non-display area NA and the display area AA or in a partial area in the non-display area NA. For example, as illustrated inFIG.4, a first primary dam DAM1which is located between a pad area in which the plurality of first touch pads X-TP and the plurality of second touch pads Y-TP in the non-display area and the display area AA and is adjacent to the display area AA and a secondary dam DAM2adjacent to the pad area may be disposed. One or more dams DAM may suppress the liquid type organic encapsulation layer PCL from flowing to the non-display area NA to invade the pad area until a liquid type organic encapsulation layer PCL is dropped in the display area AA and then hardened. The primary dam DAM1and/or the secondary dam DAM2may be formed with a single layer or multi-layered structure. For example, the primary dam DAM1and/or the secondary dam DAM2may be formed simultaneously with the same material as at least one of the bank BANK and the spacer (not illustrated). In this case, the dam structure may be formed without having the mask adding process and increasing the cost. Further, the primary dam DAM1and/or the secondary dam DAM2may be formed with a structure in which a first inorganic encapsulation layer PAS1and/or a second inorganic encapsulation layer PAS2are laminated on the bank BANK. Further, the organic encapsulation layer PCL including an organic material, as illustrated inFIG.5, may be located only on an inner surface of the primary dam DAM1. In contrast, the organic encapsulation layer PCL including an organic material may also be located above at least a part of the primary dam DAM1and the secondary dam DAM2. For example, the organic encapsulation layer PCL may be located above the primary dam DAM1. The second inorganic encapsulation layer PAS2may be formed on the substrate SUB on which the organic encapsulation layer PCL is formed so as to cover an upper surface and a side surface of the organic encapsulation layer PCL and the first inorganic encapsulation layer PAS1. The second inorganic encapsulation layer PAS2may minimize or block the permeation of the external moisture or oxygen into the first inorganic encapsulation layer PAS1and the organic encapsulation layer PCL. The second inorganic encapsulation layer PAS2is formed of an inorganic insulating material, such as silicon nitride SiNx, silicon oxide SiOx, silicon oxynitride SiON, or aluminum oxide Al2O3, but it is not limited thereto. A touch buffer layer T-BUF may be disposed on the encapsulation unit ENCAP. 
The touch buffer layer T-BUF may be located between a touch sensor metal, including the touch electrodes X-TE and Y-TE and the touch connection electrodes X-CL and Y-CL, and the second electrode E2of the light emitting diode ED. The touch buffer layer T-BUF may be designed to maintain a distance between the touch sensor metal and the second electrode E2of the light emitting diode ED at a predetermined minimum distance (for example, 1 μm). Accordingly, the touch buffer layer T-BUF may reduce or suppress parasitic capacitance formed between the touch sensor metal and the second electrode E2of the light emitting diode ED to suppress the touch sensitivity degradation due to the parasitic capacitance. The touch sensor metal including the plurality of touch electrodes X-TE and Y-TE and the plurality of touch connection electrodes X-CL and Y-CL may be disposed on the encapsulation unit ENCAP without providing the touch buffer layer T-BUF. Further, the touch buffer layer T-BUF may suppress the permeation of a chemical solution (a developer or an etchant) used for a manufacturing process of a touch sensor metal disposed on the touch buffer layer T-BUF or moisture from the outside into the emission layer EL including an organic material. By doing this, the touch buffer layer T-BUF may suppress the damage of the emission layer EL which is vulnerable to chemical solutions or moisture. The touch buffer layer T-BUF may be formed of an organic insulating material which is formed at a temperature lower than a predetermined temperature (for example, 100° C.) to suppress the damage of the emission layer EL including an organic material which is vulnerable to a high temperature. The organic insulating material has a low permittivity of 1 to 3. For example, the touch buffer layer T-BUF may be formed of acrylic, epoxy, or siloxane based material, but it is not limited thereto. The touch buffer layer T-BUF which is formed of an organic insulating material and has a planarization performance may suppress the damage of the encapsulation layers PAS1, PCL, and PAS2which configure the encapsulation unit ENCAP in accordance with the bending of the organic light emitting display device. Further, the touch buffer layer T-BUF may suppress the crack of the touch sensor metal formed on the touch buffer layer T-BUF. According to a mutual-capacitance based touch sensor structure, the first touch electrode line X-TEL and the second touch electrode line Y-TEL are disposed on the touch buffer layer T-BUF and the first touch electrode line X-TEL and the second touch electrode line Y-TEL may be disposed to intersect each other. The second touch electrode line Y-TEL may include a plurality of second touch electrodes Y-TE and a plurality of second touch connection electrodes Y-CL which electrically connect the plurality of second touch electrodes Y-TE. Referring toFIGS.3and4together, the plurality of second touch electrodes Y-TE may be spaced apart from each other with a regular interval along the second direction Y. Each of the plurality of second touch electrodes Y-TE may be electrically connected to another second touch electrode Y-TE adjacent thereto in the second direction Y through the second touch connection electrode Y-CL.
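Returning for a moment to the parasitic capacitance discussed above, its order of magnitude can be estimated with a simple parallel-plate model. The sketch below is a rough illustration only: the touch electrode area is a hypothetical assumption, while the micrometer-scale separation and the relative permittivity of about 1 to 3 follow the ranges mentioned above.

# Rough parallel-plate estimate of the coupling between the touch sensor metal
# and the second electrode E2; all geometry values are illustrative assumptions.
EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def parasitic_capacitance(area_m2, gap_m, eps_r):
    """Parallel-plate approximation: C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

AREA = (4e-3) ** 2  # one 4 mm x 4 mm touch electrode (hypothetical size)

for gap_um in (0.5, 1.0, 2.0, 5.0):
    for eps_r in (2.0, 3.0):
        c = parasitic_capacitance(AREA, gap_um * 1e-6, eps_r)
        print(f"gap = {gap_um:3.1f} um, eps_r = {eps_r:.1f} -> C ~ {c * 1e12:6.1f} pF")

Even in this crude model, a larger separation and a lower-permittivity material between the touch sensor metal and the second electrode E2 reduce the coupling capacitance, which is the role attributed to the touch buffer layer T-BUF above.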
The second touch connection electrode Y-CL is disposed on the same plane as the second touch electrode Y-TE to be electrically connected to two second touch electrodes Y-TE adjacent in the second direction Y without a separate contact hole or integrated with two second touch electrodes Y-TE adjacent in the second direction Y. The second touch connection electrode Y-CL may be disposed so as to overlap the bank BANK. Accordingly, the degradation of an aperture rate due to the second touch connection electrode Y-CL may be suppressed. The touch electrode line X-TEL may include a plurality of first touch electrodes X-TE and a plurality of first touch connection electrodes X-CL which electrically connect between the plurality of first touch electrodes X-TE. The plurality of first touch electrodes X-TE may be spaced apart from each other along a first direction X with a regular interval on the touch insulating layer ILD. Each of the plurality of first touch electrodes X-TE may be electrically connected to another first touch electrode X-TE adjacent thereto in the first direction X through the first touch connection electrode X-CL. Referring toFIG.4, the plurality of first touch electrodes X-TE and the plurality of first touch connection electrodes X-CL may be located on different layers with the touch insulating layer ILD therebetween. The first touch connection electrode X-CL is formed on the touch buffer layer T-BUF and is exposed through a touch contact hole which passes through the touch insulating layer ILD to be electrically connected to two first touch electrodes X-TE adjacent in the first direction X. The first touch connection electrode X-CL may be disposed so as to overlap the bank BANK. Accordingly, the degradation of an aperture rate due to the first touch connection electrode X-CL may be suppressed. In the meantime, the second touch electrode line Y-TEL may be electrically connected to the touch driving circuit TDC by means of the second touch routing line Y-TL and the second touch pad Y-TP. Similarly, the first touch electrode line X-TEL may be electrically connected to the touch driving circuit TDC by means of the first touch routing line X-TL and the first touch pad X-TP. A pad cover electrode which covers the first touch pad X-TP and the second touch pad Y-TP may be further disposed. The first touch pad X-TP may be separately formed from the first touch routing line X-TL or may be formed by extending the first touch routing line X-TL. The second touch pad Y-TP may be separately formed from the second touch routing line Y-TL or may be formed by extending the second touch routing line Y-TL. When the first touch pad X-TP is formed by extending the first touch routing line X-TL and the second touch pad Y-TP is formed by extending the second touch routing line Y-TL, the first touch pad X-TP, the first touch routing line X-TL, the second touch pad Y-TP, and the second touch routing line Y-TL may have the same first conductive material. Here, for example, the first conductive material may be formed to have a single layer or a multi-layered structure using a metal having high corrosion resistance, acid resistance, and good conductivity such as aluminum (Al), titanium (Ti), copper (Cu), or molybdenum (Mo), but is not limited thereto. 
For example, the first touch pad X-TP, the first touch routing line X-TL, the second touch pad Y-TP, and the second touch routing line Y-TL which are formed of the first conductive material may be formed with a triple-layered structure such as titanium (Ti)/aluminum (Al)/titanium (Ti) or molybdenum (Mo)/aluminum (Al)/molybdenum (Mo), but are not limited thereto. The pad cover electrode which covers the first touch pad X-TP and the second touch pad Y-TP may be configured with a second conductive material which is the same material as the first and second touch electrodes X-TE and Y-TE. Here, the second conductive material may be formed of a transparent conductive material having a strong corrosion resistance and acid resistance such as indium tin oxide (ITO) or indium zinc oxide (IZO). Such a pad cover electrode is formed to be exposed by the touch buffer layer T-BUF to be bonded with the touch driving circuit TDC or bonded with a circuit film in which the touch driving circuit TDC is mounted. The touch buffer layer T-BUF is formed to cover the touch sensor metal to suppress the corrosion of the touch sensor metal due to the moisture from the outside. For example, the touch buffer layer T-BUF may be formed of an organic insulating material or formed in the form of a circular polarizer or an epoxy or acrylic film. Such a touch buffer layer T-BUF may not be provided on the encapsulation unit ENCAP. For example, the touch buffer layer T-BUF may be omitted depending on the structure of the display device. The second touch routing line Y-TL may be electrically connected to the second touch electrode Y-TE through a touch routing line contact hole or may be integrally formed with the second touch electrode Y-TE. Such a second touch routing line Y-TL extends to the non-display area NA and passes through an upper portion and a side surface of the encapsulation unit ENCAP and an upper portion and a side surface of the dam DAM to be electrically connected to the second touch pad Y-TP. Accordingly, the second touch routing line Y-TL may be electrically connected to the touch driving circuit TDC by means of the second touch pad Y-TP. The second touch routing line Y-TL may transmit a touch sensing signal from the second touch electrode Y-TE to the touch driving circuit TDC or may be supplied with the touch driving signal from the touch driving circuit TDC to transmit the touch driving signal to the second touch electrode Y-TE. The first touch routing line X-TL may be electrically connected to the first touch electrode X-TE through a touch routing line contact hole or may be integrally formed with the first touch electrode X-TE. The first touch routing line X-TL extends to the non-display area NA and passes through an upper portion and a side surface of the encapsulation unit ENCAP and an upper portion and a side surface of the dam DAM to be electrically connected to the first touch pad X-TP. Accordingly, the first touch routing line X-TL may be electrically connected to the touch driving circuit TDC by means of the first touch pad X-TP. The first touch routing line X-TL may be supplied with the touch driving signal from the touch driving circuit TDC to transmit the touch driving signal to the first touch electrode X-TE or may transmit a touch sensing signal from the first touch electrode X-TE to the touch driving circuit TDC. The placement of the first touch routing line X-TL and the second touch routing line Y-TL may be changed in various manners depending on a panel design specification. 
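As an aside, the signal flow described above, in which the touch driving circuit TDC applies a touch driving signal to one set of electrode lines and receives a touch sensing signal from the intersecting set, can be sketched in software form. The sketch below is a hypothetical illustration only; the function names, the drive_and_sense interface, the threshold value, and the choice to drive the X lines while sensing the Y lines are all assumptions and are not part of the disclosed panel.

```python
# Hypothetical sketch of a mutual-capacitance scan performed by a touch
# driving circuit (TDC): drive each first electrode line (X-TEL), sense every
# second electrode line (Y-TEL), and report intersections whose measured
# capacitance drops below its baseline by more than a threshold.

from typing import Callable, List, Tuple

def scan_touch_panel(
    num_x_lines: int,
    num_y_lines: int,
    drive_and_sense: Callable[[int, int], float],  # assumed hardware access function
    baseline: List[List[float]],                   # per-node no-touch capacitance
    threshold: float = 0.15,                       # assumed relative-change threshold
) -> List[Tuple[int, int]]:
    """Return (x, y) intersections where the mutual capacitance dropped enough
    to be interpreted as a touch."""
    touched = []
    for x in range(num_x_lines):            # apply the touch driving signal to line x
        for y in range(num_y_lines):        # read the touch sensing signal on line y
            c_measured = drive_and_sense(x, y)
            c_baseline = baseline[x][y]
            # A finger near the X-TE / Y-TE intersection diverts field lines,
            # reducing the mutual capacitance at that node.
            if (c_baseline - c_measured) / c_baseline > threshold:
                touched.append((x, y))
    return touched

# Example with a stubbed sensing function: node (1, 2) reports a 30 % drop.
if __name__ == "__main__":
    base = [[1.0] * 4 for _ in range(3)]
    stub = lambda x, y: 0.7 if (x, y) == (1, 2) else 1.0
    print(scan_touch_panel(3, 4, stub, base))   # -> [(1, 2)]
```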
A touch protection layer PAC may be disposed on the first touch electrode X-TE and the second touch electrode Y-TE. The touch protection layer PAC may extend to a region before or after the dam DAM so as to be disposed also on the first touch routing line X-TL and the second touch routing line Y-TL. FIG. 5 is an enlarged view of an area A of FIG. 2. FIG. 6 is a cross-sectional view taken along the line VI-VI′ of FIG. 5. FIG. 7 is an enlarged view of an area B of FIG. 5. FIG. 8 is a cross-sectional view taken along the line VIII-VIII′ of FIG. 5. FIG. 9 is a cross-sectional view taken along the line VIIII-VIIII′ of FIG. 5. FIG. 5 is a view schematically illustrating a touch electrode TE. Referring to FIG. 5, the display area AA includes a camera area CA. The camera area CA may include an opening area CA1 which passes through the substrate SUB and a boundary area CA2 in the outer periphery of the opening area CA1, for example, between the display area AA and the opening area CA1. The encapsulation unit ENCAP may cover the display area AA and the boundary area CA2, and a plurality of first touch electrodes X-TE extending in a first direction X and a plurality of second touch electrodes Y-TE extending in a second direction Y may be disposed on the encapsulation unit ENCAP. The plurality of first touch electrodes X-TE include a plurality of touch blocks X-TB. For example, the plurality of first touch electrodes X-TE are gathered to form one block unit. Each of the plurality of touch blocks X-TB may be spaced apart from each other on the same plane. At this time, an area between the plurality of touch blocks X-TB is an area in which a plurality of first touch connection electrodes X-CL are disposed, and is referred to as a connection unit CLP. Although FIG. 5 illustrates that two first touch connection electrodes X-CL, which electrically connect two different touch blocks X-TB adjacent to each other in the first direction X, are disposed in each connection unit CLP, the number of first touch connection electrodes X-CL is not limited thereto. Referring to FIG. 6, the first touch connection electrode X-CL is not disposed on the same plane as the plurality of touch blocks X-TB and electrically connects two first touch electrodes X-TE adjacent to each other through a contact hole, with a touch insulating layer ILD therebetween. The first touch connection electrode X-CL may be disposed below the plurality of touch electrodes TE, but may also be disposed above the plurality of touch electrodes. Referring back to FIG. 5, the plurality of second touch electrodes Y-TE are disposed in a remaining area excluding the area in which the plurality of touch blocks X-TB are disposed and the camera area CA. Second touch electrodes Y-TE adjacent to the camera area CA, among the plurality of second touch electrodes Y-TE, may have a shape corresponding to the rounded shape of the camera area CA. The second touch connection electrode Y-CL is disposed on the same plane as the second touch electrodes Y-TE so as to be electrically connected to two second touch electrodes Y-TE adjacent in the second direction Y without a separate contact hole, or may be integrated with the two second touch electrodes Y-TE adjacent in the second direction Y. Referring to FIGS. 5 and 6 together, the second touch connection electrode Y-CL extending in the second direction Y may be disposed in an area overlapping the first touch connection electrode X-CL extending in the first direction X. 
For example, the first touch connection electrode X-CL and the second touch connection electrode Y-CL are not disposed on the same layer, but may be insulated from each other with the touch insulating layer ILD therebetween. A crack detecting electrode CDE may be disposed in the boundary area CA2between the opening area CA1and the touch block X-TB. The crack detecting electrode CDE may be disposed on the same layer as the plurality of touch electrodes TE. The crack detecting electrode CDE is insulated from the plurality of touch electrodes TE and at least some may be disposed on the same layer as the plurality of touch electrodes TE. The crack detecting electrode CDE is disposed between an outer periphery of the opening area CA1and an area in which the touch electrode TE is disposed to detect a crack which may be caused during a process of forming the opening area CA1. The crack detecting electrode CDE may include a first crack detecting electrode CDE1enclosing the camera area CA and a second crack detecting electrode CDE2which is disposed in the display area AA and extends in the second direction Y. The first crack detecting electrode CDE1and the second crack detecting electrode CDE2are disposed on the same plane to be electrically connected to each other. Even though not illustrated in the drawing, the second crack detecting electrode CDE2is connected to a third crack detecting electrode to be connected to a pad disposed in an edge of the display panel DISP opposite to an edge of the display panel DISP adjacent to the camera area CA. The third crack detecting electrode may be disposed in the display area AA and the non-display area NA so as to enclose the display area AA disposed inside from the camera area CA. When the crack is generated in the boundary area CA2, the first crack detecting electrode CDE1or the second crack detecting electrode CDE2are shorted to detect the crack of the boundary area CA2. The plurality of touch blocks TB may be divided into a first touch block X-TB1adjacent to the camera area CA, a second touch block X-TB2adjacent to the second crack detecting electrode CDE2, and a third touch block X-TB3which is a remaining touch block excluding the first touch block X-TB1and the second touch block X-TB2. The first touch block X-TB1is disposed to be adjacent to the camera area CA so that a part of the first touch block X-TB1corresponds to a round shape of the camera area CA. For example, the first touch block X-TB1may have a shape obtained when a part of the third touch block X-TB3is cut by the camera area CA. The third touch block X-TB3refers to a touch block which has a different size from that of the first touch block X-TB1and is disposed so as not to be adjacent to the camera area CA and the second crack detecting electrode CDE2. For example, the plurality of touch blocks TB disposed in a display area AA located inside from a display area AA adjacent to the camera area CA may be the third touch block X-TB3. The second touch block X-TB2has a size different from the third touch block X-TB3. The second crack detecting electrode CDE2is disposed on the same plane as the second touch block X-TB2to be disposed between two adjacent second touch blocks X-TB2and intersect the first touch connection electrode X-CL. However, the second crack detecting electrode CDE2is not electrically connected to any of the touch block TB or the touch connection electrode CL. 
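The crack detection principle described above relies on a conductive path routed from a pad, along the second crack detecting electrode CDE2, around the camera area via the first crack detecting electrode CDE1, and back, whose electrical behavior changes when the boundary area CA2 is damaged. The sketch below is a hypothetical illustration of such monitoring only; the description above speaks of the electrodes being shorted when a crack occurs, whereas this sketch simply flags any large deviation of the measured line resistance from its nominal value, and the threshold, measurement routine, and names are all assumptions rather than the disclosed circuit.

```python
# Hypothetical sketch: monitor the crack detecting electrode path around the
# camera area. Damage in the boundary area CA2 moves the measured resistance
# away from its nominal value (toward open circuit if the mesh is severed, or
# toward an unexpected low value if an unintended short forms), so a large
# deviation is treated as a crack.

def crack_detected(measured_ohms: float, nominal_ohms: float,
                   max_deviation_ratio: float = 0.5) -> bool:
    """Return True when the crack-detection line resistance deviates from its
    nominal value by more than the allowed ratio."""
    if measured_ohms == float("inf"):        # fully severed line reads as open
        return True
    deviation = abs(measured_ohms - nominal_ohms) / nominal_ohms
    return deviation > max_deviation_ratio

# Example with stubbed readings.
if __name__ == "__main__":
    print(crack_detected(110.0, nominal_ohms=100.0))          # False: within tolerance
    print(crack_detected(float("inf"), nominal_ohms=100.0))   # True: line severed
    print(crack_detected(5.0, nominal_ohms=100.0))            # True: unexpected short
```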
In other words, the second crack detecting electrode CDE2is disposed to intersect between at least one third touch block X-TB3so that one third touch block X-TB3may be changed to two second touch blocks X-TB2and the second crack detecting electrode CDE2is disposed between the second touch blocks X-TB2. The first touch connection electrode X-CL which electrically connects between the third touch blocks X-TB3may be additionally disposed between the second touch blocks X-TB2to electrically connect the second touch blocks X-TB2disposed on both ends of the second crack detecting electrodes CDE2. Referring toFIGS.7and8, the plurality of first touch electrodes X-TE are formed with a mesh pattern. For example, the first touch electrode X-TE may be an electrode metal EM which is patterned to have a mesh type to have two or more openings OA between two adjacent first touch electrodes X-TE. Each of two or more openings OA in each touch electrode TE may correspond to an emission area of one or more sub pixels SP. For example, the plurality of openings OA may serve as a path on which light emitted from the plurality of sub pixels SP disposed therebelow passes. In order to form a plurality of touch electrodes TE, the electrode metal ME is broadly formed to be a mesh type and then the electrode metal EM is cut to have a predetermined pattern to electrically separate the electrode metals EM. Consequently, a plurality of touch electrodes TE may be created. For example, the plurality of touch blocks TB may be formed by cutting an outer periphery of a pattern corresponding to a touch block shape which is the plurality of first touch electrodes X-TE. Even though inFIG.5, the touch block TB is illustrated as a quadrangular shape, various shapes such as a rhombus, a diamond, a triangle, and a pentagon are possible. Referring toFIG.7, the plurality of second touch electrodes Y-TE is formed with a mesh pattern. For example, each second touch electrode may be an electrode metal EM which is patterned to have a mesh type to have two or more openings OA between adjacent second touch electrodes Y-TE. Further, the plurality of second touch electrodes Y-TE are not connected to the plurality of touch blocks TB which are the plurality of first touch electrodes X-TE so that the edge of the plurality of second electrodes Y-TE adjacent to the plurality of touch blocks TB may be cut. Accordingly, the opening is also formed between the plurality of second touch electrodes Y-TE and the plurality of first touch electrodes X-TE. Referring toFIGS.7and9together, the second crack detecting electrode CDE2has a zigzag pattern. The second crack detecting electrode CDE2may be an electrode metal EM having two or more openings OA which is disposed between two adjacent second touch blocks X-TB2and patterned as a mesh type similarly to the plurality of touch electrodes TE. Here, the second crack detecting electrode CDE2is disposed on the same plane as the plurality of touch blocks TB so that an edge of the second crack detecting electrode CDE2may have a zigzag pattern corresponding to the edge of the second touch block X-TB2. The first touch connection electrode X-CL intersecting the second crack detecting electrode CDE2is disposed so as to overlap the electrode metal EM of the second crack detecting electrode CDE2. 
For example, the first touch connection electrode X-CL is not disposed on the same plane as the second crack detecting electrode CDE2 or the plurality of touch electrodes TE, so that it may overlap the second crack detecting electrode CDE2. Generally, when a crack detecting electrode intersecting the touch electrodes is formed as a pattern having a linear edge, there is a problem in that the crack detecting line is visible from the outside, unlike the adjacent touch electrodes having a mesh pattern. In the display device according to the exemplary embodiment of the present disclosure, the shapes of the touch electrode and the second crack detecting electrode CDE2 are unified, so that the external visibility of the second crack detecting electrode CDE2 may be minimized. Specifically, the second crack detecting electrode CDE2 is formed by cutting along the zigzag shape of the edge of the mesh pattern formed by the plurality of touch electrodes TE, so that the edge of the second crack detecting electrode CDE2 has a zigzag shape, to minimize the external visibility of the second crack detecting electrode CDE2. FIG. 10 is an enlarged view of a display device according to another exemplary embodiment of the present disclosure. FIG. 10 is a view schematically illustrating a touch electrode TE. A display device 1000 of FIG. 10 has substantially the same configuration as the display device 100 of FIGS. 1 to 9 except for the size of a second touch block X-TB2′ and the arrangement position of a second crack detecting electrode CDE2′, so that a redundant description will be omitted. Referring to FIG. 10, the second touch block X-TB2′ has the same size as the third touch block X-TB3. The second crack detecting electrode CDE2′ is disposed on the same plane as the second touch block X-TB2′ so as to be disposed between two adjacent second touch blocks X-TB2′ and to intersect the first touch connection electrode X-CL. The second crack detecting electrode CDE2′ is not electrically connected to any of the touch blocks TB or the touch connection electrodes CL. In other words, the second crack detecting electrode CDE2′ does not cross a third touch block X-TB3, but may be disposed in a connection unit, which is an area between adjacent third touch blocks X-TB3. For example, the second crack detecting electrode CDE2′ may be disposed so as to overlap the area in which the first touch connection electrode X-CL electrically connecting the adjacent third touch blocks X-TB3 is disposed. Accordingly, in the display device 1000 according to another exemplary embodiment of the present disclosure, the shapes of the touch electrode TE and the second crack detecting electrode CDE2′ are unified, so that the external visibility of the second crack detecting electrode CDE2′ may be minimized. Further, the first touch connection electrode X-CL is regularly disposed, so that the possibility that the first touch connection electrode X-CL is visible from the outside may be minimized. 
The exemplary embodiments of the present disclosure can also be described as follows: According to an aspect of the present disclosure, a display device includes a substrate including a camera area in which a camera is disposed, a display area which surrounds the camera area and includes a plurality of sub pixels, and a non-display area located at the outer periphery of the display area; a plurality of touch electrodes disposed in the display area; a touch connection electrode which connects two adjacent touch electrodes which are spaced apart from each other, among the plurality of touch electrodes; and a crack detecting electrode which surrounds the camera area and is disposed on the same layer as the plurality of touch electrodes, wherein the plurality of touch electrodes include a plurality of first touch electrodes extending in a first direction and a plurality of second touch electrodes extending in a second direction intersecting the first direction, wherein the crack detecting electrode includes a first crack detecting electrode which surrounds the camera area and a second crack detecting electrode which is disposed in the display area and extends in the second direction, and the second crack detecting electrode has a zigzag pattern. The first touch electrode may include a plurality of touch blocks, and the plurality of touch blocks may have a first touch block adjacent to the camera area, a second touch block adjacent to the second crack detecting electrode, and a third touch block excluding the first touch block and the second touch block. The second touch block may have the same size as the third touch block. The second touch block may have a size different from that of the third touch block. The second crack detecting electrode may be disposed in a connection unit disposed between two adjacent touch blocks among the plurality of touch blocks. The plurality of touch electrodes may have a mesh pattern. According to another aspect of the present disclosure, a display device includes a substrate including a camera area in which a camera is disposed, a display area which surrounds the camera area and includes a plurality of sub pixels, and a non-display area located at the outer periphery of the display area; an encapsulation unit which covers the display area; a plurality of first touch electrodes and a plurality of second touch electrodes disposed on the encapsulation unit to intersect in different directions; a touch connection electrode which connects two adjacent first touch electrodes among the plurality of first touch electrodes; and a crack detecting electrode which surrounds the camera area and is disposed on the same layer as the plurality of first touch electrodes and the plurality of second touch electrodes, wherein the crack detecting electrode includes a first crack detecting electrode which surrounds the camera area and a second crack detecting electrode which is disposed in the display area and extends in the second direction, and the second crack detecting electrode has a zigzag pattern. The first touch electrode may include a plurality of touch blocks, and the plurality of touch blocks may have a first touch block adjacent to the camera area, a second touch block adjacent to the second crack detecting electrode, and a third touch block excluding the first touch block and the second touch block. The second touch block may have the same size as the third touch block. The second touch block may have a size different from that of the third touch block. 
The second crack detecting electrode may have a mesh pattern and intersects the touch connection electrode. Although the exemplary embodiments of the present disclosure have been described in detail with reference to the accompanying drawings, the present disclosure is not limited thereto and may be embodied in many different forms without departing from the technical concept of the present disclosure. Therefore, the exemplary embodiments of the present disclosure are provided for illustrative purposes only but not intended to limit the technical concept of the present disclosure. The scope of the technical concept of the present disclosure is not limited thereto. Therefore, it should be understood that the above-described exemplary embodiments are illustrative in all aspects and do not limit the present disclosure. The protective scope of the present disclosure should be construed based on the following claims, and all the technical concepts in the equivalent scope thereof should be construed as falling within the scope of the present disclosure. | 51,630 |
11861101 | Throughout the drawings and the detailed description, the same reference numerals refer to the same elements. The drawings may not be to scale, and the relative size, proportions, and depiction of elements in the drawings may be exaggerated for clarity, illustration, and convenience. DETAILED DESCRIPTION The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the disclosure of this application. For example, the sequences of operations described herein are merely examples, and are not limited to those set forth herein, but may be changed as will be apparent after an understanding of the disclosure of this application, with the exception of operations necessarily occurring in a certain order. Also, descriptions of functions and constructions that are known in the art may be omitted for increased clarity and conciseness. The features described herein may be embodied in different forms, and are not to be construed as being limited to the examples described herein. Rather, the examples described herein have been provided merely to illustrate some of the many possible ways of implementing the methods, apparatuses, and/or systems described herein that will be apparent after an understanding of the disclosure of this application. Throughout the specification, when one element is described as being “connected to” or “coupled to” another element, the one element may be directly “connected to” or “coupled to” the other element, or there may be one or more other elements intervening therebetween. In contrast, when one element is described as being “directly connected to” or “directly coupled to” another element, there can be no other element intervening therebetween. As used herein, the term “and/or” includes any one and any combination of any two or more of the associated listed items. Although terms such as “first,” “second,” and “third” may be used herein to describe various elements, these elements are not to be limited by these terms. Rather, these terms are only used to distinguish one element from another element. Thus, a first element referred to in examples described herein may also be referred to as a second element without departing from the teachings of the examples. The terminology used herein is for describing various examples only, and is not to be used to limit the disclosure. The articles “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises,” “includes,” and “has” specify the presence of stated features, numbers, operations, members, elements, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, operations, members, elements, and/or combinations thereof. The features of the examples described herein may be combined in various ways as will be apparent after an understanding of the disclosure of this application. Further, although the examples described herein have a variety of configurations, other configurations are possible as will be apparent after an understanding of the disclosure of this application. 
Use herein of the term “may” with respect to an example or embodiment, e.g., as to what an example or embodiment may include or implement, means that at least one example or embodiment exists in which such a feature is included or implemented while all examples and embodiments are not limited thereto. FIG.1is a diagram illustrating an example of an exterior of an electronic device, andFIG.2is a diagram illustrating another example of an exterior of an electronic device. Referring toFIGS.1and2, an electronic device10may include a side unit50and a touch switch unit TSW. The side unit50may include a frame51which is a conductor, a cover52which is a non-conductor, and a glass53which is a non-conductor. The frame51may be a metal frame forming a central structure of the electronic device10. The cover52may be a non-conductor disposed on a rear surface of the frame51, and a material of the cover52may be glass or plastic, for example. The glass53may be a front display glass disposed on a front surface of the frame51, but is not limited thereto. As an example, the electronic device10may include the side unit50having a three-layer structure formed by the glass53, the frame51, and the cover52. As another example, the side unit50of the electronic device10may have a two-layer structure formed by the frame51and the cover52, and in this case, the frame51may be disposed on a center of the electronic device10, and the cover52may be disposed on the rear surface thereof. Referring toFIG.1, the touch switch unit TSW may include a first touch member TM1formed on the side unit50of the electronic device10to replace a mechanical button. Referring toFIG.2, the touch switch unit TSW may include a first touch member TM1and a second touch member TM2formed on the side unit50of the electronic device10to replace two mechanical buttons. For example, each of the first touch member TM1and the second touch member TM2may be a respective portion of the cover52. For example, referring toFIGS.1and2, the electronic device10may be implemented by a portable device such as a smartphone or a wearable device such as a smartwatch. However, the electronic device10is not limited to any specific device, and the electronic device10may be implemented by any portable or wearable electronic device, or any electronic device having a switch for operation control. A touch may include a touch corresponding to brief contact and a touch corresponding to pressing. Brief contact may refer to a simple contact which does not involve a pressing force, and pressing may refer to pressing force following the contact. Therefore, if not particularly specified, a touch may include both contact and force (pressing), or may be either one thereof. Referring toFIGS.1and2, the first touch member TM1and the second touch member TM2may not be externally exposed, and the touch members may be configured to have a structure which may not be visible from the outside through various passivation processes. FIG.1illustrates the example in which the touch switch unit TSW includes a single first touch member TM1, andFIG.2illustrates the example in which the touch switch unit TSW includes two first and second touch members TM1and TM2, but these examples are not limited thereto. 
The touch sensing device may have a structure in which a sensing electrode is disposed in the cover52formed of a non-conductive material such as glass, and differently from the general configuration in which a plurality of touch sensors are disposed in a metal case, the disadvantage in which it is difficult to identify a plurality of touch switches in the metal frame may be addressed. Descriptions of the elements having the same reference numeral and the same function will not be repeated, and only differences will be described. FIG.3is a cross-sectional diagram illustrating an example of an electronic device and a touch sensing device taken along the line III-III′ inFIG.1. Referring toFIGS.1and3, the electronic device10may include a side unit50, a touch switch unit TSW, and a touch sensing device100. The touch sensing device100may include a first touch sensing unit TSP1and a circuit unit800. As described above, the side unit50may include a cover52which is a non-conductor and a frame51which is a conductor and coupled to the cover52. As an example, the frame51may include an internal structure51-S disposed in the electronic device. The touch switch unit TSW may include a first touch member TM1which is a portion of the cover52. The first touch member TM1may refer to an active area of the cover52in which touch sensing may be available. The first touch sensing unit TSP1may include a first sensing electrode SE1, a first sensing coil LE1, and a first connection wire W10. The first sensing electrode SE1may be a conductive electrode, and may be disposed in the first touch member TM1to generate a parasitic capacitance between the first touch member TM1and a human body when the human body touches the first touch member TM1. The first sensing coil LE1may be electrically connected to the first sensing electrode SE1and may be mounted on a substrate200disposed in the electronic device10. The first connection wire W10may include one end T1connected to the first sensing electrode SE1and the other end T2connected to the first sensing coil LE1through a connection pad200-P, and may electrically connect the first sensing electrode SE1to the first sensing coil LE1. As an example, the first connection wire W10may improve freedom of placement of the first sensing coil LE1. The connection pad200-P may be configured as a conductive pad for electrically connecting the substrate200, the first sensing electrode SE1, and the first sensing coil LE1to one another. The circuit unit800may be mounted on the substrate200, may be connected to the first sensing coil LE1through the substrate200, and may detect whether a touch is applied to the first touch member TM1based on a first oscillation signal having a resonant frequency that changes in response to the first touch member TM1being touched. For example, the circuit unit800may detect whether a touch is applied to the first touch member TM1based on a first oscillation signal having a first resonant frequency when the first touch member TM1is not being touched, and a second resonant frequency different from the first resonant frequency when the first touch member TM1is being touched. InFIG.3, a support member300may support the substrate200and may be supported by the internal structure51-S of the frame51. FIG.4is a cross-sectional diagram illustrating the example of an electronic device and a touch sensing device ofFIG.3taken along the line IV-IV′ inFIG.1. 
Referring toFIGS.1and4, the first sensing electrode SE1may be spaced apart from an inner surface TM1-F of the first touch member TM1by a fixed distance, and may be disposed to oppose the inner surface TM1-F of the first touch member TM1. The gap between the first sensing electrode SE1and the inner surface TM1-F of the first touch member TM1may be a gap that results in a parasitic capacitance being generated when a human body1touches the first touch member TM1. As an example, the first sensing coil LE1may be spaced apart from an inner surface51-F of the frame51by a gap D1and may be disposed to oppose the inner surface51-F of the frame51, and the gap D1between the first sensing coil LE1and the inner surface51-F of the frame51may change when the frame51is pressed by a touch. Accordingly, an inductance of the first sensing coil LE1may change. In detail, when a touch (e.g., a pressing) is applied through the frame51, an inductance of the first sensing coil LE1may change according to a change in the gap D1between the first sensing coil LE1and the frame51. When the gap D1between the first sensing coil LE1and the frame51changes by the pressing while a current flows in the first sensing coil LE1, the inductance of the first sensing coil LE1may change (e.g., be reduced) by application of an eddy current generated by the change in the gap D1, and a resonant frequency based on the inductance may increase. Referring toFIGS.3and4, when a human body1(e.g., a hand) touches the touch member TM1, by disposing the first sensing electrode SE1for generating a parasitic capacitance in the cover52, the touch sensing may be available using the first touch member TM1, which is a portion of the side unit50, without a physical key on the side unit50of the electronic device10. The first sensing coil LE1may be mounted on the substrate200, and may be electrically connected to the circuit unit800(seeFIGS.9A,9B, and13) of the substrate200through the connection pad200-P. The first connection conductor W10may be a conductor wire or a conductor line using a flexible PCB, but is not limited thereto, and any conductor line which may be electrically connected may be used. Also, because the first sensing electrode SE1is connected to the first sensing coil LE1through the connection wire such as the first connection wire W10, the position in which the first sensing coil LE1is disposed may not be limited such that the placement of the first sensing coil LE1may be determined freely. For example, the substrate200may be configured as a rigid PCB or a flexible PCB, but is not limited thereto, and may have various shapes. For example, the first sensing coil LE1may be variously implemented as a device having an inductance such as a chip inductor or a pattern having an inductance, in addition to a PCB coil. Also, the touch sensing device100may include a capacitance circuit included in the circuit unit800, or a capacitor circuit or a capacitor component mounted on the substrate200as an external component of the circuit unit800. The circuit unit800may be configured as an integrated circuit (IC). A shape of the first sensing electrode SE1may not be limited to any particular shape, and may have various shapes such as a circular shape or a quadrangular shape. 
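As a numeric aside on the force-sensing relationship described above, in which pressing the frame reduces the gap D1, an eddy current reduces the effective inductance of the first sensing coil LE1, and the resonant frequency based on that inductance increases, the following sketch shows the direction of the frequency change. All component values are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical numeric sketch of the force-sensing principle: a smaller gap D1
# between the frame and the first sensing coil LE1 means stronger eddy-current
# coupling, a lower effective inductance, and therefore a higher resonant
# frequency f = 1 / (2*pi*sqrt(L*C)). The values below are assumptions.

import math

def resonant_frequency_hz(L_henry: float, C_farad: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

C = 100e-12          # assumed tank capacitance: 100 pF
L_no_press = 1.0e-6  # assumed coil inductance with the nominal gap D1
L_pressed = 0.9e-6   # assumed reduced inductance when the gap shrinks under pressing

f_idle = resonant_frequency_hz(L_no_press, C)
f_pressed = resonant_frequency_hz(L_pressed, C)
print(f"idle:    {f_idle / 1e6:.3f} MHz")
print(f"pressed: {f_pressed / 1e6:.3f} MHz (higher, as described above)")
```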
A change in capacitance may be sensed using the first sensing electrode SE1 to detect a change in the frequency of the first oscillation circuit, but the method of sensing a change in capacitance is not limited thereto, and a method of sensing a capacitance without a sensing electrode may be used. For example, one or more touch sensors may be implemented, and when a plurality of touch members are included, the configuration may be applied to implement a slide operation using touch sensing. FIG. 5 is a cross-sectional diagram illustrating an example of an electronic device and a sensing electrode of a touch sensing device taken along the line IV-IV′ in FIG. 1. The difference between the example in FIG. 5 and the example in FIG. 4 may be the placement of the first sensing electrode SE1. Referring to FIG. 5, the first sensing electrode SE1 may be disposed on the inner surface TM1-F of the first touch member TM1. As an example, the first sensing electrode SE1 may have a structure in which a printing metal is printed on the cover 52, but the structure of the first sensing electrode SE1 is not limited thereto. FIG. 6 is a cross-sectional diagram illustrating an example of a shielding material between a metal frame and a wire taken along the line IV-IV′ in FIG. 1. A difference between the example in FIG. 6 and the example in FIG. 4 is that the touch sensing device 100 in the example in FIG. 6 may further include a shielding material 610. Referring to FIG. 6, the shielding material 610 may be disposed between the first connection wire W10 and an internal conductor (e.g., a frame) of the electronic device. For example, the shielding material 610 may be disposed on the first connection wire W10 opposing the frame 51. For example, the frame 51, which is a conductor, may form a structure of an electronic device such as a mobile phone and may include an internal structure 51-S formed in the electronic device, and the first sensing electrode SE1 may be electrically connected to the substrate 200 through the first connection wire W10. In this case, when a human body 1 (e.g., a hand) touches the frame 51, which is a conductor, an unwanted parasitic capacitance may be formed between the frame 51 and the first connection wire W10 adjacent to the frame 51. For example, a parasitic capacitance does not occur only when a human body 1 touches the first touch member TM1, but may also occur between the frame 51 and the first connection wire W10 when a human body 1 touches the frame 51, such that malfunctioning, such as an unintended change in the frequency of the first oscillation circuit, may occur. To prevent such malfunctioning, the electronic device may include the shielding material 610, which performs an electrical shielding function between the first connection wire W10 and the conductive frame 51, such that a parasitic capacitance is not generated between the first connection wire W10 and the frame 51. FIG. 7 is a cross-sectional diagram illustrating another example of a shielding material between a metal frame and a wire taken along the line IV-IV′ in FIG. 1. The difference between the example in FIG. 7 and the example in FIG. 4 is the shielding material 620. The shielding material 620 in FIG. 7 may be configured to surround the entire first connection wire W10 by coating the first connection wire W10 with an insulating material, to insulate the internal conductor (e.g., the internal structure 51-S of the frame) of the electronic device 10 from the first connection wire W10. 
In this case, it may be possible to prevent a parasitic capacitance from being generated not only between the first connection wire W10 and the conductive frame 51 but also between the first connection wire W10 and any other conductive material, such that shielding of the first connection wire W10 may be ensured. As described above, examples of a shielding material between the first connection wire W10 and the metal frame 51 are illustrated in FIGS. 6 and 7, but the shielding material is not limited thereto. FIG. 8 is a cross-sectional diagram illustrating an example of a placement of a sensing coil taken along the line IV-IV′ in FIG. 1. The difference between the example in FIG. 8 and the example in FIG. 4 is the placement of the first sensing coil LE1. Referring to FIG. 8, as an example, the first sensing coil LE1 illustrated in FIG. 4 may be disposed in a first position P1. As another example, as illustrated in FIG. 8, the first sensing coil LE1 may be disposed in a second position P2 different from the first position P1. As described above, the first connection wire W10 may be formed of a flexible wire electrically connecting one end connected to the first sensing electrode to the other end connected to the first sensing coil, and when the first connection wire W10, which is a flexible wire, is used, the first sensing coil LE1 may be connected to the first sensing electrode SE1 regardless of the placement of the first sensing coil LE1. Accordingly, the first sensing coil LE1 may be disposed in the second position P2 rather than the first position P1 of the frame 51; that is, the first sensing coil LE1 may be disposed in any position as long as the first sensing coil LE1 can be connected using the first connection wire W10, even if the first sensing coil LE1 is not disposed in the first position P1 of the frame 51. Referring to FIGS. 4 and 8, the first sensing coil LE1 may be disposed freely without being limited to the structure or the position of the frame, and accordingly, efficient placement and design may be available, and the freedom of placement of the first sensing coil LE1 may improve. FIG. 9A is a diagram illustrating an example of an equivalent first oscillation circuit when a touch is not applied, and FIG. 9B is a diagram illustrating an example of an equivalent first oscillation circuit when a touch is applied. Referring to FIGS. 9A and 9B, the circuit unit 800 may include a first oscillation circuit 831 which may generate a first oscillation signal having a resonant frequency that changes in response to the first touch member TM1 being touched. For example, the first oscillation circuit 831 may generate a first oscillation signal having a first resonant frequency when the first touch member TM1 is not being touched, and a second resonant frequency different from the first resonant frequency when the first touch member TM1 is being touched. As an example, the first oscillation circuit 831 may include an inductance circuit 831-L including the first sensing coil LE1, a capacitance circuit 831-C including two capacitor elements each having a capacitance 2C mounted on the substrate 200, and an amplifier circuit 831-A for maintaining a resonance state in the first oscillation circuit 831. However, the amplifier circuit 831-A is not limited to a function of amplification. For example, the amplifier circuit 831-A may be an inverter or an amplifier. 
Also, the amplifier circuit 831-A may have a negative resistance such that the resonant circuit maintains a resonance state and oscillates, thereby generating an oscillation signal having a corresponding resonant frequency. Referring to FIG. 9A, when the human body (e.g., a hand) does not touch the first touch member TM1, the inductance circuit 831-L may provide an inductance L, and the capacitance circuit 831-C may provide a capacitance C (C = 2C∥2C). In this case, the resonant frequency may be expressed by Equation 1 below.

f = 1/[2π*sqrt(L*C)], C ≈ 2C∥2C   (1)

Referring to FIG. 9B, when a human body (e.g., a hand) touches the first touch member TM1, the inductance circuit 831-L may provide an inductance L, and the capacitance circuit 831-C may provide a capacitance C (C = 2C∥(2C+CT)) which is varied by the parasitic capacitance. In this case, the resonant frequency may be expressed by Equation 2 below.

f = 1/[2π*sqrt(L*C)], C ≈ 2C∥(2C+CT), CT ≈ (Cpa∥Cg)   (2)

Referring to FIG. 9B, when a human body (e.g., a hand) touches the first touch member TM1, a parasitic capacitance may be generated between the first sensing electrode SE1 in the cover 52 and the human body, such that the magnitude of the equivalent capacitance C of the first oscillation circuit 831 may be changed by the parasitic capacitance. For example, referring to Equation 2, the magnitude of the equivalent capacitance C of the first oscillation circuit 831 may increase, which may decrease the resonant frequency, and by sensing the decrease, a touch of the first touch member TM1 may be recognized. The first oscillation circuit 831 in FIG. 9B may further include a parasitic capacitance Cpa and a ground return capacitance Cg. Accordingly, the first oscillation circuit 831 in FIG. 9B may generate a first oscillation signal having a frequency that is varied by the parasitic capacitance Cpa and the ground return capacitance Cg added according to a touch of the touch member TM1. That is, the touch sensing of when the first touch member TM1 is touched by the human body may depend on a change in capacitance, rather than a change in inductance, among the components of the first oscillation circuit 831. The magnitude of the frequency change caused by such a touch is very small, but if the first oscillation signal is amplified and digitally processed, it is possible to distinguish between when a touch is actually applied and when a touch is not applied. In Equations 1 and 2, ≈ denotes sameness or similarity, and the term "similarity" means that other values may be further included. In other words, there may be other parameters affecting the resonant frequency f that may be included in Equations 1 and 2. In Equations 1 and 2, "a∥b" indicates that capacitances "a" and "b" are connected in series, and an equivalent capacitance thereof is calculated as "(a*b)/(a+b)." In Equation 2, "Cpa" may be a parasitic capacitance present between the human body and the first sensing electrode SE1 in the cover 52 and between the cover 52 and the first sensing coil LE1, and "Cg" is a ground return capacitance between a circuit ground and earth. When comparing Equation 1 (when no touch is applied) and Equation 2 (when a touch is applied), the capacitance (2C) of Equation 1 increases to the capacitance (2C+CT) of Equation 2, and accordingly, the first resonant frequency without a touch may decrease to the second resonant frequency with a touch. FIG. 10 is a diagram illustrating an example of an internal structure of an electronic device. 
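Before turning to FIG. 10, the frequency shift given by Equations 1 and 2 can be illustrated numerically. The sketch below is a hypothetical worked example only; every component value is an assumption chosen to show that adding CT increases the equivalent capacitance and lowers the resonant frequency slightly, as the description states.

```python
# Hypothetical numeric illustration of Equations 1 and 2: when the first touch
# member TM1 is touched, the series combination Cpa || Cg adds CT to one branch
# of the capacitance circuit, the equivalent capacitance C grows, and the
# resonant frequency f = 1 / (2*pi*sqrt(L*C)) falls. All values are assumptions.

import math

def series(a: float, b: float) -> float:
    """Equivalent capacitance of two capacitances in series ("a || b" in the text)."""
    return (a * b) / (a + b)

L = 1.0e-6        # assumed inductance of the first sensing coil LE1 (1 uH)
two_C = 200e-12   # assumed value of each 2C capacitor element (200 pF)
Cpa = 5e-12       # assumed parasitic capacitance through the body (5 pF)
Cg = 50e-12       # assumed ground return capacitance (50 pF)
CT = series(Cpa, Cg)

C_no_touch = series(two_C, two_C)        # Equation 1: C ~ 2C || 2C
C_touch = series(two_C, two_C + CT)      # Equation 2: C ~ 2C || (2C + CT)

f = lambda C: 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(f"no touch: {f(C_no_touch) / 1e6:.3f} MHz")
print(f"touched:  {f(C_touch) / 1e6:.3f} MHz (slightly lower, so the shift marks a touch)")
```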
Referring toFIG.10, the electronic device10may include a side unit50, a touch switch unit TSW, and a touch sensing device100. The touch sensing device100may include a first touch sensing unit TSP1, a first force sensing unit FSP1, and a circuit unit800. As described above, the side unit50may include the cover52, a non-conductor, and the frame51, a conductor, coupled to the cover52. The touch switch unit TSW may include a first touch member TM1that is a portion of the cover52and a first force member FM1that is a portion of the frame51. The first touch member TM1may refer to an active area of the cover52in which touch sensing may be available, and the first force member FM1may refer to an active area of the frame51in which the force sensing may be available. The first touch sensing unit TSP1may include a first sensing electrode SE1disposed in the cover52and a first sensing coil LE1, which are electrically connected to each other. When a touch (e.g., a contact) of the human body is applied through the first touch member TM1, in the first touch sensing unit TSP1, a capacitance may vary according to the parasitic capacitance generated between the first sensing electrode SE1, the first touch member TM1, and the human body by the touch (e.g., a contact). The first force sensing unit FSP1may include the first sensing coil LE1disposed to be spaced apart from the inner surface of the frame51by a gap D1. In the first force sensing unit FSP1, when a touch (e.g., a pressing) of the human body is applied through the first force member FM1, an inductance of the first sensing coil LE1may be varied according to a change in the gap D1between the first sensing coil LE1and the first force member FM1. The first touch sensing unit TSP1and the first force sensing unit FSP1may share a single first sensing coil LE1and may perform hybrid sensing including touch sensing and force sensing. The circuit unit800may be mounted on the substrate200, may be connected to the first sensing coil LE1through the substrate200, and may detect whether the first touch member TM1is being touched and whether the first force member FM1is being pressed based on a first oscillation signal having a resonant frequency that changes in response to the first touch member TM1being touched (either contact or pressing), and changes in response to the first force member FM1being pressed. FIG.11is a diagram illustrating another example of an internal structure of an electronic device. The electronic device10illustrated inFIG.11may further include a second touch sensing unit TSP2and a second force sensing unit FSP2in addition to the electronic device10illustrated inFIG.10. The second touch sensing unit TSP2may include a second sensing electrode SE2disposed in the cover52and a second sensing coil LE2, which are electrically connected to each other. When a touch (e.g., a contact) of the human body1is applied through the second touch member TM2which is a portion of the cover52, a capacitance of the second touch sensing unit TSP2may vary according to a parasitic capacitance generated between the second sensing electrode SE2, the second touch member TM2, and the human body by the touch (e.g., a contact). The second force sensing unit FSP2may include the second sensing coil LE2disposed to be spaced apart from the internal surface of the frame51by a gap D2. 
When a touch (e.g., a pressing) of the human body is applied through the second force member FM2, which is a portion of the frame 51, an inductance of the second sensing coil LE2 may vary according to a change in the gap D2 between the second sensing coil LE2 and the second force member FM2. The second touch sensing unit TSP2 and the second force sensing unit FSP2 may share a single second sensing coil LE2 to perform hybrid sensing including touch sensing and force sensing. The circuit unit 800 may be mounted on the substrate 200 and may be connected to the first sensing coil LE1 and the second sensing coil LE2 through the substrate 200, and may detect whether the first touch member TM1 is being touched (contact or pressing) and whether the first force member FM1 is being pressed based on a first oscillation signal having a resonant frequency that changes in response to the first touch member TM1 being touched (contact or pressing), and changes in response to the first force member FM1 being pressed. Also, the circuit unit 800 may detect whether the second touch member TM2 is being touched (contact or pressing) and whether the second force member FM2 is being pressed based on a second oscillation signal having a resonant frequency that changes in response to the second touch member TM2 being touched (contact or pressing), and changes in response to the second force member FM2 being pressed. Referring to FIGS. 10 and 11, the first touch sensing unit TSP1 may include a first connection wire W10 electrically connecting the first sensing electrode SE1 to the first sensing coil LE1. The first connection wire W10 may include one end T1 connected to the first sensing electrode SE1 and the other end T2 connected to the first sensing coil LE1 through a connection pad 200-P. Similarly, a second connection wire W20 may electrically connect the second sensing electrode SE2 to the second sensing coil LE2. The first force sensing unit FSP1 may include a first support member 300-10. The first support member 300-10 may include a first body member 300-1 and first pillar members 300-11 and 300-12. The first body member 300-1 may be supported by the internal structure 51-S of the frame 51 and may support the substrate 200 on which the first sensing coil LE1 is mounted. The first pillar members 300-11 and 300-12 may be supported by the first body member 300-1 and may be attached to the frame 51 at two points on the frame 51 adjacent to opposite ends of the first force member FM1. Although FIGS. 10 and 11 may appear to show that the first pillar members 300-11 and 300-12 are supported by the substrate 200, this is because the substrate 200 obscures the bottoms of the first pillar members 300-11 and 300-12. The second force sensing unit FSP2 may include a second support member 300-20. The second support member 300-20 may include a second body member 300-2 and second pillar members 300-21 and 300-22. The second body member 300-2 may be supported by the internal structure 51-S of the frame 51, and may support the substrate 200 on which the second sensing coil LE2 is mounted. The second pillar members 300-21 and 300-22 may be supported by the second body member 300-2, and may be attached to the frame 51 at two points on the frame 51 adjacent to opposite ends of the second force member FM2. Although FIG. 11 may appear to show that the second pillar members 300-21 and 300-22 are supported by the substrate 200, this is because the substrate 200 obscures the bottoms of the second pillar members 300-21 and 300-22. 
Referring to FIGS. 4, 10, and 11, the first sensing electrode SE1 may be spaced apart from the inner surface TM1-F of the first touch member TM1 by a fixed distance, and may be disposed to oppose the inner surface TM1-F of the first touch member TM1. In this case, the first sensing coil LE1 may be spaced apart from the inner surface 51-F of the frame 51 by a gap D1 and may be disposed to oppose the inner surface 51-F of the frame 51, and the gap D1 between the first sensing coil LE1 and the inner surface 51-F of the frame 51 may change when the frame 51 is pressed by a touch. Referring to FIGS. 5, 10, and 11, the first sensing electrode SE1 may be disposed on the inner surface TM1-F of the first touch member TM1. In FIGS. 5, 10, and 11, the first and second connection wires W10 and W20 may be a conductor wire or a conductor line using a flexible PCB, but are not limited thereto, and any conductor line which may be electrically connected may be used. Also, because the sensing electrode and the sensing coil are connected through a connection wire such as the first and second connection wires W10 and W20, the position in which the sensing coil is disposed may not be limited to any particular position, such that there may be freedom in determining the placement of the corresponding sensing coil. Also, the touch sensing device 100 may include a substrate 200 and a support member 300. Each of the first sensing coil LE1 and the second sensing coil LE2 may be mounted on the substrate 200. The support member 300 may include first and second support members 300-10 and 300-20 installed on the frame 51 and supporting the substrate 200. As an example, referring to FIG. 11, the support member 300 and the substrate 200 may support the first sensing coil LE1 so that the first sensing coil LE1 is spaced apart from the inner surface of the first force member FM1 by a gap D1, and may support the second sensing coil LE2 so that the second sensing coil LE2 is spaced apart from the inner surface of the second force member FM2 by a gap D2. In the electronic device having the touch sensing device illustrated in FIGS. 10 and 11, for example, the first sensing coil LE1 may be spaced apart from the frame 51 by a gap D1, and when a pressing force is applied to the frame 51, an inductance of the first sensing coil LE1 may change according to a change in the gap D1 between the frame 51 and the first sensing coil LE1, such that force sensing may be available. Also, the first sensing electrode SE1 and the first sensing coil LE1 may be electrically connected to each other through the first connection wire W10, and when the first touch member TM1 of the cover 52 is touched, a parasitic capacitance between the first sensing electrode SE1 disposed in the cover 52 and the human body may change, such that touch sensing may be available. According to the examples in FIGS. 10 and 11, touch sensing and force sensing may be simultaneously detected using a single sensing coil. This structure may be extended to several channels having the same structure. In the examples in FIGS. 10 and 11, since both touch sensing and force sensing may be performed using a single first sensing coil, there is an advantage in terms of cost and structural aspects. FIG. 12 is a diagram illustrating a modified example of the electronic device illustrated in FIG. 10. Referring to FIGS. 10 and 12, the first connection wire W10 in FIG. 10 may be insulated from the conductive frame 51 and the internal structure 51-S by a shielding material 620. 
For example, the shielding material620may be configured to surround the first connection wire W10by coating the first connection wire W10with an insulating material to insulate the internal conductor (e.g., a frame) from the first connection wire W10. FIG.13is a diagram illustrating an example of a circuit unit illustrated inFIG.11. Referring toFIGS.11and13, the touch sensing device100may include a circuit unit800. The circuit unit800may be connected to the first touch sensing unit TSP1, the first force sensing unit FSP1, the second touch sensing unit TSP2, and the second force sensing unit FSP2, and may be mounted on the substrate200. The circuit unit800may include a first oscillation circuit831, a second oscillation circuit832, and a touch detection circuit850. The first oscillation circuit831may be connected to the first touch sensing unit TSP1and the first force sensing unit FSP1, and may generate a first oscillation signal Sd1having a resonant frequency that changes in response to the first touch member TM1being touched, and changes in response to the first force member FM1being pressed. For example, the first oscillation signal Sd1may have a first resonant frequency in response to the first touch member TM1not being touched and the first force member FM1not being pressed, a second resonant frequency in response to the first touch member TM1being touched and the first force member FM1not being pressed, a third resonant frequency in response to the first touch member TM1not being touched and the first force member FM1being pressed, and a fourth resonant frequency in response to the first touch member TM1being touched and the first force member FM1being pressed. The first resonant frequency, the second resonant frequency, the third resonant frequency, and the fourth resonant frequency may be different from one another. The second oscillation circuit832may be connected to the second touch sensing unit TSP2and the second force sensing unit FSP2, and may generate a second oscillation signal Sd2having a resonant frequency that changes in response to the second touch member TM2being touched, and changes in response to the second force member FM2being pressed. For example, the second oscillation signal Sd2may have a fifth resonant frequency in response to the second touch member TM2not being touched and the second force member FM2not being pressed, a sixth resonant frequency in response to the second touch member TM2being touched and the second force member FM2not being pressed, a seventh resonant frequency in response to the second touch member TM2not being touched and the second force member FM2being pressed, and an eighth resonant frequency in response to the second touch member TM2being touched and the second force member FM2being pressed. The fifth resonant frequency, the sixth resonant frequency, the seventh resonant frequency, and the eighth resonant frequency may be different from one another. The touch detection circuit850may detect whether the first touch member TM1is being touched and whether the first force member FM1is being pressed based on the resonant frequency of the first oscillation signal Sd1, and may detect whether the second touch member TM2is being touched and whether the second force member FM2is being pressed based on the resonant frequency of the second oscillation signal Sd2. The touch detection circuit850may include a first detection circuit to process the first oscillation signal Sd1, and a second detection circuit to process the second oscillation signal Sd2. 
In this case, the fifth resonant frequency may be equal to the first resonance frequency, the sixth resonant frequency may be equal to the second resonance frequency, the seventh resonant frequency may be equal to the third resonance frequency, and the eighth resonant frequency may be equal to the fourth resonance frequency. However, this is just an example, and different ones of the fifth resonant frequency, the sixth resonant frequency, the seventh resonant frequency, and the eighth resonant frequency may be equal to different ones of the first resonant frequency, the second resonant frequency, the third resonant frequency, and the fourth resonant frequency. Alternatively, each of the fifth resonant frequency, the sixth resonant frequency, the seventh resonant frequency, and the eighth resonant frequency may be different from each of the first resonant frequency, the second resonant frequency, the third resonant frequency, and the fourth resonant frequency. Alternatively, the touch detection circuit850may include a single detection circuit to process both the first oscillation signal Sd1and the second oscillation signal Sd2. In this case, each of the fifth resonant frequency, the sixth resonant frequency, the seventh resonant frequency, and the eighth resonant frequency may be different from each of the first resonant frequency, the second resonant frequency, the third resonant frequency, and the fourth resonant frequency. FIG.14is a diagram illustrating an example of a first sensing coil,FIG.15is a diagram illustrating another example of a first sensing coil, andFIG.16is a diagram illustrating another example of a first sensing coil. Referring toFIG.14, the first sensing coil LE1may be a coil component, and in this case, the coil component may be connected to the connection pad200-P and may be mounted on the substrate200. Referring toFIG.15, the first sensing coil LE1may be a PCB pattern coil, and in this case, the PCB pattern coil may be connected to the connection pad200-P and may be printed on a portion of a surface of the substrate200. Referring toFIG.16, the first sensing coil LE1may be an embedded coil, and in this case, the embedded coil may be connected to the connection pad200-P and be embedded in the substrate200. Referring toFIGS.14to16, the first sensing coil LE1may be various types of coils, but is not limited to any particular type of coil. Also, the first connection wire W10may be a rigid conductor or may be a flexible conductor, and because the first sensing electrode SE1may be electrically connected to the first sensing coil LE1by the first connection wire W10, the first sensing coil LE1connected to the first sensing electrode SE1may be disposed in a space in which the first sensing coil LE1is disposed without limitation. FIG.17is a cross-sectional diagram illustrating another example of an electronic device and a touch sensing device taken along the line IV-IV′ inFIG.1. The touch sensing structure illustrated inFIG.17may be different from the touch sensing structure illustrated inFIG.4in that the touch sensing structure may be installed in the internal structure51-S of the frame51and may further include a dielectric member51D on which the first touch member TM1is disposed. 
The dielectric member51D may be a member having a predetermined dielectric constant disposed on a portion of the frame51, and may be implemented by glastic formed by synthesizing glass and plastic, but the dielectric member51D is not limited thereto as long as the dielectric member51D is a member having a dielectric constant that may generate a parasitic capacitance by a touch from a human body. Accordingly, the first sensing electrode SE1may be disposed on a surface of the dielectric member51D. The first sensing coil LE1may be electrically connected to the first sensing electrode SE1and may be mounted on the substrate200disposed in the electronic device. The first connection wire W10may include the one end T1connected to the first sensing electrode SE1and the other end T2connected to the first sensing coil LE1and may electrically connect the first sensing electrode SE1to the first sensing coil LE1. With respect toFIG.17, the descriptions of the elements having the same reference numerals and the same functions as the elements inFIG.4have not been repeated, and only differences have been described. In the electronic device, instead of a capacitance sensing structure including a sensing electrode disposed in the cover and a sensing inductor, a sensor of a different sensing method may be disposed, and a sensor of a different sensing method may be an ultrasonic sensor, a temperature sensor, or any other suitable sensor, for example. As described above, the examples described herein may be applied to and used as a switch (e.g., a mobile side switch) of mobile or wearable equipment. The examples described above were devised to replace a volume button or a power button on a side unit of a mobile phone, and each example may be used for an application having a cover (e.g., a conductor) structure of a rear surface. Also, the structure in each example may be different from a sensing method used in a touch screen of a front display glass. There may also be a general structure in which a coil is attached to an inner surface of glass, but in the examples described herein, a coil is not attached to glass. Also, in the conventional case, a coil of 16 mm or more may be needed but in the examples described herein, a sensing efficiency may increase such that a small sensing coil may be used. Accordingly, a smaller inductance may be sensed. Furthermore, in a conventional capacitance sensing technique of performing LC oscillation and recognizing a touch using a variable capacitance caused by deflection of a metal, a metal touch target surface may be present, but the capacitance sensing method in the examples described herein is not a method of using changes in eddy current based on a change in a distance between a coil and a metal caused by pressing the metal, and may be a capacitance sensing method of detecting, when a human hand touches the glass, a change in a parasitic capacitance generated between a sensing electrode of a conductor present in glass and the human hand. According to the examples described herein, regardless of the position in which the sensing coil is disposed, the sensing coil may be connected to the sensing electrode, such that freedom in the placement of the sensing coil may be improved. Also, by arranging the sensing electrode in the cover (e.g., a back glass) which is a non-conductor, a low recognition rate in a conductor case may be addressed such that identification of each touch switch in multiple touches may improve.
Also, by using a single sensing coil, capacitance sensing and inductance sensing may simultaneously operate when a touch (e.g., a pressing) from a human body is applied, and capacitance sensing may not operate when twisting or non-human touch (e.g., a contact) is applied, thereby improving touch sensing identification. Accordingly, through hybrid sensing in which both capacitance sensing and inductance sensing operate using a single sensing coil, the problem of malfunctioning caused by distortion of the applied electronic device may be resolved. While this disclosure includes specific examples, it will be apparent after an understanding of the disclosure of this application that various changes in form and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation. Descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents. Therefore, the scope of the disclosure is defined not by the detailed description, but by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure. | 45,923 |
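The identification logic summarized above can be sketched as a simple decision on the two responses obtained from the single sensing coil: a touch (pressing) from a human body produces both a capacitance response and an inductance response, whereas twisting, distortion, or a non-human contact does not produce the capacitance response and is rejected. The Python sketch below is only an illustration of that decision; the threshold values and the way the two responses are extracted are assumptions.

```python
def classify_event(delta_capacitance_pf, delta_inductance_nh,
                   c_threshold_pf=1.0, l_threshold_nh=5.0):
    """Hybrid-sensing decision sketch: accept a touch only when both the
    capacitance change (touch sensing) and the inductance change (force
    sensing) exceed their thresholds; otherwise treat the event as twisting,
    distortion, or a non-human contact and reject it."""
    capacitive = abs(delta_capacitance_pf) >= c_threshold_pf
    inductive = abs(delta_inductance_nh) >= l_threshold_nh
    if capacitive and inductive:
        return "human press accepted"
    if inductive:
        return "rejected (twisting/distortion or non-human pressing)"
    return "rejected (no valid input)"

print(classify_event(3.2, 12.0))   # finger press on the switch
print(classify_event(0.1, 15.0))   # case twisted or pressed by a non-human object
```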
11861102 | DETAILED DESCRIPTION OF THE INVENTION Referring now to the figures of the drawings in detail and first, particularly, toFIG.1thereof, there is seen a basic configuration of a capacitive touch-sensitive switch shown in a very simplified way. The capacitive touch-sensitive switch10has in particular a capacitive sensor element12, for example in the form of an electrode which, together with, for example, a finger of a user and a touch panel acting as a dielectric, for example, of an operating faceplate in between, forms a capacitor having a capacitance which is variable corresponding to the actuation of the touch-sensitive switch10, i.e. the touching or not-touching of the touch panel associated with the capacitive sensor element12. The capacitive sensor element12is connected to a sensor circuit14, which detects the changes in capacitance at the sensor element for example during charge or discharge phases of the capacitor. A control unit16is connected to the sensor circuit14. The control unit16drives the sensor circuit14in order to switch it on or off, for example, and to preset a scanning frequency for detecting the changes in capacitance for it. The measurement signals detected by the sensor circuit14are transmitted to the control unit16, which evaluates the measurement signals in order to identify touching or not-touching of the touch panel at the capacitive sensor element12by a user. FIG.2shows the sequence of an operating method according to the invention for such a capacitive touch-sensitive switch10. First, in a step S10, a suitable scanning frequency is adjusted for the sensor circuit14of the touch-sensitive switch10, which scanning frequency offers a good signal-to-noise ratio and avoids critical alias effects as a result of noise signals. Then, in step S20, the capacitive touch-sensitive switch10is operated at the scanning frequency fixed in step S10in order to detect a switch actuation by a user. If the operating duration T of the touch-sensitive switch10exceeds a preset limit value Tx (Yes in step S22), step S10is repeated. In other words, the adjustment of the scanning frequency is repeated periodically at time intervals Tx. This is advantageous since noise signals can change over time and the scanning frequency is then correspondingly adapted. Step S10of adjusting the scanning frequency is in addition preferably performed in an initialization procedure when the touch-sensitive switch10is first brought into operation. FIG.3shows, by way of example, an exemplary embodiment of the adjustment method from step S10. In the adjustment method S10, first, in a step S102, the touch-sensitive switch10or its sensor circuit14is operated at a first scanning frequency (No. x) and using a first measurement method A. The first scanning frequency No. x is selected from a group of available scanning frequencies; the first scanning frequency No. x is, for example, 111 kHz. The first measurement method A is preferably a measurement method having a charge-charge cycle or a measurement method having a discharge-discharge cycle. Then, in a step S104, the corresponding measurement signals are detected by the sensor circuit14over a predetermined time period (for example 64 measurements) and a corresponding amplitude/time graph is generated by the control unit16, as is illustrated by way of example as the top graphs inFIGS.4A and4B. 
Then, the control unit16, in a step S106, converts the generated amplitude/time graph into an amplitude/frequency graph by using a fast Fourier transform (FFT), as is illustrated by way of example as the bottom graphs inFIGS.4A and4B. The control unit16then checks, in a step S108, whether the amplitude/frequency graph contains a peak. Such a peak is caused by an alias effect which is generated by a noise signal. If the control unit16identifies a peak in the amplitude/frequency graph (Yes in S108), the control unit then checks, in a step S110, whether this peak is in the critical range. For example, the control unit16checks whether the frequency fp of this peak is below a predetermined threshold value fs. The threshold value fs is, for example, a preset frequency value or half the present scanning frequency. If the peak in the amplitude/frequency graph is in the critical range below the threshold value fs (Yes in step S110), as is illustrated by way of example inFIG.4A, the presently tested scanning frequency is not suitable for the detection operating mode of the touch-sensitive switch since this low-frequency alias effect cannot be safely distinguished from a touch of the switch10in an evaluation of the measurement signals and is therefore critical. The presently tested scanning frequency is therefore not set as a selection frequency for the detection operating mode of the touch-sensitive switch; instead, the method in this case continues directly with a step S118, explained further below. If, on the other hand, the peak in the amplitude/frequency graph is in the uncritical range above the threshold value fs (No in step S110), as is illustrated by way of example inFIG.4B, the presently tested scanning frequency is suitable for the detection operating mode of the touch-sensitive switch since this high-frequency alias effect can be identified as such during an evaluation of the measurement signals and can be distinguished from a touch of the switch10which generates a low-frequency measurement signal. The method therefore continues in this case with a step S112. In step S112, a check is performed as to whether a suitable scanning frequency has already been set as selection frequency in the scanning frequency adjustment method. If this is not the case (No in step S112) because, for example, no suitable scanning frequency has been found yet or because it is the first tested scanning frequency, the method continues with step S114in order to set the present scanning frequency, which has been judged to be suitable in step S110, as the selection frequency for the detection operating mode of the touch-sensitive switch10. If, on the other hand, a selection frequency has already been set in the scanning frequency adjustment method (Yes in step S112), the method continues with a step S116. In this step S116, the control unit16compares the suitability of the present scanning frequency with the suitability of the set selection frequency. A scanning frequency is, for example, better suited to the detection operating mode of the touch-sensitive switch10when the uncritical alias effect has a higher frequency and therefore generates a peak at a higher frequency fp in the amplitude/frequency graph. If the previously set selection frequency is better suited than the present scanning frequency (No in step S116), the method continues with the step S118and the previously set selection frequency remains set as the selection frequency. 
If, on the other hand, the present scanning frequency is better suited than the previously set selection frequency (Yes in step S116), the method continues with the step S114in order to reset the present scanning frequency as the selection frequency. Then, the method (Yes in step S110, No in step S116or after step S114) continues with the step S118. In this step S118, a check is performed as to whether now all of the available frequencies have been tested in the above-described way for their suitability as scanning frequency for the detection operating mode of the touch-sensitive switch10. If not all of the available frequencies have yet been tested (No in step S118), the method continues with a step S120, in which the next scanning frequency No. x+1 is selected from the group of available scanning frequencies. The new scanning frequency No. x+1 is, for example, 80 kHz. The method then returns to the step S102in order to perform the above-described suitability check for the new scanning frequency No. x+1 after step S102. If, on the other hand, all of the available frequencies have now been tested (Yes in step S118), the method continues with a step S122, in which the frequency that was set last as the selection frequency in step S114is fixed as the scanning frequency for the detection operating mode of the touch-sensitive switch, which is preferably performed by using a second measurement method B, which differs from the first measurement method A. As is illustrated inFIG.3, the method in addition goes from step S108directly to step S118if no peak is identified in the amplitude/frequency graph. That is to say that although no critical alias effect is identified, the present scanning frequency is not judged as being suitable and in particular is also not set as the selection frequency. The reason for this is that, in this case, it cannot be ruled out that the measurement signals could contain a critical alias effect which, however, is not identifiable in the amplitude/frequency graph. Particularly critical are alias effects with a frequency value fp close to zero or equal to zero. If the alias frequency value fp is equal to zero, the amplitude/frequency graph is, however, flat over the entire frequency range and no peak is identifiable. That is to say that when the control unit16does not identify a peak in the amplitude/frequency graph in step S108, this can mean that actually no alias effect is present or alternatively that a critical alias effect is not identifiable. As mentioned, in the suitability test method S10, a first measurement method A is used, which is preferably a measurement method having a charge-charge cycle or a discharge-discharge cycle. In the detection operating mode of the touch-sensitive switch S20at the scanning frequency (S122) fixed in the suitability test method S10, a second measurement method B is preferably then used which differs from the first measurement method A. The second measurement method B is preferably a measurement method with a discharge-charge cycle or a charge-discharge cycle.
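Putting steps S102 to S122 together, the adjustment method can be outlined as a loop over the group of available scanning frequencies that keeps the candidate judged best so far, where "better suited" is taken, as in step S116, to mean a higher alias-peak frequency fp. The Python sketch below is a minimal illustration with synthetic data: the 64 measurements match step S104, but the list of available frequencies, the preset threshold value fs, the peak-identification floor, the noise frequency, and the measure() interface that operates the sensor circuit with measurement method A are all assumptions made for the example.

```python
import numpy as np

FS_THRESHOLD_HZ = 20e3          # assumed preset threshold value fs (critical range)
AVAILABLE_SCAN_FREQS_HZ = [111e3, 80e3, 125e3, 142e3]   # hypothetical group

def alias_peak(samples, scan_freq_hz):
    """Steps S104/S106: amplitude/time data -> amplitude/frequency graph (FFT),
    then return (fp, amplitude) of the strongest non-DC peak."""
    spectrum = np.abs(np.fft.rfft(samples - np.mean(samples)))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / scan_freq_hz)
    idx = int(np.argmax(spectrum[1:]) + 1)         # skip the DC bin
    return freqs[idx], spectrum[idx]

def adjust_scanning_frequency(measure, peak_floor=1.0):
    """Sketch of adjustment method S10 (steps S102-S122). `measure(f)` is assumed
    to operate the sensor circuit at scanning frequency f with measurement method A
    and return the sampled measurement signal."""
    selection = None                               # (scanning frequency, peak fp)
    for scan in AVAILABLE_SCAN_FREQS_HZ:           # steps S118/S120
        fp, amp = alias_peak(measure(scan), scan)  # steps S102-S106
        if amp < peak_floor:                       # S108: no peak identifiable,
            continue                               #   not judged as suitable
        if fp < FS_THRESHOLD_HZ:                   # S110: critical alias effect
            continue
        if selection is None or fp > selection[1]: # S112/S116: better suited?
            selection = (scan, fp)                 # S114: set as selection frequency
    return selection[0] if selection else None     # S122: fix for detection mode

if __name__ == "__main__":
    NOISE_HZ = 116e3                               # hypothetical noise source

    def measure(scan_freq_hz, n=64):               # 64 measurements, as in step S104
        t = np.arange(n) / scan_freq_hz
        return np.sin(2 * np.pi * NOISE_HZ * t)

    # 111 kHz and 125 kHz alias the noise below fs (critical, as in FIG. 4A);
    # 80 kHz gives the highest uncritical alias peak and is selected.
    print(adjust_scanning_frequency(measure))
```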
11861103 | To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized in other embodiments without specific recitation. Suffixes may be attached to reference numerals for distinguishing identical elements from each other. The drawings referred to herein should not be understood as being drawn to scale unless specifically noted. Also, the drawings are often simplified, and details or components omitted for clarity of presentation and explanation. The drawings and discussion serve to explain principles discussed below, where like designations denote like elements. DETAILED DESCRIPTION In the following detailed description of embodiments of the disclosure, numerous specific details are set forth in order to provide a more thorough understanding. However, it will be apparent to one of ordinary skill in the art that embodiments may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description. In general, embodiments are directed to mitigation of display artifacts caused by beacon signals. To synchronize with a capacitive pen, the input-display device transmits a beacon signal via sensing electrodes. The transmission of the beacon signal can cause a display artifact on a displayed image. When the display is frequently updated (e.g., at high image frame rate), the display artifact may not be detectable by a human user. On the other hand, when the image frame rate has an equal or lower frequency than the beacon signal rate, the display artifact caused by the beacon signal can be detected. One or more embodiments are directed to minimizing the effects of display artifacts caused by beacon signals by synchronizing between proximity sensing controller and the display driver. In some embodiments, the beacon signal is transmitted during a non-refresh period of the display. In such a scenario, either the display driver synchronizes the display update at a different time than the beacon signal is transmitted, or the proximity sensing controller transmits the beacon signal at a different time than the display update. For example, either the display driver or the proximity sensing controller may delay the respective action, (i.e., display update or beacon signal) by a time period after the vertical synchronization (Vsync) signal. A Vsync signal is a signal that is transmitted after the entire display frame is transferred. The Vsync signal indicates that an entire display frame is transmitted. Because, in such embodiments, the beacon signal is not transmitted at the same time as the display update, the display artifact is mitigated by not existing. In other embodiments, which may be combined with the above technique, the system performs a transition to frame skip operation. In the transition to frame skip operation, an additional display refresh is performed before entering a non-refresh period and after the beacon signal is transmitted. In such embodiments, the beacon signal may still cause a display artifact. However, the minimization of the display artifact is achieved because the display is quickly refreshed prior to the period in which the display is not refreshed. Turning to the figures,FIG.1Ashows a diagram of a system in accordance with one or more embodiments. 
Specifically,FIG.1Ashows a diagram of an input-display device (1000). Input-display devices, such as shown inFIG.1A, are adapted to both image displaying and proximity sensing. An input device refers to at least an input portion of the input-display device. Input-display devices are often used as user-interfaces of electronic systems. The term “electronic system” broadly refers to any system capable of electronically processing information. Some non-limiting examples of electronic systems include personal computers of all sizes and shapes, such as desktop computers, laptop computers, netbook computers, tablets, web browsers, e-book readers, and personal digital assistants (PDAs). Other examples include automotive user interfaces configured to give drivers user interface capabilities. An input-display device may include a display panel (100) and a proximity sensing panel (300) having sensor electrodes disposed neighboring or integrated in the display panel (100). The input-display device (1000) may be configured to display an image on the display panel (100) while sensing one or more input objects located on or near the display panel (100) based on resulting signals received from the sensor electrodes. In addition to the display panel (100) and proximity sensing panel (300), the input-display device (1000) includes a display driver (200) and a proximity sensing controller (400). The display panel (100) is coupled to the display driver (200), and the proximity sensing panel (300) is coupled to the proximity sensing controller (400). The display driver (200) and the proximity sensing controller (400) are further coupled to a processing system (125). Examples of the processing system (125) include an application processor, a central processing unit (CPU), a special purpose processor, and other types of processors. Although shown skewed inFIG.1A, as shown inFIG.1B, the proximity sensing panel (300) is disposed on or near the display panel (100) and at least partially overlapping the display panel (100). The proximity sensing panel (300) defines the sensing region (150) where input objects may be detected. Returning toFIG.1A, one type of input object is a capacitive pen (175) (i.e., stylus or active pen). The capacitive pen (175) transmits the capacitive pen signals responsive to the capacitive pen (175) detecting a beacon signal from the input-display device. The capacitive pen signals are signals that originate from the capacitive pen (175) and alter the capacitance detected by the proximity sensing panel (300). An example of a capacitive pen (175) is an active pen that complies with the Universal Stylus Initiative (USI) protocol. FIG.2shows an example configuration of the display panel (100), according to one or more embodiments. The display panel (100) may be any type of dynamic display capable of displaying a visual interface to a user. Examples of the display panel (100) include organic light emitting diode (OLED) display panels, micro light emitting diode (LED) display panels and liquid crystal display (LCD) panels. In the shown embodiment, the display panel (100) includes display elements (110) (e.g., pixel circuits), gate lines (120) (also referred to as scan lines), source lines (130) (also referred to as data lines), and a gate scan driver (140). Each display element (110) may include an OLED pixel, a micro LED pixel, an LCD pixel, or a different type of pixel. Each display element (110) is coupled to the corresponding gate line (120) and source line (130). 
The source lines (130) may be configured to provide data voltages to display elements (110) of the display panel (100) to update (or program) the display elements (110) with the data voltages. The gate lines (120) are used to select rows of display elements (110) to be updated with the data voltages. Thus, when display elements (110) of a selected row are to be updated, the gate scan driver (140) asserts the gate line (120) coupled to the display elements (110) of the selected row. The source lines (130) may each have a significant capacitance since the source lines (130) almost traverse the display panel (100) in the vertical direction. The display panel (100) may further include other components and signal lines depending on the display technology. In embodiments where an OLED display panel is used as the display panel (100), for example, the display panel (100) may further include emission lines that control light emission of the display elements (110) and power lines that deliver a power supply voltage to the respective display elements (110). The display driver (200) is configured to drive the source lines (130) of the display panel (100) based on image data (260) received from the processing system (125). The image data corresponds to an image to be displayed on the display panel (100). The image data may include gray levels of the respective display elements (110) of the display panel (100). The display driver (200) is configured to generate data voltages for the respective display elements (110) based on the image data received from the processing system (125) and provide the generated data voltages to the respective display elements (110) via the source lines (130). The display driver (200) includes a data interface (I/F) (210), an image processing circuit (220), driver circuitry (230), a controller (CTRL) (240), and a proximity sensing controller interface (I/F) (250). The data interface (210) is configured to receive image data (260) from the processing system (125) and forward the image data (260) to the image processing circuit (220). The image processing circuit (220) may be configured to perform image processing to adjust the image, such as adjusting luminance of individual pixels in the image data to account for information about the pixel circuits and the display panel. The driver circuitry (230) is configured to drive the source lines (130) based on the processed image data from the image processing circuit (220). The controller (240) is configured to receive configuration information from the processing system (125) via the data interface (210). For example, the configuration information may include the image refresh rate that identifies the rate at which the display is to be updated in accordance with one or more embodiments. The controller (240) is configured to output a Vsync signal, a horizontal synchronization (Hsync) signal, and a clock (CLK) signal. The Vsync signal is a trigger for the start of each Vsync period. The Hsync signal is a trigger for the start of each Hsync period. Additionally, the controller (240) outputs display information (info.). The image processing circuit (220), driver circuitry (230), and sensing controller interface (I/F) (250) receive the Vsync, Hsync, and clock signal, while the sensing controller interface (250) also receives the display information. The display information may include the display configuration including the current display frame rate.
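As a rough mental model of the row-by-row update described earlier in this section, the sketch below walks the rows of a frame, asserts each row's gate line, and drives the source lines with that row's data voltages. It is only an illustration; the callables and the toy frame data are assumptions and do not correspond to any interface of the display driver (200).

```python
def refresh_frame(image, assert_gate_line, drive_source_lines):
    """Row-sequential update sketch: for each row, the gate scan driver asserts
    that row's gate line and the driver circuitry drives the source lines with
    the row's data voltages. The two callables are assumed abstractions."""
    for row_index, row_data in enumerate(image):
        assert_gate_line(row_index)          # select the row of display elements
        drive_source_lines(row_data)         # program the row with data voltages

refresh_frame(
    image=[[0.1, 0.5], [0.9, 0.3]],          # toy 2x2 frame of normalized levels
    assert_gate_line=lambda r: print(f"gate line {r} asserted"),
    drive_source_lines=lambda d: print(f"source lines driven with {d}"),
)
```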
The sensing controller interface (250) is an interface that is connected to the proximity sensing controller (400) and is configured to transmit on the VSOUT and HSOUT link to the proximity sensing controller (400). The VSOUT link is a connection that transmits the Vsync signal and the HSOUT link is a connection that transmits the Hsync signal. In some embodiments, the sensing controller interface also outputs the current display frame rate to the proximity sensing controller (400). FIG.3shows an input device portion of an input-display device. In the shown embodiment, the proximity sensing panel (300) includes an array of sensor electrodes (310) disposed over the display panel (100). The sensor electrodes (310) are used for proximity sensing to detect one or more input objects located on or near the proximity sensing panel (300). As used herein, proximity sensing includes touch sensing (e.g., contact on the proximity sensing panel (300) and/or the display panel (100)). Examples of input objects include a user's fingers and styli, including capacitive pens. While twelve sensor electrodes (310) are shown inFIG.3, the proximity sensing panel (300) may include more or less than twelve sensor electrodes (310). Further, whileFIG.3shows that the sensor electrodes (310) are rectangular, the sensor electrodes (310) may be shaped in a different shape, such as triangular, square, rhombic, hexagonal, irregular, or other shapes. Further, sensor electrodes may be configured in a variety of different configuration patterns, including bars that span vertically and/or horizontally across the panel. The proximity sensing controller (400) is configured to sense one or more input objects based on resulting signals received from the sensor electrodes (310) and generate positional information of the one or more sensed input objects. "Positional information" as used herein broadly encompasses absolute position, relative position, velocity, acceleration, and other types of spatial information. Historical data regarding one or more types of positional information may also be determined and/or stored, including, for example, historical data that tracks position, motion, or instantaneous velocity over time. The generated positional information is sent to the processing system (125). In one or more embodiments, the proximity sensing controller (400) is configured to sense one or more input objects through capacitive proximity sensing. Some capacitive proximity sensing implementations utilize "absolute capacitance" (also often referred to as "self-capacitance") sensing methods based on changes in the capacitive coupling between the sensor electrodes (310) and an input object. In various embodiments, an input object near the sensor electrodes (310) alters the electric field near the sensor electrodes (310), thus changing the capacitive coupling. The resulting signals acquired from the sensor electrodes (310) include effects of the changes in the capacitive coupling. In one implementation, an absolute capacitance sensing method operates by modulating the sensor electrodes (310) with respect to a reference voltage, e.g., system ground, and by detecting the capacitive coupling between the sensor electrodes (310) and input objects. Some capacitive proximity sensing implementations utilize "transcapacitance" (also often referred to as "mutual capacitance") sensing methods based on changes in the capacitive coupling between transmitter electrodes (not shown) and the sensor electrodes (310).
In various embodiments, an input object near the sensor electrodes (310) alters the electric field between the transmitter electrodes and the sensor electrodes (310), thus changing the capacitive coupling. In one implementation, a transcapacitance sensing method operates by detecting the capacitive coupling between one or more transmitter electrodes and one or more sensor electrodes (310). The coupling may be reduced when an input object coupled to a system ground approaches the sensor electrodes (310). Transmitter electrodes may be modulated relative to a reference voltage, e.g., system ground. The transmitter electrodes may be a subset of the sensor electrodes (310) or separate sensor electrodes. Further, which sensor electrodes are used as transmitter electrodes and which sensor electrodes are used as receiver electrodes may change. The receiver electrodes (310) may be held substantially constant relative to the reference voltage or modulated relative to the transmitter electrodes to facilitate receipt of resulting signals. The proximity sensing panel is further configured to operate with a capacitive pen. The capacitive pen may be a stylus that has the transmitter electrode for transcapacitance sensing. Specifically, rather than using transmitter signals from the transmitter electrodes in the input-display device, the transmitter signals originate from the capacitive pen. The sensor electrodes (310) receive resulting signals from the transcapacitive coupling with the transmitter electrode in the capacitive pen. The resulting signals may not only identify positional information, but also transmit additional information, such as configuration or state information. For example, the capacitive pen may have one or more buttons that may be used by a user to control an aspect of the user interface (e.g., color used in the interface or other aspect). In order to communicate via the transcapacitive coupling, synchronization is performed with the input device. The synchronization is in the form of a beacon signal from the sensor electrodes (310) of the proximity sensing panel that is received by the capacitive pen when the capacitive pen is in the sensing region. For example, the input device transmits the beacon signal on sensor electrodes (310) that a sensor in the tip of a capacitive pen detects. The detection circuit in the capacitive pen uses the body of the capacitive pen as a reference. Responsive to the beacon signal, the capacitive pen transmits the capacitive signals for interpretation by the proximity sensing controller. Because the capacitive pen may be randomly removed from the sensing region, the beacon signal is repetitively transmitted. For example, the beacon signal may be transmitted at a defined rate, such as once every 16.6 milliseconds. The rate of transmission of the beacon signal is the beacon signal rate. As the source lines of the display panel may extend to almost traverse the display panel, a capacitive coupling may exist between the source lines and sensor electrodes disposed neighboring or integrated in the display panel. The capacitive coupling between the source lines and the sensor electrodes may cause electromagnetic interference during an image refresh when the display elements are updated if the image refresh is performed concurrently with the sensor electrodes being driven with the beacon signal. The electromagnetic interference may result in a display artifact. A display artifact is a distortion in the image being displayed.
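As a generic illustration of transcapacitance detection, not specific to the proximity sensing controller (400) or its firmware, the sketch below subtracts a measured coupling matrix from an idle baseline and reports the transmitter/receiver intersections whose coupling dropped by more than a threshold. All values and the matrix shape are assumptions.

```python
import numpy as np

def touch_positions(measured, baseline, threshold):
    """An input object reduces the coupling between transmitter and receiver
    electrodes, so the drop (baseline - measured) is compared to a threshold."""
    delta = baseline - measured
    rows, cols = np.nonzero(delta > threshold)
    return list(zip(rows.tolist(), cols.tolist()))

baseline = np.full((3, 4), 10.0)        # pF, idle coupling per tx/rx pair (assumed)
measured = baseline.copy()
measured[1, 2] -= 2.5                   # a finger reduces the local coupling
print(touch_positions(measured, baseline, threshold=1.0))   # -> [(1, 2)]
```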
Continuing with the proximity sensing controller (400), the proximity sensing controller (400) includes a display driver interface (320) connected to a proximity sensing circuit (330). In one or more embodiments, the display driver interface (320) is a general purpose I/O interface (GPIO) that is connected to the VSOUT link and HSOUT link from the display driver (200). The display driver interface (320) is configured to communicate with a processing circuit (350) in the proximity sensing circuit (330). In one or more embodiments, the proximity sensing circuit (330) includes an analog front end (AFE) (340), a processing circuit (350), and a beacon circuit (360). The AFE (340) is configured to receive resulting signals from the sensor electrodes (310) and generate analog-to-digital conversion (ADC) data corresponding to the resulting signals. Generating the ADC data may include conditioning (filtering, baseline compensation, and/or other analog processing) of the resulting signals and analog-to-digital conversion of the conditioned resulting signals. In embodiments where the resulting signals from the sensor electrodes (310) are acquired in a time divisional manner, the AFE (340) may be configured to provide guarding voltage Vguard to sensor electrodes (310) from which resulting signals are not currently acquired. In embodiments where the proximity sensing is achieved through transcapacitive sensing from the transmitter electrodes in the proximity sensing panel (300), the AFE (340) may be configured to provide transmitter signals to the transmitter electrodes. The operation of the AFE (340) may be controlled based on one or more register values received from the processing circuit (350) and beacon circuit (360). When a capacitive pen is not present, the AFE is configured to drive the sensor electrodes with capacitive sensing signals, and receive resulting signals from the sensor electrodes, whereby the resulting signals result from the capacitive sensing signals. The processing circuit (350) is configured to process the resulting signals and determine a presence of an input object. The processing circuit (350) is configured to generate positional information of one or more input objects in the sensing region based on the resulting signals acquired from the sensor electrodes (310). In one implementation, the processing circuit (350) may be configured to process the ADC data, which correspond to the resulting signals acquired from the sensor electrodes (310), to generate the positional information. The processing circuit (350) may also be configured to communicate with the capacitive pen. The processing circuit (350) may include a processor, such as a micro control unit (MCU), a central processing unit (CPU) and other types of processors, and firmware. The processing circuit (350) may be further configured to control the overall operation of the proximity sensing controller (400), including controlling the AFE (340) and the beacon circuit (360). The beacon circuit (360) is configured to trigger driving the sensor electrodes (310) through the AFE (340) with a beacon signal. In particular, the beacon circuit (360) controls the timing of the driving of the sensor electrodes (310) with the beacon signal at the beacon signal rate. The beacon circuit (360) and/or the processing circuit (350) may have a timer for delaying a beacon signal. The timer may be a hardware-based timer or a software-based timer. 
The amount of the delay may be controlled by the processing circuit (350) based on the Vsync signal. Different types of techniques may be used to mitigate display artifacts due to beacon signals. The timing diagrams ofFIGS.4,5, and6show different ways to mitigate for display artifacts. FIG.4shows an example timing diagram (401) of how the timing of the various components is triggered by a Vsync signal. The Vsync signal defines the timing of the Vsync period (420) that exists on the display panel. In the example shown inFIG.4, the Vsync period (420) is at a 60 Hertz (Hz) frequency. Although a 60 Hz frequency is shown, other Vsync frequencies may be used, such as 120 Hz, 30 Hz, 20 Hz, 15 Hz, 10 Hz, 1 Hz, etc. The Vsync signal is transmitted from the controller to the image processing circuit to trigger the Vsync period (420) on the display driver. The Vsync signal is concurrently transmitted to the sensing controller interface on the display driver. Thus, the sensing controller interface outputs the Vsync signal on the VSOUT link (430) to the proximity sensing controller at the same frequency as, and concurrently with, the Vsync period (420). The Vsync period (420) corresponds to the output of the VSOUT link (430). Additional Vsync periods may exist on the display DDI that are not output on the VSOUT link (430). The proximity sensing controller triggers the beacon sensing frame (410) based on the Vsync signal on the VSOUT link (430). The beacon sensing frame (410) includes a beacon signal and a proximity sensing frame. The beacon signal (denoted by B inFIG.4) is transmitted at a defined frequency as triggered by the Vsync signal on VSOUT link (430). For example, the beacon signal may be transmitted at a 60 Hz frequency as part of the beacon sensing frame (410). Between transmissions of the beacon signal, the proximity sensing controller performs a proximity sensing frame. The proximity sensing frame may include detecting positional information for an input object and receiving data from a capacitive pen. Continuing withFIG.4, the timing diagram (401) shows the timings for two different image refresh rates (i.e., 120 Hz and 60 Hz). The particular rates are for example purposes only and other rates may be used, such as 30 Hz, etc. In one or more embodiments, image refresh rates are alternatives of each other as the display or portion thereof is only updated according to one refresh rate at any particular point in time. The input-display device may switch between refresh rates. InFIG.4, the image refresh rate relates to the time to update the display. Namely, the image refresh periods are in succession and last the duration of time as defined by the image refresh rate. Thus, the duration of time of an image refresh frame for an image refresh rate at 60 Hz is twice the duration of time of an image refresh frame for an image refresh rate at 120 Hz. Similarly, although not shown, the duration of time of an image refresh frame for an image refresh rate at 30 Hz is twice the duration of time of an image refresh frame for an image refresh rate at 60 Hz. The duration of the image refresh frame may be controlled by the length of time of the Hsync blanking periods (not shown) and the Vsync blanking periods. The lower frequencies may be used to reduce electricity usage. When the beacon signal is being transmitted as part of the beacon sensing frame (410), the image refresh frame may include display artifacts caused by the beacon signal as shown by the "star character" inFIG.4.
When the image refresh rate is 120 Hz (440), the display is frequently updated as compared to the beacon signal rate. In the example in which the beacon signal rate is 60 Hz and the image refresh rate is 120 Hz, only half of the image refresh frames have a display artifact. Because of the frequency of image refresh, any display artifact may not be detectable to a human user. However, when the image refresh rate is at the lower frequency of 60 Hz (450), the same image is displayed on the display panel for a longer period of time. Further, when the image refresh rate is at an equal or lower frequency than the beacon signal rate, then each display image includes a display artifact. Further, during the frame skip operation, the frame refresh period is replaced by a non-refresh period (460), and, thus, the display artifact remains because the display is not updated. At the transition to frame skip time (402), the display switches from continually updating the display to non-refresh period(s). In other words, an image refresh frame, when the display is updated, is skipped and a non-refresh period exists. The non-refresh periods are periods when the image on the display is not refreshed. Non-refresh periods may be referred to as vbias periods. During the non-refresh periods (460), the same image is displayed without update. Thus, the display artifact from the immediately preceding image refresh period (470) remains shown on the display. Mitigating for such display artifacts may be performed using the technique shown inFIG.5.FIG.5shows a timing diagram (500) for when transitioning to frame skip to mitigate for display artifacts. In the timing diagram ofFIG.5, the timings of the beacon sensing frame (410), Vsync period (420), proximity sensing Vsync on VSout (430), and the source image at 120 Hz (440) remain the same. However, the image refresh for the source image at 60 Hz (510) is modified so that the display artifact does not remain during the non-refresh period. Specifically, in the refresh period (470) immediately preceding the non-refresh periods (460), an additional image refresh frame (520) is performed. As shown in the example ofFIG.5, the additional image refresh frame (520) is performed when the following conditions exist. A first condition is that the image refresh rate is at an equal or lower frequency than the beacon signal rate. The second condition is that the non-refresh period occurs immediately after a next Vsync signal that causes the beacon signal. In terms of timing, the additional image refresh frame (520) is performed after the last beacon signal and before the next Vsync signal completes transmission. Therefore, the additional image refresh frame (520) does not include a display artifact. For example, directly before the transition to frame skip or the last Vsync period before the non-refresh period, the display may switch to a 120 Hz update, thereby causing the additional image refresh frame. The same technique may be used for lower frequency updates. Another way to use less energy is to have the same duration of time for the image refresh frames but decrease the number of image refresh frames. Non-refresh periods become more frequent at lower image refresh rates. In such a scenario,FIG.6shows an example timing diagram (600) to mitigate for display artifacts from the beacon signal. InFIG.6, the timings of the beacon sensing frame (410), Vsync period (420), proximity sensing Vsync on VSout (430) are the same as shown inFIG.4andFIG.5.
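Before turning to the details of FIG. 6, the transition-to-frame-skip rule of FIG. 5 can be summarized as a decision on the display driver side: insert the additional image refresh frame only when the image refresh rate is at or below the beacon signal rate and the next period is a non-refresh period, with the extra frame scheduled after the beacon signal completes. The sketch below models only that decision; the rate values used are illustrative.

```python
def needs_extra_refresh(image_refresh_hz, beacon_hz, next_period_is_non_refresh):
    """Transition-to-frame-skip rule sketched from FIG. 5: an additional image
    refresh frame is inserted (after the last beacon signal completes and before
    the next Vsync) only when both described conditions hold."""
    return image_refresh_hz <= beacon_hz and next_period_is_non_refresh

# 60 Hz refresh with a 60 Hz beacon, about to enter frame skip -> extra frame.
print(needs_extra_refresh(60, 60, True))     # True
# 120 Hz refresh: artifacts are not user-visible and no extra frame is scheduled.
print(needs_extra_refresh(120, 60, True))    # False
```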
Further, the source image at 120 Hz (610) is approximately the same, but without a non-refresh period. Thus, at 120 Hz, every other frame may exhibit a display artifact. For the image refresh rates that are equal to or have a lower frequency than the beacon signal rate, mitigating for display artifacts using the technique inFIG.6is performed by not aligning the image refresh frames with the beacon signal. Rather, the non-refresh periods are aligned with the beacon signal. For example, the source image at 60 Hz (620) may have alternating non-refresh periods and image refresh frames as shown inFIG.6. However, the display driver delays the image refresh frame to after the Vsync period (420) and, correspondingly, after the beacon signal. Similarly, the source image at 30 Hz (630) may have three non-refresh periods between the image refresh frames. Thus, every fourth period is an image refresh frame for the source image at 30 Hz (630) in the example shown inFIG.6. Like the source image at 60 Hz, when transitioning, the display driver delays the image refresh frame to the period after the Vsync period (420) and correspondingly after the beacon signal. At the source image at 20 Hz (640), there are five non-refresh periods between the image refresh frames. Thus, every sixth period is an image refresh frame for the source image at 20 Hz (640) in the example shown inFIG.6. Like the source image at 60 Hz and 30 Hz, when transitioning, the display driver delays the image refresh frame to the period after the Vsync period (420) and correspondingly after the beacon signal. GeneralizingFIG.6, if the periods at which the beacon signal is transmitted are the odd periods, then the image refresh frames are delayed so that they fall on some of the even periods. FIG.7shows another timing diagram (700) for mitigating for display artifacts due to the beacon signal. InFIG.7, the beacon frame (710), Vsync period (720), proximity sensing Vsync on VSout (730), source image at 120 Hz (740), source image at 60 Hz (750), source image at 30 Hz (760), and source image at 20 Hz (770) are each similar to the beacon sensing frame (410), Vsync period (420), proximity sensing Vsync on VSout (430), source image at 120 Hz (610), source image at 60 Hz (620), source image at 30 Hz (630), and source image at 20 Hz (640), respectively, albeit with different delays. InFIG.7, the proximity sensing controller is modified to delay the beacon signal so as to not overlap with the Vsync signal. The display driver triggers the image refresh frames based on the Vsync signal and without delay. Because the proximity sensing controller delays the beacon signal relative to the Vsync signal, the image refresh frame does not overlap with the beacon signal while the display driver operates as normal according to the image refresh rate. AlthoughFIGS.4-7show specific image refresh rates and beacon signal rates, other rates not shown may be used without departing from the scope of the claims. For example, the various embodiments may support 15 Hz, 10 Hz, and 1 Hz in one or more embodiments. FIGS.8-10show example flowcharts in accordance with one or more embodiments. While the various steps in these flowcharts are presented and described sequentially, one of ordinary skill will appreciate that some or all of the steps may be executed in different orders, may be combined or omitted, and some or all of the steps may be executed in parallel. Furthermore, the steps may be performed actively or passively.
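Before walking through the flowcharts, the alignment of FIG. 6 can be pictured with a small schedule generator: with Vsync periods at the 120 Hz base rate and the beacon transmitted on every other period, each image refresh frame is delayed by one period so that refresh frames land on beacon-free periods. The sketch below applies to image refresh rates at or below the beacon signal rate; the period count and base rate are assumptions used only to reproduce the pattern.

```python
def refresh_schedule(image_refresh_hz, num_periods=12, base_hz=120, beacon_every=2):
    """Sketch of the FIG. 6 alignment for refresh rates at or below the beacon
    rate: the beacon occupies every other base period, image refresh frames are
    delayed by one period onto beacon-free periods, and the rest are non-refresh."""
    stride = base_hz // image_refresh_hz         # base periods per refresh frame
    schedule = []
    for period in range(num_periods):
        beacon = (period % beacon_every == 0)    # beacon periods: 0, 2, 4, ...
        refresh = (period % stride == 1)         # refresh delayed to the next period
        schedule.append((period, "beacon" if beacon else "-",
                         "refresh" if refresh else "non-refresh"))
    return schedule

for row in refresh_schedule(30):                 # 30 Hz example: refresh on 1, 5, 9
    print(row)
```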
FIG.8corresponds to the timing diagram ofFIG.5. In Block802, the transition of the image refresh rate to a lower frequency than the beacon signal rate is identified. For example, the processing system may send an instruction to transition to a power saving mode. Responsive to the transition, the controller on the display driver may trigger the switch to a lower image refresh rate. In Block804, the display driver transitions to a lower frequency configuration that has an additional image refresh frame immediately prior to a corresponding non-refresh period and after the beacon signal completes. In the lower frequency configuration, the display driver tracks which period immediately precedes the non-refresh period and triggers the additional image refresh frame in that period. In Block806, responsive to the lower frequency, the display driver drives the source lines using image data immediately before a next Vsync signal and a corresponding non-refresh period. FIG.9corresponds to the timing diagram ofFIG.6. In Block902ofFIG.9, the transition of the image refresh rate to an equal or lower frequency than the beacon signal rate is identified. In Block904, the display driver is transitioned to the lower frequency configuration with a non-refresh period being performed when the beacon signal triggers. In Block906, at the time period defined by the lower frequency configuration, the image refresh frame is triggered. In the diagram ofFIG.9, the image refresh frame is delayed from the Vsync signal by the display driver based on being in the lower frequency configuration. The controller, the driver circuitry, or the image processing circuit may cause the delay. The operations of the proximity sensing controller may remain unchanged. FIG.10corresponds to the timing diagram ofFIG.7.FIG.10is from the perspective of when the proximity sensing controller delays the beacon signal and the display driver remains unchanged. In Block1002, from the display driver, the proximity sensing controller receives an indication of an image refresh rate that is at an equal or lower frequency than the beacon signal rate. The indication may be a signal transmitted from the display driver to the proximity sensing controller. In Block1004, the next image refresh frame is identified. The next image refresh frame may be identified as being triggered by the Vsync signal. In Block1006, the proximity sensing controller delays triggering the beacon signal until during a non-refresh period of the display. The delay may be, for example, equivalent to half of a beacon sensing frame. After the delay, the beacon signal and the corresponding proximity sensing frame are triggered in Block1008. Thus, in the configuration ofFIG.10, the proximity sensing controller manages the delay and the display driver does not change to mitigate for display artifacts caused by the beacon signal. In the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as by the use of the terms "before", "after", "single", and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements.
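The controller-side delay of Blocks 1002 to 1008 can be sketched as a callback that runs on each Vsync received over the VSOUT link: when the indicated image refresh rate is at or below the beacon signal rate, the beacon is postponed so that it is transmitted during a non-refresh period. The half-frame delay below follows the example given in the text; the use of a software timer and the send_beacon callable are assumptions made for the sketch.

```python
import time

BEACON_FRAME_S = 1.0 / 60.0      # beacon sensing frame period for a 60 Hz beacon

def on_vsync(image_refresh_hz, beacon_hz, send_beacon):
    """Sketch of FIG. 10 / Blocks 1002-1008: the proximity sensing controller
    delays the beacon (here with a simple timer) so that it falls in a
    non-refresh period, then triggers the beacon and the proximity sensing frame."""
    if image_refresh_hz <= beacon_hz:
        time.sleep(BEACON_FRAME_S / 2.0)     # example delay: half a beacon frame
    send_beacon()

on_vsync(60, 60, lambda: print("beacon transmitted during non-refresh period"))
```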
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. Accordingly, the scope of the invention should be limited only by the attached claims. | 35,451 |
11861104 | DETAILED DESCRIPTION In the specification, it should be noted that like reference numerals already used to denote like elements in other drawings are used for elements wherever possible. In the following description, when a function and a configuration known to those skilled in the art are irrelevant to the essential configuration of the present disclosure, their detailed descriptions will be omitted. The terms described in the specification should be understood as follows. Advantages and features of the present disclosure, and implementation methods thereof will be clarified through following embodiments described with reference to the accompanying drawings. The present disclosure may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the present disclosure to those skilled in the art. Further, the present disclosure is only defined by scopes of claims. A shape, a size, a ratio, an angle, and a number disclosed in the drawings for describing embodiments of the present disclosure are merely an example, and thus, the present disclosure is not limited to the illustrated details. Like reference numerals refer to like elements throughout. In the following description, when the detailed description of the relevant known function or configuration is determined to unnecessarily obscure the important point of the present disclosure, the detailed description will be omitted. In a case where ‘comprise’, ‘have’, and ‘include’ described in the present specification are used, another part may be added unless ‘only’ is used. The terms of a singular form may include plural forms unless referred to the contrary. In construing an element, the element is construed as including an error range although there is no explicit description. It will be understood that, although the terms “first”, “second”, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure. The term “at least one” should be understood as including any and all combinations of one or more of the associated listed items. For example, the meaning of “at least one of a first item, a second item, and a third item” denotes the combination of all items proposed from two or more of the first item, the second item, and the third item as well as the first item, the second item, or the third item. Features of various embodiments of the present disclosure may be partially or overall coupled to or combined with each other, and may be variously inter-operated with each other and driven technically as those skilled in the art can sufficiently understand. The embodiments of the present disclosure may be carried out independently from each other, or may be carried out together in co-dependent relationship. Hereinafter, a touch display apparatus according to the present disclosure will be described in detail with reference toFIGS.1to4. 
FIG.1is a block diagram of a touch display apparatus1000according to an embodiment of the present disclosure, andFIG.2is a timing diagram of a display period and a touch sensing period of the touch display apparatus1000.FIG.3is a block diagram of a touch driving device according to an embodiment of the present disclosure, andFIG.4is a diagram illustrating pieces of sensing data sensed by a touch driving device according to an embodiment of the present disclosure. Referring toFIG.1, the touch display apparatus (referred to as a display apparatus)1000according to an embodiment of the present disclosure may include a touch display panel100, a display driving device210, and a touch sensing device220. The display apparatus1000may perform a display function and a touch sensing function and may be implemented as a flat display apparatus such as a liquid crystal display (LCD) apparatus or an organic light emitting diode (OLED) display apparatus. The touch display panel100, as illustrated inFIG.2, may operate in a display period DP and a touch sensing period TP. The touch display panel100may display an image by using light irradiated from a backlight unit during the display period DP and may perform a function of a touch panel for touch sensing during the touch sensing period TP. According to an embodiment of the present disclosure, each touch sensing period TP may denote one frame where information about touch sensing is input. The touch display panel100may display an image having a certain gray level or may receive a touch. The touch display panel100may be an in-cell touch type display panel using a capacitance type. Alternatively, the touch display panel100may be an in-cell touch type display panel using a self-capacitance type or an in-cell touch type display panel using a mutual capacitance type. The touch display panel100may include a plurality of gate lines G1to Gm (where m is an integer of 2 or more), a plurality of data lines D1to Dn (where n is an integer of 2 or more), a plurality of pixels P, a plurality of touch sensors TE, and a plurality of touch lines T1to Tk. Each of the plurality of gate lines G1to Gm may receive a scan pulse in the display period DP. Each of the plurality of data lines D1to Dn may receive a data signal in the display period DP. The plurality of gate lines G1to Gm and the plurality of data lines D1to Dn may be arranged on a substrate to intersect with one another, thereby defining a plurality of pixel areas. Each of the plurality of pixels P may include a thin film transistor (TFT) (not shown) connected to a gate line and a data line adjacent thereto, a pixel electrode (not shown) connected to the TFT, and a storage capacitor (not shown) connected to the pixel electrode. Each of the plurality of touch sensors TE may perform a function of a touch electrode which senses a touch, or may perform a function of a common electrode of generating an electric field along with the pixel electrode to drive liquid crystal. That is, each of the plurality of touch sensors TE may be used as a touch electrode in the touch sensing period TP and may be used as the common electrode in the display period DP. Accordingly, each of the plurality of touch sensors TE may include a transparent conductive material. Each of the plurality of touch sensors TE may be used as a self-capacitance type touch sensor in the touch sensing period TP, and thus, should have a size which is greater than a minimum contact size between a touch object and the touch display panel100. 
Therefore, each of the plurality of touch sensors TE may have a size corresponding to one or more pixels P. The plurality of touch sensors TE may be arranged at a certain interval along a plurality of horizontal lines and a plurality of vertical lines. Each of the plurality of touch sensors TE may supply a common voltage to a corresponding touch sensor TE through a corresponding touch line of the plurality of touch lines T1to Tk in the display period DP. The plurality of touch lines T1to Tk may be respectively and individually connected to the plurality of touch sensors TE. The display driving device210may allow a data signal to be supplied to the plurality of pixels P included in the touch display panel100in the display period DP, and thus, may allow the touch display panel100to display an image. The display driving device210may include a timing controller211, a gate driving device212, and a data driving device213. The timing controller211may receive various timing signals including a vertical synchronization signal Vsync, a horizontal synchronization signal Hsync, a data enable signal DE, and a clock signal CLK from an external system (not shown) to generate a gate control signal GCS for controlling the gate driving device212and a data control signal DCS for controlling the data driving device213. Also, the timing controller211may receive a video signal RGB from the external system, convert the video signal RGB into an image signal RGB′ having a type capable of being processed by the data driving device213, and output the image signal RGB′. Moreover, the timing controller211may compress an external data enable signal transmitted from a host system on the basis of the display period DP to generate an internal data enable signal iDE. The timing controller211may generate a touch synchronization signal Tsync for temporally dividing one frame period1F into the display period DP and the touch sensing period TP on the basis of a timing of the internal data enable signal and the vertical synchronization signal Vsync. The timing controller211may transfer the touch synchronization signal Tsync to the gate driving device212, the data driving device213, the touch driving device221, and the touch controller222. The host system may convert digital video data into a format suitable for displaying corresponding video data on the display panel100. The host system may transmit the digital video data and the timing signals to the timing controller211. The host system may be implemented as one of a television (TV) system, a set top box, a navigation system, a DVD player, a blue player, a personal computer (PC), a home theater system, and a phone system and may receive an input video. Moreover, the host system may receive touch input coordinates from the touch controller222and may execute an application program associated with the received touch input coordinates. The gate driving device212may receive the gate control signal GCS from the timing controller211during the display period DP. The gate control signal GCS may include a gate start pulse GSP, a gate shift clock GSC, and a gate output enable signal GOE. The gate driving device212may generate a gate pulse (or a scan pulse) synchronized with the data signal on the basis of the received gate control signal GCS and may shift the generated gate pulse to sequentially supply the shifted gate pulse to the gate lines G1to Gm. To this end, the gate driving device212may include a plurality of gate drive integrated circuits (ICs) (not shown). 
The gate drive ICs may sequentially supply the gate pulse synchronized with the data signal to the gate line G1to Gm on the basis of control by the timing controller211during the display period DP. The gate pulse may swing between a gate high voltage VGH and a gate low voltage VGL. The gate driving device212may not generate the gate pulse during the touch sensing period TP and may supply the gate low voltage VGL to the gate lines G1to Gm. Therefore, the gate lines G1to Gm may supply the gate pulse to the TFT of each pixel during the display period DP to sequentially select a data line, to which the data signal is to be applied, in the touch display panel100and may maintain the gate low voltage during the touch sensing period TP to prevent an output variation of the touch sensors. The data driving device213may receive the data control signal DCS and the image signal RGB′ from the timing controller211during the display period DP. The data control signal DCS may include a source start pulse SSP, a source sampling clock SSC, and a source output enable signal SOE. The source start pulse may control a data sampling start timing of each of n number of source drive ICs (SDIC) configuring the data driving device213. The source sampling clock may be a clock signal which controls a sampling timing of data in each of the source drive ICs SDIC. The source output enable signal may control an output timing of each of the source drive ICs SDIC. Moreover, the data driving device213may convert the received image signal RGB′ into an analog data signal and may supply the analog data signal to pixels P through the plurality of data lines D1to Dn. The touch sensing device220may sense a touch through the touch sensors TE in the touch sensing period TP. In detail, the touch sensing device220may supply a touch driving signal to the touch sensors TE to drive the touch sensor TE, and the touch sensing device220may sense a variation of a capacitance which is generated when the touch sensor TE is touched. When the touch display panel100is implemented as a mutual capacitance type, the readout IC ROIC may include a driving circuit, which generates the touch driving signal for driving the touch sensor TE and supplies the touch driving signal to the touch sensors TE through the touch lines T1to Tk, and a sensing circuit which senses a capacitance variation of the touch sensors TE through the touch lines T1to Tk to generate touch sensing data. Alternatively, when the touch display panel100is implemented as a self-capacitance type, the readout IC ROIC may supply the touch driving signal to the touch sensors TE by using one circuit and may obtain the touch sensing data from the touch sensors TE. Referring toFIGS.1and3, the touch sensing device220may include a touch driving device221and a touch controller222. The touch driving device221may drive the touch sensors TE during the touch sensing period TP, and thus, may receive a touch sensing signal from the touch sensors TE. The touch driving device221may convert the received touch sensing signal into touch sensing data and may transfer the touch sensing data to the touch controller222. As illustrated inFIGS.1and3, the touch driving device221may include a plurality of readout ICs ROIC1to ROICn. The readout ICs ROIC1to ROICn may supply the common voltage to the touch sensors TE through the touch lines T1to Tk during the display period DP. Therefore, the touch sensors TE may perform a function of the common electrode during the display period DP. 
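To make the time-division driving concrete, here is a minimal sketch of the dual role of a touch line driven by a readout IC: the common voltage during the display period DP, and the touch driving signal with a read-back during the touch sensing period TP. The voltage values, the DP/TP flag, and the function name are assumptions, not values from the disclosure.

# Sketch of the dual role of a touch sensor TE line: common electrode in DP,
# driven-and-sensed touch electrode in TP. Voltage levels are assumed.
VCOM_V = 0.5          # common voltage supplied during the display period DP (assumed)
TOUCH_DRIVE_V = 3.3   # touch driving level during the touch sensing period TP (assumed)

def touch_line_output(in_touch_period, read_sensor):
    """Return (voltage driven on a touch line T1..Tk, sensed value or None).

    During DP the touch sensor TE serves as the common electrode; during TP it
    is driven with the touch driving signal and read back via read_sensor().
    """
    if not in_touch_period:
        return VCOM_V, None
    return TOUCH_DRIVE_V, read_sensor()

# usage sketch
print(touch_line_output(False, read_sensor=lambda: 0))     # (0.5, None)  -> DP
print(touch_line_output(True, read_sensor=lambda: 1244))   # (3.3, 1244)  -> TP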
Moreover, in the above-described embodiment, it is illustrated that the source drive IC SDIC and the readout ICs ROIC1to ROICn are implemented as separate elements, but the source drive IC SDIC and the readout ICs ROIC1to ROICn may be implemented as a type integrated into one chip. According to an embodiment of the present disclosure, each of the readout ICs ROIC1to ROICn may include an input unit221a, a correction unit221b, and an output unit221c. The input unit221amay receive a touch sensing signal ts from the touch sensor TE during the touch sensing period TP and may generate sensing data sd by using the input touch sensing signal ts. Particularly, the input unit221amay receive the touch sensing signal ts corresponding to at least one frame in the touch sensing period TP. Moreover, according to an embodiment of the present disclosure, the input unit221amay receive the touch sensing signal ts on the basis of a first frequency f1from the touch sensor TE. In this case, the first frequency f1may have a value which differs from that of a second frequency f2to be described below, and particularly, may have a value which is less than that of a second frequency f2. A relationship between the first frequency f1and the second frequency f2will be described below with reference toFIGS.4to6. The touch display panel100may include a first region A1where the touch sensor TE is uniformly formed and a second region A2where the touch sensor TE is not uniformly formed. A plurality of touch sensors TE disposed in the first region A1may have a uniform physical characteristic and thus may transfer a uniform touch sensing signal is to the touch sensing device, but a plurality of touch sensors TE disposed in the second region A2may cause a sensing defect due to a non-uniform physical characteristic. For example, the first region A1may be a center portion of the touch display panel100, and the second region A2may be an edge portion of the touch display panel100. Therefore, according to an embodiment of the present disclosure, the correction unit221bmay convert sensing data sd, corresponding to the touch sensor TE disposed in the second region A2among pieces of sensing data sd generated in the input unit221a, into correction data td. To this end, the correction unit221bmay classify pieces of sensing data sd corresponding to the touch sensors TE disposed in the second region A2on the basis of a position of each of the pieces of sensing data sd and may divide the pieces of sensing data sd, classified based on a position thereof, into two or more groups. For example, as illustrated inFIG.4, the correction unit221bmay divide pieces of sensing data sd, disposed adjacent to one another, into groups having similar values, or may divide the pieces of sensing data sd into groups on the basis of a predetermined boundary value. For example, some of pieces of sensing data illustrated inFIG.4may be divided into a first group Group1including pieces of sensing data having similar values ‘1244’ and ‘1629’ and a second group Group2including pieces of sensing data having similar values ‘3312’ and ‘3548’. Alternatively, some of the pieces of sensing data illustrated inFIG.4may be divided into the first group Group1including the pieces of sensing data having similar values ‘1244’ and ‘1629’ which are less values than a middle value ‘2047’ of sensing data and a second group Group2including pieces of sensing data having similar values ‘3312’ and ‘3548’ which are greater values than the middle value ‘2047’. 
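The two grouping strategies described above can be sketched as follows, using the example values of FIG. 4 and the boundary value 2047 from the description; the function names and the similarity tolerance are illustrative assumptions.

# Two grouping strategies for the edge-region sensing data: by a predetermined
# boundary value, or by similarity of adjacent values. Names are assumptions.
def group_by_boundary(samples, boundary=2047):
    group1 = [s for s in samples if s <= boundary]
    group2 = [s for s in samples if s > boundary]
    return group1, group2

def group_by_similarity(samples, tolerance=600):
    """Group adjacent samples whose values differ by no more than tolerance."""
    groups = []
    for value in samples:
        if groups and abs(groups[-1][-1] - value) <= tolerance:
            groups[-1].append(value)
        else:
            groups.append([value])
    return groups

samples = [1244, 1629, 3312, 3548]
print(group_by_boundary(samples))    # ([1244, 1629], [3312, 3548])
print(group_by_similarity(samples))  # [[1244, 1629], [3312, 3548]]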
According to an embodiment of the present disclosure, the correction unit221bmay apply different correction values to each group to generate correction data td. This will be described below in detail with reference toFIGS.4to6. Moreover, according to an embodiment of the present disclosure, the correction unit221bmay apply different correction values to different groups in each frame to generate pieces of correction data td and may combine and output the generated pieces of correction data td. Therefore, according to an embodiment of the present disclosure, pieces of sensing data sd corresponding to the touch sensors disposed in the second region A2may be input to the input unit221aat the first frequency f1and may output correction data at the second frequency f2which differs from the first frequency f1. This will be described below in detail with reference toFIGS.4to6. According to an embodiment of the present disclosure, the correction unit221bmay receive pieces of sensing data sd from the input unit221aat the first frequency f1, output pieces of sensing data sd corresponding to the touch sensors TE disposed in the first region A1of the touch display panel100at the first frequency f1, and correct and output pieces of sensing data sd corresponding to the touch sensors TE disposed in the second region A2of the touch display panel100at the second frequency f2. The output unit221cmay transfer correction data td, generated by the correction unit221b, to the touch controller222at the second frequency f2. Hereinafter, a driving method of a touch sensing device according to an embodiment of the present disclosure will be described in detail with reference toFIGS.4to6. FIG.4is a diagram illustrating pieces of sensing data sensed by a touch driving device according to an embodiment of the present disclosure, andFIG.5is a flowchart of a driving method of a touch driving device according to an embodiment of the present disclosure.FIG.6is a diagram illustrating a driving method of a portion ofFIG.4. As illustrated inFIG.4, pieces of sensing data sd may be divided into two or more groups including at least one of the pieces of sensing data sd corresponding to the touch sensors TE disposed in the second region A2of the touch display panel100. In detail, the pieces of sensing data sd corresponding to the touch sensors TE disposed in the second region A2of the touch display panel100may be divided into the first group Group1and the second group Group2. In this case, pieces of sensing data sd divided into one group may have similar values. According to an embodiment of the present disclosure, as described above, the correction unit221bmay divide pieces of sensing data sd into groups and may apply different correction values to pieces of sensing data of the groups to correct the sensing data. Accordingly, the correction unit221bmay not correct each sensing data sd and may apply the same correction value to the pieces of sensing data sd of the groups corresponding to a plurality of touch sensors TE to correct the pieces of sensing data sd, and thus, a size of each of the readout ICs ROIC1to ROICn may decrease, thereby reducing an area occupied by the readout ICs ROIC1to ROICn. Referring toFIGS.5and6, first, the input unit221amay receive sensing data sd in each of frames1frame and2frame (S501). Subsequently, the correction unit221bmay divide pieces of sensing data into n (where n is an integer of 2 or more) number of groups (S502). 
For example, the correction unit 221b may divide the first to fourth sensing data sd1 to sd4 into n number of groups. That is, the correction unit 221b may divide the first to fourth sensing data sd1 to sd4 into a first group Group1 and a second group Group2. In this case, the correction unit 221b may classify pieces of sensing data sd corresponding to touch sensors adjacent to one another and may divide the classified pieces of sensing data sd into groups having similar values, or may divide the classified pieces of sensing data sd into groups on the basis of a predetermined boundary value. For example, as illustrated in FIG. 6, the first and second sensing data sd1 and sd2 may have similar values of 1000 to 2000 and thus may constitute the first group Group1, and the third and fourth sensing data sd3 and sd4 may have similar values of 3000 to 4000 and thus may constitute the second group Group2. Alternatively, the first and second sensing data sd1 and sd2 may have a value of 2047 or less and thus may constitute the first group Group1, and the third and fourth sensing data sd3 and sd4 may have a value of more than 2047 and thus may constitute the second group Group2. Subsequently, the correction unit 221b may apply an i-th correction value ti to an i-th group Groupi in an i-th frame iframe (where i is an integer of 1 or more and n or less) to perform correction (S503). For example, when the pieces of sensing data sd1 to sd4 are divided into two (n=2) groups Group1 and Group2, the correction unit 221b may apply a first correction value t1 to each of the first and second sensing data sd1 and sd2 of the first group Group1 in the first frame 1frame to generate first and second correction data td1 and td2, and may apply a second correction value t2 to each of the third and fourth sensing data sd3 and sd4 of the second group Group2 in the second frame 2frame to generate third and fourth correction data td3 and td4. That is, as illustrated in FIG. 6, the correction unit 221b may add the first correction value t1 ‘800’ to the first sensing data sd1 of the first group Group1 having a value ‘1244’ in the first frame 1frame to generate first correction data td1 having a value ‘2044’, add the first correction value t1 ‘800’ to the second sensing data sd2 of the first group Group1 having a value ‘1629’ to generate second correction data td2 having a value ‘2429’, add the second correction value t2 ‘−1400’ to the third sensing data sd3 of the second group Group2 having a value ‘3312’ in the second frame 2frame to generate third correction data td3 having a value ‘1912’, and add the second correction value t2 ‘−1400’ to the fourth sensing data sd4 of the second group Group2 having a value ‘3548’ to generate fourth correction data td4 having a value ‘2148’. Therefore, the first and second sensing data sd1 and sd2 of the first group Group1 in the first frame 1frame and the third and fourth sensing data sd3 and sd4 of the second group Group2 in the second frame 2frame may be corrected based on different correction values, and thus, a deviation therebetween may decrease, and the first and second sensing data sd1 and sd2 and the third and fourth sensing data sd3 and sd4 may be converted into the first and second correction data td1 and td2 and the third and fourth correction data td3 and td4 having similar values. Subsequently, the correction unit 221b may combine the pieces of correction data of the first to n-th groups Group1 to Groupn of the first to n-th frames 1frame to nframe (S504).
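A worked sketch of steps S503 and S504 (the combining step is described next), using the numeric example above: the first correction value t1 = +800 is applied to Group1 in the first frame, the second correction value t2 = −1400 is applied to Group2 in the second frame, and the results are combined so that corrected data is output at 1/n of the input rate. The function name and the data layout are illustrative assumptions; for simplicity the raw values are taken to be identical in both input frames.

# Worked sketch of S503/S504 using the example values from the description.
def correct_and_combine(frames, group_of, corrections):
    """frames: n consecutive frames of sensing data received at f1 (one list per frame).
    group_of[j]: group index (0..n-1) of sample position j.
    corrections[i]: correction value ti applied to group i, taken from frame i.
    Returns one combined list of correction data, i.e. output at f1/n."""
    combined = {}
    for i, frame in enumerate(frames):              # frame i corrects group i
        for j, value in enumerate(frame):
            if group_of[j] == i:
                combined[j] = value + corrections[i]
    return [combined[j] for j in sorted(combined)]

frames = [
    [1244, 1629, 3312, 3548],   # 1frame: only Group1 values are corrected here
    [1244, 1629, 3312, 3548],   # 2frame: only Group2 values are corrected here
]
group_of = [0, 0, 1, 1]         # sd1, sd2 in Group1; sd3, sd4 in Group2
corrections = [800, -1400]      # t1 for Group1, t2 for Group2

print(correct_and_combine(frames, group_of, corrections))
# [2044, 2429, 1912, 2148] -- two 60 Hz input frames yield one combined output at 30 Hz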
For example, as illustrated inFIG.6, the correction unit221bmay combine the first and second correction data td1and td2of the first group Group1in the1frame1frame and the third and fourth correction data td3and td4of the second group Group2in the2frame2frame. Therefore, an after-correction frequency may allow correction data to be output at a frequency which is 1/n of a before-correction frequency. For example, the correction unit221bmay output the first to fourth correction data td1to td4, corrected in two frames received at the first frequency f1of 60 Hz, to the touch controller222at the second frequency f2of 30 Hz. According to an embodiment of the present disclosure, the correction unit221bmay receive pieces of sensing data from the input unit221aat the first frequency f1, output sensing data corresponding to the first region A1of the touch display panel100at the first frequency f1, and correct sensing data sd corresponding to the touch sensor TE disposed in the second region A2of the touch display panel100to output the corrected sensing data at the second frequency f2which is 1/n of the first frequency f1. The touch sensing device and the driving method thereof according to the present disclosure may not correct each sensing data and may perform correction by applying the same correction value to pieces of sensing data of a group including a plurality of touch sensors, and thus, a size of a readout IC may be reduced, thereby decreasing an area occupied by the readout IC. Moreover, the touch sensing device and the driving method thereof according to the present disclosure may correct sensing data to reduce a touch sensing defect caused by a non-uniform physical characteristic of touch sensors. The above-described feature, structure, and effect of the present disclosure are included in at least one embodiment of the present disclosure, but are not limited to only one embodiment. Furthermore, the feature, structure, and effect described in at least one embodiment of the present disclosure may be implemented through combination or modification of other embodiments by those skilled in the art. Therefore, content associated with the combination and modification should be construed as being within the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications and variations can be made in the present disclosure without departing from the spirit or scope of the disclosures. Thus, it is intended that the present disclosure covers the modifications and variations of this disclosure provided they come within the scope of the appended claims and their equivalents. | 26,985 |
11861105 | DETAILED DESCRIPTION OF THE EMBODIMENTS The present invention will be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. As those skilled in the art would realize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not restrictive, and like reference numerals designate like elements throughout the specification. Because the size and thickness of each configuration shown in the drawings are arbitrarily shown for better understanding and ease of description, the present invention is not limited thereto, and the thicknesses of portions and regions are exaggerated for clarity. In the drawings, the thickness of layers, films, panels, regions, etc., are exaggerated for clarity. In addition, in the drawings, the thickness of some layers and regions is exaggerated for better understanding and ease of description. It will be understood that when an element such as a layer, film, region, or substrate is referred to as being “on” another element, it can be directly on the other element or intervening elements may also be present. In contrast, when an element is referred to as being “directly on” another element, there are no intervening elements present. The word “on” or “above” means positioned on or below the object portion, and does not necessarily mean positioned on the upper side of the object portion based on a gravitational direction. In addition, unless explicitly described to the contrary, the word “comprise”, and variations such as “comprises” or “comprising” will be understood to imply the inclusion of stated elements but not the exclusion of any other elements. Hereinafter, a touch apparatus and a driving method thereof according to exemplary embodiments will be described with reference to necessary drawings. FIG.1is a schematic top plan view of a part of a display device including a touch apparatus according to an exemplary embodiment, andFIG.2is a cross-sectional view ofFIG.1, taken along the line I-I′. Referring toFIG.1andFIG.2, a display panel200may display arbitrary visual information, for example, text, video, photos, 2D or 3D images, and the like through the entire side. The type of the display panel200is not particularly limited as long as it can display an image. In the exemplary embodiment, the display panel200is exemplarily illustrated as a panel having an organic light emitting diode as a light emitting element. However, the type of the display panel200is not particularly limited thereto, and any display panel may be used within the limits corresponding to the concept of the present invention. The display panel200may have various shapes. For example, the display panel200may be formed in the shape of a rectangle having two pair of sides that parallel with each other. For better understanding and ease of description, the display panel200is illustrated as a rectangle having a pair of long sides and a pair of short sides. However, the shape of the display panel200is not limited thereto, and the display panel200may have various shapes. 
For example, the display panel 200 may have various shapes such as a polygon of a closed shape including a side of a straight line; a circle, an ellipse, and the like including a side made of a curved line; and a semi-circle, a half oval, and the like including a side made of a straight line and a curved line. At least a part of the corners of the display panel 200 may have a curved form. The display panel 200 may be wholly or at least partially flexible. The display panel 200 may display an image. The display panel 200 includes a display portion 204, and the display portion 204 may include a display area DA where an image is displayed and a non-display area NDA that is disposed at at least one side of the display area DA. For example, the non-display area NDA may surround the display area DA. A plurality of pixels PX may be located in the display area DA, and a driver 210 (refer to FIG. 3) that drives the plurality of pixels PX may be located in the non-display area NDA. The display area DA may have a shape corresponding to the shape of the display panel 200. For example, like the shape of the display panel 200, the display area DA may have various shapes such as a polygon of a closed shape including a side of a straight line; a circle, an ellipse, and the like including a side made of a curved line; and a semi-circle, a half oval, and the like including a side made of a straight line and a curved line. In the exemplary embodiment of the present invention, the display area DA is exemplarily formed in the shape of a rectangle. The display panel 200 may include a substrate 202 and the display portion 204 provided on the substrate 202. The substrate 202 may be formed of various materials, for example, glass, a polymer, metal, and the like. The substrate 202 may be an insulating substrate formed of, in particular, a high molecular organic material. An insulating substrate material including a polymer organic material includes polystyrene, polyvinyl alcohol, polymethyl methacrylate, polyethersulfone, polyacrylate, polyetherimide, polyethylene naphthalate, polyethylene terephthalate, polyphenylene sulfide, polyarylate, polyimide, polycarbonate, triacetate cellulose, cellulose acetate propionate, and the like. However, the material of the substrate 202 is not limited thereto, and for example, the substrate 202 may be formed of a fiberglass-reinforced plastic (FRP). The display portion 204 may be located on the substrate 202. The display portion 204 may display user input information or information provided to a user as an image. The display portion 204 may include a plurality of pixels PX. The plurality of pixels PX may be organic light emitting elements including an organic layer, but this is not restrictive, and they may be implemented in various forms, such as liquid crystal devices, electrophoretic devices, and electrowetting devices. Each pixel PX is a minimum unit displaying an image, and may include an organic light emitting element that emits white and/or colored light. Each pixel PX may emit light of any one of red, green, blue, and white, but this is not restrictive, and a pixel may emit light of cyan, magenta, yellow, and the like. Each pixel PX may include transistors (not shown) connected to a plurality of signal wires (not shown), and an organic light emitting diode electrically connected to the transistors. The touch panel 100 may be attached on the display portion 204 in the form of a separate panel or film, or may be integrally formed with the display portion 204.
The touch panel100may include a plurality of touch sensing units TS for detecting a location of a touch when there is a user's touch. The touch sensing unit TS may detect a touch using a mutual capacitance method or a self capacitance method. The touch panel100receives a driving signal from a touch controller102(refer toFIG.3). The touch controller102may receive a detection signal that is changed according to a user's touch, from the touch panel100. A window103may be disposed on the touch panel100. The window103may have a shape that corresponds to the shape of the display panel200, and may cover at least a part of the front side of the display panel200. For example, when the display panel200has a rectangular shape, the window103may also have a rectangular shape. Alternatively, when the display panel200has a circular shape, the window103may also have a circular shape. An image displayed on the display panel200is transmitted to the outside through the window103. The window103mitigates external impact to prevent damage or malfunction of the display panel200due to the external impact. External impact is a force from the outside, which can be expressed as pressure, stress, and the like, and may imply a force that causes a defect to occur in the display panel200. The window103may wholly or at least partially flexible. FIG.3is a block diagram of the display apparatus and a touch apparatus according to an exemplary embodiment. Referring toFIG.3, the display panel200is connected to a display driver210, and the touch panel100is connected to the touch controller102. The display driver210includes a scan driver and a data driver that supply signals to the pixels PX included in the display panel200. A signal controller220supplies a driving control signal and image data to the display driver210to control an image display operation of the display panel200. Specifically, the signal controller220may generate the driving control signal and the image data by using an image signal and data enable signal supplied from an external image source. For example, the signal controller220may receive an image signal and a control signal from an external source (not shown), and the control signal may include a vertical synchronization signal, which is a signal for distinguishing frame sections, a horizontal synchronization signal, which is a signal for distinguishing rows in one frame, and a data enable signal that is high level only for a section during which data is output, and clock signals. In addition, the driving control signal may include a scan driving control signal, a data driving control signal, and the like. The scan driver generates scan signals based on a scan driving control signal provided from the signal controller220, and outputs the scan signals to scan lines connected to the pixels PX. The data driver generates gray voltages according to the image data provided from the signal controller220based on the data driving control signal received from the signal controller220. The data driver outputs the gray voltages as data voltages to data lines connected to the pixels PX. Meanwhile, the scan driver may be simultaneously formed with the pixels PX through a thin film process. For example, the scan driver may be mounted in the non-display area NDA in the form of an amorphous silicon TFT gate driver circuit (ASG), or an oxide semiconductor TFT gate driver circuit (OSG). 
The touch controller102may generate a driving signal output to the touch panel100, and may receive a detection signal input from the touch panel100. In addition, the touch controller102may determine whether a touch is input to a touch screen, the number of touch inputs, and positions of the touch inputs using the driving signal and the detection signal. The touch controller102may receive a horizontal synchronization signal, a scan driving control signal, a data driving control signal, and the like from the signal controller220. The touch controller102may adjust a frequency of the driving signal provided to the touch panel100based on the horizontal synchronization signal. For example, the touch controller102may set the frequency of the driving signal by two or more integer times the frequency of the horizontal synchronization signal. In addition, the touch controller102may receive the detection signal from the touch panel100for a period during which the scan signal has a disable level based on at least one of the horizontal synchronization signal and the scan driving control signal. In addition, the touch controller102may receive the detection signal from the touch panel100for a period excluding a period during which the data signal is applied to the data line of the display portion200, based on at least one of the horizontal synchronization signal and the data driving control signal. In the exemplary embodiment ofFIG.3, the touch panel100and the display panel200are separated from each other, but the present invention is not limited thereto. For example, the touch panel100and the display panel200may be integrally manufactured. The touch panel100may be provided on at least an area of the display panel200. For example, the touch panel100may be provided to be overlapped with the display panel200on at least one side of the display panel200. For example, the touch panel100may be disposed on one side (e.g., a top surface) in a direction in which an image is emitted among both surfaces of the display panel200. In addition, the touch panel100may be directly formed on at least one side of both sides of the display panel200, or may be formed inside the display panel200. For example, the touch panel100may be directly formed on an external side of an upper substrate (or an encapsulation layer) or an external side of a lower substrate (e.g., the top surface of the upper substrate or the bottom surface of the lower substrate), or may be directly formed on an internal side of the upper substrate or an internal side of the lower substrate (e.g., the bottom surface of the upper substrate or the top surface of the lower substrate). When the touch panel100is directly formed on the encapsulation layer of the display panel200, the entire thickness of the encapsulation layer may be about 4 μm to about 10 μm. The touch panel100includes an active area AA where a touch input can be sensed, and an inactive area NAA that surrounds at least a part of the active area AA. Depending on exemplary embodiments, the active area AA may be disposed corresponding to the display area DA of the display panel200, and the inactive area NAA may be disposed corresponding to the non-display area NDA of the display panel200. For example, the active area AA of the touch panel100may overlap the display area DA of the display panel200, and the inactive area NAA of the touch panel100may overlap the non-display area NDA of the display panel200. 
Depending on exemplary embodiments, a plurality of touch sensing units TS are arranged in the active area AA. That is, the active area AA may be a touch sensing area where a user's touch input can be sensed. The plurality of touch sensing units TS include at least one touch electrode for detecting a touch input, for example, in case of mutual capacitance, and includes a plurality of first touch electrodes111-1to111-mofFIG.4and a plurality of second touch electrodes121-1to121-nofFIG.4. Specifically, the one touch sensing unit TS may be a unit for detecting a change in capacitance formed by crossing one first touch electrode and one second touch electrode. In case of self capacitance, the plurality of touch sensing units TS include a plurality of touch electrodes arranged in a matrix format. Specifically, one touch sensing unit TS may be a unit for detecting a change in capacitance of one touch electrode. Depending on exemplary embodiments, at least one touch electrode may be provided on the display area DA of the display panel200. In this case, at least one touch electrode may overlap at least one of electrodes and wires provided in the display panel200on a plane. For example, when the display panel200is provided as an organic light emitting display panel, at least one touch electrode may at least overlap a cathode, a data line, a scan line, and the like. When the display panel200is a liquid crystal display panel, at least one touch electrode may at least overlap a common electrode, a data line, a gate line, and the like. As described, when the touch panel100is coupled with the display panel200, parasitic capacitance is generated between the touch panel100and the display panel200. For example, at least one touch electrode of the touch panel100may be disposed to be overlapped with at least one of the electrodes and the wires of the display panel200on a plane, and accordingly, the parasitic capacitance is generated between the touch panel100and the display panel200. Due to coupling of the parasitic capacitance, a signal of the display panel200may be transmitted to the touch sensor, particularly, the touch panel100. For example, a noise signal due to a display driving signal (e.g., a data signal, a scan signal, a light emission control signal, and the like) applied to the display panel200may be introduced into the touch panel100. In the touch apparatus according to the exemplary embodiment, the display panel200may be an organic light emitting display panel having a thin film encapsulation layer, and the touch panel100may be formed of on-cell type of sensor electrodes such that at least one touch electrode is directly formed on one side (e.g., the top surface) of the thin film encapsulation layer. In this case, at least one of electrodes and wires provided in the organic light emitting display panel, and at least one touch electrode are disposed adjacent to each other. Accordingly, the noise signal according to display driving may be transmitted to the touch panel100with relatively high intensity. The noise signal transmitted to the touch panel100causes a ripple of the detection signal, and accordingly, sensitivity of the touch sensor may be deteriorated. Accordingly, in the present disclosure, various exemplary embodiments capable of improving the sensitivity of the touch sensor will be provided, and detailed description thereof will be described later. 
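As a simple illustration of how a touch sensing unit TS at the crossing of a first and a second touch electrode registers a touch, the sketch below compares raw mutual-capacitance readings against a baseline and reports the crossings whose change exceeds a threshold. The baseline subtraction, the threshold, and the function name are assumptions; the disclosure only states that a change in capacitance at the crossing is detected.

# Illustrative mutual-capacitance scan over the grid of touch sensing units TS
# formed by crossings of first and second touch electrodes (assumed method).
def find_touched_units(raw, baseline, threshold):
    """raw, baseline: per-crossing capacitance readings (list of rows).
    Returns the (row, column) indices whose capacitance change exceeds threshold."""
    touched = []
    for i, (raw_row, base_row) in enumerate(zip(raw, baseline)):
        for j, (r, b) in enumerate(zip(raw_row, base_row)):
            if abs(r - b) > threshold:
                touched.append((i, j))
    return touched

baseline = [[100, 100, 100], [100, 100, 100]]
raw      = [[100,  78, 100], [100,  95, 100]]
print(find_touched_units(raw, baseline, threshold=10))  # [(0, 1)]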
FIG.4is a schematic view of a touch apparatus according to an exemplary embodiment, andFIG.5shows an example in which a stylus pen touches the touch apparatus according to the exemplary embodiment. Referring toFIG.4, a touch apparatus10according to an exemplary embodiment includes a touch panel100, and a touch controller102that controls the touch panel100. The touch controller102may include first and second driver/receivers110and120that transmit and receive a signal to and from the touch panel100, and a controller130. The touch panel100includes a plurality of first touch electrodes111-1to111-mextending in a first direction, and a plurality of second touch electrodes121-1to121-nextending in a second direction that crosses the first direction. In the touch panel100, the plurality of first touch electrodes111-1to111-mmay be arranged along the second direction, and the plurality of second touch electrodes121-1to121-nmay be arranged in the first direction. InFIG.4, the touch panel100is illustrated to have a quadrangle shape, but this is not restrictive. As shown inFIG.5, the touch panel100includes a substrate105(e.g., an external side of an upper substrate (or an encapsulation layer) of the display panel200) and a window103. The plurality of first touch electrodes111-1to111-mand the plurality of second touch electrodes121-1to121-nmay be disposed on the substrate105. In addition, the window103may be disposed on the plurality of first touch electrodes111-1to111-mand the plurality of second touch electrodes121-1to121-n. InFIG.5, the plurality of first touch electrodes111-1to111-mand the plurality of second touch electrodes121-1to121-nare disposed in the same layer, but they may be disposed in different layers, and this is not restrictive. The plurality of first touch electrodes111-1to111-mare connected to the first driver/receiver110, and the plurality of second touch electrodes121-1to121-nare connected to the second driver/receiver120. InFIG.4, the first driver/receiver110, the second driver/receiver120, and the controller130are separated from each other, but they may be implemented as a single module, unit, and chip, and this is not restrictive. The first driver/receiver110may apply a driving signal to the plurality of first touch electrodes111-1to111-m. In addition, the first driver/receiver110may receive a detection signal from the plurality of first touch electrodes111-1to111-m. The second driver/receiver120may apply a driving signal to the plurality of second touch electrodes121-1to121-n. In addition, the second driver/receiver120may receive a detection signal from the plurality of second touch electrodes121-1to121-n. That is, the first driver/receiver110and the second driver/receiver120may be transceivers that transmit and receive signals, and may respectively include drivers and receivers. The driving signal may include a signal (e.g., a sine wave, a square wave, and the like) having a frequency that corresponds to a resonance frequency of the stylus pen20. The resonance frequency of the stylus pen20depends on a designed value of a resonance circuit portion23of the stylus pen20. The touch apparatus10may be used to sense a touch input (i.e., direct touch or proximity touch) by a touch object. As shown inFIG.5, a touch input of the stylus pen20that is close to the touch panel100may be sensed by the touch apparatus10. The stylus pen20may include a conductive tip21, a resonance circuit portion23, a ground25, and a body27. 
The conductive tip21may be at least partially formed of a conductive material (e.g., a metal, conductive rubber, fabric, conductive silicon, and the like), and may be electrically connected to the resonance circuit23. The resonance circuit23is an LC resonance circuit, and may resonate with a driving signal applied to all electrodes of at least one type of the plurality of first touch electrodes111-1to111-mand the plurality of second touch electrodes121-1to121-nfrom at least one of the first driver/receiver110and the second driver/receiver120through the conductive tip21. A resonance signal generated from the resonance circuit23resonated with the driving signal may be output to the touch panel100through the conductive tip21. The resonance signal due to resonance of the resonance circuit23may be transmitted to the conductive tip21in a section during which the driving signal is applied to all electrodes of at least one type of the plurality of first touch electrodes111-1to111-mand the plurality of second touch electrodes121-1to121-nand a section thereafter. The resonance circuit23is disposed in the body27, and may be electrically connected to the ground25. Such a stylus pen20generates a resonance signal in response to the driving signal applied to at least one of the touch electrodes111-1to111-mand121-1to121-nsuch that a touch input can be generated. Capacitance Cx is formed by at least one of the touch electrodes111-1to111-mand121-1to121-n, and the conductive tip21of the stylus pen20. The driving signal may be transmitted to the stylus pen20and the resonance signal may be transmitted to the touch panel100through the capacitance Cx formed between at least one of the touch electrodes111-1to111-mand121-1to121-n, and the conductive tip21. The touch apparatus10may detect a touch made by a touch object (e.g., a user's body part (finger, palm, etc.) or a passive or active type of stylus pen) in addition to the stylus pen20of the above-described type, which generates the resonance signal, but this is not restrictive. For example, the touch apparatus10detects a touch made by a stylus pen that receives an electrical signal and outputs the electrical signal as a magnetic field signal. For example, the touch apparatus10may further include a digitizer. A magnetic field signal, which is electromagnetically resonated (or induced by an electron group) by a stylus pen, is detected by the digitizer, whereby a touch can be detected. Alternatively, the touch apparatus10detects a touch by a stylus pen that receives a magnetic field signal and outputs the magnetic field signal as a resonated magnetic field signal. For example, the touch apparatus10may further include a coil that applies a current as a driving signal, and a digitizer. The stylus pen resonates with a magnetic field signal generated from the coil to which the current is applied. In the stylus pen, the magnetic field signal resonated with an electromagnetically resonated (or electromagnetically induced) signal is detected by the digitizer whereby a touch can be detected. The controller130controls the overall driving of the touch apparatus10, and may output touch information using detection signals transmitted from the first driver/receiver110and the second driver/receiver120. Next, referring toFIG.6, a driving method of the touch apparatus according to an exemplary embodiment will be described. FIG.6is a flowchart of a driving method of the touch apparatus according to an exemplary embodiment. 
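Since the resonance circuit portion 23 is described as an LC resonance circuit, its resonance frequency follows the standard relation f = 1/(2π√(LC)). The short sketch below evaluates that relation; the component values are arbitrary illustrations, not values from the disclosure.

# Resonance frequency of an LC circuit: f = 1 / (2 * pi * sqrt(L * C)).
import math

def lc_resonance_frequency(inductance_h, capacitance_f):
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# e.g. 1 mH with 1.4 nF resonates at roughly 134.5 kHz (illustrative values)
print(f"{lc_resonance_frequency(1e-3, 1.4e-9) / 1e3:.1f} kHz")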
In a first section, the touch apparatus 10 is driven in a first mode (S10). The first mode is a mode in which a driving signal for detection of a touch input by a touch object other than the stylus pen 20 is applied to the touch panel 100. For example, in the first mode, the first driver/receiver 110 outputs a driving signal to the plurality of first touch electrodes 111-1 to 111-m, and the second driver/receiver 120 receives a detection signal according to a touch from the plurality of second touch electrodes 121-1 to 121-n. The controller 130 determines whether the detection signal is a valid touch signal based on whether the intensity of the detection signal acquired in the first section exceeds a first threshold, and acquires touch coordinate information by using the valid touch signal. For example, the controller 130 calculates touch coordinates by using the detection signal when the intensity of the detection signal acquired in the first section exceeds the first threshold. When the intensity of the detection signal acquired in the first section is less than the first threshold, the controller 130 does not calculate touch coordinates according to a detection signal whose intensity is less than the first threshold. In addition, when the intensity of the detection signal acquired in the first section exceeds the first threshold, the controller 130 may calculate a touch area by using the detection signal. The detection signal acquired in the first section includes at least one of a first detection signal by a user's body part (e.g., finger, palm, and the like) and a second detection signal by the stylus pen 20 or a passive type of stylus pen. The first threshold value may be set such that the first detection signal is determined as a valid touch signal and the second detection signal is filtered. In a first sub-section of a second section, the touch apparatus 10 is driven in a second mode (S12). The second mode is a mode in which a driving signal for detecting a touch input by the stylus pen 20 is applied to the touch panel 100. For example, the first driver/receiver 110 simultaneously applies a driving signal to all of the plurality of first touch electrodes 111-1 to 111-m. Although it is described that the first driver/receiver 110 applies the driving signal to all of the plurality of first touch electrodes 111-1 to 111-m in the first sub-section, this is not restrictive. For example, the first driver/receiver 110 may apply the driving signal to at least one of the plurality of first touch electrodes 111-1 to 111-m in the first sub-section of the second section. Alternatively, the first driver/receiver 110 may simultaneously apply the driving signal to all of the plurality of first touch electrodes 111-1 to 111-m in the first sub-section of the second section. Alternatively, the second driver/receiver 120 may simultaneously apply the driving signal to at least one of the plurality of second touch electrodes 121-1 to 121-n in the first sub-section of the second section. Alternatively, the second driver/receiver 120 may simultaneously apply the driving signal to all of the plurality of second touch electrodes 121-1 to 121-n in the first sub-section of the second section. Alternatively, the first driver/receiver 110 and the second driver/receiver 120 may simultaneously apply the driving signal to at least one of the plurality of first touch electrodes 111-1 to 111-m and at least one of the plurality of second touch electrodes 121-1 to 121-n in the first sub-section of the second section.
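The first-mode validity check can be sketched as a simple threshold filter: only detection signals whose intensity exceeds the first threshold are kept as valid (finger) touches, while the weaker signals attributable to the stylus pen 20 or a passive pen are filtered out. The threshold and intensity values below are illustrative assumptions.

# Sketch of the first-mode validity check: keep only detection signals whose
# intensity exceeds the first threshold (values are illustrative).
FIRST_THRESHOLD = 50

def valid_first_mode_touches(detections):
    """detections: list of (x_index, y_index, intensity) from the first section.
    Returns only the entries whose intensity exceeds the first threshold."""
    return [d for d in detections if d[2] > FIRST_THRESHOLD]

detections = [
    (3, 7, 120),   # finger-like signal: kept as a valid touch
    (9, 2, 18),    # stylus/passive-pen-like signal: filtered out
]
print(valid_first_mode_touches(detections))   # [(3, 7, 120)]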
The first driver/receiver 110 and the second driver/receiver 120 may simultaneously apply the driving signal to all of the plurality of first touch electrodes 111-1 to 111-m and all of the plurality of second touch electrodes 121-1 to 121-n. When the first driver/receiver 110 and the second driver/receiver 120 simultaneously apply the driving signal to all of the plurality of first touch electrodes 111-1 to 111-m and all of the plurality of second touch electrodes 121-1 to 121-n, the driving signal applied to the plurality of first touch electrodes 111-1 to 111-m and the driving signal applied to the plurality of second touch electrodes 121-1 to 121-n may have the same phase or different phases. A frequency of the driving signal applied to the touch panel 100 in the first section is assumed to be lower than a frequency of the driving signal applied to the touch panel 100 in the first sub-section. In addition, in the first sub-section, a frequency of the driving signal applied to the touch panel 100 may be two or more integer times a frequency of the horizontal synchronization signal of the signal controller. In a second sub-section of the second section, the touch apparatus 10 receives a detection signal that is resonated based on the driving signal at least once (S14). For example, the resonance circuit portion 23 of the stylus pen 20 resonates with the driving signal such that a resonance signal is generated and transmitted to the touch panel 100 through the conductive tip 21. In the exemplary embodiment, the first driver/receiver 110 receives at least one detection signal transmitted from the plurality of first touch electrodes 111-1 to 111-m, and the second driver/receiver 120 also receives at least one detection signal transmitted from the plurality of second touch electrodes 121-1 to 121-n. In this case, the first driver/receiver 110 and the second driver/receiver 120 may receive the detection signals at the same timing. In addition, the first driver/receiver 110 and the second driver/receiver 120 may process the received detection signals and transmit the processed detection signals to the controller 130. In the above description, in the second sub-section, the first driver/receiver 110 receives the detection signals transmitted from the plurality of first touch electrodes 111-1 to 111-m, and the second driver/receiver 120 also receives the detection signals from the plurality of second touch electrodes 121-1 to 121-n. However, this is not restrictive. In the second sub-section of the second section, the first driver/receiver 110 may receive a detection signal from at least one of the plurality of first touch electrodes 111-1 to 111-m, and the second driver/receiver 120 may also receive a detection signal from at least one of the plurality of second touch electrodes 121-1 to 121-n. Alternatively, in the second sub-section of the second section, only the first driver/receiver 110 may receive a detection signal from at least one of the plurality of first touch electrodes 111-1 to 111-m, or only the second driver/receiver 120 may receive a detection signal from at least one of the plurality of second touch electrodes 121-1 to 121-n. That is, the detection signal receiving operation of the first driver/receiver 110 and the second driver/receiver 120 is not limited thereto.
Alternatively, in the second sub-section, the first driver/receiver110may receive a detection signal from at least one of the plurality of first touch electrodes111-1to111-mor may receive detection signals from all of the plurality of first touch electrodes111-1to111-m, and the second driver/receiver120may also receive a detection signal from at least one of the plurality of second touch electrodes121-1to121-nor may receive detection signals from all of the plurality of second touch electrodes121-1to121-n. The controller130generates touch information by using, among the detection signals received at least once by the first driver/receiver110and the second driver/receiver120, the detection signals received in a period determined in response to a horizontal synchronization signal. In another exemplary embodiment, the first driver/receiver110is synchronized with the horizontal synchronization signal and receives detection signals transmitted from the plurality of first touch electrodes111-1to111-m, and the second driver/receiver120is also synchronized with the horizontal synchronization signal and receives detection signals transmitted from the plurality of second touch electrodes121-1to121-n. In addition, the first driver/receiver110and the second driver/receiver120process the received detection signals and may transmit the processed signals to the controller130. The controller130is synchronized with the horizontal synchronization signal and generates touch information by using the detection signals received by the first driver/receiver110and the second driver/receiver120. The controller130determines whether the detection signal is a valid touch signal based on whether the intensity of the detection signal acquired in the second sub-section exceeds a second threshold value, and may acquire touch coordinate information at a location where a touch of the stylus pen20is made by using the valid touch signal. For example, the controller130calculates touch coordinates by using a detection signal acquired in the second sub-section when the intensity of the detection signal exceeds the second threshold value. When the intensity of the detection signal acquired in the second sub-section is less than the second threshold value, the controller130does not calculate touch coordinates according to a detection signal of which intensity is less than the second threshold value. In addition, when the intensity of the detection signal acquired in the second sub-section exceeds the second threshold value, the controller130may calculate a touch area by using the detection signal. Next, referring toFIG.7, driving signals applied in first and second sections, a resonance signal of the stylus pen20, and detection signals will be described. FIG.7is an exemplary timing diagram of a horizontal synchronization signal Hsync and a driving signal according to the driving method ofFIG.6. One touch report frame period according to a touch report rate includes a first section T1and a second section T2. The touch report rate refers to a speed or frequency (Hz) at which the touch apparatus10outputs touch data obtained by driving touch electrodes to an external host system. In the first section T1, the first driver/receiver110outputs driving signals to touch electrodes of at least one type of the plurality of first touch electrodes111-1to111-mand the plurality of second touch electrodes121-1to121-n. 
When the first driver/receiver110outputs the driving signal to the plurality of first touch electrodes111-1to111-m, the second driver/receiver120may receive detection signals from the plurality of second touch electrodes121-1to121-n. The controller130may acquire touch coordinate information based on signal intensity of the detection signal. In a first sub-section T21of the second section T2, the first driver/receiver110simultaneously applies driving signals to the plurality of first touch electrodes111-1to111-m, and the second driver/receiver120simultaneously applies driving signals to the plurality of second touch electrodes121-1to121-n. In the first sub-section T21, frequencies of the driving signals applied to the plurality of first touch electrodes111-1to111-mand the plurality of second touch electrodes121-1to121-ncorrespond to a resonance frequency of the stylus pen20. For example, during the first sub-section T21, the frequency of the driving signal output to the plurality of first touch electrodes111-1to111-mand the plurality of second touch electrodes121-1to121-nmay be two or more integer times a frequency of the horizontal synchronization signal. On the contrary, in the first section T1, the frequency of the driving signal output to the plurality of first touch electrodes111-1to111-mis different from the frequency of the horizontal synchronization signal. Such frequency setting of the driving signal is an example, and it may be set to a different value from the above-stated value. Specifically, the controller130may receive a horizontal synchronization signal Hsync, a scan driving control signal, a data driving control signal, and the like from the signal controller220. Then, the controller130sets a frequency of a driving signal supplied to the touch panel100based on the horizontal synchronization signal Hsync, and may synchronize the driving signal with the horizontal synchronization signal Hsync. For example, the controller130may set the frequency of the driving signal to two or more integer times the frequency of the horizontal synchronization signal Hsync. Then, the resonance frequency of the stylus pen20may be designed to have two or more integer times the frequency of the horizontal synchronization signal Hsync. The controller130may synchronize the driving signal at pulses of the horizontal synchronization signal Hsync. In a second sub-section T22of the second section T2, the first driver/receiver110is synchronized with each pulse of the horizontal synchronization signal Hsync and thus receives detection signals from the plurality of first touch electrodes111-1to111-m, and the second driver/receiver120is synchronized with each pulse of the horizontal synchronization signal Hsync and thus receives detection signals from the plurality of second touch electrodes121-1to121-n. In addition, in the second sub-section T22, the first driver/receiver110and the second driver/receiver120may receive detection signals at least once. In the second sub-section T22where the driving signal is no longer applied, the resonance signal output by the resonance circuit23of the stylus pen20may be received by at least one of the plurality of first touch electrodes111-1to111-mand the plurality of second touch electrodes121. A pulse cycle of the horizontal synchronization signal Hsync is 1 horizontal period 1H required for writing data to pixels PX of one row. 
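A minimal sketch of the frequency setting described above is given below, assuming hypothetical numbers for the horizontal synchronization rate and the stylus resonance frequency. It merely picks an integer multiple (two or more) of the Hsync frequency that is closest to the resonance frequency; it is not the embodiment's actual implementation, and the function and variable names are illustrative.

#include <stdio.h>

/* Choose a driving-signal frequency for the first sub-section T21 as an
 * integer multiple (>= 2) of the horizontal synchronization frequency, as
 * close as possible to the stylus resonance frequency. */
static double pick_driving_frequency(double hsync_hz, double resonance_hz,
                                     int *multiple_out)
{
    int n = (int)(resonance_hz / hsync_hz + 0.5);   /* nearest integer multiple */
    if (n < 2)
        n = 2;                                      /* at least two times Hsync */
    *multiple_out = n;
    return n * hsync_hz;
}

int main(void)
{
    double hsync_hz = 135000.0;      /* hypothetical horizontal sync rate */
    double resonance_hz = 400000.0;  /* hypothetical pen resonance frequency */
    int n;
    double drive_hz = pick_driving_frequency(hsync_hz, resonance_hz, &n);
    printf("drive at %d x Hsync = %.0f Hz (pen resonance %.0f Hz)\n",
           n, drive_hz, resonance_hz);
    return 0;
}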
After each pulse of the horizontal synchronization signal Hsync is generated, a data signal may be written into pixels during a data writing period TA. The data writing period refers to a period during which a data signal is applied to the data line for writing the data signal to the pixels PX, and a scan signal is applied to the scan line. Since the data line and the scan line form parasitic capacitance with touch electrodes, a voltage applied to the data line and scan line during the data writing period TA causes noise in the detection signal transmitted to the touch electrode. In the exemplary embodiment, the controller130may generate touch information by using a detection signal received in a noise free period TB, excluding the data writing period TA. The data writing period TA and the noise free period TB may be set differently according to the display device and the driving method of the display device. Specifically, at each of a plurality of sampling times during the second sub-section T22, the first driver/receiver110receives detection signals from the plurality of first touch electrodes111-1to111-mand the second driver/receiver120receives detection signals from the plurality of second touch electrodes121-1to121-n. The controller130generates a receiving signal by using detection signals received at sampling times in the noise free period TB. For example, when the controller130receives only the horizontal synchronization signal Hsync, the controller130may determine, as the data writing period TA, a period from a predetermined first time to a predetermined second time after a time when a pulse of the horizontal synchronization signal Hsync occurs, where the predetermined second time exceeds the predetermined first time; however, this is not restrictive, and the data writing period TA can be variously set according to the driving method of the display device. Then, the controller130generates a receiving signal by using signals other than detection signals sampled during the data writing period TA. Alternatively, when the controller130receives a scan driving control signal, the controller130may determine a period in which the detection signal has a disable level from the scan driving control signal as the data writing period TA. Then, the controller130generates a receiving signal by using signals other than detection signals sampled during the data writing period TA. Alternatively, when the controller130receives a data driving control signal, the controller130may determine a period in which the data signal is applied to the data line from the data driving control signal as the data writing period TA. Then, the controller130generates a receiving signal by using signals other than detection signals sampled during the data writing period TA. In another exemplary embodiment, the first driver/receiver110and the second driver/receiver120preferably receive detection signals during the noise free period TB, excluding the data writing period TA. Specifically, the first driver/receiver110receives detection signals from the plurality of first touch electrodes111-1to111-mduring the noise free period TB, excluding the data writing period TA. Similarly, the second driver/receiver120receives detection signals from the plurality of second touch electrodes121-1to121-n. That is, the controller130may receive a detection signal from the touch panel100for a period during which the scan signal has a disable level, based on at least one of the horizontal synchronization signal Hsync and the scan driving control signal. 
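The selection of detection samples outside the data writing period TA can be sketched as follows; the window boundaries, sample times, and sample values in this C snippet are hypothetical and only illustrate discarding samples that fall inside TA while averaging those taken in the noise free period TB.

#include <stdio.h>
#include <stdbool.h>

/* Times are in nanoseconds relative to the most recent Hsync pulse. The
 * window [TA_START_NS, TA_END_NS] models the data writing period TA; the
 * values are hypothetical and would depend on the display driving method. */
#define TA_START_NS  500
#define TA_END_NS   4500

static bool in_noise_free_period(long t_since_hsync_ns)
{
    return t_since_hsync_ns < TA_START_NS || t_since_hsync_ns > TA_END_NS;
}

/* Build a receiving value from the samples taken in the noise free period
 * TB only; samples taken during TA are discarded. */
static double average_noise_free(const long *t_ns, const int *sample, int n)
{
    long sum = 0;
    int used = 0;
    for (int i = 0; i < n; ++i) {
        if (!in_noise_free_period(t_ns[i]))
            continue;                /* sample falls inside TA: discard */
        sum += sample[i];
        ++used;
    }
    return used ? (double)sum / used : 0.0;
}

int main(void)
{
    long t[6] = { 200, 1000, 2500, 5000, 6000, 7000 };
    int  s[6] = {  10,  400,  420,   12,   11,   13 };
    printf("receiving value = %.1f\n", average_noise_free(t, s, 6));
    return 0;
}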
When the controller130receives a scan driving control signal, the controller130may determine a period during which the scan signal has a disable level. When the controller130receives only the horizontal synchronization signal Hsync, the controller130may determine, as a period during which the scan signal has the disable level, a period from a predetermined third time to a predetermined fourth time after a time at which a pulse of the horizontal synchronization signal Hsync is generated, where the predetermined fourth time exceeds the predetermined third time; however, this is not restrictive and the period may be variously set according to a driving method of the display device. In addition, the controller130may receive detection signals from the touch panel100during a period excluding a period during which the data signal is applied to the data line of the display portion200based on at least one of the horizontal synchronization signal Hsync and the data driving control signal. When the controller130receives the data driving control signal, the controller130may determine a period during which the data signal is applied to the data line from the data driving control signal. When the controller130receives only the horizontal synchronization signal Hsync, the controller130may determine, as the period during which the data signal is applied to the data line, a period from a predetermined fifth time to a predetermined sixth time after a time at which a pulse of the horizontal synchronization signal Hsync is generated, where the predetermined sixth time exceeds the predetermined fifth time; however, this is not restrictive and the period may be variously set according to a driving method of the display device. The second section T2includes a first sub-section T21and a second sub-section T22. For example, in the second section T2, a combination of the first sub-section T21and the second sub-section T22may repeat eight times. In the above, it was described that the second section T2exists after the first section T1, but the first section T1may exist after the second section T2, time lengths of the first section T1and the second section T2may be changed during a plurality of touch report frames, and the driving method of the touch apparatus10of the present exemplary embodiment is not limited thereto. Next, referring toFIG.8toFIG.10, the first and second driver/receivers110and120of the touch apparatus10will be described in detail. FIG.8shows the touch apparatus10operating in the first section T1in more detail. As shown inFIG.8, a first driver1110of the first driver/receiver110includes a plurality of amplifiers112-1to112-m. The plurality of amplifiers112-1to112-mare connected with the plurality of first touch electrodes111-1to111-mand output a first driving signal. A second receiver1200includes a plurality of amplifiers123-1to123-n, an ADC125, and a signal processor (DSP)127. The second receiver1200may sequentially receive detection signals of the plurality of second touch electrodes121-1to121-nin units of a single second touch electrode. Alternatively, the second receiver1200may simultaneously receive the detection signals through the plurality of second touch electrodes121-1to121-n. Each of the plurality of amplifiers123-1to123-nis connected to a corresponding second touch electrode among the plurality of second touch electrodes121-1to121-n. 
Specifically, each of the plurality of amplifiers123-1to123-nmay be implemented as an amplifier of which one of two input ends is connected with the ground or a DC voltage and a detection signal is input to the other input end. Each of the plurality of amplifiers123-1to123-namplifies and outputs detection signals transmitted from the plurality of second touch electrodes121-1to121-nin parallel. The ADC unit125converts the amplified detection signal into a digital signal. In addition, the signal processor127processes a plurality of amplified signals, which are converted into digital signals, and then transmits the processed signals to the controller130. Next,FIG.9shows the touch apparatus10operating in a first sub-section T21of the second section T2. As shown in the drawing, a plurality of amplifiers112-1to112-mof the first driver1110are connected to the plurality of first touch electrodes111-1to111-mand output second driving signals. A second driver1210also includes a plurality of amplifiers122-1to122-n. The plurality of amplifiers122-1to122-nare connected to the plurality of second touch electrodes121-1to121-nand output third driving signals. Next,FIG.10shows the touch apparatus10operating in a second sub-section T22of the second section T2. As shown in the drawing, a first receiver1100includes a plurality of differential amplifiers113-1to113-i, an ADC unit115, and a signal processor (DSP)117. A second receiver1200includes a plurality of differential amplifiers123-1to123-j, an ADC unit125, and a signal processor (DSP)127. The plurality of differential amplifiers113-1to113-iand123-1to123-jmay be formed by changing connections of input ends of the plurality of amplifiers123-1to123-n. That is, i+j≤n. Specifically, an input end connected with the ground or a DC voltage among two input ends of an amplifier123-1is connected to a second touch electrode121-4, and an input end connected with the ground or the DC voltage among two input ends of an amplifier123-1is connected to a second touch electrode121-5, such that two touch electrodes may be connected to one amplifier. The input ends of each of the differential amplifiers113-1to113-iand123-1to123-jare connected with two touch electrodes that are disposed apart from each other by at least one touch electrode. Each of the differential amplifiers113-1to113-iand123-1to123-jmay differentially amplify two detection signals transmitted from touch electrodes and output the differentially amplified signals. Since each of the differential amplifiers113-1to113-iand123-1to123-jreceives detection signals from two touch electrodes and differentially amplifies the received signals, the differential amplifiers113-1to113-iand123-1to123-jare not saturated even though a driving signal is simultaneously applied to a plurality of touch electrodes. Each of the differential amplifiers113-1to113-iand123-1to123-jmay receive detection signals from two touch electrodes that are separated from each other rather than two touch electrodes that are adjacent to each other. For example, each of the differential amplifiers113-1to113-iand123-1to123-jreceives detection signals from two touch electrodes that are disposed apart from each other, while disposing one or more touch electrodes therebetween. InFIG.10, a differential amplifier113-1receives touch signals from a touch electrode111-1and a touch electrode111-5. 
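One way to read the differential connection described above is sketched below in C. The electrode count, the pairing offset of four electrodes, and the sample values are assumptions chosen for illustration; the point is that subtracting the signals of two electrodes spaced apart removes the large common component caused by the simultaneously applied driving signal, so the amplifier chain is not saturated, while a touch near either electrode of a pair still produces a large difference.

#include <stdio.h>

#define NUM_ELECTRODES 8   /* n second touch electrodes (hypothetical) */
#define PAIR_OFFSET    4   /* pair electrode k with electrode k+4      */

/* Form differential outputs from electrodes that are spaced apart by
 * PAIR_OFFSET, mimicking amplifiers whose two inputs are connected to two
 * non-adjacent electrodes. The number of differential channels does not
 * exceed the number of electrodes, consistent with i+j<=n. */
static int form_differential_outputs(const int *raw, int n, int *diff)
{
    int count = 0;
    for (int k = 0; k + PAIR_OFFSET < n; ++k)
        diff[count++] = raw[k] - raw[k + PAIR_OFFSET];
    return count;
}

int main(void)
{
    /* Large common offset from the driving signal plus a touch near
     * electrode index 1. */
    int raw[NUM_ELECTRODES] = { 1000, 1350, 1010, 1000, 1000, 1005, 1000, 1000 };
    int diff[NUM_ELECTRODES];
    int m = form_differential_outputs(raw, NUM_ELECTRODES, diff);
    for (int k = 0; k < m; ++k)
        printf("diff[%d] = %d\n", k, diff[k]);
    return 0;
}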
When the differential amplifier113-1receives detection signals from two touch electrodes that are adjacent to each other (e.g., a first touch electrode111-1and a first touch electrode111-2), detection signals by a touch in an area between the first touch electrode111-1and the first touch electrode111-2do not have sufficiently large values even though they are differentially amplified by the differential amplifier113-1. Therefore, when the differential amplifier113-1is connected to two touch electrodes that are adjacent to each other, touch sensitivity is deteriorated. However, since the differential amplifier113-1receives the detection signals from the first touch electrode111-1and the first touch electrode111-5, the differential amplifier113-1can differentially amplify the detection signals so that the detection signal from the touch electrode at the position where the touch is input has a sufficiently large value, and the touch sensitivity can be improved. Each of the ADC units115and125converts the differentially amplified detection signal to a digital signal. In addition, each of the signal processors117and127processes a plurality of differentially amplified signals converted into digital signals, and transmits the processed signals to the controller130. Next, referring toFIG.11toFIG.13, one aspect of the display device will be described. FIG.11schematically illustrates one aspect of the display device ofFIG.3,FIG.12shows a pixel of the display device ofFIG.11, andFIG.13is a timing diagram that shows an example of a driving signal that drives the display device ofFIG.11. As shown inFIG.11, a display device includes a display panel200including a plurality of pixels PX, a data driver211, a scan driver212, and a signal controller221. The display panel200includes a plurality of pixels PX arranged substantially in a matrix format. Although it is not specifically limited, a plurality of scan lines S1to Si extend in a row direction in the alignment format of the pixels PX and they are almost parallel with each other, and a plurality of data lines D1to Dj extend substantially in a column direction and they are almost parallel with each other. Each of the plurality of pixels PX is connected to a corresponding scan line among the plurality of scan lines S1to Si connected to the display panel200and a corresponding data line among the plurality of data lines D1to Dj connected to the display panel200. In addition, although it is not directly illustrated in the display panel200ofFIG.11, each of the plurality of pixels PX is connected with a power source connected to the display panel200and receives a first power source voltage ELVDD and a second power source voltage ELVSS. Each of the plurality of pixels PX emits light by a driving current supplied to an organic light emitting diode according to a corresponding data signal transmitted through a corresponding data line among the plurality of data lines D1to Dj. The scan driver212generates and transmits a scan signal corresponding to each pixel through a corresponding scan line among the plurality of scan lines S1to Si. That is, the scan driver212transmits a scan signal to each of the plurality of pixels included in each pixel row through a corresponding scan line. The scan driver212generates a plurality of scan signals by receiving a scan driving control signal CONT2from the signal controller221, and sequentially supplies scan signals to the plurality of scan lines S1to Si connected to each pixel row. 
In addition, the scan driver212generates a common control signal, and supplies the common control signal to common control lines connected to the plurality of pixels PX. The data driver211transmits a data signal to each pixel through a corresponding data line among the plurality of data lines D1to Dj. The data driver211receives a data driving control signal CONT1from the signal controller221, and supplies a data signal through a corresponding data line among the plurality of data lines D1to Dj connected to the plurality of pixels included in each pixel row. The signal controller221converts an image signal transmitted from the outside into image data DATA, and transmits the image data DATA to the data driver211. The signal controller221receives an external control signal such as a vertical synchronization signal Vsync, a horizontal synchronization signal Hsync, a clock signal, a data enable signal, and the like, generates control signals for controlling driving of the scan driver212and the data driver211, and transmits the control signals to the scan driver212and the data driver211, respectively. That is, the signal controller221generates and transmits a scan driving control signal CONT2that controls the scan driver212and a data driving control signal CONT1that controls the data driver211. As shown inFIG.12, a pixel PX_lk may include an organic light emitting diode OLED, a first transistor TR1, a second transistor TR2, and a storage capacitor Cst. The pixel PX_lk may be located at an l-th pixel row and a k-th pixel column. Each transistor will be exemplarily described as a PMOS transistor for better understanding and ease of description. The first transistor TR1may be a driving transistor. In the exemplary embodiment, the first transistor TR1may include a gate connected to a first node N1, a source connected to a first power source voltage ELVDD, and a drain connected to an anode of the organic light emitting diode OLED. The driving current corresponds to a voltage difference between the gate and the source of the first transistor TR1, and the driving current is changed corresponding to a voltage according to a data signal applied to a data line Dl. The second transistor TR2may be turned on according to a level of a scan signal applied to a scan line Sk, and may connect the first node N1and the data line Dl. In the exemplary embodiment, the second transistor TR2may include a gate connected to the scan line Sk, a source connected to the data line Dl, and a drain connected to the first node N1. The second transistor TR2transmits a data voltage according to a data signal D[l] transmitted through an I-th data line Dl to the first node N1in response to a scan signal S[k] transmitted to the k-th scan line Sk. The storage capacitor Cst is connected between the first power source voltage ELVDD and the first node N1. In the exemplary embodiment, the storage capacitor Cst may include one electrode connected to the first power source voltage ELVDD and the other electrode connected to the first node N1. The organic light emitting diode may emit light by the driving current flowing from the first transistor TR1. In the exemplary embodiment, the organic light emitting diode OLED may include an anode connected to the drain of the first transistor TR1and a cathode connected to the second power source voltage ELVSS. As shown inFIG.13, a pulse cycle of the vertical synchronization signal Vsync may be one frame period 1 FRAME of the display panel200according to a display frame rate. 
During one frame period 1 FRAME, the data driver211is synchronized with the horizontal synchronization signal Hsync and thus may apply an enable-level data signal to the plurality of data lines D1to Dj. For example, at every pulse of the horizontal synchronization signal Hsync, the data driver211applies a data signal corresponding to a pixel connected with a scan line to which a scan signal having a low level voltage L is applied, to all the plurality of data lines D1to Dj. During one frame period 1 FRAME, the scan driver212is synchronized with the horizontal synchronization signal Hsync, and may sequentially apply scan signals S[1], S[2], . . . , S[k−1], and S[k] having a low level voltage L to the plurality of scan lines S1to Si. For example, the scan driver212applies a scan signal of a low level voltage L to one corresponding scan line at every pulse of the horizontal synchronization signal Hsync. A period dwp during which a data signal is applied to a data line and a period sp during which a scan signal is a low level voltage L are included in 1 horizontal period, that is, one pulse cycle of the horizontal synchronization signal Hsync. Related to the period dwp and the period sp, pixels connected to the scan line Sk and the data line Dl will be exemplarily described. At t00, the 1 horizontal period 1H starts. At t01, a data signal DATA[k] is applied to the data line Dl. At t10, a scan signal S[k] applied to the scan line Sk is changed to the low level voltage L. The time t10at which the scan signal S[k] is changed to the low level voltage L and the time t01at which the data signal DATA[k] starts to be applied to the data line Dl may be the same as or different from each other. For example, considering RC delay of the data line Dl, the data signal DATA[k] may be applied to the data line Dl before the scan signal S[k] is changed to the low level voltage L. At t11, the scan signal S[k] is changed to a high level voltage H. At t12, the application of the data signal DATA[k] to the data line Dl stops. At t22, the 1 horizontal period 1H is terminated. The time t11at which the scan signal S[k] is changed to the high level voltage H and the time t12at which the application of the data signal DATA[k] to the data line Dl is stopped may be the same as or different from each other. For example, the application of the data signal DATA[k] to the data line Dl may be stopped after the scan signal S[k] is changed to the high level voltage H. A data writing period TA described with reference toFIG.7includes the period dwp and the period sp. Specifically, the data writing period TA may be from an earlier time among a time at which the period dwp starts and a time at which the period sp starts to a later time among a time at which the period dwp terminates and a time at which the period sp terminates, and for example, the data writing period TA may be a period from t01to t12. Operation of the touch panel100coupled to the display panel200of such a display device will be described with reference toFIG.14andFIG.15. FIG.14andFIG.15are timing diagrams of a time at which a touch apparatus according to an exemplary embodiment is synchronized with the horizontal synchronization signal of the display device ofFIG.11and thus receives a detection signal according to the driving method ofFIG.6. As shown inFIG.14, frequencies of driving signals D_111and D_121at a first sub-period T21may be two times the frequency of the horizontal synchronization signal Hsync. 
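A small sketch of how the data writing period TA can be derived from the period dwp and the period sp is given below; the nanosecond values stand in for t01, t10, t11, and t12 and are hypothetical.

#include <stdio.h>

struct period { long start_ns; long end_ns; };

/* The data writing period TA spans from the earlier of the starts of the
 * period dwp and the period sp to the later of their ends, e.g., from t01
 * to t12 in the example above. */
static struct period data_writing_period(struct period dwp, struct period sp)
{
    struct period ta;
    ta.start_ns = dwp.start_ns < sp.start_ns ? dwp.start_ns : sp.start_ns;
    ta.end_ns   = dwp.end_ns   > sp.end_ns   ? dwp.end_ns   : sp.end_ns;
    return ta;
}

int main(void)
{
    struct period dwp = { 300, 4200 };  /* t01 .. t12, hypothetical */
    struct period sp  = { 600, 3900 };  /* t10 .. t11, hypothetical */
    struct period ta  = data_writing_period(dwp, sp);
    printf("TA = [%ld, %ld] ns after the Hsync pulse\n", ta.start_ns, ta.end_ns);
    return 0;
}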
Corresponding to the frequencies of the driving signals D_111and D_121applied at the first sub-period T21, the first driver/receiver110and the second driver/receiver120may sample detection signals within a second sub-period T22. For example, the first driver/receiver110and the second driver/receiver120may sample detection signals at at least one of sampling times s00, s01, s02, s03, s10, s11, s12, s13, and . . . according to a clock signal having a predetermined frequency. As shown inFIG.14, the clock signal for sampling the detection signal has a frequency that is four times the frequencies of the driving signals D_111and D_121. At least one of the sampling times s00, s01, s02, s03, s10, s11, s12, s13, and . . . in the present disclosure may be an arbitrary timing that may be periodically set in relation with the frequencies of the driving signals D_111and D_121. After the driving signal is synchronized with the pulse of the horizontal synchronization signal Hsync, a cycle of the horizontal synchronization signal Hsync may be changed due to an interface delay and the like between the signal controller220and the touch controller102. In this case, a mismatch may occur between a periodically set sampling time (e.g., a clock signal for sampling a detection signal has a frequency of four times frequencies of the driving signals D_111and D_121) according to the frequencies of the driving signals D_111and D_121and 1 horizontal period 1H according to the horizontal synchronization signal Hsync of which the cycle is changed. For example, when the cycle of the horizontal synchronization signal Hsync is changed after being synchronized with a first pulse of the horizontal synchronization signal Hsync, timings of sampling times in the 1 horizontal period 1H are changed because the clock signal for sampling the detection signal is synchronized with the first pulse. Then, it is difficult to distinguish whether the detection signals sampled within the 1 horizontal period 1H are detection signals sampled within the period dwp and the period sp or detection signals sampled within periods other than the period dwp and the period sp. Thus, the driving signal may be synchronized with at least one of the pulse of the horizontal synchronization signal Hsync and a pulse of a vertical synchronization signal Vsync. That is, the timing of the driving signal may be refreshed for each horizontal period of a predetermined cycle or for each frame of a predetermined cycle. For example, the driving signal may be synchronized to the pulses of the horizontal synchronization signal Hsync of a predetermined period. For example, after a pulse of the driving signal is initiated by being synchronized with the first pulse of the horizontal synchronization signal Hsync, the pulse of the driving signal may be initiated by being synchronized again with an i-th pulse of the horizontal synchronization signal. Accordingly, the sampling times periodically set according to the frequencies of the driving signals D_111and D_121may be desired times within the 1 horizontal period 1H even though the cycle of the horizontal synchronization signal Hsync is changed. Alternatively, the driving signal may be synchronized with the pulse of the vertical synchronization signal Vsync for each frame of a predetermined period. As shown inFIG.13, the pulse of the vertical synchronization signal Vsync may be changed to an enable level H at the same timing as the pulse of the horizontal synchronization signal Hsync of one horizontal period 1H. 
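The periodic re-synchronization described above may be modeled, for illustration only, as resetting an accumulated phase error on every i-th Hsync pulse (it could equally be reset on every Vsync pulse). The resynchronization period and the per-line jitter in this sketch are assumed values and the structure is not taken from the embodiment.

#include <stdio.h>

#define RESYNC_PERIOD 16   /* hypothetical: re-align on every 16th Hsync pulse */

struct touch_timing {
    long hsync_count;      /* pulses seen since the last re-alignment */
    long phase_error_ns;   /* modeled drift of the sampling clock     */
};

static void on_hsync_pulse(struct touch_timing *t, long jitter_ns)
{
    t->phase_error_ns += jitter_ns;           /* drift accumulates ...      */
    if (++t->hsync_count >= RESYNC_PERIOD) {  /* ... until the next re-sync */
        t->hsync_count = 0;
        t->phase_error_ns = 0;                /* re-align to the Hsync edge */
    }
}

int main(void)
{
    struct touch_timing t = { 0, 0 };
    for (int i = 0; i < 40; ++i)
        on_hsync_pulse(&t, 3 /* ns of jitter per line, hypothetical */);
    printf("residual phase error after 40 lines: %ld ns\n", t.phase_error_ns);
    return 0;
}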
Therefore, it is possible to prevent distortion between the horizontal synchronization signal Hsync and the sampling time in the corresponding frame by synchronizing the pulse of the vertical synchronization signal Vsync and the driving signal for each frame. For example, a pulse of a driving signal may be initiated after being synchronized with a pulse of a vertical synchronization signal Vsync of a first frame, and then the pulse of the driving signal may be initiated by being synchronized again with a pulse of a vertical synchronization signal Vsync of a second frame. Accordingly, the sampling times periodically set according to the frequencies of the driving signals D_111and D_121may be desired times within the 1 horizontal period 1H within a frame synchronized with the vertical synchronization signal Vsync even though the cycle of the horizontal synchronization signal Hsync is changed. In addition, in the present disclosure, at least one of sampling times s00, s01, s02, s03, s10, s11, s12, s13, and . . . may include at least two times of which phases are opposite to each other in one cycle of the frequencies of the driving signals D_111and D_121. However, the present disclosure is not limited thereto. In addition, in the present disclosure, at least one of sampling times s00, s01, s02, s03, s10, s11, s12, s13, and . . . may include at least two times of which phases are changed within one cycle of the frequencies of the driving signals D_111and D_121. However, the above description is not restrictive. The controller130generates touch information by using detection signals sampled in a period other than the period dwp and the period sp in the 1 horizontal period 1H. That is, the controller130may generate touch information that indicates touch coordinates, touch intensity, and the like by using the detection signal sampled by the first driver/receiver110and the second driver/receiver120at at least one of sampling times s10, s11, s12, s13, and . . . . In this case, the controller130may acquire intensity of the detection signal, that is, amplitude, by using a difference value between a signal value sampled at the first sampling time s10and a signal value sampled at the third sampling time s12. In addition, the controller130may acquire intensity of the detection signal by using a difference value between a signal value received at the second sampling time s11and a signal value received at the fourth sampling time s13. The controller130may determine whether or not a touch is made, touch coordinates, and the like according to the signal intensity of the detection signal. Alternatively, in the 1 horizontal period 1H, the controller130may control the first driver/receiver110and the second driver/receiver120to sample detection signals during a period other than the period dwp and the period sp. As shown inFIG.15, the frequencies of the driving signals D_111and D_121in the first sub-period T21may be three times the frequency of the horizontal synchronization signal Hsync. According to the exemplary embodiment, the controller130selects some of the detection signals sampled at least once within the second sub-period T22based on the horizontal synchronization signal, and generates touch information using the selected detection signals. That is, the controller130uses detection signals sampled in a period other than the period dwp and the period sp as touch information within 1 horizontal period 1H in the second sub-section T22. 
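The intensity extraction from opposite-phase samples can be sketched as below. The sample values are hypothetical digitized counts, and the function simply forms the two difference values (s10 minus s12, and s11 minus s13) mentioned above; because the paired samples are taken half a driving cycle apart, the difference cancels a common offset and leaves a value proportional to the amplitude.

#include <stdio.h>
#include <stdlib.h>

/* Form the two difference values used as intensity estimates: the samples
 * of each pair are taken at opposite phases of the resonance signal. */
static void intensity_from_samples(int s10, int s11, int s12, int s13,
                                   int *diff_a, int *diff_b)
{
    *diff_a = abs(s10 - s12);   /* first and third sampling times   */
    *diff_b = abs(s11 - s13);   /* second and fourth sampling times */
}

int main(void)
{
    int a, b;
    /* Hypothetical digitized values: a resonance signal riding on a large
     * common offset of about 500 counts. */
    intensity_from_samples(560, 540, 440, 460, &a, &b);
    printf("intensity estimates: %d and %d counts\n", a, b);
    return 0;
}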
Within the 1 horizontal period 1H, the controller130uses detection signals sampled during a period excluding the period dwp during which a data signal is applied to the data line and the period sp during which the scan signal is the low level voltage L such that a detection signal where noise is generated according to signals applied to the data line and the scan line, which may form parasitic capacitance with touch electrodes, is not used as touch information, thereby improving the SNR. According to another exemplary embodiment, in a period other than the period dwp and the period sp in the 1 horizontal period 1H of the second sub-period T22, the first driver/receiver110receives detection signals from the plurality of first touch electrodes111-1to111-mand the second driver/receiver120receives detection signals from the plurality of second touch electrodes121-1to121-n. In the 1 horizontal period 1H, for a period excluding the period dwp during which a data signal is applied to the data line and a period sp during which the scan signal is the low level voltage L, the detection signals are sampled by the first driver/receiver110and the second driver/receiver120such that noise of the detection signals according to signals applied to the data line and the scan line, which may form parasitic capacitance with touch electrodes, can be prevented. Next, referring toFIG.16andFIG.17, other aspects of the display device will be described, and operation of a touch panel coupled to a display panel of the display device ofFIG.16will be described with reference toFIG.18. FIG.16is a block diagram of another aspect of the display device ofFIG.3,FIG.17shows pixels of the display device ofFIG.16, andFIG.18is a timing diagram of a time at which a touch apparatus according to an exemplary embodiment is synchronized with the horizontal synchronization signal of the display device ofFIG.16and thus receives a detection signal according to the driving method ofFIG.6. As shown inFIG.16, a display device includes a display panel201including a plurality of pixels PX, a data driver213, a scan driver214, a light emission driver215, and a signal controller222. The display panel201includes a plurality of pixels PX arranged approximately in a matrix format. Although it is not particularly limited, a plurality of scan lines S0to Si and a plurality of light emission control lines E1to Ei extend approximately in a row direction, while opposing each other, in the alignment format of the pixels and are almost parallel with each other, and a plurality of data lines D1to Dj extend approximately in a column direction and are almost parallel with each other. Each of the plurality of pixels PX is connected to two corresponding scan lines among a plurality of scan lines S0to Si connected to the display panel201, one corresponding light emission control line among the plurality of light emission control lines E1to Ei, and one corresponding data line among the plurality of data lines D1to Dj. In addition, although it is not directly illustrated in the display panel201ofFIG.16, each of the plurality of pixels PX is connected with a power source that is connected to the display panel201and thus receives a first power source voltage ELVDD, a second power source voltage ELVSS, and an initialization voltage VINT. Each of the plurality of pixels PX of the display panel201is connected to two corresponding scan lines. 
That is, each of the plurality of pixels PX is connected to a scan line corresponding to a pixel row in which the corresponding pixel is included and a scan line that corresponds to the previous pixel row of the pixel row. A plurality of pixels included in the first pixel row may be respectively connected to the first scan line S1and a dummy scan line S0. In addition, a plurality of pixels included in an i-th pixel row are respectively connected with an i-th scan line Si that corresponds to the i-th pixel row, which is the corresponding pixel row, and an (i−1)th scan line S(i−1) that corresponds to an (i−1)th pixel row, which is the previous pixel row. Each of the plurality of pixels PX emits light of predetermined luminance by a driving current supplied to an organic light emitting diode according to a corresponding data signal transmitted through the plurality of data lines D1to Dj. The scan driver214generates and transmits a scan signal corresponding to each pixel PX through the plurality of scan lines S0to Si. That is, the scan driver214transmits a scan signal to each of a plurality of pixels included in each pixel row through corresponding scan lines. The scan driver214receives a scan driving control signal CONT2from the signal controller222and generates a plurality of scan signals, and sequentially supplies the plurality of scan signals to the plurality of scan lines S0to Si connected to each pixel row. The data driver211transmits a data signal to each pixel through the plurality of data lines D1to Dj. The data driver211receives a data driving control signal CONT1from the signal controller222, and supplies a corresponding data signal to the plurality of data lines D1to Dj connected to each of the plurality of pixels included in each pixel row. The light emission driver215is connected with a plurality of light emission control lines E1to Ei connected to the display panel201that includes the plurality of pixels PX that are arranged in the matrix format. That is, the plurality of light emission control lines E1to Ei that extend almost in parallel with each other, while opposing each of the plurality of pixels approximately in a row direction, respectively connect the plurality of pixels PX to the light emission driver215. The light emission driver215generates and transmits a light emission control signal corresponding to each pixel through the plurality of light emission control lines E1to Ei. Each pixel to which the light emission control signal is transmitted is controlled to emit light of an image according to an image data signal in response to the control of the light emission control signal. That is, in response to the light emission control signal transmitted through the corresponding light emission control line, the operation of the light emission control transistors TR5and TR6(refer toFIG.17) included in each pixel is controlled, and accordingly, the organic light emitting diode OLED connected to the light emission control transistor may or may not emit light with luminance according to a driving current corresponding to a data signal. Each pixel PX of the display panel201is supplied with a first power source voltage ELVDD, a second power source voltage ELVSS, and an initialization voltage VINT. The first power source voltage ELVDD may be a predetermined high level voltage, and the second power source voltage ELVSS may be a lower voltage than the first power source voltage ELVDD or may be a ground voltage. 
The initialization voltage VINT may be set to a voltage value that is lower than or the same as the second power source voltage ELVSS. Voltage values of the first power source voltage ELVDD, the second power source voltage ELVSS, and the initialization voltage VINT are not particularly limited to any values. The signal controller222converts a plurality of image signals transmitted from the outside into image data and transmits the image data to the data driver211. The signal controller222receives a vertical synchronization signal Vsync, a horizontal synchronization signal Hsync, and a clock signal, generates control signals for controlling driving of the scan driver214, the light emission driver215, and the data driver211, and transmits the generated control signals to the scan driver214, the light emission driver215, and the data driver211, respectively. That is, the signal controller222generates a data driving control signal CONT1that controls the data driver211, a scan driving control signal CONT2that controls the scan driver214, and a light emission driving control signal CONT3that controls operation of the light emission driver215, and transmits the generated signals to the drivers, respectively. As shown inFIG.17, a pixel PX_ab includes an organic light emitting diode OLED, a storage capacitor Cst, and first to seventh transistors TR1to TR7. The pixel PX_ab may be located on an a-th pixel row and a b-th pixel column. Each transistor will be described as a PMOS transistor for better understanding and ease of description. The first transistor TR1includes a gate connected to a first node N1, a source connected to a second node N2to which a drain of the fifth transistor TR5is connected, and a drain connected to a third node N3. A driving current flows through the first transistor TR1according to a corresponding data signal D[b]. The driving current is a current that corresponds to a voltage difference between the source and the gate of the first transistor TR1, and is changed corresponding to a data voltage according to the applied data signal D[b]. The second transistor TR2includes a gate connected to an a-th scan line Sa, a source connected to a b-th data line Db, and a drain connected to the second node N2to which the source of the first transistor TR1and the drain of the fifth transistor TR5are commonly connected. The second transistor TR2transmits a data voltage according to the data signal D[b] transmitted through the b-th data line Db to the second node N2in response to a scan signal S[a] transmitted through the a-th scan line Sa. The third transistor TR3includes a gate connected to the a-th scan line Sa, and opposite ends respectively connected to the gate and the drain of the first transistor TR1. The third transistor TR3operates in response to the corresponding scan signal S[a] transmitted through the a-th scan line Sa. The turned-on third transistor TR3diode-connects the first transistor TR1by connecting the gate and the drain of the first transistor TR1. When the first transistor TR1is diode-connected, a voltage that is compensated as much as a threshold voltage of the first transistor TR1from a data voltage applied to the first transistor TR1is applied to the gate of the first transistor TR1. Since the gate of the first transistor TR1is connected to one electrode of the storage capacitor Cst, the voltage is maintained by the storage capacitor Cst. 
Since the voltage that is compensated as much as a threshold voltage of the first transistor TR1is applied to the gate and maintained, the driving current flowing through the first transistor TR1is not affected by the threshold voltage of the first transistor TR1. The fourth transistor TR4includes a gate connected to an (a−1)th scan line Sa−1, a source connected to the initialization voltage VINT, and a drain connected to the first node N1. The fourth transistor TR4transmits the initialization voltage VINT to the first node N1in response to an (a−1)th scan signal S[a−1] transmitted through the (a−1)th scan line Sa−1. The fourth transistor TR4may transmit the initialization voltage VINT to the first node N1before application of the data signal D[b] in response to the (a−1)th scan signal S[a−1] that is transmitted in advance to the (a−1)th scan line S(a−1) that corresponds to the previous pixel row of a j-th pixel row where the corresponding pixel PX_ab is included. In this case, although a voltage value of the initialization voltage VINT is not particularly limited, it may be set to have a low level voltage such that the gate voltage of the first transistor TR1can be sufficiently lowered for initialization. That is, the gate of the first transistor TR1is initialized to the initialization voltage for a period during which the (a−1)th scan signal S[a−1] is transmitted to the gate of the fourth transistor TR4with a gate-on voltage level. The fifth transistor TR5includes a gate connected to a j-th light emission control line Ej, a source connected to the first power source voltage ELVDD, and a drain connected to the second node N2. The sixth transistor TR6includes a gate connected to the j-th light emission control line Ej, a source connected to the third node N3, and a drain connected to an anode of the organic light emitting diode OLED. The fifth transistor TR5and the sixth transistor TR6operate in response to a j-th light emission control signal E[j] transmitted through the j-th light emission control line Ej. When the fifth transistor TR5and the sixth transistor TR6are turned on in response to the j-th light emission control signal E[j], a current path is formed in a direction toward the organic light emitting diode OLED from the first power source voltage ELVDD for flowing of the driving current. Then, the organic light emitting diode OLED emits light according to the driving current such that an image of the data signal is displayed. The storage capacitor Cst includes one electrode connected to the first node N1and the other electrode connected to the first power source voltage ELVDD. As previously described, since the storage capacitor Cst is connected between the gate of the first transistor TR1and the first power source voltage ELVDD, the voltage applied to the gate of the first transistor TR1can be maintained. The seventh transistor TR7includes a gate connected to an (a−1)th scan line Sa−1, a source connected to the anode of the organic light emitting diode OLED, and a drain connected to a power source of the initialization voltage VINT. The seventh transistor TR7may transmit the initialization voltage VINT to the anode of the organic light emitting diode in response to an (a−1)th scan signal S[a−1] that is transmitted in advance to the (a−1)th scan line Sa−1 that corresponds to the previous pixel row of the j-th pixel row where the corresponding pixel PX_ab is included. 
The anode of the organic light emitting diode OLED is reset to a sufficiently low voltage by the initialization voltage VINT transmitted thereto. Driving operation of the pixel PX_ab according to the timing diagram ofFIG.18and operation of the touch apparatus for receiving a detection signal will now be described based on the circuit diagram of the pixel PX_ab ofFIG.17. As shown inFIG.18, frequencies of driving signals D_111and D_121in the first sub-period T21may be two times a frequency of the horizontal synchronization signal Hsync. First, driving operation of the pixel PX_ab will be described. The fourth transistor TR4and the seventh transistor TR7are turned on by a low level voltage L of the (a−1)th scan signal S[a−1] transmitted through the (a−1)th scan line Sa−1. Then, the initialization voltage VINT that initializes a voltage at the gate of the first transistor TR1is transmitted to the first node N1through the fourth transistor TR4. During a period sp, the second transistor TR2and the third transistor TR3are turned on by the low level voltage L of the a-th scan signal S[a] transmitted through the a-th scan line Sa. Then, a data signal DATA[a] is transmitted to the first node N1through the turned-on second transistor TR2and the turned-on third transistor TR3. At t31, the fifth transistor TR5and the sixth transistor TR6are turned on by the light emission control signal Ej of the low level voltage L. Then, a driving current by a voltage stored in the storage capacitor Cst is transmitted to the organic light emitting diode OLED such that the organic light emitting diode OLED emits light. Next, operation for the touch apparatus to receive a detection signal will be described. A period dwp during which a data signal is applied to a data line and a period sp during which a scan signal is a low level voltage L are included in 1 horizontal period 1H, that is, one pulse cycle of the horizontal synchronization signal Hsync. In addition, the light emission control signal is changed to the low level voltage L in the 1 horizontal period 1H. A first driver/receiver110may sample a detection signal from a plurality of first touch electrodes111-1to111-mand a second driver/receiver120may sample a detection signal from a plurality of second touch electrodes121-1to121-nin at least one of sample times s00, s01, s02, s03, s10, s11, s12, s13, and . . . in the second sub-period T22. According to the exemplary embodiment, a controller130selects at least a part of a detection signal that has been sampled at least once in the second sub-period T22based on the horizontal synchronization signal, and generates touch information by using the selected part of the detection signal. That is, the controller130uses a detection signal sampled in a period other than the period dwp and the period sp as touch information in the 1 horizontal period 1H in the second sub-period T22. In the 1 horizontal period 1H, detection signals sampled during a period excluding the period dwp during which a data signal is applied to a data line and the period sp during which a scan signal is the low level voltage L are used such that a detection signal where noise is generated according to signals applied to the data line and the scan line, which may form parasitic capacitance with touch electrodes, is not used as touch information, thereby improving the SNR. 
According to another exemplary embodiment, in a period other than the period dwp and the period sp in the 1 horizontal period 1H of the second sub-period T22, the first driver/receiver110receives detection signals from the plurality of first touch electrodes111-1to111-mand the second driver/receiver120receives detection signals from the plurality of second touch electrodes121-1to121-n. In the 1 horizontal period 1H, for a period excluding the period dwp during which a data signal is applied to the data line and the period sp during which the scan signal is the low level voltage L, the detection signals are sampled such that noise of the detection signals according to signals applied to the data line and the scan line, which may form parasitic capacitance with touch electrodes, can be prevented. Additionally, at least one of times s10, s11, s12, and s13is included in a period of the 1 horizontal period 1H in the second sub-period T22, excluding a time t31at which the light emission control signal E[a] is changed to the low level voltage L. That is, in the 1 horizontal period 1H, detection signals sampled during a period excluding the time t31at which the light emission control signal E[a] is changed to the low level voltage L in the 1 horizontal period 1H may be used, or sample signals are sampled during a period excluding the time t31at which the light emission control signal E[a] is changed to the low level voltage L in the 1 horizontal period 1H such that noise of detection signals according to signals applied to the light emission control line, which may form parasitic capacitance with touch electrodes, can be prevented. While this invention has been described in connection with what is presently considered to be practical exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. | 84,755 |
11861106 | DETAILED DESCRIPTION Hereinafter, a detection device according to embodiments is described with reference to the drawings. Embodiment 1 A detection device10according to the present embodiments is described with reference toFIGS.1to16. The detection device10detects a non-contact target (for example, a user's gesture). First, the overall configuration of the detection device10is described. As illustrated inFIG.1, the detection device10includes a sensor20and a controller50. As illustrated inFIG.2, the sensor20includes a light-transmissive substrate22, a driving electrode24, and detection electrodes26ato26e. The driving electrode24and the detection electrodes26ato26eare formed on the light-transmissive substrate22. The controller50detects a non-contact target from signal waveforms acquired from the detection electrodes26ato26eby applying a voltage to the driving electrode24, the signal waveforms each indicating a temporal change in the signal strength of a signal representing capacitance. In the present specification, in order to facilitate understanding, the following description is given on the assumption that inFIG.2, the right direction (right direction of the paper surface) of the sensor20is a +X direction, an upward direction (upward direction of the paper surface) is a +Y direction, and a direction perpendicular to the +X direction and the +Y direction (front direction of the paper surface) is a +Z direction. The signal representing capacitance is also referred to as “signal”, and the signal strength of the signal representing the capacitance is also referred to as “signal strength”. As illustrated inFIG.3, the detection device10constitutes a display unit200together with a display device100. The display unit200is mounted on a smartphone, a laptop computer, an information display, or the like. The display device100includes a display panel110and a display controller120. The display panel110displays characters, images, or the like. The display panel110is a liquid crystal display panel, an organic electroluminescence (EL) display panel, or the like. The display controller120controls the display of the display panel110. The display controller120and the controller50of the detection device10are connected to each other. The sensor20of the detection device10is provided on a display surface side of the display panel110via an adhesive layer (not illustrated). In this case, the driving electrode24of the sensor20is located above a display area of the display panel110, and the detection electrodes26ato26eof the sensor20are located above the outer periphery of the display area of the display panel110. Furthermore, a protective cover202made of resin is provided on the sensor20via an adhesive layer (not illustrated). The detection device10detects a non-contact target located in a detection space on the sensor20. As a result, the detection device10serves as an interface for receiving a user's instruction for the display of the display device100. A thickness L of the detection space is, for example, 150 mm. Next, a specific configuration of the detection device10is described. As illustrated inFIG.2, the sensor20of the detection device10includes the light-transmissive substrate22, the driving electrode24, and the detection electrodes26ato26e. The light-transmissive substrate22of the sensor20is, for example, a glass substrate. The light-transmissive substrate22includes a first main surface22a. 
The driving electrode24of the sensor20is provided on the first main surface22aof the light-transmissive substrate22. The driving electrode24has a rectangular shape and is provided in a central portion of the first main surface22a. In the present embodiment, the driving electrode24covers the display area of the display panel110when viewed in the plan view. The driving electrode24is electrically connected to the controller50via a wiring (not illustrated). The detection electrodes26ato26eof the sensor20are provided on the first main surface22aof the light-transmissive substrate22, respectively. The detection electrode26ais arranged on the +Y side of the driving electrode24and extends in the X direction. The detection electrodes26bto26eare arranged side by side in the X direction on the −Y side of the driving electrode24. Each of the detection electrodes26ato26eis electrically connected to the controller50via a wiring (not illustrated). The driving electrode24and the detection electrodes26ato26eare formed of, for example, indium tin oxide (ITO). The driving electrode24and the detection electrodes26ato26eform capacitance with a target (for example, a user's finger or hand, a pen, or the like). The controller50of the detection device10detects a non-contact target from the signal waveforms acquired from the detection electrodes26ato26e, the signal waveforms each indicating a temporal change in the signal strength of the signal representing the capacitance. First, the functional configuration of the controller50is described. As illustrated inFIG.4, the controller50includes an input/output device51, a storage52, a driver54, a receiver56, a calculator58, a first discriminator62, a second discriminator64, and a detector66. The input/output device51of the controller50inputs/outputs a signal between the controller50and the display controller120of the display device100, a signal between the detector66and a controller of an electronic device, or the like. The storage52of the controller50stores a program, data, a signal received by the receiver56and representing capacitance, a signal waveform indicating a change in a signal strength over time, or the like. The driver54of the controller50applies a voltage to the driving electrode24on the basis of an instruction from the controller of the electronic device transmitted via the input/output device51. The receiver56of the controller50receives the signals representing the capacitance from the detection electrodes26ato26e. The calculator58of the controller50calculates a moving averaged signal waveform by performing a moving average process on the signal waveform indicating the temporal change in the signal strength of the signal received by the receiver56(FIG.5). This makes it possible to remove fine noise. Moreover, as illustrated inFIGS.6and7, the calculator58calculates a first-order differential waveform and a second-order differential waveform of the moving averaged signal waveform. On the basis of the first-order differential waveform and the second-order differential waveform of the moving averaged signal waveform, the first discriminator62of the controller50discriminates a rising start point of a peak and a peak top of the peak in the moving averaged signal waveform.
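As an illustration of the smoothing and differentiation performed by the calculator58, the following is a minimal sketch in Python; the window length, the sampling interval, and the sample values are assumptions for illustration and are not taken from the present disclosure.

import numpy as np

def moving_average(signal, window=5):
    # Boxcar moving average of the raw waveform; removes fine noise,
    # corresponding to the moving average process of the calculator 58.
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def differentials(waveform, dt=1.0):
    # First-order and second-order differential waveforms of the moving
    # averaged signal waveform (dt is an assumed sampling interval).
    d1 = np.gradient(waveform, dt)
    d2 = np.gradient(d1, dt)
    return d1, d2

# Usage with an assumed sequence of signal strengths from one detection electrode.
raw = np.array([10, 11, 10, 12, 18, 30, 45, 52, 50, 40, 28, 15, 11, 10], dtype=float)
smoothed = moving_average(raw)
d1, d2 = differentials(smoothed)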
Specifically, as illustrated inFIGS.6and7, the first discriminator62sets, as a time corresponding to the rising start point of the peak, a time when a value of the second-order differential waveform changes from a positive value to a negative value and a value of the first-order differential waveform is a positive value. Furthermore, the first discriminator62sets, as a time corresponding to the peak top of the peak, an initial time when the value of the first-order differential waveform changes from a positive value to a negative value in the direction in which time elapses from the time corresponding to the rising start point of the peak. Moreover, the first discriminator62discriminates the rising start point of the peak and the peak top of the peak from the time corresponding to the rising start point of the peak and the time corresponding to the peak top of the peak. Hereinafter, the rising start point of the peak is also referred to as a “rising start point” and the peak top of the peak is also referred to as a “peak top”. When the peak top is not discriminated even after a predetermined first period (for example, 100 ms) elapses from the time corresponding to the rising start point (that is, when the time corresponding to the rising start point and the time corresponding to the peak top are out of the predetermined first period), the first discriminator62may re-discriminate that a point, which has been discriminated as the rising start point, is not the rising start point, and re-discriminate the rising start point in the direction in which time elapses. On the basis of a time width ΔT1 from the rising start point to the peak top, a height ΔH1 from the rising start point to the peak top, and a slope Uc (Uc=ΔH1/ΔT1) on a rising side of the peak, the second discriminator64of the controller50discriminates a peak caused by the non-contact target in the moving averaged signal waveform. Specifically, the second discriminator64discriminates, as the peak caused by the non-contact target in the moving averaged signal waveform, a peak in which the time width ΔT1 is equal to or greater than a predetermined first threshold value Cw (for example, 10 ms), the height ΔH1 is equal to or higher than a predetermined second threshold value Ch (for example, 10 a.u.), and the slope Uc on the rising side is equal to or greater than a third threshold value Cd (for example, 0.15). Hereinafter, the peak caused by the non-contact target is also referred to as a “peak of the target”. In the present embodiment, the peak of the target is discriminated on the basis of the time width ΔT1, the height ΔH1, and the slope Uc on the rising side. Consequently, the detection device10can discriminate the peak of the target even though the second threshold value Ch, which is the threshold value of the height ΔH1, is set to be small. That is, the detection device10can discriminate the peak of a target having a low signal strength. Furthermore, the detection device10discriminates the peak of the target from the rising start point and the peak top, which makes it possible to discriminate the peak of the target when the signal waveform reaches the peak top and to detect the non-contact target in a short time. The detector66of the controller50detects the movement of the non-contact target from the time order of the peak tops of the peaks of the target in the moving averaged signal waveforms of the detection electrodes26ato26e. 
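The sign-change rules of the first discriminator62described above can be sketched as follows; this is only an assumed, index-based rendering (one sample per millisecond is assumed so that the first period of 100 ms maps to 100 samples), not the implementation of the present disclosure.

def find_rising_start(d1, d2, start=0):
    # Rising start point: the time when the second-order differential changes
    # from a positive value to a negative value while the first-order
    # differential is a positive value.
    for i in range(start + 1, len(d2)):
        if d2[i - 1] > 0 and d2[i] <= 0 and d1[i] > 0:
            return i
    return None

def find_peak_top(d1, rise_idx, first_period=100):
    # Peak top: the initial time after the rising start point at which the
    # first-order differential changes from a positive value to a negative
    # value; if none is found within the first period, the caller
    # re-discriminates the rising start point.
    stop = min(len(d1), rise_idx + first_period + 1)
    for i in range(rise_idx + 1, stop):
        if d1[i - 1] > 0 and d1[i] <= 0:
            return i
    return None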
For example, when the peak tops of the peaks of the target appear, in the direction in which time elapses, in the order of the detection electrode26elocated on the +X side, the detection electrode26d, the detection electrode26c, and the detection electrode26b, the detector66discriminates that a user has made a flick gesture from the +X direction to the −X direction, and detects the user's flick gesture from the +X direction to the −X direction. The detector66outputs a signal representing the detected movement of the non-contact target to the controller of the electronic device provided with the detection device10. The signal representing the movement of the non-contact target represents, for example, a key event, a message, or the like set by the user for the flick gesture in the −X direction. The signal representing the detected movement of the non-contact target may be output once or more times for one detection. The detected gesture may be a flick gesture from the +Y direction to the −Y direction, a circle gesture in which the non-contact target moves in a circle, or the like. Hereinafter, the movement of the non-contact target is also referred to as a "movement of the target". FIG.9illustrates the hardware configuration of the controller50. The controller50includes a central processing unit (CPU)82, a read only memory (ROM)83, a random access memory (RAM)84, an input/output interface86, and a circuit88having a specific function. The CPU82executes programs stored in the ROM83. The ROM83stores programs, data, signals, or the like. The RAM84stores data. The input/output interface86inputs and outputs signals between these components. The circuit88having a specific function includes a driving circuit, a reception circuit, an arithmetic circuit, or the like. The functions of the controller50are implemented by the execution of the programs of the CPU82and functions of the circuit88having a specific function. Next, a detection process (operation) of the detection device10is described with reference toFIGS.10to16. Hereinafter, a case where the display unit200including the detection device10and the display device100is mounted on an electronic device is described. As illustrated inFIG.10, the detection process of the detection device10is performed in the order of a driving process (step S100), a calculation process (step S200), a peak end point/peak top discrimination process (step S300), a peak discrimination process (step S400), and a non-contact detection process (step S500). After the non-contact detection process (step S500), when an end instruction is not input to the controller50(step S600; NO), the detection process of the detection device10returns to the calculation process (step S200). When the end instruction is input to the controller50(step S600; YES), the detection process of the detection device10is ended. In the driving process (step S100), the driver54of the controller50applies a voltage to the driving electrode24on the basis of an instruction from the controller of the electronic device transmitted via the input/output device51of the controller50, and the receiver56of the controller50receives a signal representing capacitance from each of the detection electrodes26ato26e. The received signal representing the capacitance is stored in the storage52of the controller50. The calculation process (step S200) is described with reference toFIG.11.
In the calculation process (step S200), a moving averaged signal waveform and a first-order differential waveform and a second-order differential waveform of the moving averaged signal waveform are calculated. First, the calculator58of the controller50performs a moving average process on a signal waveform indicating a temporal change in the signal strength of the signal received by the receiver56, and calculates the moving averaged signal waveform in each of the detection electrodes26ato26e(step S202). Subsequently, the calculator58calculates a first-order differential waveform and a second-order differential waveform of the calculated moving averaged signal waveform (step S204). Next, the peak end point/peak top discrimination process (step S300) is described with reference toFIG.12. In the peak end point/peak top discrimination process (step S300), a rising start point and a peak top in the moving averaged signal waveform are discriminated on the basis of the first-order differential waveform and the second-order differential waveform of the moving averaged signal waveform. First, the first discriminator62of the controller50discriminates the rising start point in each of the moving averaged signal waveforms from the first-order differential waveform and the second-order differential waveform of each of the moving averaged signal waveforms along the direction in which time elapses (step S302). The first discriminator62discriminates the rising start point by setting, as a time corresponding to the rising start point, a time when a value of the second-order differential waveform changes from a positive value to a negative value and a value of the first-order differential waveform is a positive value. Step S302is repeated in the direction in which time elapses until the rising start point is determined (step S302; NO). When the rising start point is discriminated (step S302; YES), the first discriminator62determines the peak top in each of the moving averaged signal waveforms from the first-order differential waveform of each of the moving averaged signal waveforms (step S304). The first discriminator62discriminates the peak top by setting, as a time corresponding to the peak top, an initial time when the value of the first-order differential waveform changes from a positive value to a negative value in the direction in which time elapses from the time corresponding to the rising start point. Step S304is repeated in the direction in which time elapses until the peak top is determined (step S304; NO). When the peak top is not discriminated even after the predetermined first period (for example, 100 ms) elapses from the time corresponding to the rising start point, that is, when the time corresponding to the rising start point and the time corresponding to the peak top are out of the predetermined first period, the rising start point discriminated in step S302may be re-discriminated as not being a rising start point, and a rising start point may be discriminated again in the direction in which time elapses after returning to step S302. When the peak top is discriminated (step S304; YES), the first discriminator62stores, in the storage52, a corresponding time and a moving average value of the discriminated rising start point and a corresponding time and a moving average value of the discriminated peak top (step S306), and ends the peak end point/peak top discrimination process (step S300). The peak discrimination process (step S400) is described with reference toFIGS.13and14. 
In the peak discrimination process (step S400), the peak of the target in the moving averaged signal waveform is discriminated on the basis of the time width ΔT1 from the rising start point to the peak top, the height ΔH1 from the rising start point to the peak top, and the slope Uc on the rising side of the peak. First, the second discriminator64of the controller50calculates the time width ΔT1 (difference between the time corresponding to the peak top and the time corresponding to the rising start point) and the height ΔH1 (difference between the moving average value of the peak top and the moving average value of the rising start point) from the rising start point to the peak top discriminated in the peak end point/peak top discrimination process (step S300) (step S402). Subsequently, the second discriminator64calculates the slope Uc (ΔH1/ΔT1) on the rising side of the peak (step S404). Next, the second discriminator64discriminates whether a peak is the peak of the target in the moving averaged signal waveform on the basis of the time width ΔT1, the height ΔH1, and the slope Uc on the rising side of the peak (Step S406). Specifically, as illustrated inFIG.14, the second discriminator64discriminates, as the peak of the target in the moving averaged signal waveform, a peak in which the time width ΔT1 is equal to or greater than the predetermined first threshold value Cw, the height ΔH1 is equal to or higher than the predetermined second threshold value Ch, and the slope Uc on the rising side is equal to or greater than the third threshold value Cd. A peak in which the time width ΔT1 is less than the predetermined first threshold value Cw, the height ΔH1 is less than the predetermined second threshold value Ch, or the slope Uc on the rising side is less than the third threshold value Cd is regarded as noise (a peak of noise). When the peak is not discriminated as the peak of the target (step S406; NO), the detection process returns to step S302of the peak end point/peak top discrimination process (step S300). When the peak is discriminated as the peak of the target (step S406; YES), the peak discrimination process (step S400) is ended. In the present embodiment, the peak of the target is discriminated on the basis of the time width ΔT1, the height ΔH1, and the slope Uc on the rising side. Consequently, the peak discrimination process (step S400) can discriminate the peak of the target even though the second threshold value Ch, which is the threshold value of the height ΔH1, is small. That is, the detection process can discriminate the peak of a target having a low signal strength. Furthermore, since the peak discrimination process (step S400) discriminates the peak of the target from the rising start point and the peak top, the peak of the target can be discriminated when the signal waveform reaches the peak top, and a non-contact target can be discriminated in a short time. The non-contact detection process (step S500) is described with reference toFIGS.15and16. In the non-contact detection process (step S500), the movement (user's gesture) of the discriminated target is discriminated from the time order of the peak tops of the discriminated target. First, the detector66of the controller50arranges the detection electrodes26ato26ein the time order of the peak tops in each moving averaged signal waveform (step S502).
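Before the non-contact detection process is continued below, the threshold test of step S406 can be summarized by the following minimal sketch; the numerical thresholds repeat the example values given above, and the function name is an assumption rather than part of the present disclosure.

CW, CH, CD = 10.0, 10.0, 0.15  # example first, second, and third threshold values

def is_target_peak(t_rise, v_rise, t_top, v_top):
    # Returns True when the peak is discriminated as the peak of the target:
    # the time width, the height, and the rising-side slope must all reach
    # their thresholds Cw, Ch, and Cd.
    dT1 = t_top - t_rise              # time width from rising start point to peak top
    dH1 = v_top - v_rise              # height from rising start point to peak top
    uc = dH1 / dT1 if dT1 else 0.0    # slope Uc on the rising side
    return dT1 >= CW and dH1 >= CH and uc >= CD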
Subsequently, the detector66discriminates the movement (user's gesture) of the target by referring to a lookup table indicating the relationship between the time order of the peak tops in the moving averaged signal waveforms of the detection electrodes26ato26eand the movement of the target (step S504).FIG.16illustrates an example of the lookup table. For example, when the time order of the peak tops is the order of the detection electrode26e, the detection electrode26d, the detection electrode26c, and the detection electrode26b, the detector66discriminates that the user has made a flick gesture from the +X direction to the −X direction, and detects the flick gesture from the +X direction to the −X direction. The lookup table is stored in the storage52in advance. When the movement of the target is detected (step S504; YES), the detector66outputs a signal representing the movement of the detected target to the controller of the electronic device, which is provided with the display unit200(detection device10), via the input/output device51(step S506). When the detector66outputs the signal representing the movement of the target, the non-contact detection process (step S500) is ended. When the movement of the target is not detected (step S504; NO), the detection process returns to step S302of the peak end point/peak top discrimination process (step S300). As described above, the detection device10discriminates the peak of a target on the basis of the time width ΔT1, the height ΔH1, and the slope Uc on the rising side, so that the peak of a target having a small signal strength can be discriminated. Furthermore, the detection device10discriminates the peak of the target from a rising start point and a peak top, so that a non-contact target can be detected in a short time. Embodiment 2 In Embodiment 1, the detection device10discriminates a rising start point of a peak and a peak top of the peak. The detection device10may discriminate the rising start point of the peak, the peak top of the peak, and a falling end point of the peak. Hereinafter, the falling end point of the peak is also referred to as a "falling end point". The detection device10of the present embodiment includes a sensor20and a controller50, similarly to the detection device10of Embodiment 1. Since the sensor20of the present embodiment is the same as the sensor20of Embodiment 1, the controller50and a detection process of the present embodiment are described below. The controller50of the present embodiment includes an input/output device51, a storage52, a driver54, a receiver56, a calculator58, a first discriminator62, a second discriminator64, and a detector66, similarly to the controller50of Embodiment 1. Since the input/output device51, the storage52, the driver54, the receiver56, the calculator58, the second discriminator64, and the detector66of the present embodiment are the same as those of Embodiment 1, the first discriminator62of the present embodiment is described. On the basis of the first-order differential waveform and the second-order differential waveform of the moving averaged signal waveform, the first discriminator62of the present embodiment discriminates the rising start point, the peak top, and the falling end point in the moving averaged signal waveform. The discrimination of the rising start point and the peak top is the same as in Embodiment 1.
As illustrated inFIGS.6,7, and17, the first discriminator62of the present embodiment discriminates the falling end point by setting, as a time corresponding to the falling end point, a time when the value of the second-order differential waveform changes from a negative value to a positive value and the value of the first-order differential waveform is a negative value in the direction in which time elapses from the time corresponding to the peak top. Moreover, when the falling end point is not discriminated even after a predetermined second period (for example, 30 ms) elapses from the time corresponding to the peak top (that is, when the time corresponding to the peak top and the time corresponding to the falling end point are out of the predetermined second period), the first discriminator62of the present embodiment re-discriminates that a point, which has been discriminated as the rising start point, and a point, which has been discriminated as the peak top, are not a rising start point and a peak top, and re-discriminates a rising start point in the direction in which time elapses. Next, the detection process of the present embodiment is described. The detection process of the present embodiment is performed in the order of a drive process (step S100), a calculation process (step S200), a peak end point/peak top discrimination process (step S300), a peak discrimination process (step S400), and a non-contact detection process (step S500), similarly to the detection process of Embodiment 1. Since the drive process (step S100), the peak discrimination process (step S400), and the non-contact detection process (step S500) of the present embodiment are the same as those of Embodiment 1, the peak end point/peak top discrimination process (step S300) of the present embodiment is described with reference toFIG.18. First, similarly to the peak end point/peak top discrimination process (step S300) of Embodiment 1, the first discriminator62of the controller50discriminates the rising start point (step S302) and discriminates the peak top (step S304). When the peak top is discriminated (step S304; YES), the first discriminator62discriminates the falling end point in each of the moving averaged signal waveforms from the first-order differential waveform and a second-order differential waveform of each of the moving averaged signal waveforms (step S305). Specifically, the first discriminator62discriminates the falling end point by setting, as the time corresponding to the falling end point, the time when the value of the second-order differential waveform changes from a negative value to a positive value and the value of the first-order differential waveform is a negative value in the direction in which time elapses from the time corresponding to the peak top. When the falling end point is not discriminated even after the predetermined second period elapses from the time corresponding to the peak top (step S305; NO), the rising start point discriminated in step S302and the peak top discriminated in step S304are re-discriminated as not being a rising start point and a peak top, respectively, and the peak end point/peak top discrimination process (step S300) is returned to step S302.
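A minimal sketch of the falling end point test of step S305, under the same index-based assumptions as before (the second period of 30 ms maps to 30 samples), is given below; it is an illustration, not the implementation of the present disclosure.

def find_falling_end(d1, d2, top_idx, second_period=30):
    # Falling end point: the time after the peak top at which the second-order
    # differential changes from a negative value to a positive value while the
    # first-order differential is a negative value. If none is found within
    # the second period, the rising start point and the peak top are discarded
    # and discrimination restarts from step S302.
    stop = min(len(d2), top_idx + second_period + 1)
    for i in range(top_idx + 1, stop):
        if d2[i - 1] < 0 and d2[i] >= 0 and d1[i] < 0:
            return i
    return None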
When the falling end point is discriminated within the predetermined second period from the time corresponding to the peak top (step S305; YES), the first discriminator62stores, in the storage52, a corresponding time and a moving average value of the discriminated rising start point and a corresponding time and a moving average value of the discriminated peak top (step S306), and ends the peak end point/peak top discrimination process (step S300). In the present embodiment, a rising start point and a peak top (that is, the presence or absence of a peak) are discriminated depending on whether a falling end point exists within the predetermined second period from a time corresponding to the peak top. As a result, the detection device10of the present embodiment can suppress an increase in signal strength not caused by the movement of a target from being discriminated as a peak, and can suppress erroneous detection. Furthermore, the detection device10of the present embodiment can discriminate the peak of a target having a small signal strength, similarly to the detection device10of Embodiment 1. Embodiment 3 In Embodiment 1 and Embodiment 2, the detection device10detects the movement of a target from a signal waveform of each of the detection electrodes26ato26e. The detection device10may detect the movement of a target from a signal waveform obtained by averaging signal waveforms of detection electrodes (for example, the detection electrodes26bto26e). In the present embodiment, the detection device10detects the movement of a target from a signal waveform of each of the detection electrodes26ato26eand a signal waveform obtained by averaging the signal waveforms of the detection electrodes26bto26e. The detection device10of the present embodiment includes a sensor20and a controller50, similarly to the detection device10of Embodiment 1. Since the sensor20of the present embodiment is the same as the sensor20of Embodiment 1, the controller50and a detection process of the present embodiment are described below. The controller50of the present embodiment includes an input/output device51, a storage52, a driver54, a receiver56, a calculator58, a first discriminator62, a second discriminator64, and a detector66, similarly to the controller50of Embodiment 1. Since the input/output device51, the storage52, the driver54, and the receiver56of the present embodiment are the same as those of Embodiment 1, the calculator58, the first discriminator62, the second discriminator64, and the detector66of the present embodiment are described. Similarly to the calculator58of Embodiment 1, the calculator58of the present embodiment calculates moving averaged signal waveforms of the detection electrodes26ato26efrom signals received by the receiver56. Then, the calculator58of the present embodiment sets a virtual detection electrode including detection electrodes, and calculates a moving averaged signal waveform of the virtual detection electrode. In the present embodiment, as illustrated inFIG.19, a virtual detection electrode26b-26eis configured from the detection electrodes26bto26e. The calculator58of the present embodiment calculates an average signal waveform26b-26eobtained by averaging the signal waveforms of the detection electrodes26bto26e, as a signal waveform of the virtual detection electrode26b-26e. Then, the calculator58of the present embodiment calculates a moving averaged average signal waveform26b-26eby performing a moving average process on the average signal waveform26b-26e.
Then, the calculator58of the present embodiment calculates first-order differential waveforms and second-order differential waveforms of the moving averaged signal waveforms of the detection electrodes26ato26eand the moving averaged average signal waveform26b-26e. The first discriminator62of the present embodiment discriminates rising start points and peak tops in the moving averaged signal waveforms of the detection electrodes26ato26eand the moving averaged average signal waveform26b-26eof the virtual detection electrode26b-26e. The discrimination of the rising start points and the peak tops is the same as in Embodiment 1. The second discriminator64of the present embodiment discriminates a peak of a target in the moving averaged signal waveforms of the detection electrodes26ato26eand the moving averaged average signal waveform26b-26eof the virtual detection electrode26b-26e. The discrimination of the peak of the target is the same as in Embodiment 1. The detector66of the present embodiment discriminates the movement of the target from the time order of the peak tops of the peaks of the target in the moving averaged signal waveform of each of the detection electrodes26ato26eand the moving averaged average signal waveform26b-26e. For example, when the peak top of the peak of the target appears in the order of the detection electrode26aand the virtual detection electrode26b-26ein the direction in which time elapses, the detector66discriminates that a user has made a flick gesture from the +Y direction to the −Y direction, and detects the user's flick gesture from the +Y direction to the −Y direction. When the flick gesture from the +Y direction to the −Y direction is discriminated only from the time order of the peak tops of the detection electrodes26ato26e, there are a plurality of possible time orders of the peak tops corresponding to the flick gesture from the +Y direction to the −Y direction as illustrated inFIG.20, which may complicate discrimination. Furthermore, as illustrated inFIG.21, a signal strength difference and a time difference at the peak tops are reduced, and discrimination may be difficult. In the present embodiment, when the peak top appears in the order of the detection electrode26aand the virtual detection electrode26b-26e, since it is discriminated that the user has made a flick gesture from the +Y direction to the −Y direction, the detection device10of the present embodiment can easily discriminate the movement of a target. Furthermore, as illustrated inFIG.22, the number of signal waveforms to be discriminated is reduced, so that the detection device10of the present embodiment can easily discriminate the movement of a target. The detector66of the present embodiment outputs a signal representing the movement of the detected target to the controller of the electronic device provided with the detection device10. The signal representing the movement of the target represents, for example, a key event, a message, or the like set by the user for a flick gesture in the −Y direction. The detector66of the present embodiment may also detect a flick gesture from the +Y direction to the −Y direction from the time order of the peak tops of the detection electrode26a, the peak tops of the virtual detection electrode26b-26e, and the peak tops of the detection electrodes26bto26e. Next, the detection process of the present embodiment is described.
The detection process of the present embodiment is performed in the order of a drive process (step S100), a calculation process (step S200), a peak end point/peak top discrimination process (step S300), a peak discrimination process (step S400), and a non-contact detection process (step S500), similarly to the detection process of Embodiment 1. Since the drive process (step S100) of the present embodiment is the same as that of Embodiment 1, the calculation process (step S200), the peak end point/peak top discrimination process (step S300), the peak discrimination process (step S400), and the non-contact detection process (step S500) of the present embodiment are described. In the calculation process (step S200) of the present embodiment, the calculator58calculates, as the signal waveform of the virtual detection electrode26b-26e, the average signal waveform26b-26eobtained by averaging the signal waveforms of the detection electrodes26bto26e, and further calculates the moving averaged average signal waveform26b-26e. The calculator58calculates the first-order differential waveform and the second-order differential waveform of the moving averaged average signal waveforms26b-26e. The other processes in the calculation process (step S200) of the present embodiment are the same as the calculation process (step S200) of Embodiment 1. In the peak end point/peak top discrimination process (step S300) of the present embodiment, the first discriminator62discriminates the rising start points and the peak tops in the moving averaged signal waveforms of the detection electrodes26ato26eand the moving averaged average signal waveform26b-26eon the basis of the calculated first-order differential waveforms and second-order differential waveforms. The other processes in the peak end point/peak top discrimination process (step S300) of the present embodiment are the same as the peak end point/peak top discrimination process (step S300) of Embodiment 1. In the peak discrimination process (step S400) of the present embodiment, the second discriminator64discriminates the peak of the target in the moving averaged signal waveforms of the detection electrodes26ato26eand the moving averaged average signal waveform26b-26eon the basis of the time width ΔT1 from the rising start point to the peak top, the height ΔH1 from the rising start point to the peak top, and the slope Uc on the rising side of the peak. The other processes in the peak discrimination process (step S400) of the present embodiment are the same as the peak discrimination process (step S400) of Embodiment 1. In the non-contact detection process (step S500) of the present embodiment, the detector66discriminates the movement (user's gesture) of the discriminated target from the time order of the peak tops of the peaks of the discriminated target. Similar to Embodiment 1, the detector66discriminates the movement of the target by referring to the lookup table indicating the relationship between the time order of the peak tops and the movement of the target. As described above, the detection device10of the present embodiment discriminates the movement of a target from signal waveforms obtained by averaging signal waveforms of detection electrodes (detection electrodes26bto26e), so that the target can be easily detected. Furthermore, the detection device10of the present embodiment can discriminate the peak of a target having a small signal strength, similarly to the detection device10of Embodiment 1. 
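As a sketch of the virtual detection electrode of the present embodiment, the averaging of the signal waveforms of the detection electrodes26bto26ecan be written as follows; the array layout and the function name are assumptions for illustration, and the averaged waveform is then processed exactly like the waveform of any single detection electrode (moving average, differentials, and peak discrimination).

import numpy as np

def virtual_electrode_waveform(waveforms_26b_to_26e):
    # waveforms_26b_to_26e: a 2-D array with one row per detection electrode
    # (26b to 26e) and one column per sample; the virtual detection electrode
    # 26b-26e is the sample-by-sample average of these rows.
    return np.mean(np.asarray(waveforms_26b_to_26e, dtype=float), axis=0)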
Embodiment 4 In Embodiment 1 to Embodiment 3, the detection device10discriminates the movement of a target from the time order of peak tops. The detection device10may discriminate the movement of the target from the time interval of the peak tops. In the present embodiment, the detection device10discriminates the movement of the target from the time order of the peak tops and the time interval of the peak tops. The detection device10of the present embodiment includes a sensor20and a controller50, similarly to the detection device10of Embodiment 1. Since the sensor20of the present embodiment is the same as the sensor20of Embodiment 1, the controller50and a detection process of the present embodiment are described below. The controller50of the present embodiment includes an input/output device51, a storage52, a driver54, a receiver56, a calculator58, a first discriminator62, a second discriminator64, and a detector66, similarly to the controller50of Embodiment 3. Since the input/output device51, the storage52, the driver54, the receiver56, the calculator58, the first discriminator62, and the second discriminator64of the present embodiment are the same as those of Embodiment 3, the detector66of the present embodiment is described. The detector66of the present embodiment classifies the type of movement of a target to be discriminated (type of gesture to be discriminated) from time intervals between the peak tops of the detection electrodes26ato26eand the peak tops of the virtual detection electrode26b-26e. The detector66of the present embodiment classifies, for example, the type of the movement of the target to be discriminated into a flick gesture and a circle gesture from time intervals between the peak tops in the moving averaged signal waveform of the detection electrode26aand the peak tops in the moving averaged average signal waveform26b-26eof the virtual detection electrode26b-26e. Specifically, when a time interval T2 between the peak top of the detection electrode26aand the peak top of the virtual detection electrode26b-26eis equal to or less than a predetermined fourth threshold value th4, the detector66classifies the type of the movement of the target to be discriminated into the flick gesture. When the time interval T2 between the peak top of the detection electrode26aand the peak top of the virtual detection electrode26b-26eis greater than the predetermined fourth threshold value th4 and smaller than a predetermined fifth threshold value th5, the detector66classifies the type of the movement of the target to be discriminated into the circle gesture. Since the time from the start to the end of a movement in the flick gesture is shorter than the time from the start to the end of a movement in the circle gesture, the detector66can classify the type of the movement of the target to be discriminated into the flick gesture and the circle gesture according to the time interval of the peak tops. The detector66of the present embodiment further discriminates the movement of the target from the time order of the peak tops of the detection electrodes26ato26eand the peak tops of the virtual detection electrode26b-26efor each classified type of movement of the target to be discriminated.
For example, when the type of the movement of the target to be discriminated is discriminated as the circle gesture and the time order of the peak tops is the order of the peak top of the detection electrode26aand the peak top of the virtual detection electrode26b-26e, the movement of the target is discriminated as a clockwise circle gesture as illustrated inFIG.23. On the other hand, when the type of the movement of the target to be discriminated is discriminated as the circle gesture and the time order of the peak tops is not a time order set in advance, it is discriminated that there is no movement of the target. Furthermore, when the type of the movement of the target to be discriminated is discriminated as the flick gesture and the time order of the peak tops is the order of the peak top of the detection electrode26aand the peak top of the virtual detection electrode26b-26e, it is discriminated that the movement of the target is a flick gesture from the +Y direction to the −Y direction. In the present embodiment, the detection device10discriminates the movement of a target from the time order of peak tops and the time interval of the peak tops, so that movements of a wider variety of targets can be more easily discriminated. Next, the detection process of the present embodiment is described. The detection process of the present embodiment is performed in the order of a drive process (step S100), a calculation process (step S200), a peak end point/peak top discrimination process (step S300), a peak discrimination process (step S400), and a non-contact detection process (step S500), similarly to the detection process of Embodiment 1. Since the drive process (step S100), the calculation process (step S200), the peak end point/peak top discrimination process (step S300), and the peak discrimination process (step S400) are the same as those of Embodiment 3, the non-contact detection process (step S500) of the present embodiment is described with reference toFIG.24. In the non-contact detection process (step S500) of the present embodiment, first, the detector66of the controller50arranges the detection electrodes26ato26eand the virtual detection electrode26b-26ein the time order of the peak top (step S512). Next, the detector66classifies the type of the movement of the target to be discriminated (type of user's gesture) from the time intervals between the peak tops of the detection electrodes26ato26eand the peak tops of the virtual detection electrode26b-26e(step S514). Specifically, when the time interval T2 between the peak top of the detection electrode26aand the peak top of the virtual detection electrode26b-26eis equal to or less than the predetermined fourth threshold value th4, the detector66classifies the type of the movement of the target to be discriminated into the flick gesture (step S514; T2≤th4). When the time interval T2 between the peak top of the detection electrode26aand the peak top of the virtual detection electrode26b-26eis greater than the predetermined fourth threshold value th4 and smaller than the predetermined fifth threshold value th5, the detector66classifies the type of the movement of the target to be discriminated into the circle gesture (step S514; th4<T2<th5). Moreover, when the time interval T2 between the peak top of the detection electrode26aand the peak top of the virtual detection electrode26b-26eis equal to or greater than the predetermined fifth threshold value th5, the detection process returns to step S302of the peak end point/peak top discrimination process (step S300).
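The classification of step S514 and the subsequent table lookups of steps S516 and S518, which are detailed next, can be sketched as follows; the threshold values th4 and th5 and the table entries are assumptions for illustration only and are not given by the present disclosure.

TH4, TH5 = 50.0, 300.0  # assumed fourth and fifth threshold values (ms)

FLICK_TABLE = {("26a", "26b-26e"): "flick from +Y to -Y"}   # hypothetical entries
CIRCLE_TABLE = {("26a", "26b-26e"): "clockwise circle"}     # hypothetical entries

def detect_gesture(peak_top_times):
    # peak_top_times: dict mapping an electrode name ("26a", "26b-26e", ...)
    # to the time of its discriminated peak top.
    order = tuple(sorted(peak_top_times, key=peak_top_times.get))
    t2 = abs(peak_top_times["26b-26e"] - peak_top_times["26a"])
    if t2 <= TH4:                       # step S514; T2 <= th4 -> flick gesture
        return FLICK_TABLE.get(order)   # step S516
    if TH4 < t2 < TH5:                  # step S514; th4 < T2 < th5 -> circle gesture
        return CIRCLE_TABLE.get(order)  # step S518
    return None                         # otherwise return to step S302

# Usage with assumed peak top times for the detection electrode 26a and the
# virtual detection electrode 26b-26e.
print(detect_gesture({"26a": 0.0, "26b-26e": 40.0}))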
When the type of the movement of the target to be discriminated is the flick gesture (step S514; T2≤th4), the detector66detects the movement of the target by referring to a lookup table indicating the relationship between the time order of the peak tops and the movement of the target in the flick gesture (step S516). When the movement of the target is not detected (step S516; NO), the detection process returns to step S302of the peak end point/peak top discrimination process (step S300). On the other hand, when the type of the movement of the target to be discriminated is the circle gesture (step S514; th4<T2<th5), the detector66detects the movement of the target by referring to a lookup table indicating the relationship between the time order of the peak tops and the movement of the target in the circle gesture (step S518). When the movement of the target is not detected (step S518; NO), the detection process returns to step S302of the peak end point/peak top discrimination process (step S300). When the movement of the target is detected in step S516or step S518(step S516; YES or step S518; YES), the detector66outputs a signal representing the movement of the detected target to the controller of the electronic device provided with the display unit200(detection device10) (step S506). When the detector66outputs the signal representing the movement of the target, the non-contact detection process (step S500) is ended. As described above, the detection device10of the present embodiment discriminates the movement of a target from the time order of peak tops and the time interval of the peak tops, so that movements of a wider variety of targets can be more easily discriminated. Furthermore, the detection device10of the present embodiment can discriminate the peak of a target having a small signal strength. Modification Although the embodiments have been described above, the present disclosure can be changed in various ways without departing from the gist. For example, the number and arrangement of the detection electrodes of the sensor20are arbitrary. For example, the detection electrodes may be arranged on the +X side and the −X side of the driving electrode24to surround the driving electrode24. Furthermore, the sensor20may include a plurality of driving electrodes24. The detection device10may discriminate the peak of a target on the basis of at least one of a time width ΔT3 from a falling end point to a peak top, a height ΔH2 from the falling end point to the peak top, and a slope Dc (ΔH2/ΔT3) on a falling side of the peak illustrated inFIG.25, in addition to the time width ΔT1 from the rising start point to the peak top, the height ΔH1 from the rising start point to the peak top, and the slope Uc on the rising side of the peak. In the embodiments, the detection device10performs a moving average process on a signal waveform indicating a change in signal strength over time. The detection device10may not perform the moving average process on the signal waveform indicating a change in signal strength over time. For example, the detection device10may discriminate a rising start point and a peak top on the basis of a first-order differential waveform and a second-order differential waveform of a signal waveform received by the receiver56. In Embodiment 3, the movement of a target is detected from the signal waveform obtained by averaging the signal waveforms of the detection electrodes26bto26e(average signal waveform26b-26eof the virtual detection electrode26b-26e).
The detection electrodes constituting the virtual detection electrode are not limited to the detection electrodes26bto26e. For example, as illustrated inFIG.26, the virtual detection electrodes may be configured from the detection electrode26aand the detection electrode26b(virtual detection electrode26a-26b), and the detection electrode26aand the detection electrode26e(virtual detection electrode26a-26e). For example, when peak tops appear in the order of the virtual detection electrodes26a-26b, the detection electrode26a, the virtual detection electrode26a-26e, and the virtual detection electrode26b-26e, the detection device10can discriminate a clockwise circle gesture. The controller50may include dedicated hardware such as an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), and a control circuit. In this case, each of the processes may be performed by individual hardware. Furthermore, the processes may be collectively performed by single hardware. Some of the processes may be performed by dedicated hardware, and others of the processes may be performed by software or firmware. The foregoing describes some example embodiments for explanatory purposes. Although the foregoing discussion has presented specific embodiments, persons skilled in the art will recognize that changes may be made in form and detail without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. This detailed description, therefore, is not to be taken in a limiting sense, and the scope of the invention is defined only by the included claims, along with the full range of equivalents to which such claims are entitled. | 49,709 |
11861107 | DETAILED DESCRIPTION In the specification, when one component (or area, layer, part, or the like) is referred to as being “on”, “connected to”, or “coupled to” another component, it should be understood that the former may be directly on, connected to, or coupled to the latter, and also may be on, connected to, or coupled to the latter via a third intervening component. Like reference numerals refer to like components. In addition, in drawings, thicknesses, proportions, and dimensions of components may be exaggerated to describe the technical features effectively. The term “and/or” includes one or more combinations of the associated listed items. The terms “first”, “second”, etc. are used to describe various components, but the components are not limited by the terms. The terms are only used to distinguish one component from another component. For example, without departing from the scope and spirit of the inventive concept, a first component may be referred to as a “second component”, and similarly, the second component may be referred to as the “first component”. The singular forms are intended to include the plural forms unless the context clearly indicates otherwise. Also, the terms “under”, “beneath”, “on”, “above”, etc. are used to describe a relationship between components illustrated in a drawing. The terms are relative and are described with reference to a direction indicated in the drawing. Unless otherwise defined, all terms (including technical terms and scientific terms) used in this specification have the same meaning as commonly understood by those skilled in the art to which the present disclosure belongs. Furthermore, terms such as terms defined in the dictionaries commonly used should be interpreted as having a meaning consistent with the meaning in the context of the related technology, and should not be interpreted in ideal or overly formal meanings unless explicitly defined herein. It will be further understood that the terms “comprises”, “includes”, “have”, etc. specify the presence of stated features, numbers, steps, operations, elements, components, or a combination thereof but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or a combination thereof. Below, embodiments of the present disclosure will be described with reference to accompanying drawings. FIG.1is a perspective view of a display device according to an embodiment of the present disclosure. Referring toFIG.1, a display device DD according to an embodiment of the present disclosure may be in the shape of a rectangle having long sides (or edges) extending in a first direction DR1and short sides (or edges) extending in a second direction DR2intersecting the first direction DR1. However, the present disclosure is not limited thereto. For example, the display device DD may have various shapes such as a circle or a polygon. Hereinafter, a direction that is substantially perpendicular to a plane defined by the first direction DR1and the second direction DR2is defined as a third direction DR3. Also, in the specification, the expression “when viewed from above a plane” may mean “when viewed in the third direction DR3”. An upper surface of the display device DD may be defined as a display surface DS and may have a plane defined by the first direction DR1and the second direction DR2. Images IM generated by the display device DD may be provided to a user through the display surface DS. 
The display surface DS may include a display area DA and a non-display area NDA around the display area DA. The display area DA may display an image and the non-display area NDA may not display an image. The non-display area NDA may surround the display area DA and may define a border of the display device DD printed with a given color. FIG.2is a diagram illustrating an example of a cross section of a display device illustrated inFIG.1. FIG.2illustrates a cross-section of the display device DD when viewed in the first direction DR1. Referring toFIG.2, the display device DD may include a display panel DP, an input sensor ISP, an anti-reflection layer RPL, a window WIN, a panel protection film PPF, and first and second adhesive layers AL1and AL2. The display panel DP may be a flexible display panel. The display panel DP according to an embodiment of the present disclosure may be a light emitting display panel, but the display panel DP is not particularly limited thereto. For example, the display panel DP may be an organic light emitting display panel or an inorganic light emitting display panel. An emission layer of the organic light emitting display panel may include an organic light emitting material. An emission layer of the inorganic light emitting display panel may include a quantum dot, a quantum rod, or the like. Below, the description will be given under the condition that the display panel DP is an organic light emitting display panel. The input sensor ISP may be disposed on the display panel DP. The input sensor ISP may include a plurality of sensors (not illustrated) for sensing an external input in a capacitive scheme. The input sensor ISP may be directly manufactured on the display panel DP in the process of manufacturing the display device DD. However, the present disclosure is not limited thereto. For example, the input sensor ISP may be manufactured with a panel independent of the display panel DP and may be bonded to the display panel DP by an adhesive layer. The anti-reflection layer RPL may be disposed on the input sensor ISP. The anti-reflection layer RPL may be directly manufactured on the input sensor ISP in the process of manufacturing the display device DD. However, the present disclosure is not limited thereto. For example, the anti-reflection layer RPL may be manufactured with a separate panel and may be bonded to the input sensor ISP by an adhesive layer. The anti-reflection layer RPL may include a film for preventing an external light from being reflected. The anti-reflection layer RPL may reduce the reflectance of the external light incident from above the display device DD toward the display panel DP. As the anti-reflection layer RPL is provided, the external light may not be visually perceived by the user. When the external light traveling toward the display panel DP is reflected from the display panel DP and is again provided to an external user, the user may visually perceive the external light, like a mirror. To prevent the issue, the anti-reflection layer RPL may include a plurality of color filters displaying the same colors as pixels of the display panel DP. The color filters may filter an external light with the same colors as the pixels. In this case, the external light may not be visually perceived by the user. However, the present disclosure is not limited thereto. For example, the anti-reflection layer RPL may include a retarder and/or a polarizer for the purpose of reducing the reflectance of the external light. 
The window WIN may be disposed on the anti-reflection layer RPL. The window WIN may protect the display panel DP, the input sensor ISP, and the anti-reflection layer RPL from external scratches and impacts. The panel protection film PPF may be disposed under the display panel DP. The panel protection film PPF may protect a lower portion (or a lower surface) of the display panel DP. The panel protection film PPF may include a flexible plastic material such as polyethylene terephthalate (PET). The first adhesive layer AL1may be interposed between the display panel DP and the panel protection film PPF, and the display panel DP and the panel protection film PPF may be bonded to each other by the first adhesive layer AL1. The second adhesive layer AL2may be interposed between the window WIN and the anti-reflection layer RPL, and the window WIN and the anti-reflection layer RPL may be bonded to each other by the second adhesive layer AL2. FIG.3is a view illustrating an example of a cross section of a display panel illustrated inFIG.2. In an embodiment,FIG.3shows the cross section of the display panel DP when viewed in the first direction DR1. Referring toFIG.3, the display panel DP may include a substrate SUB, a circuit element layer DP-CL disposed on the substrate SUB, a display element layer DP-OLED disposed on the circuit element layer DP-CL, and a thin film encapsulation layer TFE disposed on the display element layer DP-OLED. The substrate SUB may include the display area DA and the non-display area NDA surrounding the display area DA. The substrate SUB may include a glass or a flexible plastic material such as polyimide (PI). The display element layer DP-OLED may be disposed in the display area DA. A plurality of pixels may be disposed in the circuit element layer DP-CL and the display element layer DP-OLED. Each of the pixels may include transistors disposed in the circuit element layer DP-CL and a light emitting device disposed in the display element layer DP-OLED and connected to the transistors. The thin film encapsulation layer TFE may be disposed on the circuit element layer DP-CL so as to cover the display element layer DP-OLED. The thin film encapsulation layer TFE may protect the pixels from moisture, oxygen, and external foreign objects. FIG.4is a plan view of a display panel illustrated inFIG.2. Referring toFIG.4, the display device DD may include the display panel DP, a scan driver SDV, a data driver DDV, a light emission driver EDV, and a plurality of first pads PD1. The display panel DP may be in the shape of a rectangle having long sides extending in the first direction DR1and short sides extending in the second direction DR2. However, the shape of the display panel DP is not limited thereto. The display panel DP may include the display area DA and the non-display area NDA surrounding the display area DA. The display panel DP may include a plurality of pixels PX, a plurality of scan lines SL1to SLm, a plurality of data lines DL1to DLn, a plurality of light emission lines EL1to ELm, first and second control lines CSL1and CSL2, first and second power supply lines PL1and PL2, and connecting lines CNL. Herein, m and n are natural numbers. The pixels PX may be arranged in the display area DA. The scan driver SDV and the light emission driver EDV may be disposed in the non-display area NDA so as to be adjacent to the long sides of the display panel DP, respectively.
The data driver DDV may be disposed in the non-display area NDA so as to be adjacent to one of the short sides of the display panel DP. In a plan view, the data driver DDV may be disposed adjacent to a lower end of the display panel DP. The scan lines SL1to SLm may extend in the second direction DR2and may be connected to the pixels PX and the scan driver SDV. The data lines DL1to DLn may extend in the first direction DR1and may be connected to the pixels PX and the data driver DDV. The light emission lines EL1to ELm may extend in the second direction DR2and may be connected to the pixels PX and the light emission driver EDV. The first power supply line PL1may extend in the first direction DR1and may be disposed in the non-display area NDA. The first power supply line PL1may be interposed between the display area DA and the light emission driver EDV. The connecting lines CNL may extend in the second direction DR2, may be arranged in the first direction DR1, and may be connected to the first power supply line PL1and the pixels PX. A first voltage may be applied to the pixels PX through the first power supply line PL1and the connecting lines CNL that are connected to each other. The second power supply line PL2may be disposed in the non-display area NDA and may extend along the long sides of the display panel DP and one short side of the display panel DP at which the data driver DDV is not disposed. The second power supply line PL2may be disposed to surround the scan driver SDV and the light emission driver EDV. Although not illustrated, the second power supply line PL2may extend toward the display area DA and may be connected to the pixels PX. A second voltage that is lower than the first voltage may be applied to the pixels PX through the second power supply line PL2. The first control line CSL1may be connected to the scan driver SDV and may extend toward the lower end of the display panel DP. The second control line CSL2may be connected to the light emission driver EDV and may extend toward the lower end of the display panel DP. The data driver DDV may be interposed between the first control line CSL1and the second control line CSL2. The first pads PD1may be disposed in the non-display area NDA so as to be adjacent to the lower end of the display panel DP and may be closer to the lower end of the display panel DP than the data driver DDV. The data driver DDV, the first power supply line PL1, the second power supply line PL2, the first control line CSL1, and the second control line CSL2may be connected to respective first pads PD1. The data lines DL1to DLn may be connected to the data driver DDV, and the data driver DDV may be connected to respective first pads PD1corresponding to the data lines DL1to DLn. Although not illustrated, the display device DD may further include a timing controller for controlling operations of the scan driver SDV, the data driver DDV, and the light emission driver EDV, and a voltage generator (not shown) for generating the first and second voltages. The timing controller and the voltage generator may be connected to respective first pads PD1through a printed circuit board. The scan driver SDV may generate a plurality of scan signals, and the plurality of scan signals may be applied to the pixels PX through the scan lines SL1to SLm. The data driver DDV may generate a plurality of data voltages, and the plurality of data voltages may be applied to the pixels PX through the data lines DL1to DLn. 
The light emission driver EDV may generate a plurality of light emission signals, and the plurality of light emission signals may be applied to the pixels PX through the light emission lines EL1to ELm. The pixels PX may be provided with the data voltages in response to the scan signals. The pixels PX may display an image by emitting light of luminance corresponding to the data voltages in response to the light emission signals. FIG.5is a plan view of an input sensor illustrated inFIG.2. Referring toFIG.5, the input sensor ISP may include a plurality of sensing electrodes SE1and SE2, a plurality of lines TXL and RXL, and a plurality of second and third pads PD2and PD3. The sensing electrodes SE1and SE2, the lines TXL and RXL, and the second and third pads PD2and PD3may be disposed on the thin film encapsulation layer TFE of the display panel DP. A planar area of the input sensor ISP may include an active area AA (refer toFIG.6) and a non-active area NAA (refer toFIG.6) around the active area AA. The active area AA may overlap the display area DA, and the non-active area NAA may overlap the non-display area NDA in a plan view. The sensing electrodes SE1and SE2may be disposed in the active area AA, and the second and third pads PD2and PD3may be disposed in the non-active area NAA. In a plan view, the second pads PD2and the third pads PD3may be disposed adjacent to a lower end of the input sensor ISP. In a plan view, the first pads PD1may be interposed between the second pads PD2and the third pads PD3. The lines TXL and RXL may be connected to first ends of the sensing electrodes SE1and SE2and may extend to the non-active area NAA so as to be connected to the second and third pads PD2and PD3, respectively. Although not illustrated, a sensing control part for controlling the input sensor ISP may be connected to the second and third pads PD2and PD3through a printed circuit board. The sensing electrodes SE1and SE2may include the plurality of first sensing electrodes SE1extending in the first direction DR1and arranged in the second direction DR2, and the plurality of second sensing electrodes SE2extending in the second direction DR2and arranged in the first direction DR1. The second sensing electrodes SE2may be insulated from the first sensing electrodes SE1and may extend to intersect the first sensing electrodes SE1. The lines TXL and RXL may include the plurality of first signal lines TXL connected to the first sensing electrodes SE1and the plurality of second signal lines RXL connected to the second sensing electrodes SE2. The first signal lines TXL may extend to the non-active area NAA and may be connected to the second pads PD2. The second signal lines RXL may extend to the non-active area NAA and may be connected to the third pads PD3. In an embodiment, in a plan view, the first signal lines TXL may be disposed in the non-active area NAA disposed adjacent to a lower side of the active area AA. Also, in a plan view, the second signal lines RXL may be disposed in the non-active area NAA disposed adjacent to a right side of the active area AA. Each of the first sensing electrodes SE1may include a plurality of first sensing parts SP1arranged in the first direction DR1and a plurality of connecting patterns CP connecting the first sensing parts SP1. Each of the connecting patterns CP may overlap a second sensing part SP2disposed between the first sensing parts SP1along the first direction DR1and may connect the first sensing parts SP1disposed adjacent to each other in the first direction DR1. 
Each of the connecting patterns CP may be interposed between two first sensing parts SP1adjacent in the first direction DR1and may connect the two first sensing parts SP1. For example, an insulating layer (not illustrated) may be interposed between the connecting patterns CP and the first sensing parts SP1, and the connecting patterns CP may be connected to the first sensing parts SP1through contact holes defined in the insulating layer. Each of the second sensing electrodes SE2may include the plurality of second sensing parts SP2arranged in the second direction DR2and a plurality of extending patterns EP connecting the plurality of second sensing parts SP2adjacent to each other. In each of the second sensing electrodes SE2, the extending patterns EP and the second sensing electrodes SE2may be integrally formed. Each of the extending patterns EP may be interposed between two second sensing parts SP2adjacent in the second direction DR2and may extend from the two second sensing parts SP2. The first sensing parts SP1and the second sensing parts SP2may not overlap each other and may be spaced from each other; in this case, the first sensing parts SP1and the second sensing parts SP2may be alternately arranged. Capacitances may be formed by the first sensing parts SP1and the second sensing parts SP2. In a plan view, the extending patterns EP may be disposed between the connecting patterns CP and may not overlap the connecting patterns CP. The first and second sensing parts SP1and SP2and the extending patterns EP may be disposed in the same layer. The connecting patterns CP may be disposed in a layer different from that of the first and second sensing parts SP1and SP2and the extending patterns EP. FIG.6is a diagram illustrating a driving part of an input sensor ofFIG.5, which is connected to sensing parts of the input sensor. In an embodiment, compared toFIG.5, the non-active area NAA ofFIG.6is reduced, and the first and second lines TXL and RXL are illustrated in a state of extending to the outside of the non-active area NAA and being connected to a transmit circuit TXC and a receive circuit RXC. Referring toFIG.6, the input sensor ISP may include the transmit circuit TXC and the receive circuit RXC that constitute the driving part for driving the first sensing electrodes SE1and the second sensing electrodes SE2. The transmit circuit TXC may apply driving signals TS to the first lines TXL. The driving signals TS may be applied to the first sensing electrodes SE1through the first lines TXL. A touch of the user may be sensed by the first sensing electrodes SE1and the second sensing electrodes SE2. Capacitances of the first and second sensing electrodes SE1and SE2, which are changed by the user touch, may be output through the second lines RXL as sensing signals RS. The sensing signals RS may be provided to the receive circuit RXC through the second lines RXL. The receive circuit RXC may amplify and demodulate a sensing signal so as to be converted into a digital signal. A signal output from the receive circuit RXC may be used to calculate touch coordinates at an external control module (not illustrated). FIG.7is a diagram illustrating a configuration of a receive circuit illustrated inFIG.6.FIGS.8A to8Care diagrams illustrating various embodiments of an amplifier circuit illustrated inFIG.7.FIG.9is a diagram illustrating a configuration of a switching element illustrated inFIG.7in detail. 
Referring toFIG.7, the receive circuit RXC may include an amplifier circuit AMC, a demodulating circuit DMC connected to the amplifier circuit AMC, and a summing circuit SMC connected to the demodulating circuit DMC. The circuit illustrated inFIG.7may be a configuration connected to one of the second sensing electrodes SE2illustrated inFIG.6. That is, the receiving circuits RXC may include a plurality of the amplifier circuits AMC and a plurality of demodulating circuits DMC each connected to the second sensing electrodes SE2illustrated inFIG.6, respectively. Accordingly, the amplifier circuit AMC illustrated inFIG.7may be connected to one of the second sensing electrodes SE2illustrated inFIG.6. The sensing signal RS illustrated inFIG.6may be provided to the amplifier circuit AMC as an input signal Vin. The input signal Vin may be a modulation signal. The amplifier circuit AMC may amplify the input signal Vin and output an output signal Vout which is amplified to the demodulating circuit DMC. The demodulating circuit DMC may output a demodulation signal to a summing circuit SMC. Referring toFIGS.8A to8C, the amplifier circuit AMC may include various amplifiers AMP1, AMP2, and AMP3. The amplifiers AMP1, AMP2, and AMP3may be inverting amplifiers. Referring toFIG.8A, the amplifier circuit AMC may include the amplifier AMP1and a capacitor Cfb. The capacitor Cfb may be connected to a negative input terminal (−) of the amplifier AMP1and an output terminal of the amplifier AMP1. The input signal Vin may be input to the negative input terminal (−) of the amplifier AMP1, and a positive input terminal (+) of the amplifier AMP1may receive a reference voltage Vref. The amplifier AMP1may amplify the input signal Vin such that a polarity of the input signal Vin is inverted and may output an amplified input signal as the output signal Vout. Referring toFIG.8B, the amplifier circuit AMC may include the amplifier AMP2, the capacitor Cfb, and a resistor Rfb. The capacitor Cfb may be connected to a negative input terminal (−) of the amplifier AMP2and an output terminal of the amplifier AMP2. The resistor Rfb may be connected to the negative input terminal (−) of the amplifier AMP2and the output terminal of the amplifier AMP2. The resistor Rfb and the capacitor Cfb may be connected in parallel. The input signal Vin may be input to the negative input terminal (−) of the amplifier AMP2, and a positive input terminal (+) of the amplifier AMP2may receive the reference voltage Vref. The amplifier AMP2may amplify the input signal Vin such that a polarity of the input signal Vin is inverted and may output an amplified input signal as the output signal Vout. Referring toFIG.8C, the amplifier circuit AMC may include the amplifier AMP3and the resistor Rfb. The resistor Rfb may be connected to a negative input terminal (−) of the amplifier AMP3and an output terminal of the amplifier AMP3. The input signal Vin may be input to the negative input terminal (−) of the amplifier AMP3, and a positive input terminal (+) of the amplifier AMP3may receive the reference voltage Vref. The amplifier AMP3may amplify the input signal Vin such that a polarity of the input signal Vin is inverted and may output an amplified input signal as the output signal Vout. Referring toFIG.7, an output terminal of the amplifier circuit AMC may be connected to a first node N1of the demodulating circuit DMC. The output signal Vout output from the amplifier circuit AMC may be applied to the first node N1. 
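FIGS.8A to 8C specify only that the amplifiers AMP1, AMP2, and AMP3 invert the input signal Vin about the reference voltage Vref; no transfer characteristics are given. For orientation only, the sketch below models an ideal inverting stage. It is an assumption for illustration, not part of the disclosure: it presumes an input element between the sensing line and the negative input terminal (a series input resistance R_IN for the resistive-feedback case, a source capacitance C_S for the capacitive-feedback case), and neither element appears in the figures. In the FIG.8B configuration, the feedback resistor Rfb and the feedback capacitor Cfb in parallel would combine the two behaviors, with Rfb additionally stabilizing the DC operating point.

    # Illustrative model of an ideal inverting amplifier referenced to Vref.
    # FIGS. 8A to 8C give no transfer functions; R_IN (FIG. 8C case) and C_S
    # (FIG. 8A case) are assumed input elements that do not appear in the figures.

    V_REF = 1.0  # reference voltage applied to the positive input terminal (+)

    def inverting_resistive(v_in, r_fb=100e3, r_in=10e3):
        # FIG. 8C style: feedback resistor Rfb, assumed series input resistor R_IN.
        return V_REF - (r_fb / r_in) * (v_in - V_REF)

    def inverting_capacitive(v_in, c_fb=1e-12, c_s=10e-12):
        # FIG. 8A style: feedback capacitor Cfb, assumed capacitive source C_S.
        return V_REF - (c_s / c_fb) * (v_in - V_REF)

    print(inverting_resistive(1.01))   # about 0.9: gain of -10, inverted about Vref
    print(inverting_capacitive(1.01))  # about 0.9: gain of -C_S/Cfb = -10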
The demodulating circuit DMC may include a rectifier circuit RTC, an analog-to-digital converter ADC, an output switching element OSW, first and second reset switching elements RSW1and RSW2, and first and second switching circuit SWC1and SWC2. The rectifier circuit RTC may be connected to the first node N1, that is, may be connected to the amplifier circuit AMC through the first node N1. The rectifier circuit RTC may receive the output signal Vout from the amplifier circuit AMC through the first node N1. The rectifier circuit RTC may perform a rectifying operation on a voltage of the first node N1. The driving signal TS and the sensing signal RS may be sinusoidal signals; accordingly, the input signal Vin, the output signal Vout, and the voltage of the first node N1may also be in the form of a sine wave. The rectifier circuit RTC may convert a positive-polarity voltage and a negative-polarity voltage of a sinusoidal signal into DC voltages. The above operations will be described in detail later. The analog-to-digital converter ADC may be connected to the rectifier circuit RTC and the summing circuit SMC. An input terminal of the analog-to-digital converter ADC may be connected to the rectifier circuit RTC, and an output terminal of the analog-to-digital converter ADC may be connected to the summing circuit SMC. The analog-to-digital converter ADC may be connected to the rectifier circuit RTC through the output switching element OSW. The analog-to-digital converter ADC may receive a signal output from the rectifier circuit RTC and may convert the received signal into a digital signal. The analog-to-digital converter ADC may provide the digital signal to the summing circuit SMC. The rectifier circuit RTC may include a first rectifier circuit RTC1connected between the first node N1and a second node N2and a second rectifier circuit RTC2connected between the first node N1and a third node N3. The first rectifier circuit RTC1may include a first diode Dp connected between the first node N1to the second node N2in a forward direction. The second rectifier circuit RTC2may include a second diode Dn connected between the first node N1and the third node N3in a reverse (or backward) direction. The first rectifier circuit RTC1may include a first diode Dp and a first capacitor Cp. The first diode Dp may be connected between the first node N1and the second node N2in a forward direction. For example, an anode of the first diode Dp may be connected to the first node N1and a cathode of the first diode Dp may be connected to the second node N2. The first capacitor Cp may include a first electrode connected to the second node N2and a second electrode connected to a reference node RN to which the reference voltage Vref is applied. The second rectifier circuit RTC2may include a second diode Dn and a second capacitor Cn. The second diode Dn may be connected between the first node N1and the third node N3in a reverse direction. For example, an anode of the second diode Dn may be connected to the third node N3and a cathode of the second diode Dn may be connected to the first node N1. The second capacitor Cn may include a first electrode connected to the third node N3and a second electrode connected to the reference node RN. The output switching element OSW may switch the connection of the analog-to-digital converter ADC and the rectifier circuit RTC. The analog-to-digital converter ADC may be connected to the second node N2and the third node N3through the output switching element OSW. 
The first rectifier circuit RTC1 may be connected to the analog-to-digital converter ADC through the second node N2. The second rectifier circuit RTC2 may be connected to the analog-to-digital converter ADC through the third node N3. The output switching element OSW may include a first output switching element OSW1 and a second output switching element OSW2. The first and second output switching elements OSW1 and OSW2 may selectively connect the analog-to-digital converter ADC to the first rectifier circuit RTC1 and the second rectifier circuit RTC2. The first output switching element OSW1 may be connected to a first input terminal IN1 of the analog-to-digital converter ADC. The second output switching element OSW2 may be connected to a second input terminal IN2 of the analog-to-digital converter ADC. The first rectifier circuit RTC1 may be connected to the analog-to-digital converter ADC by the first and second output switching elements OSW1 and OSW2. For example, to connect the first rectifier circuit RTC1 and the analog-to-digital converter ADC, the first output switching element OSW1 may connect the second node N2 to the analog-to-digital converter ADC, and the second output switching element OSW2 may connect the reference node RN to the analog-to-digital converter ADC. The first output switching element OSW1 may connect the second node N2 to the first input terminal IN1 of the analog-to-digital converter ADC, and the second output switching element OSW2 may connect the reference node RN to the second input terminal IN2 of the analog-to-digital converter ADC. The second rectifier circuit RTC2 may be connected to the analog-to-digital converter ADC by the first and second output switching elements OSW1 and OSW2. For example, to connect the second rectifier circuit RTC2 and the analog-to-digital converter ADC, the second output switching element OSW2 may connect the third node N3 to the analog-to-digital converter ADC, and the first output switching element OSW1 may connect the reference node RN to the analog-to-digital converter ADC. The first output switching element OSW1 may connect the reference node RN to the first input terminal IN1 of the analog-to-digital converter ADC, and the second output switching element OSW2 may connect the third node N3 to the second input terminal IN2 of the analog-to-digital converter ADC. The first and second output switching elements OSW1 and OSW2 may be controlled by the first and second switching control parts SWC1 and SWC2. The first output switching element OSW1 and the second output switching element OSW2 may be respectively connected to the second node N2 and the reference node RN by a first output switching signal OS1 from the first switching control part SWC1. The first output switching element OSW1 and the second output switching element OSW2 may be respectively connected to the reference node RN and the third node N3 by a second output switching signal OS2 from the second switching control part SWC2. Referring to FIG.9, the first output switching element OSW1 may include a first switching element S1 that is turned on or turned off by the first output switching signal OS1 and a second switching element S2 that is turned on or turned off by the second output switching signal OS2. The first switching element S1 may control the connection of the second node N2 and the first input terminal IN1 of the analog-to-digital converter ADC.
The second switching element S2may control the connection of the reference node RN and the first input terminal IN1of the analog-to-digital converter ADC. The second output switching element OSW2may include a third switching element S3that is turned on or turned off by the first output switching signal OS1and a fourth switching element S4that is turned on or turned off by the second output switching signal OS2. The third switching element S3may control the connection of the reference node RN and the second input terminal IN2of the analog-to-digital converter ADC. The fourth switching element S4may control the connection of the third node N3and the second input terminal IN2of the analog-to-digital converter ADC. The first, second, third, and fourth switching elements S1, S2, S3, and S4may be NMOS transistors. The first and third switching elements S1and S3may be simultaneously controlled by the first output switching signal OS1. The second and fourth switching elements S2and S4may be simultaneously controlled by the second output switching signal OS2. The first reset switching element RSW1may be connected between the first rectifier circuit RTC1and the reference node RN and may reset the second node N2of the first rectifier circuit RTC1to the reference voltage Vref. The second reset switching element RSW2may be connected between the second rectifier circuit RTC2and the reference node RN and may reset the third node N3of the second rectifier circuit RTC2to the reference voltage Vref. The first reset switching element RSW1which is connected between the second node N2and the reference node RN may be turned on or turned off by a first reset signal RS1output from the first switching control part SWC1. The second reset switching element RSW2which is connected between the third node N3and the reference node RN may be turned on or turned off by a second reset signal RS2output from the second switching control part SWC2. The first reset switching element RSW1may be turned on by the first reset signal RS1and may reset the second node N2of the first rectifier circuit RTC1to the reference voltage Vref. The second reset switching element RSW2may be turned on by the second reset signal RS2and may reset the third node N3of the second rectifier circuit RTC2to the reference voltage Vref. The summing circuit SMC may sum “N” digital signals output from the analog-to-digital converter ADC. For example, “N” digital signals may be continuous output from the demodulating circuit DMC by continuously processing the sensing signals RS. The summing circuit SMC may sum and output the “N” digital signals that are continuously output. The summing circuit SMC may add a current digital signal to a previous digital signal. FIG.10is a timing diagram for describing an operation of a receive circuit illustrated inFIG.7.FIGS.11A to11Dare diagrams illustrating operating states of a receive circuit according to the timing diagram ofFIG.10. Referring toFIGS.10and11A, a voltage of the first node N1may have a sine wave. A positive-polarity voltage +Va and a negative-polarity voltage −Va that is determined with respect to the reference voltage Vref may be applied to the first node N1. The voltage of the first node N1may be defined as a voltage corresponding to the output signal Vout of the amplifier circuit AMC described above. The positive-polarity voltage +Va may be applied to the second node N2through the first diode Dp disposed in the forward direction. The first diode Dp may have a first threshold voltage Vthp. 
When a voltage difference across opposite ends of the first diode Dp (e.g., a voltage difference between the first node N1 and the second node N2) is greater than the first threshold voltage Vthp, a current may flow through the first diode Dp. Even though the positive-polarity voltage +Va at the first node N1 increases, a current may not flow through the first diode Dp until the voltage difference across the opposite ends of the first diode Dp is greater than the first threshold voltage Vthp. Accordingly, a voltage of the second node N2 may not change during a first period P1, which is the time required for the voltage difference across the opposite ends of the first diode Dp to reach the first threshold voltage Vthp. When the voltage difference across the opposite ends of the first diode Dp is greater than the first threshold voltage Vthp, a current may flow through the first diode Dp, and thus, a voltage of the second node N2 may increase. In this case, the voltage of the second node N2 may increase in proportion to an increase in the voltage of the first node N1. The voltage of the second node N2 may increase until the voltage of the first node N1 reaches a maximum value; afterwards, the voltage of the second node N2 may be maintained at a DC voltage. A maximum voltage value of the second node N2 may be defined as a peak voltage, and the peak voltage of the second node N2 may be "Vref+Va−Vthp", which is a voltage obtained by subtracting the first threshold voltage Vthp of the first diode Dp from the sum of the reference voltage Vref and the positive-polarity voltage +Va. The voltage of the second node N2 may be charged in the first capacitor Cp. Below, high levels of the first and second output switching signals OS1 and OS2 and the first and second reset signals RS1 and RS2 may be defined as activated signals; low levels of the first and second output switching signals OS1 and OS2 and the first and second reset signals RS1 and RS2 may be defined as deactivated signals. The first output switching signal OS1 that is activated may be applied to the first and second output switching elements OSW1 and OSW2 after a predetermined time (e.g., a second period P2) has passed from a point in time when the voltage of the second node N2 reaches the peak voltage "Vref+Va−Vthp". The first and third switching elements S1 and S3 illustrated in FIG.9 may be turned on by the first output switching signal OS1. The first output switching element OSW1 and the second output switching element OSW2 may be respectively connected to the second node N2 and the reference node RN. As a result, the first rectifier circuit RTC1 may be connected to the analog-to-digital converter ADC. The second node N2 may be connected to the first input terminal IN1 of the analog-to-digital converter ADC, and the reference node RN may be connected to the second input terminal IN2 of the analog-to-digital converter ADC. As a sensing voltage, a first charging voltage charged in the first capacitor Cp may be provided to the analog-to-digital converter ADC. The analog-to-digital converter ADC may convert the sensing voltage output from the first rectifier circuit RTC1 into a first digital signal DGT1 and may output the first digital signal DGT1 to the summing circuit SMC. The peak voltage "Vref+Va−Vthp" may be applied to the first input terminal IN1 of the analog-to-digital converter ADC, and the reference voltage Vref may be applied to the second input terminal IN2 of the analog-to-digital converter ADC.
The analog-to-digital converter ADC may convert a potential difference between the first input terminal IN1 and the second input terminal IN2 into a digital signal. Accordingly, a sensing voltage value processed by the analog-to-digital converter ADC may be set to a value of "Va−Vthp", which is the potential difference between the peak voltage "Vref+Va−Vthp" and the reference voltage Vref. The analog-to-digital converter ADC may convert the value of "Va−Vthp" into a digital signal and may output the first digital signal DGT1 as a conversion result. According to the above description, the positive-polarity voltage +Va of the output signal Vout of the amplifier circuit AMC may be provided to the first rectifier circuit RTC1, and the output of the first rectifier circuit RTC1 may be provided to the analog-to-digital converter ADC and may be converted into the first digital signal DGT1. Referring to FIGS.10 and 11B, the first reset signal RS1 that is activated may be applied to the first reset switching element RSW1 after a predetermined time (e.g., a third period P3) has passed from a point in time when the first output switching signal OS1 is deactivated. The first reset switching element RSW1 may be turned on, and the voltage charged in the first capacitor Cp may be discharged to the reference voltage Vref. The operation in which the voltage of the first capacitor Cp is discharged to the reference voltage Vref may be defined as a reset operation. An operation of the receive circuit RXC associated with the negative-polarity voltage −Va, which will be described below, may be the same as the operation of the receive circuit RXC associated with the positive-polarity voltage +Va except that the phase associated with the third node N3 differs. That is, a signal processing operation associated with the negative-polarity voltage −Va may be performed based on the following timing and may be substantially the same as the signal processing operation associated with the positive-polarity voltage +Va. Referring to FIGS.10 and 11C, the negative-polarity voltage −Va may be applied to the first node N1. The negative-polarity voltage −Va may be applied to the third node N3 through the second diode Dn disposed in the reverse direction. The second diode Dn may have a second threshold voltage Vthn. Even though the negative-polarity voltage −Va at the first node N1 decreases, a current may not flow through the second diode Dn until a voltage difference across opposite ends of the second diode Dn is greater than the second threshold voltage Vthn. Accordingly, a voltage of the third node N3 may not change during a given time. When the voltage difference across the opposite ends of the second diode Dn is greater than the second threshold voltage Vthn, a current may flow through the second diode Dn, and thus, the voltage of the third node N3 may decrease. In this case, the voltage of the third node N3 may decrease to a minimum voltage value as the voltage of the first node N1 decreases. The minimum voltage value of the third node N3 may be defined as a peak voltage, and the peak voltage of the third node N3 may be "Vref−Va+Vthn", which is a voltage obtained by adding the second threshold voltage Vthn of the second diode Dn to the sum of the reference voltage Vref and the negative-polarity voltage −Va. The voltage of the third node N3 may be charged in the second capacitor Cn.
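To make the rectifier arithmetic above concrete, the following minimal numerical sketch computes the peak voltages held at the second node N2 and the third node N3 and the differences from the reference voltage Vref that the analog-to-digital converter ADC digitizes (the negative-branch conversion is described just below). The numeric values of Vref, Va, Vthp, and Vthn are arbitrary assumptions chosen only for illustration.

    # Minimal numerical sketch of the half-wave rectifier arithmetic described above.
    # The numeric values are arbitrary assumptions chosen only for illustration.

    V_REF = 1.00   # reference voltage Vref
    V_A   = 0.50   # amplitude Va of the sinusoidal voltage at the first node N1
    V_THP = 0.25   # first threshold voltage Vthp of the first diode Dp
    V_THN = 0.25   # second threshold voltage Vthn of the second diode Dn

    # Peak voltage held at the second node N2 after the positive half-wave:
    peak_n2 = V_REF + V_A - V_THP          # "Vref + Va - Vthp"

    # Peak voltage held at the third node N3 after the negative half-wave:
    peak_n3 = V_REF - V_A + V_THN          # "Vref - Va + Vthn"

    # The analog-to-digital converter ADC digitizes the difference between the
    # selected node and the reference node RN (Vref):
    dgt1 = peak_n2 - V_REF                 # 0.25 = Va - Vthp   (first digital signal DGT1)
    dgt2 = peak_n3 - V_REF                 # -0.25 = -Va + Vthn (second digital signal DGT2)

    print(dgt1, dgt2)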
The second output switching signal OS2 that is activated may be applied to the first and second output switching elements OSW1 and OSW2 after a predetermined time has passed from a point in time when the voltage of the third node N3 reaches the peak voltage "Vref−Va+Vthn". The second and fourth switching elements S2 and S4 illustrated in FIG.9 may be turned on by the second output switching signal OS2. The first output switching element OSW1 and the second output switching element OSW2 may be respectively connected to the reference node RN and the third node N3. As a result, the second rectifier circuit RTC2 may be connected to the analog-to-digital converter ADC. The reference node RN may be connected to the first input terminal IN1 of the analog-to-digital converter ADC, and the third node N3 may be connected to the second input terminal IN2 of the analog-to-digital converter ADC. As a sensing voltage, a second charging voltage charged in the second capacitor Cn may be provided to the analog-to-digital converter ADC. The analog-to-digital converter ADC may convert the sensing voltage output from the second rectifier circuit RTC2 into a second digital signal DGT2 and may output the second digital signal DGT2 to the summing circuit SMC. The reference voltage Vref may be applied to the first input terminal IN1, and the peak voltage "Vref−Va+Vthn" may be applied to the second input terminal IN2. A sensing voltage value processed by the analog-to-digital converter ADC may be set to a value of "−Va+Vthn", which is the potential difference between the reference voltage Vref and the peak voltage "Vref−Va+Vthn". The analog-to-digital converter ADC may convert the value of "−Va+Vthn" into a digital signal and may output the second digital signal DGT2 as a conversion result. According to the above description, the negative-polarity voltage −Va of the output signal Vout of the amplifier circuit AMC may be provided to the second rectifier circuit RTC2, and the output of the second rectifier circuit RTC2 may be provided to the analog-to-digital converter ADC and may be converted into the second digital signal DGT2. The first and second digital signals DGT1 and DGT2 output from the analog-to-digital converter ADC may be defined as sensing values. Referring to FIGS.10 and 11D, the second reset signal RS2 that is activated may be applied to the second reset switching element RSW2 after a predetermined time has passed from a point in time when the second output switching signal OS2 is deactivated. The second reset switching element RSW2 may be turned on, and the voltage charged in the second capacitor Cn may be discharged to the reference voltage Vref. The operation in which the voltage of the second capacitor Cn is discharged to the reference voltage Vref may be defined as a reset operation. The first and second digital signals DGT1 and DGT2 may be provided to the summing circuit SMC. The first and second digital signals DGT1 and DGT2 may be substantially defined as sensing values sensed by the first and second sensing parts SP1 and SP2. When only one sensing value is used as sensing data, noise may be included in that single sensing value. When one sensing value including noise is "11" and a normal sensing value is "10", an error of 10% may occur. When one sensing value including noise and 9 normal sensing values are added and output by the summing circuit SMC, because the value obtained by adding the 10 sensing values is "101" and the value obtained by adding 10 normal sensing values is "100", an error of 1% may occur.
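The 10% and 1% error figures quoted above follow from simple accumulation in the summing circuit SMC, as the short sketch below reproduces. The values are the same illustrative ones used in the paragraph above, not measured data.

    # Reproduces the summing-circuit (SMC) noise example given above.
    normal_value = 10              # a noise-free sensing value
    noisy_value  = 11              # one sensing value corrupted by noise

    # Using a single sample, the relative error is 10%.
    single_error = abs(noisy_value - normal_value) / normal_value
    print(single_error)            # 0.1

    # Summing N = 10 samples, only one of which is noisy, reduces the error to 1%.
    summed       = noisy_value + 9 * normal_value   # 101, as in the text
    summed_ideal = 10 * normal_value                # 100
    summed_error = abs(summed - summed_ideal) / summed_ideal
    print(summed_error)            # 0.01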
Accordingly, in the case of adding and using a plurality of sensing values, a signal-to-noise ratio (SNR) may be improved. In an embodiment of the present disclosure, the summing circuit SMC may add “N” sensing values output from the analog-to-digital converter ADC. Herein, N may be a natural number of 2 or more. A value that is obtained by adding “N” sensing values may be output from the summing circuit SMC and may be used to calculate touch coordinates at an external control module (not illustrated). The demodulating circuit DMC of the receive circuit RXC according to an embodiment of the present disclosure may be implemented with two diodes Dp and Dn, two capacitors Cp and Cn, two reset switching elements RSW1and RSW2, and two output switching elements OSW1and OSW2, and thus, the demodulating circuit DMC may be implemented with a simpler circuit configuration. Also, because the demodulating circuit DMC processes and outputs a signal by using only the input signal Vin without using a carrier wave, a demodulation signal may be normally output regardless of a phase difference of the carrier wave and the input signal Vin. The signal processing operations of the demodulating circuit DMC described with reference toFIGS.11A to11Dmay be defined as a sensing operation. FIG.12is a diagram illustrating a configuration of a receive circuit according to an embodiment of the present disclosure. Below, a configuration of a receive circuit RXC-1illustrated inFIG.12will be mainly described based on a difference with the receive circuit RXC illustrated inFIG.7. Referring toFIG.12, the receive circuit RXC-1may further include a noise filter NF interposed between the amplifier circuit AMC and the demodulating circuit DMC. The noise filter NF may be connected to the output terminal of the amplifier circuit AMC and the first node N1. The noise filter NF may remove a noise of the output signal Vout and may provide the noise-free output signal Vout to the demodulating circuit DMC. Various filters may be used as the noise filter NF. For example, the noise filter NF may include a low pass filter LPF, a high pass filter HPF, or a band pass filter BPF. FIG.13is a diagram illustrating a configuration of a receive circuit according to another embodiment of the present disclosure. Below, a configuration of a receive circuit RXC-2illustrated inFIG.13will be mainly described based on a difference with the receive circuit RXC illustrated inFIG.7. Referring toFIG.13, a demodulating circuit DMC-1of a receive circuit RXC-2may further include first, second, and third connection switching elements CSW1, CSW2, and CSW3, a demultiplexer circuit DMUX, first and second subtractors SC1and SC2, and a memory MEM. The first connection switching element CSW1may be connected between the output terminal of the amplifier circuit AMC and the first node N1. The first connection switching element CSW1may be turned on or turned off by a first connection switching signal CS1output from the first switching control part SWC1. The second connection switching element CSW2may be supplied with a first voltage Vp and may be connected between the first node N1and the first voltage supply line. The first voltage Vp may be higher in level than the reference voltage Vref. That is, the first voltage Vp may have a positive polarity compared to the reference voltage Vref. The second connection switching element CSW2may be turned on or turned off by a second connection switching signal CS2output from the first switching control part SWC1. 
The third connection switching element CSW3 may be supplied with a second voltage Vn and may be connected between the first node N1 and a second voltage supply line. The second voltage Vn may be lower in level than the reference voltage Vref. That is, the second voltage Vn may have a negative polarity compared to the reference voltage Vref. The third connection switching element CSW3 may be turned on or turned off by a third connection switching signal CS3 output from the second switching control part SWC2. The first rectifier circuit RTC1 and the second rectifier circuit RTC2 may be connected to the first, second, and third connection switching elements CSW1, CSW2, and CSW3 through the first node N1. The first connection switching element CSW1 may control the connection of the amplifier circuit AMC and the rectifier circuit RTC. The second connection switching element CSW2 may control the first voltage Vp to be applied to the rectifier circuit RTC. The first voltage Vp may be applied to the first rectifier circuit RTC1 by the second connection switching element CSW2. The third connection switching element CSW3 may control the second voltage Vn to be applied to the rectifier circuit RTC. The second voltage Vn may be applied to the second rectifier circuit RTC2 by the third connection switching element CSW3. The first subtractor SC1 may subtract a first voltage value (e.g., a digital value of the first voltage Vp) (hereinafter marked by the same sign) and a second voltage value (e.g., a digital value of the second voltage Vn) (hereinafter marked by the same sign) from a digital signal output from the analog-to-digital converter ADC and may output the subtracted values to the memory MEM. A value that is output from the first subtractor SC1 may be a first threshold value corresponding to the first threshold voltage Vthp of the first diode Dp or a second threshold value corresponding to the second threshold voltage Vthn of the second diode Dn. The above operation will be described in detail later. The first threshold value and the second threshold value may be stored in the memory MEM. The first threshold value may be stored in a first storage space Vth-p of the memory MEM, and the second threshold value may be stored in a second storage space Vth-n of the memory MEM. The second subtractor SC2 may be provided with the first and second threshold values from the memory MEM. The second subtractor SC2 may subtract the first threshold value and the second threshold value from a digital signal output from the analog-to-digital converter ADC and may output the subtracted value to the summing circuit SMC. The above operation will be described in detail later. The demultiplexer circuit DMUX may selectively connect the analog-to-digital converter ADC to the first subtractor SC1 and the second subtractor SC2 under control of the first and second switching control parts SWC1 and SWC2. The receive circuit RXC-2 may further include the noise filter NF. In the embodiment illustrated in FIG.13, the noise filter NF illustrated in FIG.12 may be interposed between the amplifier circuit AMC and the first connection switching element CSW1. In an embodiment, in FIG.13, the noise filter NF is depicted by a dotted line. FIG.14 is a timing diagram for describing an operation of a receive circuit illustrated in FIG.13. FIGS.15A to 15D are diagrams illustrating operating states of a receive circuit according to the timing diagram of FIG.14.
Below, in signals associated with a switching operation, a high level may be defined as an activated signal, and a low level may be defined as a deactivated signal. In an embodiment, inFIGS.15A to15D, the noise filter NF is omitted. Referring toFIGS.14and15A, after a threshold voltage measurement operation is performed in a threshold voltage measurement period, a sensing operation may be performed in a sensing period. The sensing period that is a period in which an operation according to the timing illustrated inFIG.10is performed may be defined as a signal processing operation of a demodulating circuit. In the threshold voltage measurement period, the first connection switching signal CS1may be deactivated, and the first connection switching element CSW1may be turned off. Accordingly, the amplifier circuit AMC may be disconnected from the demodulating circuit DMC-1. Next, the first and second reset switching elements RSW1and RSW2may be turned on by the first and second reset signals RS1and RS2activated, and the first and second capacitors Cp and Cn may be discharged to the reference voltage Vref. Accordingly, the second node N2and the third node N3may be reset to the reference voltage Vref. The first and second rectifier circuits RTC1and RTC2may be simultaneously reset by the first and second reset switching elements RSW1and RSW2. Afterwards, the first voltage Vp and the second voltage Vn may be applied to the first rectifier circuit RTC1and the second rectifier circuit RTC2. This will be described with reference toFIGS.15B and15C. Referring toFIGS.14and15B, the second connection switching element CSW2may be turned on by the second connection switching signal CS2activated in the threshold voltage measurement period. The first rectifier circuit RTC1may be connected to the analog-to-digital converter ADC via the first switching element OSW1in response to the first output switching signal OS1. The first voltage Vp may be provided to the first rectifier circuit RTC1by the second connection switching element CSW2. Because the first diode Dp has the first threshold voltage Vthp, a voltage of the second node N2may have a value of “Vref+Vp−Vthp”. The voltage of the second node N2may be stored in the first capacitor Cp. The voltage value “Vref+Vp−Vthp” charged in the first capacitor Cp may be provided to the analog-to-digital converter ADC as a first compensation voltage Vc1. The analog-to-digital converter ADC may convert a potential difference of the first input terminal IN1and the second input terminal IN2of the analog-to-digital converter ADC into a digital signal and may output the digital signal as a first compensation digital signal Dc1. The first compensation voltage Vc1may be applied to the first input terminal IN1, and the reference voltage Vref may be applied to the second input terminal IN2. Accordingly, a voltage value processed by the analog-to-digital converter ADC may be set to “Vp−Vthp”. As a result, the first compensation digital signal Dc1may have a value of “Vp−Vthp”. In the threshold voltage measurement period, the demultiplexer circuit DMUX may connect the analog-to-digital converter ADC to the first subtractor SC1under control of the first switching control part SWC1. The first compensation digital signal Dc1may be provided to the first subtractor SC1through the demultiplexer circuit DMUX. The first subtractor SC1may subtract the first voltage value Vp from the first compensation digital signal Dc1. 
A value output from the first subtractor SC1may be −Vthp obtained by subtracting the first voltage value Vp from “Vp−Vthp” and may be defined as a first threshold value −Vthp. The first threshold value −Vthp may be defined as a value corresponding to the first threshold voltage Vthp. The first threshold value −Vthp may be stored in the first storage space Vth-p of the memory MEM. The first threshold value−Vthp corresponding to the first threshold voltage Vthp may be stored in the memory MEM by using the first voltage Vp. An operation of storing a second threshold value−Vthn corresponding to the second threshold voltage Vthn by using the second voltage Vn may also be performed at the following timing and may be substantially the same as the operation described with reference toFIG.15B. Referring toFIGS.14and15C, in the threshold voltage measurement period, the third connection switching element CSW3may be turned on by the third connection switching signal CS3activated. The second rectifier circuit RTC2is connected to the analog-to-digital converter ADC via first and second output switching elements OSW1and OSW2which are turned on in response to the second output switching signal OS2activated. The second voltage Vn may be provided to the second rectifier circuit RTC2via the third connection switching element CSW3. Because the second diode Dn has the second threshold voltage Vthn, a voltage of the third node N3may have a value of “Vref−Vn+Vthn”. The voltage of the third node N3may be stored in the second capacitor Cn. The voltage value “Vref−Vn+Vthn” charged in the second capacitor Cn may be provided to the analog-to-digital converter ADC as a second compensation voltage Vc2. The analog-to-digital converter ADC may convert a potential difference of the first input terminal IN1and the second input terminal IN2of the analog-to digital converter ADC into a digital signal and may output the digital signal as a second compensation digital signal Dc2. The reference voltage Vref may be applied to the first input terminal IN1, and the second compensation voltage Vc2may be applied to the second input terminal IN2. Accordingly, a voltage value processed by the analog-to-digital converter ADC may be set to “−Vn+Vthn”. As a result, the second compensation digital signal Dc2may have a value of “−Vn+Vthn”. The second compensation digital signal Dc2may be provided to the first subtractor SC1through the demultiplexer circuit DMUX. The first subtractor SC1may subtract a second voltage value Vn from the second compensation digital signal Dc2. A value output from the first subtractor SC1may be +Vthn obtained by removing the second voltage value Vn from “−Vn+Vthn” and may be defined as a second threshold voltage +Vthn. The second threshold value+Vthn may be defined as a value corresponding to the second threshold voltage Vthn. The second threshold value+Vthn may be stored in the second storage space Vth-n of the memory MEM. Afterwards, the first and second reset switching elements RSW1and RSW2may be turned on by the first and second reset signals RS1and RS2activated, and the first and second capacitors Cp and Cn may be discharged to the reference voltage Vref. Accordingly, the second node N2and the third node N3may be reset to the reference voltage Vref. After the threshold voltage measurement period, in the sensing period, the demodulating circuit DMC-1may perform the operations described with reference toFIG.10andFIG.11D. 
Below, in an embodiment, an operation of the demodulating circuit DMC-1 associated with the positive-polarity voltage +Va will be described. Referring to FIGS.14 and 15D, in the sensing period following the threshold voltage measurement period, the first connection switching element CSW1 may be turned on by the first connection switching signal CS1 activated. The amplifier circuit AMC may be connected to the demodulating circuit DMC-1 by the first connection switching element CSW1 which is turned on. The second and third connection switching elements CSW2 and CSW3 may be turned off by the second and third connection switching signals CS2 and CS3 deactivated. Accordingly, the first and second voltages Vp and Vn may not be applied to the demodulating circuit DMC-1. Referring to FIGS.11A, 14, and 15D, the positive-polarity voltage +Va may be applied to the second node N2 through the first diode Dp disposed in the forward direction. As described with reference to FIG.11A, the voltage of the second node N2 may be "Vref+Va−Vthp". Also, as the first output switching signal OS1 activated is applied to the first and second output switching elements OSW1 and OSW2, the first rectifier circuit RTC1 may be connected to the analog-to-digital converter ADC. As described above, because a sensing voltage value processed by the analog-to-digital converter ADC is set to "Va−Vthp", the first digital signal DGT1 may have a value of "Va−Vthp". In the sensing period, the demultiplexer circuit DMUX may connect the analog-to-digital converter ADC to the second subtractor SC2 under control of the second switching control part SWC2. The second subtractor SC2 may be provided with the first digital signal DGT1 from the analog-to-digital converter ADC and may be provided with the first threshold value −Vthp from the memory MEM. The second subtractor SC2 may subtract the first threshold value −Vthp from the first digital signal DGT1. An output value of the second subtractor SC2 may be Va (=Va−Vthp−(−Vthp)). Because the first digital signal DGT1 and the first threshold value −Vthp are digital values, the output value of the second subtractor SC2 may also be substantially a digital value. The above operation may be defined as a threshold voltage compensation operation. The output value of the second subtractor SC2 may be defined as a sensing value sensed by the first and second sensing parts SP1 and SP2. The first threshold voltage Vthp of the first diode Dp may vary due to various factors (e.g., a use time or a temperature of the diode). Accordingly, in the case where the first threshold voltage Vthp is included in a sensing value, the sensing value may be inaccurate. In an embodiment of the present disclosure, the demodulating circuit DMC-1 may remove the threshold voltage Vthp of the first diode Dp and may output the sensing value. Accordingly, a more accurate sensing value may be output. Although not illustrated, the analog-to-digital converter ADC may output the second digital signal DGT2 as described with reference to FIG.11B. The second subtractor SC2 may subtract the second threshold value +Vthn from the second digital signal DGT2. An output value of the second subtractor SC2 may be −Va (=−Va+Vthn−(+Vthn)). Accordingly, the demodulating circuit DMC-1 may remove the threshold voltage Vthn of the second diode Dn and may output the sensing value. The sensing values output from the second subtractor SC2 may be provided to the summing circuit SMC.
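The calibrate-then-compensate flow described above can be summarized numerically as follows. This is a minimal sketch that only mirrors the stated arithmetic: the first subtractor SC1 stores −Vthp and +Vthn in the memory MEM during the threshold voltage measurement period, and the second subtractor SC2 removes them from the first and second digital signals DGT1 and DGT2 during the sensing period. The numeric values of Vp, Vn, Va, Vthp, and Vthn are arbitrary assumptions, and Vn is treated as a signed value below the reference voltage Vref.

    # Sketch of the threshold measurement and compensation arithmetic of the
    # receive circuit RXC-2 (FIGS. 13 to 15D). Numeric values are illustrative only.

    V_P   = 0.75    # first voltage Vp, above Vref (expressed relative to Vref)
    V_N   = -0.75   # second voltage Vn, below Vref (signed, relative to Vref)
    V_THP = 0.25    # first threshold voltage of the first diode Dp
    V_THN = 0.25    # second threshold voltage of the second diode Dn
    V_A   = 0.50    # amplitude of the amplified sensing signal

    # Threshold voltage measurement period:
    dc1 = V_P - V_THP                    # first compensation digital signal Dc1
    dc2 = V_N + V_THN                    # second compensation digital signal Dc2 ("-Vn + Vthn")
    memory = {
        "Vth-p": dc1 - V_P,              # first subtractor SC1 stores -Vthp
        "Vth-n": dc2 - V_N,              # removing the signed Vn stores +Vthn
    }

    # Sensing period:
    dgt1 = V_A - V_THP                   # positive-branch digital signal DGT1
    dgt2 = -V_A + V_THN                  # negative-branch digital signal DGT2
    sense_pos = dgt1 - memory["Vth-p"]   # second subtractor SC2 outputs Va
    sense_neg = dgt2 - memory["Vth-n"]   # second subtractor SC2 outputs -Va

    print(memory)                        # {'Vth-p': -0.25, 'Vth-n': 0.25}
    print(sense_pos, sense_neg)          # 0.5 -0.5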
As described above, the summing circuit SMC may add and output “N” sensing values output from the second subtractor SC2. FIG.16is a flowchart for describing an operation of a receive circuit illustrated inFIG.7.FIG.17is a flowchart for describing an operation of a receive circuit illustrated inFIG.13. FIG.16shows an operation in the sensing period described above,FIG.17shows an operation in the threshold voltage measurement period described above. Referring toFIGS.7and16, in operation S110, the input signal Vin may be amplified and output. In operation S121, the positive-polarity voltage +Va of the amplified input signal Vin may be charged in the first capacitor Cp through the first diode Dp disposed in the forward direction. In operation S122, a first charging voltage charged in the first capacitor Cp may be converted into a digital signal by the analog-to-digital converter ADC, and the first digital signal DGT1may be output as a conversion result. In operation S123, the first capacitor Cp may be discharged to the reference voltage Vref by the first reset switching element RSW1. In operation S131, the negative-polarity voltage −Va of the amplified input signal Vin may be charged in the second capacitor Cn through the second diode Dn disposed in the reverse direction. In operation S132, a second charging voltage charged in the second capacitor Cn may be converted into a digital signal by the analog-to-digital converter ADC, and the second digital signal DGT2may be output as a conversion result. In operation S133, the second capacitor Cn may be discharged to the reference voltage Vref by the second reset switching element RSW2. In operation S140, a current sensing value may be added to a previous sensing value. The first and second digital signals DGT1and DGT2may be added as sensing values. In operation S150, whether the number of sensing operations exceeds “N” may be determined. The number of sensing operations may be defined as the number of operations in which the demodulating circuit DMC processes and outputs a signal. When the number of sensing operations does not exceed “N”, operation S110may be performed. When the number of sensing operations exceeds “N”, that is, as described above, when the summing circuit SMC adds “N” sensing values, a summing value may be output in operation S160. Referring toFIGS.13and17, in operation S210, the first and second capacitors Cp and Cn may be discharged to the reference voltage Vref. In operation S220, the first voltage Vp may be charged in the first capacitor Cp through the first diode Dp disposed in the forward direction. In operation S230, the first compensation voltage Vc1charged in the first capacitor Cp may be converted into a digital signal by the analog-to-digital converter ADC, and the first compensation digital signal Dc1may be output as a conversion result. In operation S240, the first voltage value Vp may be subtracted from the first compensation digital signal Dc1by the first subtractor SC1, and the first threshold value −Vthp output from the first subtractor SC1may be stored in the memory MEM. In operation S250, the second voltage Vn may be charged in the second capacitor Cn through the second diode Dn disposed in the reverse direction. In operation S260, the second compensation voltage Vc2charged in the second capacitor Cn may be converted into a digital signal by the analog-to-digital converter ADC, and the second compensation digital signal Dc2may be output as a conversion result. 
In operation S270, the second voltage value Vn may be added to the second compensation digital signal Dc2by the first subtractor SC1, and the second threshold value+Vthn output from the first subtractor SC1may be stored in the memory MEM. Afterwards, in an operation of the sensing period illustrated inFIG.16, operation S280following operation S122and operation S290following operation S132may be performed. In operation S280, the second subtractor SC2may subtract the first threshold value −Vthp from the first digital signal DGT1and may output a subtraction result. Afterwards, operation S123may be performed. In operation S290, the second subtractor SC2may subtract the second threshold value+Vthn from the second digital signal DGT2and may output a subtraction result. Afterwards, operation S133may be performed. Then, operation S140, operation S150, and operation S160may be performed. FIGS.18A to18Jare diagrams for describing an operation of a receive circuit according to another embodiment of the present disclosure. Below, a configuration of a receive circuit RXC-3will be described with reference toFIG.18A, and a sequential operation of the receive circuit RXC-3will be described with reference toFIGS.18A to18J. Also, a configuration of the receive circuit RXC-3illustrated inFIG.18Awill be mainly described based on a difference with the receive circuit RXC illustrated inFIG.7. Referring toFIG.18A, a first connection switching element CSW1′ may include a (1-1)-th connection switching element CSW1-1and a (1-2)-th connection switching element CSW1-2. The connection switching element CSW1-1may be connected between the first node N1and a (1-1)-th node N1-1. The output terminal of the amplifier circuit AMC may be connected to the first node N1, and an input terminal of a first rectifier circuit RTC1′ (e.g., an anode of the first diode Dp) may be connected to the (1-1)-th node N1-1. The (1-1)-th connection switching element CSW1-1may control the connection of the amplifier circuit AMC and the first rectifier circuit RTC1′ under control of the first switching control part SWC1. The connection switching element CSW1-2may be connected between the first node N1and a (1-2)-th node N1-2. An input terminal of a second rectifier circuit RTC2′ (e.g., a cathode of the second diode Dn) may be connected to the (1-2)-th node N1-2. The (1-2)-th connection switching element CSW1-2may control the connection of the amplifier circuit AMC and the second rectifier circuit RTC2′ under control of the second switching control part SWC2. The second connection switching element CSW2may be connected to the (1-1)-th node N1-1, and the third connection switching element CSW3may be connected to the (1-2)-th node N1-2. The first rectifier circuit RTC1′ may include the first diode Dp, the first capacitor Cp, first and second selection switching elements SSW1and SSW2, and a first compensation capacitor Cc1. The first diode Dp may be connected between the (1-1)-th node N1-1and the second node N2in the forward direction. The first capacitor Cp may include the first electrode connected to the second node N2and the second electrode connected to the first selection switching element SSW1. A contact point between the first selection switching element SSW1and the second electrode of the first capacitor Cp may be defined as a fourth node N4. 
The first selection switching element SSW1may selectively connect the second electrode of the first capacitor Cp to the reference node RN and the first output switching element OSW1under control of the first switching control part SWC1. The first compensation capacitor Cc1may include a first electrode connected to the second selection switching element SSW2and a second electrode connected to the reference node RN. The second selection switching element SSW2may switch the connection of the second node N2and the first electrode of the first compensation capacitor Cc1under control of the first switching control part SWC1. The first reset switching element RSW1may be connected between the second node N2and the reference node RN and may control the connection of the second node N2and the reference node RN under control of the first switching control part SWC1. The second rectifier circuit RTC2′ may include the second diode Dn, the second capacitor Cn, third and fourth selection switching elements SSW3and SSW4, and a second compensation capacitor Cc2. The second diode Dn may be connected between the (1-2)-th node N1-2and the third node N3in the reverse direction. The second capacitor Cn may include the first electrode connected to the third node N3and the second electrode connected to the third selection switching element SSW3. A contact point between the third selection switching element SSW3and the second electrode of the second capacitor Cn may be defined as a fifth node N5. The third selection switching element SSW3may selectively connect the second electrode of the second capacitor Cn to the reference node RN and the second output transistor OSW2under control of the second switching control part SWC2. The second compensation capacitor Cc2may include a first electrode connected to the fourth selection switching element SSW4and a second electrode connected to the reference node RN. The fourth selection switching element SSW4may control the connection of the third node N3and the first electrode of the second compensation capacitor Cc2under control of the second switching control part SWC2. The second reset switching element RSW2may be connected between the third node N3and the reference node RN and may control the connection of the third node N3and the reference node RN under control of the second switching control part SWC2. With reference to the above operations, the threshold voltage measurement operation for the second diode Dn may be performed at the timing following the threshold voltage measurement operation of the first diode Dp and may be substantially the same as the threshold voltage measurement operation for the first diode Dp. Also, the sensing operation of the second rectifier circuit RTC2′ may be performed at the timing following the sensing operation of the first rectifier circuit RTC1′ and may be substantially the same as the sensing operation of the first rectifier circuit RTC1′. Accordingly, below, the threshold voltage measurement operation for the first diode Dp and the sensing operation of the first rectifier circuit RTC1′ will be mainly described with reference toFIGS.18A to18J, and the threshold voltage measurement operation for the second diode Dn and the sensing operation of the second rectifier circuit RTC2′ will be described briefly without drawings. Below, a threshold voltage measurement operation for the first diode Dp will be described with reference toFIGS.18A to18E. 
The sensing operation of the first rectifier circuit RTC1′ will be described with reference toFIGS.18F to18J. Referring toFIG.18A, the (1-1)-th and (1-2)-th connection switching elements CSW1-1and CSW1-2and the second and third connection switching elements CSW2and CSW3may be turned off, and thus, the first and second rectifier circuits RTC1′ and RTC2′ and the amplifier circuit AMC may be disconnected from each other. Referring toFIG.18B, the second selection switching element SSW2may be turned on, and thus, the first electrode of the first compensation capacitor Cc1may be connected to the second node N2. Accordingly, the first compensation capacitor Cc1may be connected to the first diode Dp. Referring toFIG.18C, the first reset switching element RSW1may be turned on, and thus, the first compensation capacitor Cc1may be discharged to the reference voltage Vref. Referring toFIG.18D, the first reset switching element RSW1may be turned off and the second connection switching element CSW2is turned on. As the second connection switching element CSW2is turned on, the first compensation voltage Vc1may be stored in the first compensation capacitor Cc1through the first diode Dp. In this case, the voltage of the second node N2may be “Vref+Vp−Vthp”. A voltage difference (Vp-Vthp) between the second node N2and the reference node RN may be charged in the first compensation capacitor Cc1as the first compensation voltage Vc1. Referring toFIG.18E, the second connection switching element CSW2and the second selection switching element SSW2may be turned off, and thus, the first compensation voltage Vc1may be stored in the first compensation capacitor Cc1. The above operation may be defined as an operation of measuring the first threshold voltage Vthp. Although not illustrated, after the first rectifier circuit RTC1‘ performs the operation of measuring the first threshold voltage Vthp, at the following timing, the second rectifier circuit RTC2’ may perform an operation of measuring the second threshold voltage Vthn to be the same as the first rectifier circuit RTC1′. Accordingly, the second compensation voltage Vc2may be stored in the second compensation capacitor Cc2of the second rectifier circuit RTC2′, and the second compensation voltage Vc2may have a value of “−Vn+Vthn”. Referring toFIG.18F, the first selection switching element SSW1may connect the second electrode of the first capacitor Cp to the reference node RN under control of the first switching control part SWC1. Referring toFIG.18G, the (1-1)-th connection switching element CSW1-1may be turned on, and thus, the amplifier circuit AMC may be connected to the first diode Dp. In this case, as described with reference toFIG.11A, a voltage value of “Va−Vthp” may be stored in the first capacitor Cp. Referring toFIG.18H, the (1-1)-th connection switching element CSW1-1may be turned off. The first selection switching element SSW1may connect the second electrode of the first capacitor Cp to the first output switching element OSW1under control of the first switching control part SWC1. Referring toFIG.18I, the second selection switching element SSW2may be turned on, and thus, the first compensation capacitor Cc1and the first capacitor Cp may be connected. In this case, a voltage of the fourth node N4may be set to a value that is obtained by subtracting a value charged in the first capacitor Cp from a value charged in the first compensation capacitor Cc1. The voltage of the fourth node N4may be set to a value of “Vp−Va” (=(+Vp−Vthp)−(Va−Vthp)). 
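As a numerical check of the cancellation described above (all voltage values are assumed for illustration and are not taken from the patent), a short sketch:

```python
# Numeric sketch of the threshold cancellation at the fourth node N4: the
# first compensation capacitor Cc1 holds Vp - Vthp (FIG.18D) and the first
# capacitor Cp holds Va - Vthp (FIG.18G), so their difference at N4 (FIG.18I)
# is Vp - Va, independent of the diode threshold. Values are assumed.

Vp, Va, Vthp = 3.0, 1.25, 0.5

v_cc1 = Vp - Vthp      # first compensation voltage Vc1 stored in Cc1
v_cp  = Va - Vthp      # voltage stored in Cp during the sensing operation
v_n4  = v_cc1 - v_cp   # voltage at the fourth node N4

print(v_n4)            # 1.75
assert v_n4 == Vp - Va # equals Vp - Va for any value of Vthp
```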
As the first threshold voltage Vthp is removed, the compensation for a threshold voltage may be made. In this case, the sensing operation may be performed regardless of a change in the first threshold voltage Vthp. The first output switching element OSW1and the second output switching element OSW2may be respectively connected to the first selection switching element SSW1and the reference node RN. As a result, the first rectifier circuit RTC1′ may be connected to the analog-to-digital converter ADC. The voltage of the fourth node N4may be converted into a digital signal by the analog-to-digital converter ADC, and the digital signal may be provided to the summing circuit SMC. Referring toFIG.18J, the first selection switching element SSW1may connect the first capacitor Cp to the reference node RN. The second selection switching element SSW2may be turned off. As the first reset switching element RSW1is turned on, the first capacitor Cp may be connected between the second node N2and the reference node RN. Accordingly, the voltage charged in the first capacitor Cp may be discharged to the reference voltage Vref. Although not illustrated, as in the above description given with reference toFIG.13, a first subtractor for removing the first voltage Vp from the output of the analog-to-digital converter ADC may be further used, and may be interposed between the analog-to-digital converter ADC and the summing circuit SMC. Also, as in the above description given with reference toFIG.13, the noise filter NF may be further used and may be connected to the output terminal of the amplifier circuit AMC. The above operation may be defined as a first sensing operation for the positive-polarity voltage +Va. Although not illustrated, after the first rectifier circuit RTC1′ performs the first sensing operation, at the following timing, the second rectifier circuit RTC2′ may perform a second sensing operation for the negative-polarity voltage −Va to be the same as the first rectifier circuit RTC1′. Accordingly, a voltage of the fifth node N5may be set to a value of “Va−Vn” (=(−Vn+Vthn)−(−Va+Vthn)). As the second threshold voltage Vthn is removed, the compensation for a threshold voltage may be made. In this case, the sensing operation may be performed regardless of a change in the second threshold voltage Vthn. The first output switching element OSW1and the second output switching element OSW2may be respectively connected to the reference node RN and the third selection switching element SSW3; in this case, the voltage of the fifth node N5may be converted into a digital signal by the analog-to-digital converter ADC, and the digital signal may be provided to the summing circuit SMC. At the following timing, the second capacitor Cn may be connected between the third node N3and the reference node RN by the third selection switching element SSW3and the second reset switching element RSW2, so as to be discharged to the reference voltage Vref. According to an embodiment of the present disclosure, a receive circuit may be implemented by using elements, the number of which is less than that of an existing IQ demodulator. Also, because the receive circuit outputs a demodulation signal by using an input signal without using a carrier wave, the demodulation signal may be normally output regardless of a phase difference between the carrier wave and the input signal. Also, the receive circuit may compensate for a threshold voltage of a diode to output the demodulation signal. 
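The phase-independence observation can be illustrated with a rough sketch under simplifying assumptions (an ideal peak detector standing in for the two threshold-compensated rectifier paths; this is not the patent's circuit model):

```python
# Rough sketch of why detecting both polarities of the input signal makes the
# demodulated value insensitive to the carrier phase: the positive and
# negative peaks of A*sin(w*t + phi) are +A and -A for every phase phi.
import math

def demodulate(amplitude, phase, n=1000):
    samples = [amplitude * math.sin(2.0 * math.pi * k / n + phase) for k in range(n)]
    vp = max(samples)        # role of the first rectifier path (positive peak)
    vn = min(samples)        # role of the second rectifier path (negative peak)
    return vp - vn           # contribution summed by the summing circuit SMC

for phi in (0.0, 0.7, 1.9, 3.1):
    print(round(demodulate(1.0, phi), 2))   # ~2.0 for every phase value
```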
While the present disclosure has been described with reference to embodiments thereof, it will be apparent to those of ordinary skill in the art that various changes and modifications may be made thereto without departing from the spirit and scope of the present disclosure as set forth in the following claims. | 80,436 |
11861108 | DETAILED DESCRIPTION OF THE EMBODIMENTS Reference will now be made in detail to the preferred embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. In the following description, a detailed description of known functions and configurations incorporated herein will be omitted when it can obscure the subject matter of the present disclosure. FIG.1andFIG.2are diagrams illustrating a display device having a touch sensor according to an embodiment of the present disclosure. All the components of each display device according to all embodiments of the present disclosure are operatively coupled and configured. Referring toFIG.1andFIG.2, a display device having a touch sensor according to the present disclosure can be implemented based on a flat panel display such as a liquid crystal display (LCD), a field emission display (FED), a plasma display panel (PDP), an organic light emitting display (OLED), or an electrophoretic display (EPD). Although the display device is implemented as an LCD in the following embodiment(s), the display device of the present disclosure is not limited to the LCD and other variations are possible. The display device having a touch sensor of the present disclosure can include a display panel10, a data driving circuit12, a gate driving circuit14, a timing controller16, a touch driving circuit18, a host system19, and a power supply circuit20. The display panel10includes a liquid crystal layer formed between two substrates. A pixel array of the display panel10includes pixels PXL formed in pixel regions defined by data lines D1to Dm (m being a positive integer) and gate lines G1to Gn (n being a positive integer). Each pixel PXL can include thin film transistors (TFTs), a pixel electrode charging a data voltage, a storage capacitor Cst for maintaining a voltage of a liquid crystal cell, and a common electrode COM formed at each of intersections of the data lines D1to Dm and the gate lines G1to Gn. The common electrode COM of the pixels PXL is divided into segments, and touch electrodes TS are implemented as the common electrode segments. A single common electrode segment is commonly connected to a plurality of pixels PXL and forms a single touch electrode TS. A plurality of touch electrodes arranged on a line can form a touch block line. Each touch sensor can include pixels defined by gate lines and data lines. Each touch block line overlaps a plurality of pixel lines, and one touch block line is wider than one pixel line. Here, one pixel line is composed of pixels PXL arranged in a line. A black matrix and a color filter can be formed on an upper substrate of the display panel10. A lower substrate of the display panel10can be implemented in a color filter on TFT (COT) structure. In this case, the black matrix and the color filter can be formed on the lower substrate of the display panel10. The common electrode provided with a common voltage can be formed on the upper substrate or the lower substrate of the display panel10. A polarizer can be attached to the upper substrate and the lower substrate of the display panel10and an alignment film for setting a pre-tilt angle of liquid crystal is formed on inner sides of the upper and lower substrates which come into contact with the liquid crystal. 
Column spacers for maintaining a cell gap of liquid crystal cells are formed between the upper and lower substrates of the display panel10. A backlight unit can be provided on the backside of the display panel10. The backlight unit is implemented as an edge type or direct type backlight unit and radiates light to the display panel10. The display panel10can be implemented in any of known liquid crystal modes such as a twisted nematic (TN) mode, a vertical alignment (VA) mode, an in-plane switching (IPS) mode, and a fringe field switching (FFS) mode. The timing controller16receives timing signals, such as a vertical synchronization signal Vsync, a horizontal synchronization signal Hsync, a data enable signal DE, and a main clock signal MCLK, input from the host system19and controls operation timing of the data driving circuit12, the gate driving circuit14, and the touch driving circuit18. A scan timing control signal can include a gate start pulse signal GSP, a gate shift clock signal GSC, and a gate output enable signal GOE. A data timing control signal can include a source sampling clock signal SSC, a polarity control signal POL, and a source output enable signal SOE. The timing controller16can include a micro-controller unit (MCU) shown inFIG.12toFIG.15. The timing controller16can temporally divide a driving period of the display panel into a display driving period Pd and a touch sensor driving period Pt based on a touch synchronization signal (refer to TSYNC inFIGS.16,18and19). The data driving circuit12, the gate driving circuit14, and the touch driving circuit18are synchronized in response to the touch synchronization signal TSYNC. A first logic level of the touch synchronization signal TSYNC defines the display driving period Pd and a second logic level thereof defines the touch sensor driving period Pt. The first logic level can be a high logic level and the second logic level can be a low logic level, and vice versa. The data driving circuit12and the gate driving circuit14write input image data RGB to the pixels PXL of the display panel10under the control of the timing controller16. The data driving circuit12includes a plurality of source driver integrated circuits (ICs) SIC, converts digital image data RGB input from the timing controller16into an analog positive/negative gamma compensation voltage according to the scan timing control signal to generate a data voltage, and outputs the data voltage in the display driving period Pd. The data voltage output from the data driving circuit12is supplied to the data lines D1to Dm. The data driving circuit12applies an AC signal (refer to Sdrv inFIG.3) having the same phase and the same amplitude as those of a touch driving signal Tdrv applied to the touch electrodes TS in the touch sensor driving period Pt to the data lines D1to Dm to minimize parasitic capacitances between the touch electrodes TS and the data lines D1to Dm and to reduce the influence of the parasitic capacitances on the touch electrodes TS. This is because the charge stored in a parasitic capacitor is reduced when the voltages at both ends of the parasitic capacitor change simultaneously, so that the voltage difference across it is smaller. When the influence of the parasitic capacitances on the touch electrodes TS is reduced, display noise mixed in a touch sensing result can be minimized and distortion of an amplifier output voltage that is a touch sensing signal can be prevented. 
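For illustration only (the capacitance and swing values below are assumed and are not taken from the patent), the charge argument can be sketched as:

```python
# Minimal sketch of why driving the data lines with an AC signal having the
# same phase and amplitude as the touch driving signal Tdrv reduces the
# influence of the parasitic capacitance between a touch electrode and a data
# line: the charge moved through the parasitic capacitor is Q = C * dV, and
# dV across it approaches zero when both sides swing together.

C_PARASITIC_PF = 20.0     # assumed parasitic capacitance, in picofarads
TDRV_SWING_V   = 5.0      # assumed swing of Tdrv on the touch electrode

q_static_pC  = C_PARASITIC_PF * (TDRV_SWING_V - 0.0)           # data line held still
q_matched_pC = C_PARASITIC_PF * (TDRV_SWING_V - TDRV_SWING_V)  # data line driven in phase

print(q_static_pC, q_matched_pC)   # 100.0 0.0 -> less display noise in the sensing result
```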
The gate driving circuit14generates a gate pulse signal synchronized with a data voltage with reference to the scan timing control signal and outputs the gate pulse signal to the gate lines G1to Gn in the display driving period Pd to select one display line of the display panel10to which the data voltage is written. The gate driving circuit14generates an AC signal having the same phase and the same amplitude as those of the touch driving signal Tdrv applied to the touch electrodes TS in the touch sensor driving period Pt and applies the AC signal to the gate line G1to Gn to minimize parasitic capacitances between the touch electrodes TS and the gate lines G1to Gn and to reduce the influence of the parasitic capacitances on the touch electrodes TS. When the parasitic capacitances between the touch electrodes TS and the gate lines G1to Gn are minimized, display noise mixed in a touch sensing result can be minimized and distortion of an amplifier output voltage that is a touch sensing signal can be prevented. The gate driving circuit14can be configured as a gate driver IC or can be directly formed on a lower glass substrate of the display panel10in a gate driver in panel (GIP) structure. The touch driving circuit18includes readout ICs RIC. The touch driving circuit18drives and senses the touch electrodes TS included in the pixel array of the display panel10in the touch sensor driving period Pt. The touch electrodes TS can constitute a capacitance sensor for sensing touch input. The capacitance sensor can be implemented based on self-capacitance or mutual capacitance. The self-capacitance and mutual capacitance can be formed along a single-layer conductive line formed in one direction or can be formed between two orthogonal conductive lines. Each readout IC RIC can include a touch sensing circuit (SU inFIG.3) and an amplifier output control circuit operating in the touch sensor driving period Pt. The touch sensing circuit (SU inFIG.3) applies the touch driving signal (Tdrv inFIG.3) to the touch electrodes TS and amplifies charges flowing from the touch electrodes TS based on an amplifier reset signal (refer to RST inFIG.4) to generate an amplifier output voltage. The amplifier output control circuit can differentially control the level of the amplifier output voltage depending on the positions of the touch electrodes TS by adjusting at least one of a toggle timing of the amplifier reset signal RST and a voltage amplitude of the touch driving signal Tdrv to improve sensitivity deviations at the positions of the touch electrodes and enhance touch performance. The readout IC RIC and the source driver IC SIC can be integrated into one chip to be implemented as a source & readout IC SRIC, as illustrated inFIG.2. The source & readout IC SRIC can be mounted on a source chip on film (SCOF). The host system19can transmit the timing signals Vsync, Hsync, DE, and MCLK along with the digital image data RGB to the timing controller16and execute an application program associated with touch sensing data TDATA(XY) input from the touch driving circuit18. The host system19means a system main body of an electronic apparatus to which the display device of the present disclosure is applicable. The host system19can be any of a phone system, a television (TV) system, a set-top box, a navigation system, a DVD player, a Blu-ray player, a personal computer (PC), and a home theater system. 
The host system19receives touch input data TDATA(XY) from a touch sensing IC TIC and executes an application associated with touch input. The power supply circuit20generates driving power necessary for operation of the touch driving circuit18. The power supply circuit20can be implemented in the form of an integrated circuit such as a touch power IC (TPIC) inFIG.12andFIG.14. The power supply circuit20may include the amplifier output control circuit (including a Tdrv adjuster inFIG.14) as necessary. FIG.3is a diagram illustrating a configuration of the source & readout IC in which the data driving circuit and the touch driving circuit are integrated according to the present disclosure andFIG.4is a diagram illustrating the touch sensing circuit included in the source & readout IC. Referring toFIG.3andFIG.4, the source & readout IC SRIC includes the source driver IC SIC that drives data lines D1to D5of the display panel10and the readout IC RIC that drives touch lines SL connected to the touch electrodes TS of the display panel10. The source driver IC SIC and the readout IC RIC can be “circuits” which are functionally separate from each other in the source & readout IC SRIC. The source driver IC SIC includes a digital-to-analog converter that generates a data voltage Vdata and an output buffer BUF that stabilizes the data voltage Vdata. The source driver IC SIC outputs the data voltage Vdata to the data lines D1to D5in the display driving period and outputs the AC signal (refer to Sdrv) for reducing the influence of parasitic capacitance on the data lines D1to D5in the touch sensor driving period. The readout IC RIC can include multiplexers MUX, touch sensing circuits SU, and a common voltage generator. The common voltage generator can be included in the power supply circuit20shown inFIG.1. The common voltage generator generates a common voltage necessary to operate the display. The common voltage can be various display voltages according to display device types. For example, the common voltage can be a voltage applied to a common electrode in a liquid crystal display and can be a voltage applied to a cathode in an organic light emitting display device. Each multiplexer MUX selectively connects the touch sensing circuit SU and the common voltage generator to touch electrodes under the control of the timing controller16. When the touch screen has a resolution of M×N (M and N being positive integers equal to or greater than 2), the touch electrodes TS can be segmented into M×N touch electrode segments and M multiplexers can be provided. Each multiplexer MUX is connected to N touch electrodes TS through N touch lines SL and sequentially connects the N touch lines SL to a single touch sensing circuit SU. The touch sensing circuit SU is connected to touch lines SL through the multiplexer MUX to apply the touch driving signal Tdrv to touch electrodes TS, senses charges flowing from the touch electrodes TS, and generates touch sensing data TDATA. As illustrated inFIG.4, the touch sensing circuit SU includes a pre-amplifier that amplifies a voltage of a touch capacitor CS based on the amplifier reset signal RST, an integrator that accumulates an amplifier output voltage of the pre-amplifier, and an analog-to-digital converter (ADC) that converts the output voltage of the integrator into digital data. The digital data generated by the ADC is transmitted to the host system as touch sensing data TDATA. When the touch screen has a resolution of M×N, M touch sensing circuits SU are required. 
The touch capacitor CS has self-capacitance and mutual capacitance and is formed in the touch electrode. The pre-amplifier is connected to touch electrodes through the touch line SL and receives charges stored in the touch capacitor CS. A load resistance component LR and a parasitic capacitor component CP can be present on the touch line SL. The pre-amplifier includes an amplifier AMP, a feedback capacitor CFB, and a reset switch SW. The inverted terminal (−) of the amplifier AMP is connected to the touch line SL and the non-inverted terminal (+) of the amplifier AMP is provided with the touch driving signal Tdrv. The output terminal of the amplifier AMP is connected to the integrator. The feedback capacitor CFB is connected between the inverted terminal (−) and the output terminal of the amplifier AMP. The reset switch is also connected between the inverted terminal (−) and the output terminal of the amplifier AMP. The reset switch SW switches on in synchronization with the toggle timing of the amplifier reset signal RST. The pre-amplifier stores charges flowing from the touch electrode in the feedback capacitor CFB until the reset switch SW switches on and supplies the stored voltage to the integrator as an amplifier output voltage. This amplifier output voltage can vary according to positions of touch electrodes depending on an RC value of the touch line SL, which can cause sensitivity deviations at positions of touch electrodes. Accordingly, a method for improving sensitivity deviations at positions as illustrated inFIG.5toFIG.19according to one or more embodiments of the present disclosure is provided and discussed in more detail below. FIG.5is a diagram illustrating the concept of the technique for improving sensitivity deviations at positions of touch electrodes.FIG.6andFIG.7are diagrams illustrating an example of differentially adjusting the toggle timing of the amplifier reset signal to be applied to the touch sensing circuit depending on positions of touch electrodes.FIG.8andFIG.9are diagrams illustrating specific methods for adjusting the toggle timing of the amplifier reset signal.FIG.10andFIG.11are diagrams illustrating an example of differentially adjusting a voltage amplitude of the touch driving signal to be applied to the touch sensing circuit. Referring toFIG.5, the amplifier output control circuit differentially controls the level of the amplifier output voltage depending on positions of touch electrodes to improve sensitivity deviations at the positions of the touch electrodes. To this end, the amplifier output control circuit can adjust at least one of the toggle timing of the amplifier reset signal RST and the voltage amplitude of the touch driving signal Tdrv. In other words, the amplifier output control circuit can include at least one of an RST adjuster that differentially adjusts the toggle timing of the amplifier reset signal RST depending on positions of touch electrodes and a Tdrv adjuster that differentially adjusts the voltage amplitude of the touch driving signal Tdrv depending on positions of touch electrodes. The RST adjuster can differentially adjust the toggle timing of the amplifier reset signal based on a predetermined on start timing and on duty of the amplifier reset signal depending on the positions of the touch electrodes. 
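One way to picture the position dependence, under an assumed exponential settling model that the patent itself does not state, is:

```python
# Simplified settling model, used here only as an assumption to illustrate why
# the amplifier output voltage can vary with the RC value of the touch line
# SL: charge delivered through a line with a larger RC settles more slowly,
# so less of it is collected in the feedback capacitor CFB before the reset
# switch SW switches on.
import math

def settled_fraction(rc_seconds, window_seconds):
    return 1.0 - math.exp(-window_seconds / rc_seconds)

WINDOW = 4e-6                                                       # assumed charging window
near = settled_fraction(rc_seconds=0.5e-6, window_seconds=WINDOW)   # short touch line
far  = settled_fraction(rc_seconds=2.0e-6, window_seconds=WINDOW)   # long touch line
print(round(near, 3), round(far, 3))            # 1.0 0.865 -> position-dependent output
```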
The RST adjuster can adjust the toggle timing of the amplifier reset signal RST to a first on start timing Ta for touch electrodes at a first position AR1, adjust the toggle timing of the amplifier reset signal RST to a second on start timing Tb ahead of the first on start timing Ta for touch electrodes at a second position AR2, and adjust the toggle timing of the amplifier reset signal RST to a third on start timing Tc ahead of the second on start timing Tb for touch electrodes at a third position AR3. Here, the first position AR1is farther from the touch sensing circuit SU than the second position AR2, and the second position AR2is farther from the touch sensing circuit SU than the third position AR3. Referring toFIG.6andFIG.7, the pre-amplifier stores charges flowing from the touch electrodes at the first position AR1in the feedback capacitor CFB from a rising timing of the touch driving signal Tdrv to the first on start timing Ta of the amplifier reset signal RST and supplies the stored voltage to the integrator as a first amplifier output voltage VA with respect to the touch electrodes at the first position AR1. Then, the integrator accumulates the first amplifier output voltage VA multiple times (e.g., three times) to generate a first integrator output voltage. Referring toFIG.6andFIG.7, the pre-amplifier stores charges flowing from the touch electrodes at the second position AR2in the feedback capacitor CFB from the rising timing of the touch driving signal Tdrv to the second on start timing Tb of the amplifier reset signal RST and supplies the stored voltage to the integrator as a second amplifier output voltage VB with respect to the touch electrodes at the second position AR2. Then, the integrator accumulates the second amplifier output voltage VB multiple times (e.g., three times) to generate a second integrator output voltage. InFIG.6andFIG.7, the voltage amplitude of the touch driving signal Tdrv applied to all touch electrodes is fixed to a difference between VTH and VTL irrespective of electrode positions. Since the second on start timing Tb is ahead of the first on start timing Ta, the level of the second amplifier output voltage VB is lower than the level of the first amplifier output voltage VA by “ΔV”. Accordingly, the level of the second integrator output voltage is lower than the level of the first integrator output voltage VA by “ΔAV”. In this manner, the level of the amplifier output voltage can be differentially controlled according to positions of touch electrodes depending on the toggle timing of the amplifier reset signal RST. The level of the amplifier output voltage can be controlled to be higher at the first position AR1than at the second position AR2and to be higher at the second position AR2than at the third position AR3. Accordingly, the level of the amplifier output voltage increases as positions of touch electrodes become farther from the touch sensing circuit SU, and thus sensitivity deviations at touch electrode positions can be effectively reduced. To adjust the toggle timing of the amplifier reset signal RST to the first on start timing Ta, the second on start timing Tb, and the third on start timing Tc, the RST adjuster can generate three amplifier reset signals RST having the same on duty and different phases, as illustrated inFIG.8. 
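An illustrative sketch of the timing knob, assuming linear charging and arbitrary timing values (the patent only fixes the ordering of Ta, Tb, and Tc):

```python
# Sketch of the reset-timing knob: the pre-amplifier collects charge from the
# rising timing of Tdrv until the on start timing of the amplifier reset
# signal RST, so an earlier on start timing leaves a lower amplifier output
# voltage. Charge rate and timings below are assumed for illustration.

CHARGE_RATE_V_PER_US = 0.5       # assumed slope of the amplifier output
T_RISE_US = 0.0                  # rising timing of the touch driving signal Tdrv
Ta, Tb, Tc = 6.0, 4.0, 2.0       # assumed on start timings (Tb ahead of Ta, Tc ahead of Tb)

def amp_output(on_start_us):
    return CHARGE_RATE_V_PER_US * (on_start_us - T_RISE_US)

VA, VB, VC = amp_output(Ta), amp_output(Tb), amp_output(Tc)
print(VA, VB, VC)                # 3.0 2.0 1.0 -> farther electrodes get the higher level
```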
Further, to adjust the toggle timing of the amplifier reset signal RST to the first on start timing Ta, the second on start timing Tb, and the third on start timing Tc, the RST adjuster can generate three amplifier reset signals RST having different on duties, as illustrated inFIG.9. Accordingly, the amplifier reset signal RST having the first on start timing Ta can have a first on duty, the amplifier reset signal RST having the second on start timing Tb can have a second on duty, and the amplifier reset signal RST having the third on start timing Tc can have a third on duty. In this case, the first on duty is shorter than the second on duty, and the second on duty is shorter than the third on duty. For example, referring toFIG.9, the three amplifier reset signals RST having different on duties can have the same falling timing and different rising timings. The Tdrv adjuster can adjust the voltage amplitude of the touch driving signal Tdrv to a first value Da for the touch electrodes at the first position AR1, adjust the voltage amplitude of the touch driving signal Tdrv to a second value Db for the touch electrodes at the second position AR2, and adjust the voltage amplitude of the touch driving signal Tdrv to a third value Dc for the touch electrodes at the third position AR3, as illustrated inFIG.5. Here, the first value Da is greater than the second value Db, and the second value Db is greater than the third value Dc. Referring toFIG.10andFIG.11, the pre-amplifier stores charges flowing from the touch electrodes at the first position AR1in the feedback capacitor CFB from a rising timing of the touch driving signal Tdrv having the amplitude of the first value Da to the on start timing of the amplifier reset signal RST and supplies the stored voltage to the integrator as a first amplifier output voltage VA with respect to the touch electrodes at the first position AR1. Then, the integrator accumulates the first amplifier output voltage VA multiple times (e.g., three times) to generate a first integrator output voltage. Referring toFIG.10andFIG.11, the pre-amplifier stores charges flowing from the touch electrodes at the second position AR2in the feedback capacitor CFB from the rising timing of the touch driving signal Tdrv having the amplitude of the second value Db to the on start timing Tb of the amplifier reset signal RST and supplies the stored voltage to the integrator as a second amplifier output voltage VB with respect to the touch electrodes at the second position AR2. Then, the integrator accumulates the second amplifier output voltage VB multiple times (e.g., three times) to generate a second integrator output voltage. InFIG.10andFIG.11, the on start timing of the amplifier reset signal RST is fixed irrespective of touch electrode positions. Since the amplitude of the second value Db is less than the amplitude of the first value Da, the level of the second amplifier output voltage VB is lower than the level of the first amplifier output voltage VA. Accordingly, the level of the second integrator output voltage VB is lower than the level of the first integrator output voltage VA by “ΔAV”. In this manner, the level of the amplifier output voltage can be differentially controlled according to touch electrode positions depending on the voltage amplitude of the touch driving signal Tdrv. 
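An illustrative sketch of the amplitude knob, assuming an idealized charge-amplifier gain of CS/CFB and arbitrary amplitude values (the patent only fixes the ordering Da &gt; Db &gt; Dc):

```python
# Sketch of the drive-amplitude knob: with the reset timing fixed, the
# amplifier output voltage scales with the voltage amplitude of the touch
# driving signal Tdrv, so farther touch electrodes driven with a larger
# amplitude produce a higher output level. Capacitances and amplitudes below
# are assumed for illustration.

CS_PF, CFB_PF = 2.0, 1.0            # assumed touch and feedback capacitances
Da, Db, Dc = 6.0, 5.0, 4.0          # assumed Tdrv amplitudes for AR1, AR2, AR3

def amp_output(tdrv_amplitude_v):
    return tdrv_amplitude_v * CS_PF / CFB_PF

print(amp_output(Da), amp_output(Db), amp_output(Dc))   # 12.0 10.0 8.0
```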
The level of the amplifier output voltage can be controlled to be higher at the first position AR1than at the second position AR2and to be higher at the second position AR2than at the third position AR3. Accordingly, the level of the amplifier output voltage increases as positions of touch electrodes become farther from the touch sensing circuit SU, and thus sensitivity deviations at touch electrode positions can be effectively reduced. FIG.12is a diagram illustrating multiplexer circuits for selectively connecting touch electrodes to the touch sensing circuit andFIG.13is a diagram illustrating an example of a configuration of a circuit for differentially adjusting the toggle timing of the amplifier reset signal depending on positions of touch electrodes. Referring toFIG.12andFIG.13, the RST adjuster can be included in a digital circuit block of the source & readout IC SRIC along with a MUX counter and a setting register. The RST adjuster, the MUX counter, and the setting register are controlled by an MCU. The MCU can be mounted on a source printed circuit board SPCB along with the TPIC. The MUX counter generates MUX count information representing the order of connection of touch electrodes TS and the touch sensing circuit SU through a multiplexer MUX. The MUX count information is generated as different values according to the positions of the touch electrodes TS. Different on start timings and on duties of amplifier reset signals RST are set in advance in the setting register according to the positions of the touch electrodes TS. Information related to the amplifier reset signals RST in the setting register can be corrected by the MCU. The RST adjuster reads an amplifier reset signal RST corresponding to a position of a touch electrode TS from the setting register based on the MUX count information. Accordingly, the toggle timing of the amplifier reset signal RST can be differentially adjusted according to the positions of the touch electrodes. FIG.14andFIG.15are diagrams illustrating an example of a configuration of a circuit for differentially adjusting the voltage amplitude of the touch driving signal depending on the positions of the touch electrodes. The Tdrv adjuster can be included in the TPIC as illustrated inFIG.14or can be included in an analog circuit block of the source & readout IC SRIC as illustrated inFIG.15. InFIG.14andFIG.15, the MUX counter as illustrated inFIG.13is mounted in the digital circuit block of the source & readout IC SRIC. The MUX counter generates MUX count information indicating the order of connection between touch electrodes TS and the touch sensing circuit SU through a multiplexer MUX. The MUX count information is generated as different values depending on the positions of the touch electrodes TS. The Tdrv adjuster detects the positions of touch electrodes TS according to the MUX count information and adjusts the voltage amplitude of the touch driving signal Tdrv in accordance with the positions. The Tdrv adjuster supplies the touch driving signal Tdrv having the voltage amplitude adjusted in accordance with the positions of the touch electrodes TS to the touch sensing circuit SU. The touch driving signal Tdrv is applied to the touch electrodes TS through the pre-amplifier of the touch sensing circuit SU. Accordingly, the voltage amplitude of the touch driving signal Tdrv can be differentially adjusted depending on the positions of the touch electrodes. 
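A hypothetical data layout for the lookup described above (the patent names the MUX counter, the setting register, the RST adjuster, and the Tdrv adjuster, but does not define a concrete data structure; all values are illustrative):

```python
# Sketch of reading position-dependent settings from the setting register
# using the MUX count information.

SETTING_REGISTER = {
    # mux_count: (rst_on_start_us, rst_on_duty_us, tdrv_amplitude_v)
    1: (6.0, 1.0, 6.0),   # electrodes farthest from the touch sensing circuit SU
    2: (5.0, 2.0, 5.0),
    3: (4.0, 3.0, 4.0),   # electrodes closest to the touch sensing circuit SU
}

def settings_for(mux_count):
    """Return (RST on start timing, RST on duty, Tdrv amplitude) for this group."""
    return SETTING_REGISTER[mux_count]

print(settings_for(1))   # (6.0, 1.0, 6.0): latest on start, shortest on duty, largest swing
```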
FIG.16andFIG.17are diagrams illustrating an example of a hybrid configuration for differentially adjusting the toggle timing of the amplifier reset signal and the voltage amplitude of the touch driving signal depending on positions of touch electrodes. Referring toFIG.12,FIG.16, andFIG.17, positions of touch electrodes connected through a first multiplexer MUX1are farthest from the touch sensing circuit SU and positions of touch electrodes connected through an n-th multiplexer MUXn are closest to the touch sensing circuit SU. The amplifier output control circuit differentially adjusts the toggle timing of the amplifier reset signal RST and the voltage amplitude LFD of the touch driving signal Tdrv depending on positions of touch electrodes in the display driving period Pd. Then, the touch sensing circuit SU drives and senses the touch electrodes based on the adjusted factors in the touch sensor driving period Pt. The amplifier output control circuit can adjust the voltage amplitude of the touch driving signal Tdrv to the same value Da for touch electrodes connected through first and second multiplexers MUX1and MUX2. However, the amplifier output control circuit can adjust a toggle timing of a first amplifier reset signal RST to Ta for first touch electrodes connected through the first multiplexer MUX1and adjust a toggle timing of a second amplifier reset signal RST to Tb for second touch electrodes connected through the second multiplexer MUX2. The amplifier output control circuit can control an on duty of the first amplifier reset signal RST to be shorter than an on duty of the second amplifier reset signal RST such that an amplifier output voltage for the first touch electrodes is higher than an amplifier output voltage for the second touch electrodes. The amplifier output control circuit can adjust the voltage amplitude of the touch driving signal Tdrv to the same value Db for touch electrodes connected through third and fourth multiplexers MUX3and MUX4. Here, Db is less than Da. However, the amplifier output control circuit can adjust a toggle timing of a third amplifier reset signal RST to Ta′ for third touch electrodes connected through the third multiplexer MUX3and adjust a toggle timing of a fourth amplifier reset signal RST to Tb′ for fourth touch electrodes connected through the fourth multiplexer MUX4. The amplifier output control circuit can control an on duty of the third amplifier reset signal RST to be shorter than an on duty of the fourth amplifier reset signal RST such that an amplifier output voltage for the third touch electrodes is higher than an amplifier output voltage for the fourth touch electrodes. The amplifier output control circuit can adjust the voltage amplitude of the touch driving signal Tdrv to the same value Dc for touch electrodes connected through (n−1)-th and n-th multiplexers MUXn−1 and MUXn. Here, Dc is less than Db. However, the amplifier output control circuit can adjust a toggle timing of an (n−1)-th amplifier reset signal RST to Ta″ for (n−1)-th touch electrodes connected through the (n−1)-th multiplexer MUXn−1 and adjust a toggle timing of an n-th amplifier reset signal RST to Tb″ for n-th touch electrodes connected through the n-th multiplexer MUXn. 
The amplifier output control circuit can control an on duty of the (n−1)-th amplifier reset signal RST to be shorter than an on duty of the n-th amplifier reset signal RST such that an amplifier output voltage for the (n−1)th touch electrodes is higher than an amplifier output voltage for the n-th touch electrodes. FIG.18is a diagram illustrating an example of differentially adjusting the toggle timing of the amplifier reset signal in a finger sensing mode and a pen sensing mode andFIG.19is a diagram illustrating an example of differentially adjusting the toggle timing of the amplifier reset signal in a self-sensing mode and a mutual sensing mode. Referring toFIG.18, the touch sensing circuit can implement the finger sensing mode and the pen sensing mode in the touch sensor driving period Pt. The touch sensing circuit generates an amplifier output voltage according to finger touch input in the finger sensing mode and generates an amplifier output voltage according to pen touch input in the pen sensing mode. Touch sensitivity can be different in the finger sensing mode and the pen sensing mode for touch electrodes at the same position. The amplifier output circuit can adjust the toggle timing of the amplifier reset signal RST to the first on start timing in the pen sensing mode and adjust the toggle timing of the amplifier reset signal RST to the second on start timing ahead of the first on start timing in the finger sensing mode for touch electrodes at the same position to improve touch sensitivity deviation between the sensing modes. Referring toFIG.19, the touch sensing circuit can implement the self-sensing mode and the mutual sensing mode in the touch sensor driving period Pt. The touch sensing circuit generates an amplifier output voltage based on self-capacitance according to touch input in the self-sensing mode and generates an amplifier output voltage based on mutual capacitance according to touch input in the mutual sensing mode. Touch sensitivity can be different in the self-sensing mode and the mutual sensing mode for touch electrodes at the same position. The amplifier output circuit can adjust the toggle timing of the amplifier reset signal RST to the first on start timing in the self-sensing mode and adjust the toggle timing of the amplifier reset signal RST to the second on start timing ahead of the first on start timing in the mutual sensing mode for touch electrodes at the same position to improve touch sensitivity deviation between the sensing modes. It will be apparent to those skilled in the art that various modifications and variations can be made in the present disclosure without departing from the spirit or scope of the present disclosure. Thus, it is intended that the present disclosure cover the modifications and variations of the present disclosure provided they come within the scope of the appended claims and their equivalents. The display device having a touch sensor according to the embodiment of the present disclosure can improve sensitivity deviations at positions of touch electrodes to enhance touch performance. Furthermore, the display device having a touch sensor according to the embodiment of the present disclosure can improve sensitivity deviations between sensing modes to enhance touch performance. | 33,907 |
11861109 | DETAILED DESCRIPTION The invention now will be described more fully hereinafter with reference to the accompanying drawings, in which various embodiments are shown. This invention may, however, be embodied in many different forms, and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Like reference numerals refer to like elements throughout. It will be understood that when an element is referred to as being “on” another element, it can be directly on the other element or intervening elements may be present therebetween. In contrast, when an element is referred to as being “directly on” another element, there are no intervening elements present. It will be understood that, although the terms “first,” “second,” “third” etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, “a first element,” “component,” “region,” “layer” or “section” discussed below could be termed a second element, component, region, layer or section without departing from the teachings herein. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, “a”, “an,” “the,” and “at least one” do not denote a limitation of quantity, and are intended to include both the singular and plural, unless the context clearly indicates otherwise. For example, “an element” has the same meaning as “at least one element,” unless the context clearly indicates otherwise. “At least one” is not to be construed as limiting “a” or “an.” “Or” means “and/or.” As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” or “includes” and/or “including” when used in this specification, specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, regions, integers, steps, operations, elements, components, and/or groups thereof. Furthermore, relative terms, such as “lower” or “bottom” and “upper” or “top,” may be used herein to describe one element's relationship to another element as illustrated in the Figures. It will be understood that relative terms are intended to encompass different orientations of the device in addition to the orientation depicted in the Figures. For example, if the device in one of the figures is turned over, elements described as being on the “lower” side of other elements would then be oriented on “upper” sides of the other elements. The term “lower,” can therefore, encompasses both an orientation of “lower” and “upper,” depending on the particular orientation of the figure. Similarly, if the device in one of the figures is turned over, elements described as “below” or “beneath” other elements would then be oriented “above” the other elements. The terms “below” or “beneath” can, therefore, encompass both an orientation of above and below. 
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Embodiments described herein should not be construed as limited to the particular shapes of regions as illustrated herein but are to include deviations in shapes that result, for example, from manufacturing. For example, a region illustrated or described as flat may, typically, have rough and/or nonlinear features. Moreover, sharp angles that are illustrated may be rounded. Thus, the regions illustrated in the figures are schematic in nature and their shapes are not intended to illustrate the precise shape of a region and are not intended to limit the scope of the present claims. Hereinafter, embodiments of the invention will be described in detail with reference to the accompanying drawings. FIG.1is a block diagram illustrating a display device100according to an embodiment of the invention. Referring toFIG.1, an embodiment of an electronic device1000includes a host processor50and a display device100. The display device100may include a touch panel140, a display panel110, and a display panel driver. The display panel driver includes a driving controller200, a gate driver300, a gamma reference voltage generator400and a data driver500. The display panel driver may further include a power voltage generator600. In an embodiment, for example, the driving controller200and the data driver500may be integrally formed as a single chip. In an alternative embodiment, for example, the driving controller200, the gamma reference voltage generator400, and the data driver500may be integrally formed as a single chip. In another alternative embodiment, for example, the driving controller200, the gamma reference voltage generator400, the data driver500, and the power voltage generator600may be integrally formed as a single chip or module. A driving module including at least the driving controller200and the data driver500which are integrally formed may be referred to as a timing controller embedded data driver (TED). The touch panel140may be mounted on an upper surface of the display panel110or formed in the display panel110. In an embodiment, for example, the touch panel140may be one of a resistive type, a capacitive type, an electromagnetic field method, an infrared method, a surface acoustic wave (SAW) type, and a near field imaging (NFI) type. The host processor50may receive proximity data of the touch panel140. The host processor50may determine whether a conductor approaches the touch panel140based on the proximity data provided from the touch panel140. In an embodiment, for example, the host processor50may determine whether the conductor approaches the touch panel140by determining a distance between the touch panel140and the conductor based on the proximity data. The host processor50may generate input image data IMG and an input control signal CONT, and provide the generated input image data IMG and the input control signal CONT to the driving controller200. 
The display panel110includes a display region AA on which an image is displayed and a peripheral region PA adjacent to the display region AA. In an embodiment, for example, the display panel110may be an organic light emitting diode display panel including an organic light emitting diode. In an alternative embodiment, for example, the display panel110may be a quantum-dot organic light emitting diode display panel including an organic light emitting diode and a quantum-dot color filter. In another alternative embodiment, for example, the display panel110may be a quantum-dot nano light emitting diode display panel including a nano light emitting diode and a quantum-dot color filter. In another alternative embodiment, for example, the display panel110may be a liquid crystal display panel including a liquid crystal layer. The display panel110includes a plurality of gate lines GL, a plurality of data lines DL, and a plurality of pixels P electrically connected to the gate lines GL and the data lines DL. The gate lines GL extend in a first direction D1, and the data lines DL extend in a second direction D2crossing the first direction D1. The display panel110may be driven by the display panel driver to display an image. The driving controller200receives the input image data IMG and the input control signal CONT from the host processor50(or an application processor). In an embodiment, for example, the input image data IMG may include red image data, green image data, and blue image data. The input image data IMG may further include white image data. Alternatively, the input image data IMG may include magenta image data, yellow image data, and cyan image data. The input control signal CONT may include a master clock signal and a data enable signal. The input control signal CONT may further include a vertical synchronization signal and a horizontal synchronization signal. The driving controller200generates a first control signal CONT1, a second control signal CONT2, a third control signal CONT3, a fourth control signal CONT4and a data signal DATA based on the input image data IMG and the input control signal CONT. The driving controller200generates the first control signal CONT1for controlling an operation of the gate driver300based on the input control signal CONT, and outputs the first control signal CONT1to the gate driver300. The first control signal CONT1may include a vertical start signal and a gate clock signal. The driving controller200generates the second control signal CONT2for controlling an operation of the data driver500based on the input control signal CONT, and outputs the second control signal CONT2to the data driver500. The second control signal CONT2may include a horizontal start signal and a load signal. The driving controller200generates the data signal DATA based on the input image data IMG. The driving controller200outputs the data signal DATA to the data driver500. The driving controller200generates the fourth control signal CONT4for controlling the operation of the power voltage generator600based on the input image data IMG and the input control signal CONT, and outputs the fourth control signal CONT4to the power voltage generator600. In an embodiment, for example, the fourth control signal CONT4may be a power voltage level signal that determines a level of a power voltage. 
In an embodiment, for example, the driving controller200may generate the fourth control signal CONT4for controlling the operation of the power voltage generator600using the input control signal CONT provided from the host processor50and output the fourth control signal CONT4to the power voltage generator600. The gate driver300generates gate signals driving the gate lines GL in response to the first control signal CONT1received from the driving controller200. The gate driver300outputs gate signals to the gate lines GL. In an embodiment, for example, the gate driver300may sequentially output the gate signals to the gate lines GL. In an embodiment, the gate driver300may be integrated on the peripheral region PA of the display panel110. The gamma reference voltage generator400generates a gamma reference voltage VGREF in response to the third control signal CONT3received from the driving controller200. The gamma reference voltage generator400provides the gamma reference voltage VGREF to the data driver500. The gamma reference voltage VGREF has a value corresponding to the data signal DATA. In an embodiment, the gamma reference voltage generator400may be disposed in the driving controller200or in the data driver500. The data driver500receives the second control signal CONT2and the data signal DATA from the driving controller200, and receives the gamma reference voltage VGREF from the gamma reference voltage generator400. The data driver500converts the data signal DATA into an analog data voltage VDATA using the gamma reference voltage VGREF. The data driver500outputs the data voltage VDATA to the data line DL. The power voltage generator600may generate a first power voltage ELVDD and output the first power voltage ELVDD to the display panel110. The power voltage generator600may generate a second power voltage ELVSS and output the second power voltage ELVSS to the display panel110. In addition, the power voltage generator600may generate a gate driving voltage for driving the gate driver300and output the gate driving voltage to the gate driver300. In addition, the power voltage generator600may generate a data driving voltage for driving the data driver500, and output the data driving voltage to the data driver500. The first power voltage ELVDD may be a high power applied to the pixel P of the display panel110, and the second power voltage ELVSS may be a low power applied to the pixel P of the display panel110. The display device100according to embodiments of the invention may support a screen off mode, in which an image is not displayed on the display panel110, as well as a normal mode in which the image is displayed on the display panel110. In an embodiment, for example, when the display device100is powered on, the display device100may be first driven in the normal mode. In such an embodiment, when a predetermined condition is satisfied, the driving mode of the display device100may become the screen off mode changed from the normal mode. FIG.2is a schematic diagram illustrating an embodiment of the display device100ofFIG.1.FIG.3is a block diagram illustrating an embodiment of a touch panel140ofFIG.1including a proximity area120and a non-proximity area130. Referring toFIG.2andFIG.3, the touch panel140may be mounted on the upper surface of the display panel110. The host processor50may receive the proximity data of the touch panel140. 
In an embodiment, for example, when the conductor approaches the touch panel140, the host processor50may determine whether the conductor approaches the touch panel140by a change amount of a capacitance of an internal electrode of the display device100. In an embodiment, for example, when the conductor approaches the touch panel140, the proximity data of the touch panel140may increase. When the distance between the conductor and the touch panel140increases, the proximity data of the touch panel140may decrease. In an embodiment, the touch panel140may include the proximity area120and the non-proximity area130. In an embodiment, for example, the non-proximity area130of the touch panel140may be disposed lower than the proximity area120. FIG.4is a diagram illustrating a case in which the conductor approaches the touch panel140in the proximity area120ofFIG.3 Referring toFIG.4, the host processor50may receive the proximity data of the touch panel140. In an embodiment, for example, the host processor50may receive the change amount of the capacitance of the touch panel140. When the conductor approaches the touch panel140in the proximity area120, the host processor50may determine whether the conductor approaches the touch panel140based on a change amount of the proximity data in the proximity area120. In an embodiment, for example, the host processor50may determine the distance between the conductor and the touch panel140based on the change amount of the proximity data, so that whether the conductor approaches the touch panel140may be determined based on the change amount of the proximity data. In an embodiment, for example, when the conductor approaches the touch panel140, the proximity data of the touch panel140may increase. When the distance between the conductor and the touch panel140increases, the proximity data of the touch panel140may decrease. In such an embodiment, by determining whether the conductor approaches the touch panel140, the display device100may enter the screen off mode in which the display panel110does not display the image to reduce the power consumption. In an embodiment, for example, when the conductor approaches the touch panel140, the proximity data of the touch panel140may increase. In an embodiment, for example, when the conductor approaches the touch panel140, whether the conductor approaches the touch panel140may be determined based on the change amount of the capacitance of the internal electrode of the display device100. As such, the screen off mode may be entered by determining whether the conductor approaches the proximity area120based on the proximity data of the touch panel140. In addition, the host processor50may receive the proximity data of the touch panel140. The host processor50may operate the screen off mode based on the proximity data provided from the touch panel140. In an embodiment, for example, the host processor50may determine whether to turn on/off the power of the display panel110based on the proximity data provided from the touch panel140. In an embodiment, for example, the screen off mode may be a screen off mode during a call. When a user's face approaches the proximity area120of the touch panel140during the call, the display panel110may stop displaying the image to reduce the power consumption. However, when the user's face approaches the non-proximity area130of the touch panel140during the call, the display of the image on the display panel110may not be stopped. 
In an embodiment, for example, the proximity area120may be an upper portion of the touch panel140, and an area for determining whether to stop displaying the image of the display panel110when a conductor approaches the proximity area120. The non-proximity area130may be a lower portion of the touch panel140, and the display of the image on the display panel110may not be stopped regardless of whether the conductor approaches or moves away from the non-proximity area130. The proximity area120may be an area for determining the screen off mode, and the non-proximity area130may be an area independent from a determination of the screen off mode. When the determination process of whether to enter the screen off mode is performed once, the accuracy of the determination of whether the conductor approaches the touch panel140may be low. Accordingly, even when the conductor does not approach the touch panel140, the display panel110may erroneously enter the screen off mode, in which the image is not displayed, due to a signal interference or a noise. Therefore, an accuracy of an operation of the screen off mode may be low. In an embodiment of the invention, the host processor50may count the rise of the proximity data of the touch panel140in the proximity area120to determine whether to enter the screen off mode. In such an embodiment, since whether to enter the screen off mode may be determined not based on one rise of the proximity data of touch panel140but several rises of the proximity data, the accuracy of the operation of the screen off mode may be improved. FIG.5is a graph illustrating the proximity data in the proximity area120when the conductor approaches the touch panel140in the proximity area120ofFIG.3. Referring toFIG.5, the host processor50may enhance the accuracy of the operation of the screen off mode by counting the rise of the proximity data of the touch panel140in the proximity area120to determine whether to enter the screen off mode. In an embodiment, for example, the host processor may receive the proximity data in the proximity area120. The host processor50may determine whether the conductor approaches the touch panel140in the proximity area120based on the proximity data provided from the touch panel140. In addition, the host processor50may generate a first count value by counting the rise of the proximity data in the proximity area120. The host processor50may determine whether the first count value is equal to or greater than a first reference value, and when the first count value is equal to or greater than the first reference value, the display device100may enter the screen off mode. Accordingly, the host processor50may generate the first count value by counting the rise of the proximity data in the proximity area120, and when the first count value is equal to or greater than the first reference value, the display panel110may be powered off. In an embodiment, the host processor50may determine the rise of the proximity data in the proximity area120based on a slope of the proximity data. In an embodiment, for example, the host processor50may determine the distance between the touch panel140and the conductor based on the proximity data provided from the touch panel140. In such an embodiment, the rise of the proximity data in the proximity area120may be determined based on the slope of the proximity data to determine the distance between the touch panel140and the conductor. 
In an embodiment, the host processor50may extract two proximity data in the proximity area120at a predetermined time interval, that is, two proximity data with a predetermined time interval therebetween. When the slope of the two proximity data in the proximity area120is positive, the first count value may be increased. In an embodiment, for example, the host processor50may extract the proximity data in the proximity area120at the predetermined time interval, and may generate the first count value by counting the rise of the proximity data in the proximity area120. When the slope of the two proximity data in the proximity area120is positive, the first count value may be increased, and when the slope of the two proximity data in the proximity area120is negative, the first count value may not be increased. When the accumulated first count value is equal to or greater than the first reference value, displaying the image on the display panel110may be stopped. In an embodiment, as described above, whether the screen off mode is entered may be determined not based on one rise of the proximity data of the touch panel140but several rises of the proximity data, such that the accuracy of the operation of the screen off mode may be enhanced. FIG.6is a diagram illustrating a case in which the conductor approaches the touch panel140in the non-proximity area130ofFIG.3.FIG.7Ais a graph illustrating examples of the proximity data in the non-proximity area130, a rising period of the proximity data, and a reset period of the proximity data when the conductor approaches the touch panel140in the non-proximity area130ofFIG.3.FIG.7Bis a graph illustrating examples of the proximity data in the non-proximity area130, the rising period of the proximity data, and the reset period of the proximity data when the conductor approaches the touch panel140in the non-proximity area130ofFIG.3.FIG.7Cis a graph illustrating an example of the proximity data in the non-proximity area130, the rising period of proximity data, and the reset period of the proximity data when the conductor approaches the touch panel140in the non-proximity area130ofFIG.3.FIG.8is a graph illustrating the proximity data in the proximity area120when the conductor approaches the touch panel140in the non-proximity area130ofFIG.3. Referring toFIGS.6to8, when the conductor approaches the proximity area120, the proximity data in the proximity area120may increase. When the distance between the proximity area120of the touch panel140and the conductor increases, the proximity data in the proximity area120may decrease. In addition, when the conductor approaches the non-proximity area130, the proximity data in the non-proximity area130may increase. When the distance between the conductor and the non-proximity area130of the touch panel140increases, the proximity data in the non-proximity area130may decrease. The proximity data in the proximity area120and the proximity data in the non-proximity area130may be influenced by each other. In an embodiment, for example, even when the conductor approaches the non-proximity area130, the proximity data in the proximity area120may be affected, and the proximity data in the proximity area120may be changed. Thus, even when the conductor approaches the non-proximity area130, the host processor50may determine that the conductor approaches the touch panel140in the proximity area120by a predetermined distance or less, and the proximity data may increase in the proximity area120.
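A minimal sketch of the rise-counting scheme just described may help: two proximity data taken at the predetermined time interval are compared, a positive slope increments the first count value, a non-positive slope leaves it unchanged, and the screen off mode is entered once the accumulated count reaches the first reference value. The reference value and the sample data below are illustrative assumptions.

    FIRST_REFERENCE_VALUE = 3   # assumed number of rises required to enter the screen off mode

    def count_rises(samples, first_reference_value=FIRST_REFERENCE_VALUE):
        """samples are proximity data taken in the proximity area at the predetermined
        time interval; returns (first_count_value, enter_screen_off)."""
        first_count_value = 0
        for previous, current in zip(samples, samples[1:]):
            if current - previous > 0:            # positive slope between the two proximity data
                first_count_value += 1
            # a negative (or zero) slope leaves the first count value unchanged
            if first_count_value >= first_reference_value:
                return first_count_value, True    # stop displaying the image (screen off mode)
        return first_count_value, False

    if __name__ == "__main__":
        approaching = [0.0, 1.2, 2.5, 4.1, 5.0]   # conductor steadily approaching
        noise_spike = [0.0, 1.5, 0.3, 0.2, 0.1]   # single spike from interference or noise
        print(count_rises(approaching))           # -> (3, True)
        print(count_rises(noise_spike))           # -> (1, False)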
When the conductor approaches and moves away from the non-proximity area130, the proximity data in the proximity area120may be affected, and the proximity data in the proximity area120may increase. As such, the screen off mode may malfunction depending on the distance of the conductor in the non-proximity area130. In an embodiment of the invention, when the proximity data increases in the non-proximity area130, the first count value may be reset to enhance the accuracy of the operation of the screen off mode. In an embodiment, for example, the host processor50may reset the first count value when the proximity data in the non-proximity area130increases to prevent the display device100from entering the screen off mode when the conductor approaches and moves away from the non-proximity area130and accordingly the proximity data in the proximity area120is affected and increases. Thus, when the conductor approaches the non-proximity area130, the first count value in the proximity area120becomes 0, so that the display device100may be effectively prevented from erroneously entering the screen off mode. In an embodiment, the host processor50may determine the rise (or increase) of the proximity data in the proximity area120based on the slope of the proximity data in the proximity area120. In an embodiment, for example, the host processor50may determine the distance between the proximity area120and the conductor based on the proximity data provided from the proximity area120. In an embodiment, the rise of the proximity data in the proximity area120may be determined based on the slope of the proximity data in the proximity area120to determine the distance between the proximity area120and the conductor. In an embodiment, the host processor50may extract two proximity data in the proximity area120at the predetermined time interval, and when the slope of the two proximity data in the proximity area120is positive, the first count value may be increased. In an embodiment, for example, the host processor50may extract the proximity data in the proximity area120at the predetermined time interval, and may generate the first count value by counting the rise of the proximity data in the proximity area120. When the slope of the two proximity data in the proximity area120is positive, the first count value may be increased, and when the slope of the two proximity data in the proximity area120is negative, the first count value may not be increased. In an embodiment, the host processor50may determine the rise of the proximity data in the non-proximity area130based on the slope of the proximity data in the non-proximity area130. In an embodiment, for example, the host processor50may determine the distance between the non-proximity area130and the conductor based on the proximity data in the non-proximity area130. In such an embodiment, the host processor50may determine the rise of the proximity data in the non-proximity area130based on the slope of the proximity data in the non-proximity area130to determine the distance between the non-proximity area130and the conductor. In an embodiment, the host processor50may extract two proximity data in the non-proximity area130at the predetermined time interval, and when the slope of the two proximity data in the non-proximity area130is positive, a second count value may be increased. 
In an embodiment, for example, the host processor50may extract the proximity data in the non-proximity area130at the predetermined time interval, and may generate the second count value by counting the rise of the proximity data in the non-proximity area130. When the slope of the two proximity data in the non-proximity area130is positive, the second count value may be increased, and when the slope of two proximity data in the non-proximity area130is negative, the second count value may not be increased. In an embodiment, when the proximity data in the non-proximity area130increase, the host processor50may reset the first count value generated by counting the rise of the proximity data in the proximity area120. In an embodiment, for example, the host processor50may reset the first count value when the second count value in the non-proximity area130is equal to or greater than a second reference value. In an embodiment, as illustrated inFIG.7A, the first count value may be reset immediately after the proximity data in the non-proximity area130increases. A rising period of the proximity data in the non-proximity area130may be substantially the same as a reset period of the first count value of the proximity area120. As illustrated inFIG.8, the first count value is reset prior to the rise of the proximity data in the proximity area120so that the rise of the first count value of the proximity area120greater than the first reference value due to the influence of the non-proximity area130may be effectively prevented and the malfunction of entering the screen off mode, when the conductor approaches and moves away from the non-proximity area130, may be effectively prevented. In an embodiment, as illustrated inFIG.7B, the first count value may be reset after a predetermined delay time from a time point when the proximity data in the non-proximity area130increases. In such an embodiment, the rising period of the proximity data in the non-proximity area130may not be the same as the reset period of the first count value of the proximity area120. The reset period of the first count value of the proximity area120may be formed by being delayed by a delay time from the rising period of proximity data in the non-proximity area130. As illustrated inFIG.8, the first count value is reset while the proximity data in the proximity area120is increasing so that the rise of the first count value of the proximity area120greater than the first reference value due to the influence of the non-proximity area130may be effectively prevented and the malfunction of entering the screen off mode, when the conductor approaches and moves away from the non-proximity area130, may be effectively prevented. In an embodiment, as illustrated inFIG.7C, the first count value may be reset by a rising period of the proximity data in the non-proximity area130from a time point when the rise of the proximity data in the non-proximity area130is finished. In such an embodiment, the rising period of the proximity data in the non-proximity area130may not same as the reset period of the first count value of the proximity area120. The reset period of the first count value of the proximity area120may start at an end of the rising period of the proximity data in the non-proximity area130and last as long as the rising period of the proximity data in the non-proximity area130. 
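The reset behavior of FIGS. 7A to 7C described above can be sketched as follows (the effect of the reset on the FIG. 8 waveform is discussed further below). For simplicity the second reference value is taken as a single rise in the non-proximity area; the delay length, reference value, and sample sequences are illustrative assumptions, and the three reset timings are selected by the mode argument.

    FIRST_REFERENCE_VALUE = 3      # assumed number of rises needed to enter the screen off mode
    DELAY_SAMPLES = 2              # assumed delay of the FIG. 7B timing, in sample periods

    def reset_indices(non_prox_samples, mode="immediate", delay=DELAY_SAMPLES):
        """Sample indices at which the first count value is reset, for the reset timings of
        FIG. 7A ("immediate"), FIG. 7B ("delayed"), and FIG. 7C ("after_rise")."""
        resets, rise_len = set(), 0
        for i in range(1, len(non_prox_samples)):
            if non_prox_samples[i] > non_prox_samples[i - 1]:   # non-proximity data rises
                rise_len += 1
                if mode == "immediate":
                    resets.add(i)
                elif mode == "delayed":
                    resets.add(i + delay)
            else:
                if mode == "after_rise" and rise_len > 0:
                    # reset starts when the rise ends and lasts as long as the rising period
                    resets.update(range(i, i + rise_len))
                rise_len = 0
        return resets

    def enters_screen_off(prox_samples, non_prox_samples, mode="immediate"):
        resets = reset_indices(non_prox_samples, mode)
        first_count_value = 0
        for i in range(1, len(prox_samples)):
            if i in resets:
                first_count_value = 0            # rise attributed to the non-proximity area
            elif prox_samples[i] > prox_samples[i - 1]:
                first_count_value += 1           # rise counted in the proximity area
            if first_count_value >= FIRST_REFERENCE_VALUE:
                return True                      # enter the screen off mode
        return False

    if __name__ == "__main__":
        prox     = [0, 1, 2, 3, 4, 5]            # proximity-area data rising due to coupling
        non_prox = [0, 2, 4, 6, 6, 6]            # conductor actually near the non-proximity area
        print(enters_screen_off(prox, non_prox, "immediate"))              # -> False
        print(enters_screen_off(prox, [0, 0, 0, 0, 0, 0], "immediate"))    # -> True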
As illustrated inFIG.8, the first count value is reset throughout the rising period of the proximity data in the proximity area120so that the rise of the first count value in the proximity area120greater than the first reference value due to the influence of the non-proximity area130may be effectively prevented and the malfunction of entering the screen off mode, when the conductor approaches and moves away from the non-proximity area130, may be effectively prevented. FIG.9is a flowchart illustrating a method of driving the display device100when the conductor approaches the touch panel140in the proximity area120ofFIG.3.FIG.10is a flowchart illustrating a method of driving the display device100when the conductor approaches the touch panel140in the non-proximity area130ofFIG.3. Referring toFIGS.1to10, in an embodiment of a method of driving the display device100of the invention, when the conductor approaches the touch panel140in the proximity area120, the proximity data of the touch panel140is measured (S100), and the proximity data of the touch panel140is provided to the host processor50(S200), and the host processor50counts a rise of the proximity data in the proximity area120to generate the first count value, and whether the first count value is equal to or greater than the first reference value is determined (S300). When the first count value is equal to or greater than the first reference value (YES), the host processor50stops displaying the image on the display panel110(S400), and when the first count value is less than the first reference value (NO), the host processor50causes the display panel110to display the image (S500). In addition, when the conductor approaches the touch panel140in the non-proximity area130and the proximity data in the non-proximity area130increases (S600: YES), the host processor50may reset the first count value in the proximity area120(S700). In an embodiment, in the display device100, when the conductor approaches the touch panel140in the proximity area120, the proximity data of the touch panel140may be measured (S100), and the proximity data of the touch panel140may be provided to the host processor50(S200). When the conductor approaches the touch panel140, the host processor50may determine whether the conductor approaches the touch panel140by the change amount of the capacitance of the internal electrode of the display device100. In an embodiment, for example, when the conductor approaches the touch panel140, the proximity data of the touch panel140may increase. When the distance between the conductor and the touch panel140increases, the proximity data of the touch panel140may decrease. In an embodiment, the host processor50may count the rise of the proximity data in the proximity area120to generate the first count value, and whether the first count value is equal to or greater than the first reference value may be determined (S300). When the first count value is equal to or greater than the first reference value, the host processor50may stop displaying the image on the display panel110(S400), and when the first count value is less than the first reference value, the host processor50may cause the display panel110to display the image (S500). When the determination process of whether to enter the screen off mode is performed once, the accuracy of the determination of whether the conductor approaches the touch panel140may be low. 
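Putting the flowcharts of FIGS. 9 and 10 together, the compact sketch below condenses the two snippets above into the decision sequence S100 to S700. The per-frame rise flags and the default reference value are illustrative assumptions; the step comments map to the flowchart labels.

    def drive_display(frames, first_reference_value=3):
        """frames is an iterable of (prox_rose, non_prox_rose) flags per measurement
        (S100/S200: measure the proximity data and provide it to the host processor).
        Returns the display state after each frame: True = image shown, False = screen off."""
        first_count_value = 0
        states = []
        for prox_rose, non_prox_rose in frames:
            if non_prox_rose:                   # S600: proximity data rose in the non-proximity area
                first_count_value = 0           # S700: reset the first count value
            elif prox_rose:
                first_count_value += 1          # count a rise in the proximity area
            screen_off = first_count_value >= first_reference_value   # S300
            states.append(not screen_off)       # S400: stop the image / S500: keep displaying it
        return states

    if __name__ == "__main__":
        print(drive_display([(True, False)] * 3))                              # -> [True, True, False]
        print(drive_display([(True, False), (True, True), (True, False)]))     # -> [True, True, True]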
Thus, even when the conductor does not approach the touch panel140, the display panel110may erroneously enter the screen off mode, in which the image is not displayed, due to a signal interference or a noise. Therefore, the accuracy of the operation of the screen off mode may be low. Thus, in an embodiment of the invention, the host processor50may count the rise of the proximity data of the touch panel140in the proximity area120to determine whether to enter the screen off mode to enhance the accuracy of the operation of the screen off mode. Accordingly, since whether to enter the screen off mode may be determined not based on one rise of the proximity data of the touch panel140but several rises of the proximity data, the accuracy of the operation of the screen off mode may be enhanced. The host processor50may enhance the accuracy of the operation of the screen off mode by counting the rise of the proximity data of the touch panel140in the proximity area120to determine whether to enter the screen off mode. In an embodiment, for example, the host processor50may receive the proximity data in the proximity area120. The host processor50may determine whether the conductor approaches the touch panel140in the proximity area120based on the proximity data provided from the touch panel140. In addition, the host processor50may generate a first count value by counting the rise of the proximity data in the proximity area120. The host processor50may determine whether the first count value is equal to or greater than a first reference value, and when the first count value is equal to or greater than the first reference value, the display device100may enter the screen off mode. The host processor50may generate the first count value by counting the rise of the proximity data in the proximity area120, and when the first count value is equal to or greater than the first reference value, the display panel110may be powered off. In an embodiment, when the conductor approaches the touch panel140in the non-proximity area130and the proximity data in the non-proximity area130increases (S600), the host processor50may reset the first count value in the proximity area120(S700). In an embodiment, as illustrated inFIG.7A, the first count value may be reset immediately after the proximity data in the non-proximity area130increases. Alternatively, as illustrated inFIG.7B, the first count value may be reset after the predetermined delay time from a time point when the proximity data in the non-proximity area130increases. Alternatively, as illustrated inFIG.7C, the first count value may be reset by the rising period of the proximity data in the non-proximity area130from a time point when the rise of the proximity data in the non-proximity area130is finished. According to an embodiment, the first count value indicating the rise of the proximity data in the proximity area120may be reset when the proximity data rises in the non-proximity area130. Accordingly, when the conductor approaches and moves away from the non-proximity area130, the malfunction of entering the screen off mode may be effectively prevented. Thus, the accuracy of the screen off mode function may be enhanced and the reliability of the display device may be enhanced. FIG.11is a block diagram illustrating an electronic device1000according to an embodiment of the invention.FIG.12is a diagram illustrating an embodiment in which the electronic device1000ofFIG.11is implemented as a smart phone. 
Referring toFIGS.11and12, an embodiment of the electronic device1000may include a processor1010, a memory device1020, a storage device1030, an input/output (I/O) device1040, a power supply1050, and a display device1060. The display device1060may be the display device100ofFIG.1. In addition, the electronic device1000may further include a plurality of ports for communicating with a video card, a sound card, a memory card, a universal serial bus (USB) device, other electronic device, and the like. In an embodiment, as illustrated inFIG.12, the electronic device1000may be implemented as a smart phone. However, the electronic device1000is not limited thereto. For example, the electronic device1000may be implemented as a cellular phone, a video phone, a smart pad, a smart watch, a tablet personal computer (PC), a car navigation system, a computer monitor, a laptop, a head mounted display (HMD) device, and the like. The processor1010may perform various computing functions. The processor1010may be a micro processor, a central processing unit (CPU), an application processor (AP), and the like. The processor1010may be coupled to other components via an address bus, a control bus, a data bus, and the like. Further, the processor1010may be coupled to an extended bus such as a peripheral component interconnection (PCI) bus. The memory device1020may store data for operations of the electronic device1000. For example, the memory device1020may include at least one non-volatile memory device such as an erasable programmable read-only memory (EPROM) device, an electrically erasable programmable read-only memory (EEPROM) device, a flash memory device, a phase change random access memory (PRAM) device, a resistance random access memory (RRAM) device, a nano floating gate memory (NFGM) device, a polymer random access memory (PoRAM) device, a magnetic random access memory (MRAM) device, a ferroelectric random access memory (FRAM) device, and the like and/or at least one volatile memory device such as a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a mobile DRAM device, and the like. The storage device1030may include a solid state drive (SSD) device, a hard disk drive (HDD) device, a CD-ROM device, and the like. The I/O device1040may include an input device such as a keyboard, a keypad, a mouse device, a touch-pad, a touch-screen, and the like, and an output device such as a printer, a speaker, and the like. In some embodiments, the I/O device1040may include the display device1060. The power supply1050may provide power for operations of the electronic device1000. The inventions may be applied to any display device and any electronic device including the touch panel. For example, the inventions may be applied to a mobile phone, a smart phone, a tablet computer, a digital television (TV), a three-dimensional (3D) TV, a personal computer (PC), a home appliance, a laptop computer, a personal digital assistant (PDA), a portable multimedia player (PMP), a digital camera, a music player, a portable game console, a navigation device, etc. The invention should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the invention to those skilled in the art. 
While the invention has been particularly shown and described with reference to embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit or scope of the invention as defined by the following claims. | 41,500 |
11861110 | DETAILED DESCRIPTION Electronic devices may be provided with displays. Displays may be used for displaying images for users. Displays may be formed from arrays of light-emitting diode pixels or other pixels. For example, a device may have an organic light-emitting diode (OLED) display. The electronic devices may have sensors such touch sensors. This provides the display with touch screen capabilities. A schematic diagram of an illustrative electronic device having a display is shown inFIG.1. Device10may be a cellular telephone, tablet computer, laptop computer, wristwatch device or other wearable device, a television, a stand-alone computer display or other monitor, a computer display with an embedded computer (e.g., a desktop computer), a system embedded in a vehicle, kiosk, or other embedded electronic device, a media player, or other electronic equipment. Configurations in which device10is a wristwatch, cellular telephone, tablet computer, or other portable electronic device may sometimes be described herein as an example. This is illustrative. Device10may, in general, be any suitable electronic device with a display. Device10may include control circuitry20. Control circuitry20may include storage and processing circuitry for supporting the operation of device10. The storage and processing circuitry may include storage such as nonvolatile memory (e.g., flash memory or other electrically-programmable-read-only memory configured to form a solid state drive), volatile memory (e.g., static or dynamic random-access-memory), etc. Processing circuitry in control circuitry20may be used to gather input from sensors and other input devices and may be used to control output devices. The processing circuitry may be based on one or more microprocessors, application processors, microcontrollers, digital signal processors, baseband processors and other wireless communications circuits, power management units, audio chips, application specific integrated circuits, etc. The processing circuitry of circuitry20is sometimes referred to as an application processor or a system processor. During operation, control circuitry20may use a display and other output devices in providing a user with visual output and other output. To support communications between device10and external equipment, control circuitry20may communicate using communications circuitry22. Circuitry22may include antennas, radio-frequency transceiver circuitry (wireless transceiver circuitry), and other wireless communications circuitry and/or wired communications circuitry. Circuitry22, which may sometimes be referred to as control circuitry and/or control and communications circuitry, may support bidirectional wireless communications between device10and external equipment over a wireless link (e.g., circuitry22may include radio-frequency transceiver circuitry such as wireless local area network transceiver circuitry configured to support communications over a wireless local area network link, near-field communications transceiver circuitry configured to support communications over a near-field communications link, cellular telephone transceiver circuitry configured to support communications over a cellular telephone link, or transceiver circuitry configured to support communications over any other suitable wired or wireless communications link). 
Wireless communications may, for example, be supported over a Bluetooth® link, a WiFi® link, a wireless link operating at a frequency between 6 GHz and 300 GHz, a 60 GHz link, or other millimeter wave link, cellular telephone link, wireless local area network link, personal area network communications link, or other wireless communications link. Device10may, if desired, include power circuits for transmitting and/or receiving wired and/or wireless power and may include batteries or other energy storage devices. For example, device10may include a coil and rectifier to receive wireless power that is provided to circuitry in device10. Device10may include input-output devices such as devices24. Input-output devices24may be used in gathering user input, in gathering information on the environment surrounding the user, and/or in providing a user with output. Devices24may include one or more displays such as display14. Display14may be an organic light-emitting diode display, a liquid crystal display, an electrophoretic display, an electrowetting display, a plasma display, a microelectromechanical systems display, a display having a pixel array formed from crystalline semiconductor light-emitting diode dies (sometimes referred to as microLEDs), and/or other display. Configurations in which display14is an organic light-emitting diode display are sometimes described herein as an example. Sensors16in input-output devices24may include force sensors (e.g., strain gauges, capacitive force sensors, resistive force sensors, etc.), audio sensors such as microphones, touch and/or proximity sensors such as capacitive sensors (e.g., a two-dimensional capacitive touch sensor integrated into display14, a two-dimensional capacitive touch sensor overlapping display14, and/or a touch sensor that forms a button, trackpad, or other input device not associated with a display), and other sensors. Display14with overlapping touch sensor circuitry that provide touch sensing functionality may sometimes be referred to as a touch screen display. If desired, sensors16may include optical sensors such as optical sensors that emit and detect light, ultrasonic sensors, optical touch sensors, optical proximity sensors, and/or other touch sensors and/or proximity sensors, monochromatic and color ambient light sensors, image sensors, fingerprint sensors, temperature sensors, sensors for measuring three-dimensional non-contact gestures (“air gestures”), pressure sensors, sensors for detecting position, orientation, and/or motion (e.g., accelerometers, magnetic sensors such as compass sensors, gyroscopes, and/or inertial measurement units that contain some or all of these sensors), health sensors, radio-frequency sensors, depth sensors (e.g., structured light sensors and/or depth sensors based on stereo imaging devices that capture three-dimensional images), optical sensors such as self-mixing sensors and light detection and ranging (lidar) sensors that gather time-of-flight measurements, humidity sensors, moisture sensors, gaze tracking sensors, and/or other sensors. In some arrangements, device10may use sensors16and/or other input-output devices to gather user input. 
For example, buttons may be used to gather button press input, touch sensors overlapping displays can be used for gathering user touch screen input, touch pads may be used in gathering touch input, microphones may be used for gathering audio input, accelerometers may be used in monitoring when a finger contacts an input surface and may therefore be used to gather finger press input, etc. If desired, electronic device10may include additional components (see, e.g., other devices18in input-output devices24). The additional components may include haptic output devices, audio output devices such as speakers, light-emitting diodes for status indicators, light sources such as light-emitting diodes that illuminate portions of a housing and/or display structure, other optical output devices, and/or other circuitry for gathering input and/or providing output. Device10may also include a battery or other energy storage device, connector ports for supporting wired communication with ancillary equipment and for receiving wired power, and other circuitry. FIG.2is a perspective view of electronic device10in an illustrative configuration in which device10is a portable electronic device such as a wristwatch, cellular telephone, or tablet computer. As shown inFIG.2, device10may have a display such as display14. Display14may cover some or all of the front face of device10. Touch sensor circuitry such as two-dimensional capacitive touch sensor circuitry (as an example) may be incorporated into display14. Display14may be characterized by an active area such as active area AA and an inactive border region that runs along one or more sides of active area AA (see, e.g., inactive area IA). Active area AA contains an array of pixels P that are configured to display an image for a user. Inactive area IA is free of pixels and does not display image content. If desired, there may be notch-shaped or island-shaped regions without pixels P in active area AA and these areas may contain inactive display borders (e.g., IA may extend around openings in active area AA and/or other pixel-free regions in display14). Configurations in which inactive area IA forms a peripheral border for display14are sometimes described herein as an example. Display14may be mounted in housing12. Housing12may form front and rear housing walls, sidewall structures, and/or internal supporting structures (e.g., a frame, midplate member, etc.) for device10. Glass structures, transparent polymer structures, and/or other transparent structures that cover display14and other portions of device10may provide structural support for device10and may sometimes be referred to as housing structures or display cover layer structures. For example, a transparent housing portion such as a glass or polymer housing structure that covers and protects a pixel array in display14may serve as a display cover layer for the pixel array while also serving as a housing wall on the front face of device10. The portions of housing12on the sidewalls and rear wall of device10may be formed from transparent structures and/or opaque structures. Device10ofFIG.2has a rectangular outline (rectangular periphery) with four corners. Device10may have other shapes, if desired (e.g., circular shapes, other shapes with curved and/or straight edges, etc.). FIG.3is a cross-sectional side view of a touch screen display14(i.e., a display with overlapping touch sensor circuitry). As shown inFIG.3, display14may include a substrate such as substrate302. 
Substrate302may be formed from glass, metal, plastic, ceramic, sapphire, or other suitable substrate materials. As examples, substrate302may be an organic substrate formed from polyimide (PI), polyethylene terephthalate (PET), or polyethylene naphthalate (PEN). The surface of substrate302may optionally be covered with one or more buffer layers (e.g., inorganic buffer layers such as layers of silicon oxide, silicon nitride, etc.). Thin-film transistor (TFT) layers304may be formed over substrate302. The TFT layers304may include thin-film transistor circuitry such as thin-film transistors (e.g., silicon transistors, semiconducting oxide transistors, etc.), thin-film capacitors, associated routing circuitry, and other thin-film structures formed within multiple metal routing layers and dielectric layers. Organic light-emitting diode (OLED) layers306may be formed over the TFT layers304. The OLED layers306may include a cathode layer, an anode layer, and emissive material interposed between the cathode and anode layers. The cathode layer is typically formed above the anode layer. The cathode layer may be biased to a ground power supply voltage ELVSS. Ground power supply voltage ELVSS may be 0 V, −2 V, −4, −6V, less than −8 V, −10V, −12V, or any suitable ground or negative power supply voltage level. If desired, the cathode layer may be formed under the anode layer. Circuitry formed in the TFT layers304and the OLED layers306may be protected by encapsulation layers308. As an example, encapsulation layers308may include a first inorganic encapsulation layer, an organic encapsulation layer formed on the first inorganic encapsulation layer, and a second inorganic encapsulation layer formed on the organic encapsulation layer. Encapsulation layers308formed in this way can help prevent moisture and other potential contaminants from damaging the conductive circuitry that is covered by layers308. This is merely illustrative. Encapsulation layers308may include any number of inorganic and/or organic barrier layers formed over the OLED layers306. One or more buffer layers such as layer310may be formed on encapsulation layers308. Buffer layer310may be formed from silicon oxide, silicon nitride, or other suitable buffering materials. One or more touch layers316that implement the touch sensor functions of touch screen display14may be formed over the display layers. For example, touch (sensor) layers316may include touch sensor circuitry such as horizontal touch sensor electrodes and vertical touch sensor electrodes collectively forming an array of capacitive touch sensor electrodes. A cover glass layer320may be formed over the touch sensor layers316using adhesive318(e.g., optically clear adhesive material). Cover glass320may serve as an outer protective layer for display14. In certain applications, noise from the display circuitry (e.g., the circuitry in layers304and306) can leak or be inadvertently coupled to the touch sensor circuitry (e.g., the circuitry in layers316). For example, power supply noise on the upper cathode layer can sometimes be inadvertently coupled to the touch sensor circuitry. Such display noise can potentially degrade the accuracy and performance of the touch sensor circuitry. Display noise may be particularly problematic at higher refresh rates (e.g., refresh rates of greater than 60 Hz, greater than 80 Hz, greater than 100 Hz, 120 Hz or greater, etc.). 
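For reference, the FIG. 3 stackup described above can be summarized as a plain ordered data structure, bottom to top, keyed by the reference numerals used in the text. This is only a summary aid covering the layers called out so far, not code from the patent.

    TOUCH_SCREEN_STACKUP = [
        (302, "substrate (e.g., glass, metal, plastic, ceramic, sapphire, PI, PET, or PEN)"),
        (304, "thin-film transistor (TFT) layers"),
        (306, "organic light-emitting diode (OLED) layers: anode, emissive material, cathode"),
        (308, "encapsulation layers (inorganic / organic / inorganic)"),
        (310, "buffer layer"),
        (316, "touch sensor layers (capacitive touch sensor electrodes)"),
        (318, "optically clear adhesive"),
        (320, "cover glass"),
    ]

    for numeral, layer in TOUCH_SCREEN_STACKUP:
        print(f"{numeral}: {layer}")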
In accordance with an embodiment, one or more shielding layers such as shielding layer(s)312may be interposed between the display circuitry and the touch sensor circuitry. As shown in the stackup ofFIG.3, shielding layer312may be formed on buffer layer310above the display encapsulation layers308. Buffer layer310may sometimes be considered to be part of shielding layers312. Shielding layer312may be implemented as a conductive mesh structure, a transparent conductive film, a conductive mesh structure overlapped by a transparent conductive film, or other suitable electrical shielding configurations. The presence of shielding layer312reduces the capacitive coupling between the display and touch sensor circuities and thus helps to mitigate the effect of display noise on the touch sensor structures. The shielding layer312can be actively driven using noise canceling signals or passively driven using a direct current (DC) power supply voltage source. Shielding layer312may therefore sometimes be referred to as a noise shielding layer. If desired, one or more layers314may be interposed between shielding layer312and touch sensor layers316. Layers314may include one or more polarizer films, optically clear adhesive films, and other suitable layers in a touch screen display. In general, other layers (not shown) may also be included in the stackup ofFIG.3. FIG.4Aillustrates one embodiment of noise shielding layer312. As shown inFIG.4A, noise shielding layer312may be formed directly on buffer layer310. Buffer layer310may be formed above the display encapsulation layers (seeFIG.3). Buffer layer310may be a dielectric layer configured to provide improved adhesion for noise shielding layer312. Noise shielding layer312may include conductive routing lines330collectively forming a conductive mesh structure. Conductive mesh structure330can be formed from metal such as molybdenum, aluminum, nickel, chromium, copper, titanium, silver, gold, ferrite, a combination of these materials, other metals, or other suitable electromagnetic shielding material. Mesh structure330is therefore sometimes referred to as a metal mesh structure, a metal shielding mesh structure, or a conductive mesh shielding structure. Mesh structure330may be formed by first depositing a layer of metal and then patterning the metal layer by selectively forming openings or slots to create the mesh configuration. FIG.5is a top plan (layout) view showing one illustrative arrangement of conductive mesh shielding structure330. As shown inFIG.5, conductive mesh shielding structure330may be configured as a conductive grid having openings (windows or slots) aligned with respective display subpixels. For example, mesh shielding structure330may include a first set of openings in the grid overlapping with the green (G) display subpixels, a second set of openings in the grid overlapping with the red (R) display subpixels, and a third set of openings in the grid overlapping with the blue (B) display subpixels. A uniform mesh or grid-like structure configured in this way helps maximize noise shielding capabilities while minimizing electrical loading and potential optical degradation due to the shielding layer312. In the example ofFIG.5, the openings associated with the blue subpixels may be larger than the openings associated with the green subpixels, which are larger than the opening associated with the red subpixels. This is merely illustrative. As another example, the openings associated with the different color subpixels may be the same size. 
As another example, the openings associated with the green subpixels may be larger than the openings associated with the blue subpixels, which are larger than the opening associated with the red subpixels. As another example, the openings associated with the red subpixels may be larger than the openings associated with the blue subpixels, which are larger than the opening associated with the green subpixels. As another example, the openings associated with the blue subpixels may be larger than the openings associated with the red subpixels, which are larger than the opening associated with the green subpixels. As another example, the openings associated with the green subpixels may be larger than the openings associated with the red subpixels, which are larger than the opening associated with the blue subpixels. As another example, the openings associated with the red subpixels may be larger than the openings associated with the green subpixels, which are larger than the opening associated with the blue subpixels. Referring back toFIG.4A, a planarization layer such as PLN layer332may be formed over metal shielding mesh330. Planarization layer332may be formed from organic dielectric materials such as polymer. Planarization layer332may be configured to protect the metal shielding mesh330from corrosion. If desired, one or more additional buffer layers may be formed between mesh330and planarization layer332to promote improved adhesion and/or to provide improved protection from external elements or contaminants. The embodiment ofFIG.4Ain which shielding layer312includes metal mesh structure330is merely illustrative.FIG.4Bshows another embodiment in which shielding layer312includes a transparent conductive film such as transparent conductive film331without any mesh structure. As shown inFIG.4B, transparent conductive film331may be formed directly on buffer layer310. Buffer layer310may be formed above the display encapsulation layers (seeFIG.3). Buffer layer310may be a dielectric layer configured to provide improved adhesion for transparent conductive film331. Transparent conductive film331can be formed from indium tin oxide (ITO), indium zinc oxide (IZO), zinc tin oxide (ZTO), fluorine tin oxide (FTO), aluminum zinc oxide (AZO), a combination of these materials, multiple layers of one or more of these materials, and/or other transparent conducting film material. Transparent conductive film331may be formed by depositing a thin layer of transparent conductive material on buffer layer310. Transparent conductive film331can be configured to help maximize noise shielding capabilities and optical transmittance through shielding layer312. Film331can therefore sometimes be referred to as a transparent shielding layer. A planarization layer such as PLN layer332may be formed over transparent conductive film331. Planarization layer332may be formed from organic dielectric materials such as polymer. Planarization layer332may be configured to protect the transparent conductive (shielding) layer331from corrosion. If desired, one or more additional buffer layers may be formed between transparent conductive layer331and planarization layer332to promote improved adhesion and/or to provide improved protection from external elements or contaminants. 
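A short sketch of the FIG. 5 style grid described above: one opening in the conductive mesh per display subpixel, with the opening size chosen per color. The relative ordering below follows the FIG. 5 example (blue larger than green larger than red); the numeric sizes and names are illustrative assumptions, and any of the alternative orderings listed above could be substituted by editing the lookup table.

    OPENING_SIZE_UM = {"B": 30.0, "G": 24.0, "R": 18.0}   # assumed opening widths, in microns

    def mesh_openings(subpixel_grid):
        """subpixel_grid is a 2-D list of color codes ("R", "G", "B"); returns a parallel
        grid of opening sizes so that each opening overlaps its subpixel."""
        return [[OPENING_SIZE_UM[color] for color in row] for row in subpixel_grid]

    if __name__ == "__main__":
        grid = [["R", "G", "B", "G"],
                ["B", "G", "R", "G"]]
        print(mesh_openings(grid))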
The example ofFIG.4Ain which shielding layer312includes metal mesh shielding structure330and the example ofFIG.4Bin which shielding layer312includes transparent conductive film331are merely illustrative.FIG.4Cshows another embodiment in which shielding layers312include both metal mesh shielding structure330and transparent conductive film331. As shown inFIG.4C, transparent conductive film331may be deposited directly on buffer layer310. Metal mesh shielding structure330may then be deposited and patterned over transparent conductive film331. Planarization layer332may then be formed over metal mesh shielding structure330. If desired, one or more additional buffer layers may be formed between metal mesh shielding structure330and planarization layer332. The example ofFIG.4Cin which shielding layer312includes transparent conductive film331formed under metal mesh shielding structure330is merely illustrative.FIG.4Dshows another embodiment in which transparent conductive film331is formed over metal mesh shielding structure330. As shown inFIG.4D, metal mesh shielding structure330may be deposited and patterned directly on buffer layer310. Transparent conductive film331may then be deposited over metal mesh shielding structure330. Planarization layer332may then be formed over transparent conductive film331. If desired, one or more additional buffer layers may be formed between transparent conductive layer331and planarization layer332. Shielding layer(s)312can be actively driven or passively biased.FIG.6is an exploded view showing how shielding layer312can be actively driven based on signals from the display cathode layer307in accordance with some embodiments. As shown inFIG.6, a display cathode layer such as cathode layer307(see, e.g., OLED layers306inFIG.3having a cathode layer) may be coupled to an input of an inverting circuit400via input path404. Inverting circuit400may be formed on a printed circuit separate from the display substrate302ofFIG.3. Inverting circuit400may have an output that is coupled to shielding layer312via output path406. Inverting circuit400may include an operational amplifier402having a first (positive) input coupled to ground, a second (negative) input, and an output coupled to output path406. Inverting circuit400may include a first capacitor C1and a first resistor R1coupled in series between input path404and the negative input of operational amplifier402. Inverting circuit400may further include a second capacitor C2and a second resistor R2coupled in parallel between the output and the negative input of operational amplifier402. The particular implementation of inverting circuit400as shown inFIG.6is merely illustrative. If desired, other types of signal inverting circuits can be used. Shielding layer312may be of the type described in connection withFIGS.4A-4D(as examples). Arranged in this way, inverting circuit400may be configured to receive a display noise signal Snoise from cathode layer307, to invert the display noise signal to generate a corresponding inverted display noise signal Snoise_inv, and to actively drive shielding layer312using the inverted display noise signal Snoise_inv. Noise signal Snoise may represent a cathode noise, a display power supply noise, or other noise associated with the TFT/OLED layers. By actively feeding the inverted display noise to shielding layer312, any noise leaking from the cathode layer307to the touch layers can be effectively cancelled out or reduced. 
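For the inverting stage of FIG. 6, the ideal transfer function follows directly from the two impedances around the operational amplifier: Z1 = R1 + 1/(sC1) at the inverting input and Z2 = R2 in parallel with C2 in the feedback path, giving H(s) = -Z2/Z1, a band-pass response whose mid-band gain is roughly -R2/R1, i.e., the sensed display noise is inverted before being driven onto the shielding layer. The sketch below evaluates this response for an ideal op-amp; the component values are illustrative assumptions, not values given in the text.

    import math

    R1, C1 = 10e3, 100e-9     # assumed: 10 kOhm, 100 nF (series input branch)
    R2, C2 = 10e3, 1e-9       # assumed: 10 kOhm, 1 nF (parallel feedback branch)

    def transfer(freq_hz):
        s = 1j * 2 * math.pi * freq_hz
        z1 = R1 + 1 / (s * C1)            # series R1-C1 at the inverting input
        z2 = R2 / (1 + s * R2 * C2)       # R2 in parallel with C2 in the feedback path
        return -z2 / z1                   # ideal inverting op-amp gain

    if __name__ == "__main__":
        for f in (100, 1e3, 10e3, 100e3, 1e6):
            h = transfer(f)
            phase = math.degrees(math.atan2(h.imag, h.real))
            print(f"{f:>9.0f} Hz  gain = {abs(h):6.3f}  phase = {phase:7.1f} deg")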
Inverting circuit400coupled and operated in this way is therefore sometimes referred to as noise compensation circuitry or noise cancellation circuitry400. FIG.7is a top plan (layout) view of display14showing how noise cancellation circuitry400can be coupled to the display and shielding layers in accordance with some embodiments. As shown inFIG.7, the display and shielding layers may be formed on a substrate450. Substrate450may be formed from glass, plastic, polymer, ceramic, sapphire, metal, or other suitable substrate materials. Substrate450is shown to have a rectangular peripheral outline. This is illustrative. Substrate450can have straight edges and curved corners. Display pixels (e.g., organic light-emitting diode pixels) may be formed in an active area AA delineated by the dotted outline. Conductive shielding mesh structure330may overlap the active area AA (e.g., mesh330may have an array of grid openings aligned with display subpixels in the active area) and may have a conductive border334that completely surrounds active area AA. When viewed from the perspective ofFIG.7, display substrate450of the display panel can be said to have a left peripheral edge, a right peripheral edge, a top peripheral edge joining the top portions of the left and right outer edges, and a bottom peripheral edge joining the bottom portions of the left and right outer edges. Conductive border334may be formed along the left, top, right, and bottom peripheral edges of display substrate450. The display circuitry formed on substrate450may be controlled using components such as a display driver integrated circuit454(sometimes referred to as a timing controller integrated circuit) that is formed on a separate printed circuit board452. Printed circuit board452may be a flexible printed circuit cable that joins the display circuitry to control circuitry20(seeFIG.1). Display driver integrated circuit454may communicate directly with control circuitry20to send control and data signals to column driver circuitry and gate driver circuitry on the display panel. In other words, control circuitry20controls display14through display driver integrated circuit454(i.e., control circuitry20is coupled to display14via timing controller454). Noise cancelling circuitry400can be formed on printed circuit452. Noise cancelling circuitry400(e.g., operational amplifier402and associated components C1, C2, R1, and R2as shown inFIG.6) may be formed as discrete components surface mounted on printed circuit452, may be formed as part of display driver integrated circuit454, or may be formed as part of a separate integrated circuit chip mounted on printed circuit452. In the example ofFIG.7, the input of circuitry400may be coupled to the center point460-1of the left edge of the cathode layer (i.e., along the left peripheral edge of the display panel) via input path404, whereas the output of circuitry400may be coupled to one or more locations along bottom edge334′ of the conductive border334(i.e., along the bottom peripheral edge of the display panel) via output driving path406. The example ofFIG.7in which the input of noise cancellation circuitry400is coupled to the center point460-1along the left edge of the cathode layer is merely illustrative. As another example, the input of circuitry400may be coupled to a center point (see location460-2) along the right edge of the cathode layer. As another example, the input of circuitry400may be coupled to a center point (see location460-3) along the top edge of the cathode layer. 
As another example, the input of circuitry400may be coupled to a center point (see location460-4) along the bottom edge of the cathode layer. As another example, the input of circuitry400may be coupled to a top left corner (see location460-5) of the cathode layer. As another example, the input of circuitry400may be coupled to a top right corner (see location460-6) of the cathode layer. As another example, the input of circuitry400may be coupled to a bottom right corner (see location460-7) of the cathode layer. As yet another example, the input of circuitry400may be coupled to a bottom left corner (see location460-8) of the cathode layer. The embodiment ofFIGS.6and7in which the input of the noise cancelling circuitry400is coupled to one of the display layers (e.g., the cathode layer) is merely illustrative.FIG.8shows another embodiment where the input of noise cancellation circuitry400is coupled to the shielding layer. As shown inFIG.8, the display and shielding layers may be formed on substrate450. Display pixels may be formed in an active area AA delineated by the dotted region. Conductive shielding mesh structure330may overlap the active area AA (e.g., mesh330may have an array of grid openings aligned with display subpixels in the active area) and may have a conductive border334that completely surrounds active area AA. When viewed from the perspective ofFIG.8, display substrate450of the display panel can be said to have a left peripheral edge, a right peripheral edge, a top peripheral edge joining the top portions of the left and right outer edges, and a bottom peripheral edge joining the bottom portions of the left and right outer edges. Conductive border334may be formed along the left, top, right, and bottom peripheral edges of display substrate450. Display driver integrated circuit454(sometimes referred to as a timing controller) may be formed on printed circuit452adjoining the bottom peripheral edge of the display panel. Noise cancelling circuitry400(e.g., noise cancellation circuitry of the type shown inFIG.6) can be formed on printed circuit452. Noise cancelling circuitry400may be formed as discrete components surface mounted on printed circuit452, may be formed as part of display driver integrated circuit454, or may be formed as part of a separate integrated circuit chip mounted on printed circuit452. In the example ofFIG.8, the input of circuitry400may be coupled to the top left corner (see location462-1) of the conductive border334via input path404, whereas the output of circuitry400may be coupled to one or more locations along bottom edge334′ of the conductive border334(i.e., along the bottom peripheral edge of the display panel) via output driving path406. Connected in this way, any potential noise coupled onto shielding mesh structure330can be canceled or compensated by the inverted noise signal that is driven back onto shielding mesh structure330. The example ofFIG.8in which the input of noise cancelling circuitry400is coupled to the top left corner462-1of the shielding structure is merely illustrative. As another example, the input of circuitry400may be coupled to a center point (see location462-2) along the top edge of conductive border334in the shielding structure. As another example, the input of circuitry400may be coupled to a top right corner (see location460-3) of conductive border334in the shielding structure. 
As another example, the input of circuitry400may be coupled to a center point (see location462-4) along the right edge of conductive border334in the shielding structure. As another example, the input of circuitry400may be coupled to a bottom right corner (see location462-5) of conductive border334in the shielding structure. As another example, the input of circuitry400may be coupled to a center point (see location462-6) along one or more locations along bottom edge334′ of the shielding structure. As another example, the input of circuitry400may be coupled to a bottom left corner (see location462-7) of conductive border334in the shielding structure. As yet another example, the input of circuitry400may be coupled to a center point (see location462-8) along the left edge of conductive border334in the shielding structure. The top plan view ofFIG.8may represent shielding layer312of the type shown inFIG.4Athat includes conductive shielding mesh structure330. If desired,FIG.8may also represent shielding layer312of the type shown inFIG.4Cwhere transparent conductive film331is formed below the mesh shielding structure330or shielding layer312of the type shown inFIG.4Dwhere transparent conductive film331is formed above the mesh shielding structure330. Transparent conductive film331is not explicitly labeled inFIG.8to avoid obscuring the present embodiments. The example ofFIG.8in which the shielding layer includes mesh structure330is merely illustrative.FIG.9shows another embodiment in which the shielding layer includes transparent conductive film331but without any mesh structure (see, e.g., shielding layer312of the type shown inFIG.4B). As shown inFIG.9, the display and shielding layers may be formed on substrate450. Display pixels may be formed in an active area AA delineated by the dotted area. Transparent conductive film331may cover and overlap the active area AA and may be electrically coupled to a conductive border334that completely surrounds active area AA. When viewed from the perspective ofFIG.9, display substrate450of the display panel can be said to have a left peripheral edge, a right peripheral edge, a top peripheral edge joining the top portions of the left and right outer edges, and a bottom peripheral edge joining the bottom portions of the left and right outer edges. Conductive border334may be formed along the left, top, right, and bottom peripheral edges of display substrate450. Display driver integrated circuit454(sometimes referred to as a timing controller) may be formed on printed circuit452disposed along the bottom peripheral edge of the display panel. Noise cancelling circuitry400(e.g., noise cancellation circuitry of the type shown inFIG.6) can be formed on printed circuit452. The noise cancelling circuitry may be formed as discrete components surface mounted on printed circuit452, may be formed as part of display driver integrated circuit454, or may be formed as part of a separate integrated circuit chip mounted on printed circuit452. In the example ofFIG.9, the input of circuitry400may be coupled to a center point (see location464) along the left edge of conductive border334(i.e., along the left peripheral edge of the display panel) via input path404, whereas the output of circuitry400may be coupled to one or more locations along bottom edge334′ of the conductive border334(i.e., along the bottom peripheral edge of the display panel) via output driving path406. 
Connected in this way, any potential noise coupled onto transparent conductive film331can be canceled or compensated by the inverted noise signal that is driven back onto transparent conductive film331. The example ofFIG.9in which the input of circuitry400is coupled to the center point464along the left edge of conductive border334in the shielding layer is merely illustrative. If desired, the input of circuitry400may alternatively be coupled to the top left corner of border334in the shielding layer, to a center point along the top edge of border334in the shielding layer, to a top right corner of border334in the shielding layer, to a center point along a right edge of border334in the shielding layer, to a bottom right corner of border334in the shielding layer, to a center point along bottom edge334′ in the shielding layer, or to a bottom left corner of border334in the shielding layer (see, e.g., alternate tapping point locations as shown in the example ofFIG.8). The examples ofFIGS.6-9in which the shielding layer receives an inverted signal from an analog circuit such as circuitry400of the type shown inFIG.6are merely illustrative.FIG.10shows another embodiment in which the shielding layer receives a noise cancellation signal generated by control circuitry20. As shown inFIG.10, one or more processors within control circuitry20(e.g., a system processor or an application processor) may generate noise cancellation signal Scancel using digital signal processing that is optionally dependent on the display content. For example, a given display content may result in a given display noise characteristic, so the system processor may be configured to generate noise compensation signal Scancel that can effectively cancel out or mitigate the given display noise characteristic produced by the given display content. The system processor may be mounted on a printed circuit board separate from printed circuit452(e.g., the system processor is sometimes mounted on a main logic board separate from flex circuit452). Signal Scancel may be routed to one or more locations along bottom edge334′ of the shielding structure via output path490(as an example). The examples ofFIGS.6-9in which the noise cancellation circuitry includes only one inverting circuit are merely illustrative.FIG.11shows another embodiment in which the noise cancelling circuitry includes multiple inverting circuits for injecting inverted display noise signals onto different edges of the shielding structure. As shown inFIG.11, the display and shielding layers may be formed on substrate450. Display pixels may be formed in an active area AA delineated by the dotted region. Conductive shielding mesh structure330may overlap the active area AA (e.g., mesh330may have an array of grid openings aligned with display subpixels in the active area) and may have a conductive border334that completely surrounds active area AA. When viewed from the perspective ofFIG.11, display substrate450of the display panel can be said to have a left peripheral edge, a right peripheral edge, a top peripheral edge joining the top portions of the left and right outer edges, and a bottom peripheral edge joining the bottom portions of the left and right outer edges. Conductive border334may be formed along the left, top, right, and bottom peripheral edges of display substrate450. Display driver integrated circuit454(sometimes referred to as a timing controller) may be formed on printed circuit452disposed along the bottom peripheral edge of the display panel. 
In the example ofFIG.11, the noise cancelling circuitry may include a first inverting circuit400-1and a second inverting circuit400-2. Inverting circuits400-1and400-2may each be implemented using an inverting circuit configuration of the type shown inFIG.6or other types of signal inverting circuit. The noise cancelling circuitry can be formed on printed circuit452as discrete components surface mounted on printed circuit452, as part of display driver integrated circuit454, or as part of a separate integrated circuit chip mounted on printed circuit452. First inverting circuit400-1may have an input coupled to a center point (see location470) along a left edge of conductive border334via input path404-1and may have an output coupled to a center point along bottom edge334′ of the shielding structure. Second inverting circuit400-2may have an input coupled to a center point (see location472) along a right edge of conductive border334via input path404-2and may have an output that is coupled to a top left corner (see location474) of the shielding structure via first output path406-2aand that is coupled to a top right corner (see location476) of the shielding structure via second output path406-2b. Using a double-ended or head-to-head driving scheme in this way can help reduce signal settling time and further enhance noise cancellation capabilities. The example ofFIG.11in which the inputs of inverting circuits400-1and400-2are coupled to the center points of left and right edges of the shielding structure is merely illustrative. If desired, the inputs of inverting circuits400-1and400-2can be coupled to any other location(s) along conductive border334. Similarly, the outputs of inverting circuits400-1and400-2can be coupled to any other location(s) along conductive border334. The top plan view ofFIG.11may represent shielding layer312of the type shown inFIG.4Athat includes conductive shielding mesh structure330. If desired,FIG.11may also represent shielding layer312of the type shown inFIG.4Cwhere transparent conductive film331is formed below the mesh shielding structure330or shielding layer312of the type shown inFIG.4Dwhere transparent conductive film331is formed above the mesh shielding structure330. Transparent conductive film331is not explicitly labeled inFIG.11to avoid obscuring the present embodiments. The example ofFIG.11in which the shielding layer includes mesh structure330is merely illustrative.FIG.12shows another embodiment in which the shielding layer includes transparent conductive film331but without any mesh structure (see, e.g., shielding layer312of the type shown inFIG.4B). As shown inFIG.12, the display and shielding layers may be formed on substrate450. Display pixels may be formed in an active area AA delineated by the dotted area. Transparent conductive film331may cover and overlap the active area AA and may be electrically coupled to a conductive border334that completely surrounds active area AA. When viewed from the perspective ofFIG.12, display substrate450of the display panel can be said to have a left peripheral edge, a right peripheral edge, a top peripheral edge joining the top portions of the left and right outer edges, and a bottom peripheral edge joining the bottom portions of the left and right outer edges. Conductive border334may be formed along the left, top, right, and bottom peripheral edges of display substrate450. 
Display driver integrated circuit454(sometimes referred to as a timing controller) may be formed on printed circuit452disposed along the bottom peripheral edge of the display panel. The noise cancelling circuitry may include a first inverting circuit400-1and a second inverting circuit400-2. The noise cancelling circuitry can be formed on printed circuit452as discrete components surface mounted on printed circuit452, as part of display driver integrated circuit454, or as part of a separate integrated circuit chip mounted on printed circuit452. First inverting circuit400-1may have an input coupled to a center point (see location470) along a left edge of conductive border334via input path404-1and may have an output coupled to one or more locations along bottom edge334′ of the shielding structure. Second inverting circuit400-2may have an input coupled to a center point (see location472) along a right edge of conductive border334via input path404-2and may have an output that is coupled to a top left corner (see location474) of the shielding structure via first output path406-2aand that is coupled to a top right corner (see location476) of the shielding structure via second output path406-2b. Using a double-ended or head-to-head driving scheme in this way can help reduce signal settling time and further enhance noise cancellation capabilities. The example ofFIG.12in which the inputs of inverting circuits400-1and400-2are coupled to the center points of the left and right edges of the shielding structure is merely illustrative. If desired, the inputs of inverting circuits400-1and400-2can be coupled to any other location(s) along conductive border334. Similarly, the outputs of inverting circuits400-1and400-2can be coupled to any other location(s) along conductive border334. The examples ofFIGS.11and12in which the shielding layer receives inverted signals from two analog circuits such as inverting circuits400-1and400-2is merely illustrative.FIG.13shows another embodiment in which the shielding layer receives noise cancellation signals generated by control circuitry20. As shown inFIG.13, one or more processors within control circuitry20(e.g., a system processor or an application processor) may generate noise cancellation signals Scancel_1and Scancel_2using digital signal processing that is optionally dependent on the display content. For example, a given display content may result in a given display noise characteristic, so the system processor may be configured to generate noise compensation signals Scancel_1and Scancel_2that can effectively cancel out or mitigate the given display noise characteristic produced by the given display content. Signals Scancel_1and Scancel_2may be identical or may be different. The system processor may be mounted on a printed circuit board separate from printed circuit452(e.g., the system processor is sometimes mounted on a main logic board separate from flex circuit452). Signal Scancel_1may be routed to a center point470along the left edge of the shielding structure via output path490-1, whereas signal Scancel_2may be routed to a center point472along the right edge of the shielding structure via output path490-2. If desired, signals Scancel_1and Scancel_2can be routed to any other location(s) along conductive border334. 
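To make the content-dependent compensation ofFIG.10andFIG.13more concrete, the sketch below shows one way such signals could be produced in software. It is only an illustration under stated assumptions: the coupling model (noise amplitude proportional to average frame luminance), the 60 kHz waveform, and the 0.01 coupling factor are hypothetical and are not taken from this description, which leaves the digital signal processing unspecified.

```python
# Minimal sketch (assumptions noted above): derive two inverted compensation
# signals from display content, mirroring the Scancel_1/Scancel_2 idea of FIG. 13.
import numpy as np

def estimate_display_noise(frame, t):
    """Assumed model: coupled noise amplitude scales with mean frame luminance."""
    amplitude = 0.01 * frame.mean() / 255.0          # hypothetical coupling factor
    return amplitude * np.sin(2 * np.pi * 60e3 * t)  # hypothetical 60 kHz emission

def make_cancellation_signals(frame, t):
    noise = estimate_display_noise(frame, t)
    s_cancel_1 = -noise          # driven onto one edge tap of the shielding structure
    s_cancel_2 = -noise          # driven onto the opposite edge tap
    return s_cancel_1, s_cancel_2

t = np.linspace(0, 1e-4, 1000)                        # 100 us window
frame = np.full((1080, 1920), 200, dtype=np.uint8)    # bright test frame
s1, s2 = make_cancellation_signals(frame, t)
residual = estimate_display_noise(frame, t) + s1      # ideal cancellation -> ~0
print(float(np.max(np.abs(residual))))
```

In practice the processor could equally derive the waveform from a lookup table or a measured noise profile; the only property carried over from the description is that the compensation tracks the display content and is driven back onto the shielding structure with inverted polarity.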
The embodiments ofFIGS.6-13showing how the shielding layer is actively driven using noise canceling signals (e.g., inverted display noise signals or digitally generated noise compensation signals) are merely illustrative.FIG.14illustrates another embodiment in which the shielding layer is passively biased using a power supply voltage. As shown inFIG.14, mesh structure330may be coupled to a ground line500on printed circuit452via path502. Path502may be coupled to one or more locations along bottom edge334′ of the conductive border. A ground power supply voltage VSS may be provided on ground line500. Biasing mesh structure330to the ground voltage can help reduce the amount of noise coupling from the display circuitry to the touch circuitry. This example in which the shielding structure is biased to ground voltage VSS is merely illustrative. In other embodiments, the shielding structure may be biased to a positive power supply voltage VDD, to a reference voltage, to an initialization voltage, to a reset voltage, to a bias voltage, or other static or time-varying voltages. The top plan view ofFIG.14may represent shielding layer312of the type shown inFIG.4Athat includes conductive shielding mesh structure330. If desired,FIG.14may also represent shielding layer312of the type shown inFIG.4Cwhere transparent conductive film331is formed below the mesh shielding structure330or shielding layer312of the type shown inFIG.4Dwhere transparent conductive film331is formed above the mesh shielding structure330. Transparent conductive film331is not explicitly labeled inFIG.14to avoid obscuring the present embodiments. The example ofFIG.14in which the shielding layer includes mesh structure330that is grounded is merely illustrative.FIG.15shows another embodiment in which the shielding layer includes transparent conductive film331but without any mesh structure (see, e.g., shielding layer312of the type shown inFIG.4B). As shown inFIG.15, transparent conductive film331(which is electrically coupled to conductive border334) may be coupled to ground line500on printed circuit452via path502. Path502may be coupled to one or more locations along bottom edge334′ of conductive border334. A ground power supply voltage VSS may be provided on ground line500. Biasing transparent conductive film331to the ground voltage can help reduce the amount of noise coupling from the display circuitry to the touch circuitry. This example in which the shielding film331is biased to ground voltage VSS is merely illustrative. In other embodiments, the shielding film may be biased to a positive power supply voltage VDD, to a reference voltage, to an initialization voltage, to a reset voltage, to a bias voltage, or other static voltages. The example ofFIG.7in which the inverting circuit has an input that only taps into the cathode layer or the example ofFIG.8in which the inverting circuit has an input that only taps into the conductive shielding structure is merely illustrative.FIG.16illustrates another suitable embodiment where noise cancelling inverting circuit400′ has an input that taps into both the cathode layer and the conductive shielding structure. As shown inFIG.16, inverting circuit400′ includes an operational amplifier402having a first (positive) input coupled to ground, a second (negative) input, and an output coupled to output path406. Output path406may be coupled to one or more locations along the bottom edge334′ of conductive border334. 
Inverting circuit400′ may include capacitor C2and resistor R2coupled in parallel between the output and the negative input of operational amplifier402. Inverting circuit400′ may further include resistor R1coupled to the positive input of operational amplifier402, capacitor C1ahaving a first terminal coupled to resistor R1and a second terminal coupled to an edge of the display cathode layer via first input path404-1, and capacitor C1bhaving a first terminal coupled to resistor R1and a second terminal coupled to a corner of the conductive mesh330via second input path404-2. The example ofFIG.16in which path404-1is coupled to a center point460along the left edge of the cathode layer and path404-2is coupled to a top left corner461of the conductive shielding mesh is merely illustrative. In general, path404-1may be coupled to any one or more locations along the border of the cathode layer, whereas path404-2may be coupled to any one or more locations along conductive border334. The example ofFIG.16may also represent a shielding layer having a transparent conductive film formed under mesh330(see, e.g., shielding layer312of the type shown inFIG.4C) or a shielding layer having a transparent conductive film formed on mesh330(see, e.g., shielding layer312of the type shown inFIG.4D). If desired, the shielding layer inFIG.16need not include any mesh structure and may only include a transparent conductive film (see, e.g., shielding layer312of the type shown inFIG.4B). The example ofFIG.8in which the inverting circuit has an input that taps into only one location along the conductive shielding structure is merely illustrative.FIG.17illustrates another suitable embodiment where noise cancelling inverting circuit400′ has an input that taps into multiple locations along the conductive shielding structure. As shown inFIG.17, inverting circuit400′ includes an operational amplifier402having a first (positive) input coupled to ground, a second (negative) input, and an output coupled to output path406. Output path406may be coupled to one or more locations along the bottom edge334′ of conductive border334. Inverting circuit400′ may include capacitor C2and resistor R2coupled in parallel between the output and the negative input of operational amplifier402. Inverting circuit400′ may further include resistor R1coupled to the positive input of operational amplifier402, capacitor C1ahaving a first terminal coupled to resistor R1and a second terminal coupled to a first corner461-1of conductive mesh330via first input path404-1, and capacitor C1bhaving a first terminal coupled to resistor R1and a second terminal coupled to a second corner461-2of the conductive mesh330via second input path404-2. By coupling inverting circuit400′ to both sides of the conductive shielding structure, the risk of overcompensating one side of the display relative to the other is reduced. The example ofFIG.17in which path404-1is coupled to the top left corner461-1of the conductive shielding mesh and path404-2is coupled to a top right corner461-2of the conductive shielding mesh is merely illustrative. In general, path404-1may be coupled to any one or more locations along conductive border334, whereas path404-2may be coupled to any one or more locations along conductive border334. 
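The behavior of an inverting stage of this general type can be sketched numerically. The following is a simplified, illustrative model only: it assumes an ideal operational amplifier, lumps the C1a/C1b taps into a single input capacitance, uses the textbook inverting-amplifier relation H = −Zfb/Zin, and uses hypothetical component values; it is not a reproduction of the exact circuit ofFIGS.16and17.

```python
# Illustrative only: ideal inverting stage with capacitive input (C1) and an
# R2 || C2 feedback network, evaluated across frequency. Component values are
# hypothetical; the real circuit of FIGS. 16-17 may differ in detail.
import numpy as np

C1 = 10e-12   # combined input coupling capacitance (e.g. C1a + C1b), assumed
C2 = 10e-12   # feedback capacitor, assumed
R2 = 1e6      # feedback resistor, assumed

def gain(freq_hz):
    s = 1j * 2 * np.pi * freq_hz
    z_in = 1.0 / (s * C1)                  # input branch impedance
    z_fb = R2 / (1.0 + s * R2 * C2)        # R2 in parallel with C2
    return -z_fb / z_in                    # ideal inverting-amplifier relation

for f in (1e3, 100e3, 1e6):
    h = gain(f)
    print(f"{f:>9.0f} Hz  |H| = {abs(h):.3f}  phase = {np.degrees(np.angle(h)):6.1f} deg")
# Above the feedback pole the magnitude settles near C1/C2 with inverted phase,
# i.e. the sensed display noise is reproduced, inverted, and driven back onto
# the shielding structure.
```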
The example ofFIG.17may also represent a shielding layer having a transparent conductive film formed under mesh330(see, e.g., shielding layer312of the type shown inFIG.4C) or a shielding layer having a transparent conductive film formed on mesh330(see, e.g., shielding layer312of the type shown inFIG.4D). If desired, the shielding layer inFIG.17need not include any mesh structure and may only include a transparent conductive film (see, e.g., shielding layer312of the type shown inFIG.4B). The foregoing is merely illustrative and various modifications can be made to the described embodiments. The foregoing embodiments may be implemented individually or in any combination. | 51,786 |
11861111 | DETAILED DESCRIPTION OF THE EMBODIMENTS The advantages and features of the present invention, and the method for achieving the advantages and features will become apparent with reference to embodiments described below in detail in conjunction with the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below and can be implemented in a variety of different forms, and these embodiments allow the disclosure of the present invention to be complete and are merely provided to fully inform those of ordinary skill in the art to which the present invention belongs of the scope of the invention. Further, the invention is merely defined by the scope of the claims. The shapes, sizes, proportions, angles, numbers, etc. disclosed in the drawings for describing the embodiments of the present invention are illustrative, and thus the present invention is not limited to the illustrated elements. The same reference symbol refers to the same element throughout the specification. In addition, in describing the present invention, when it is determined that a detailed description of a related known technology can unnecessarily obscure the subject matter of the present invention, such a detailed description will be omitted. When “equipped with”, “including”, “having”, “consisting”, etc. are used in this specification, other parts can also be present, unless “only” is used. When an element is expressed in the singular, the element can be interpreted as being plural unless otherwise explicitly stated. In interpreting an element in the embodiments of the present invention, it is to be interpreted as including an error range even when there is no separate explicit description thereof. In addition, in describing elements of the present invention, terms such as first, second, A, B, (a), (b), etc. can be used. These terms are only for distinguishing the elements from other elements, and the nature, turn, order, number of the elements, etc. are not limited by the terms. When an element is described as being “linked”, “coupled”, or “connected” to another element, the element can be directly linked or connected to the other element. However, it should be understood that another element can be “interposed” between the respective elements, or each element can be “linked”, “coupled”, or “connected” through another element. In the case of a description of a positional relationship, for example, when a positional relationship between two parts is described using “on”, “above”, “below”, “next to”, etc., one or more other parts can be located between the two parts, unless “immediately” or “directly” is used. Elements in the embodiments of the present invention are not limited by these terms. These terms are merely used to distinguish one element from another element. Accordingly, a first element mentioned below can be a second element within the spirit of the present invention. Features (configurations) in the embodiments of the present invention can be partially or wholly combined or associated with each other, or separated from each other, and various types of interlocking and driving are technically possible. The respective embodiments can be implemented independently of each other, or can be implemented together in an interrelated relationship. Hereinafter, the embodiments of the present invention will be described in detail with reference to the accompanying drawings. 
All the components of each display device according to all embodiments of the present invention are operatively coupled and configured. FIG.1is a configuration block diagram illustrating a touch sensing display device according to an embodiment of the present invention. As illustrated inFIG.1, the touch sensing display device according to the embodiment of the present invention can include a display panel DP, a gate driving circuit110, a data driving circuit120, a touch driving circuit SRIC130, a timing controller T-CON140, and a touch controller150. The display panel DP displays an image based on a scan signal SCAN delivered from the gate driving circuit110through a gate line GL and a data signal Vdata delivered from the data driving circuit120through a data line DL. The display panel DP includes a plurality of subpixels SP defined by a plurality of data lines DL and a plurality of gate lines GL. When the display panel DP is a liquid crystal display panel, one subpixel SP can include a thin film transistor (TFT) for supplying a data voltage Vdata of the data line DL to a pixel electrode according to a scan signal of the gate line GL, and a storage capacitor Cst charging the data voltage Vdata and maintaining the data voltage Vdata for one frame. When the display panel DP is an organic light emitting display panel, one subpixel SP can include an organic light emitting diode (OLED), a switching transistor for supplying a data voltage of the data line DL, a driving transistor for controlling current flowing through the OLED according to a data voltage supplied by the switching transistor, and a capacitor Cst charging the data voltage Vdata and maintaining the data voltage Vdata for one frame. Meanwhile, the display panel DP can include a touch panel embedded in a pixel array using an in-cell self-touch scheme. The touch panel includes a touch sensor (electrode). A detailed description of the touch panel will be described later. The timing controller140controls the gate driving circuit110and the data driving circuit120. The timing controller140is supplied with image data Vdata and timing signals such as a vertical synchronization signal Vsync, a horizontal synchronization signal Hsync, a data enable signal DE, and a main clock MCLK from a host system (not illustrated). The timing controller140controls the gate driving circuit110based on scan timing control signals, such as a gate start pulse GSP, a gate shift clock, and a gate output enable signal GOE. In addition, the timing controller140controls the data driving circuit120based on data timing control signals such as a source sampling clock SSC, a polarity control signal POL, and a source output enable signal SOE. The gate driving circuit110sequentially drives a plurality of gate lines GL by sequentially supplying a scan signal SCAN to the display panel DP through the plurality of gate lines GL. Here, the gate driving circuit110is also referred to as a scan driving circuit or a gate driving integrated circuit GDIC. The gate driving circuit110sequentially supplies a scan signal SCAN of an on voltage or an off voltage to the plurality of gate lines GL under control of the timing controller140. To this end, the gate driving circuit110can include a shift register, a level shifter, etc. The gate driving circuit110can be located only on one side (for example, left side or right side) of the display panel DP, and can be located on both sides of the display panel DP according to a driving scheme, a design scheme, etc. depending on the case. 
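As a rough illustration of the line-sequential driving just described, the sketch below steps through the gate lines one at a time while the data lines carry the corresponding row of data voltages. The panel size and the 0-5 V data swing are hypothetical values chosen only for the example.

```python
# Minimal sketch of line-sequential display driving: each loop pass stands for
# one gate line GL being enabled by the scan signal SCAN while the data lines
# DL carry that row's data voltages, which the storage capacitors Cst then
# hold for the rest of the frame. All numbers are hypothetical.
import numpy as np

N_GATE_LINES, N_DATA_LINES = 8, 6
frame = np.random.default_rng(0).integers(0, 256, size=(N_GATE_LINES, N_DATA_LINES))

def drive_one_frame(frame, v_max=5.0):
    held_voltages = np.zeros(frame.shape)          # what each Cst ends up holding
    for row in range(N_GATE_LINES):                # scan signal enables this gate line
        held_voltages[row, :] = frame[row] / 255.0 * v_max   # data voltages latched
    return held_voltages

print(drive_one_frame(frame).round(2))
```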
The data driving circuit120receives image data Vdata from the timing controller140and supplies an analog image data voltage corresponding to the image data to the plurality of data lines DL, thereby driving the plurality of data lines DL. Here, the data driving circuit120is also referred to as a source driving circuit or a source driving integrated circuit SDIC. When a specific gate line GL is enabled by the gate driving circuit110, the data driving circuit120converts the image data Vdata received from the timing controller140into an analog image data voltage and supplies the analog image data voltage to the plurality of data lines DL. The data driving circuit120can be located only on one side (for example, upper side or lower side) of the display panel DP, and can be located on both sides of the display panel DP according to a driving scheme, a design scheme, etc. The data driving circuit120can include a shift register, a latch circuit, a digital-to-analog converter DAC, an output buffer, etc. Here, the digital-to-analog converter DAC is configured to convert the image data Vdata received from the timing controller140into an analog image data voltage to be supplied to the data line DL. The touch driving circuit130senses the presence or absence of a touch and a touched position on the display panel DP. The touch driving circuit130includes a driving circuit that generates a driving voltage for driving the touch sensor, and a sensing circuit that senses the touch sensor and generates data for detecting the presence or absence of a touch, coordinate information, etc. The driving circuit and the sensing circuit of the touch driving circuit130can take the form of one integrated circuit (IC) or can be divided and separated by function. The touch driving circuit130can be formed on an external substrate connected to the display panel DP. The touch driving circuit130is connected to the display panel DP through a plurality of sensing lines SL. The touch driving circuit130can sense the presence or absence and position of a touch based on a difference in capacitance between touch sensors formed on the display panel DP. For example, a deviation in capacitance occurs between a position touched by a finger of a user and a non-contact position, and the touch driving circuit130senses the presence or absence and position of a touch using a scheme of detecting such a deviation in capacitance. The touch driving circuit130generates a touch sensing signal for the presence or absence and position of a touch and transmits the touch sensing signal to the touch controller150. The touch controller150controls the touch driving circuit130. The touch controller150receives control synchronization signals Vsync and Tsync from the timing controller140and controls the touch driving circuit130based on the received control synchronization signals Vsync and Tsync. The touch controller150transmits and receives a touch sensing signal based on an interface IF defined with the touch driving circuit130. FIG.2is a diagram illustrating the touch driving circuit130and a touch panel TSP for self-capacitance-based touch sensing in the touch sensing display device according to the embodiments of the present invention. The touch sensing display device according to the embodiments of the present invention can sense a touch input by a finger and/or a pen through a capacitance-based touch sensing technique. To this end, as illustrated inFIG.2, a plurality of touch electrodes TE are disposed on the touch panel TSP. 
A touch driving signal can be applied to each of the plurality of touch electrodes TE and a touch sensing signal can be sensed therein. Each of the plurality of touch electrodes TE can be electrically connected to the touch driving circuit130through one signal line SL. A shape of one touch electrode TE illustrated inFIG.2is merely an example and can be designed in various ways. A size of a region in which one touch electrode TE is formed can be larger than a size of an area in which one subpixel is formed. For example, a size of a region in which one touch electrode TE is formed can correspond to a size of several to tens of subpixel areas. Meanwhile, as illustrated inFIG.2, the touch driving circuit130includes one or more first circuits ROIC for supplying a touch driving signal to the touch panel TSP and detecting (receiving) a touch sensing signal from the touch panel TSP, a second circuit TCR for detecting the presence or absence and/or a position of a touch input using a result of detecting the touch sensing signal of the first circuit ROIC, etc. The one or more first circuits ROIC included in the touch driving circuit130can be implemented by being integrated into one or more unified integrated circuits (touch driving circuit SRIC) together with one or more source driver integrated circuits SDIC implementing the data driving circuit120. FIG.3is an exemplary diagram illustrating timing of display driving periods DP and touch driving periods TP of the touch sensing display device according to the embodiments of the present invention, andFIG.4is an exemplary diagram illustrating 16 display driving periods DP1to DP16and 16 touch driving periods TP1to TP16obtained by time-dividing one frame time in the touch sensing display device according to the embodiments of the present invention. Referring toFIG.3, the touch sensing display device according to the embodiments of the present invention performs display driving for image display during a predetermined display driving period DP, and performs touch driving for sensing touch input by a finger and/or a pen during a predetermined touch driving period TP. The display driving period DP and the touch driving period TP are temporally separated, and the display driving period DP and the touch driving period TP can be alternated. As described above, when the display driving period DP and the touch driving period TP are temporally separated while being alternated, the touch driving period TP can be a blank period in which display driving is not performed. The touch sensing display device can generate a touch synchronization signal Tsync swinging to a high level and a low level, thereby identifying or controlling the display driving period DP and the touch driving period TP. For example, a high level section (or low level section) of the touch synchronization signal Tsync can correspond to the display driving period DP, and a low level section (or high level section) of the touch synchronization signal Tsync can correspond to the touch driving period TP. Meanwhile, in relation to a method of allocating the display driving period DP and the touch driving period TP within one frame period, as an example, one frame period is time-divided into one display driving period DP and one touch driving period TP, so that display driving can be performed during the one display driving period DP, and touch driving for sensing touch input by a finger and/or a pen can be performed during the one touch driving period TP corresponding to a blank period. 
As another example, one frame period is time-divided into two or more display driving periods DP and two or more touch driving periods TP. Display driving for one frame can be performed during two or more display driving periods DP within one frame. During two or more touch driving periods (TP) corresponding to a blank period within one frame, touch driving for sensing touch input by a finger and/or pen in the entire screen area can be performed once or twice or more, or touch driving for sensing touch input by a finger and/or a pen in a partial area of the screen can be performed. Meanwhile, when one frame period is time-divided into two or more display driving periods DP and two or more touch driving periods TP, each of two or more blank periods corresponding to the two or more touch driving periods TP within one frame period is referred to as a “long horizontal blank (LHB)”. Here, touch driving performed during two or more LHBs within one frame is referred to as “LHB driving”. Referring toFIG.4, one frame period can be time-divided into 16 display driving periods DP1to DP16and 16 touch driving periods TP1to TP16. In this case, the 16 touch driving periods TP1to TP16correspond to 16 LHBs (LHB #1to LHB #16). FIG.5is an explanatory diagram for an input/output signal of the touch driving circuit SRIC130, the timing controller T-CON140, and the touch controller150for improving touch noise characteristics when driving a moving image in the touch sensing display device according to an embodiment of the present invention.FIG.6is an explanatory diagram of a lookup table of the touch controller150. As illustrated inFIG.5, the timing controller T-CON140outputs the vertical synchronization signal Vsync, the touch synchronization signal Tsync, and average data to the touch controller150. As described with reference toFIG.4, the touch synchronization signal Tsync can be time-divided into 16 display driving periods DP1to DP16and 16 touch driving periods TP1to TP16. The display driving period DP and the touch driving period TP alternate. An average data value can be an average value of data supplied to the data driving circuit120in each of the display driving periods DP1to DP16before each of the touch driving periods TP1to TP16. But the present disclosure is not limited thereto. For example, an average data value can be an average value of data supplied to the data driving circuit120in at least one of the display driving periods DP1to DP16before each of the touch driving periods TP1to TP16 The touch controller150stores charge remover capacitance (CRC) compensation values, charge remover voltage (CRV) compensation values, and gain compensation values according to average data values input from the timing controller140in a lookup table as illustrated inFIG.6. When driving the moving image, and the touch electrode TE is not touched by a finger and/or a pen, a touch output voltage is relatively low when low-grayscale image data is displayed and relatively high when high-grayscale image data is displayed. Accordingly, the charge remover capacitance (CRC) compensation values, the charge remover voltage (CRV) compensation values, and the gain compensation values stored in the lookup table are set so that a relatively high touch output voltage is output when the average data value is a low grayscale, and a relatively low touch output voltage is output when the average data value is a high grayscale. 
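A minimal sketch of a lookup table of the kind shown inFIG.6is given below. The grayscale bin boundaries and the compensation codes are hypothetical; the only property taken from the description is the trend that larger compensation codes are associated with higher average data values so that the touch output voltage stays level across image content.

```python
# Minimal sketch of a grayscale-binned lookup table in the spirit of FIG. 6.
# Bin boundaries and code values are hypothetical; the only property carried
# over from the description is that the CRC / CRV / gain compensation values
# grow as the average data value (grayscale) grows.
AVG_DATA_LUT = [
    # (upper grayscale bound, CRC code, CRV code, gain code)
    (63,  1, 1, 1),
    (127, 2, 2, 2),
    (191, 3, 3, 3),
    (255, 4, 4, 4),
]

def lookup_compensation(avg_data_value):
    for upper, crc, crv, gain in AVG_DATA_LUT:
        if avg_data_value <= upper:
            return {"CRC": crc, "CRV": crv, "gain": gain}
    raise ValueError("average data value out of range")

print(lookup_compensation(40))    # low grayscale  -> small compensation codes
print(lookup_compensation(230))   # high grayscale -> large compensation codes
```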
The touch controller150receives an average data value from the timing controller140and reads a charge remover capacitance (CRC) compensation value, a charge remover voltage (CRV) compensation value, and gain compensation values according to the received average data value from the lookup table. Then, the touch controller150outputs the read charge remover voltage (CRV) compensation value to the power controller160. The power controller160supplies a charge remover voltage VCRcorresponding to the charge remover voltage (CRV) compensation value to the touch driving circuit SRIC130. The touch controller150supplies the read charge remover capacitance (CRC) compensation value and the gain compensation value to the touch driving circuit SRIC130. The touch driving circuit SRIC130senses a touch from the touch electrode according to the charge remover voltage VCR, the charge remover capacitance (CRC) compensation value and the gain compensation value, and outputs a touch output voltage. FIG.7is a circuit configuration diagram of the touch driving circuit SRIC in the touch sensing display device according to an embodiment of the present invention. Referring toFIG.7, the touch driving circuit SRIC integrates a touch sensing signal through an amplification circuit to improve touch sensitivity of the display panel and remove noise for touch sensing. In order to prevent a sensing signal of the amplification circuit from being saturated, a charge remover circuit that removes a charging voltage of the amplification circuit is used with the touch driving circuit SRIC. The touch driving circuit SRIC can include an OP amplifier OP, a charge remover voltage input terminal VCRto which the charge remover voltage VCRis input, a charge remover capacitor CCRconnected between the charge remover voltage input terminal VCRand a first input terminal of the OP amplifier OP, and a feedback capacitor CFBconnected between the first input terminal and an output terminal of the OP amplifier OP to adjust the amplification gain of the OP amplifier. The OP amplifier OP receives a touch signal from the touch electrode TE through the first input terminal and a reference voltage (ΔVLFD) through a second input terminal, amplifies the touch signal, and outputs a touch detection (sensing) voltage (ΔVout). Here, the charge remover capacitor CCRcan be a variable capacitor. The charge remover capacitor CCRvaries a capacitance according to a CRC compensation value output from the touch controller150. The feedback capacitor CFBcan be a variable capacitor. The feedback capacitor CFBvaries a capacitance according to a gain compensation value output from the touch controller150. The touch driving circuit SRIC ofFIG.7illustrates the case in which the display panel DP is a liquid crystal display panel. Since the touch driving circuit SRIC can be applied to an organic light emitting display panel, etc., the present invention is not limited thereto. In the touch sensing display device according to the present invention configured as described above, a method of setting (storing) the charge remover capacitance (CRC) compensation value, the charge remover voltage (CRV) compensation value, and the gain compensation values according to the average data value in the lookup table will be described as follows. First, a touch detection output voltage ΔVout_NON_TOUCHof the touch driving circuit SRIC when no touch is generated on the touch electrode TE is expressed by [Equation 1]. 
ΔVOUT_NON_TOUCH = ΔVLFD [Equation 1]

Here, ΔVLFD is a reference voltage supplied to the second input terminal of the OP amplifier OP of the touch driving circuit SRIC. In addition, when a finger touches the touch electrode TE, a touch detection output voltage condition of the touch driving circuit SRIC is as illustrated in [Equation 2].

CFinger × ΔVLFD = CFB × (ΔVOUT_TOUCH − ΔVLFD) + (VCR − ΔVLFD) × CCR [Equation 2]

Here, CFinger denotes a capacitance when a finger is in contact with the touch electrode, ΔVOUT_TOUCH denotes an output voltage of the touch driving circuit SRIC when the finger is touched, CCR denotes a capacitance value of the charge remover capacitor CCR of the touch driving circuit SRIC, CFB denotes a capacitance value of the feedback capacitor CFB of the touch driving circuit SRIC, and VCR denotes the charge remover voltage VCR supplied from the power controller160to the touch driving circuit SRIC. [Equation 2] is arranged to [Equation 3] in terms of the output voltage ΔVOUT_TOUCH of the touch driving circuit SRIC at the time of finger touching.

ΔVOUT_TOUCH = ΔVLFD + (CFinger/CFB) × ΔVLFD − (CCR/CFB) × VCR [Equation 3]

As can be seen from [Equation 3], the output voltage ΔVOUT_TOUCH of the touch driving circuit SRIC is inversely proportional to the capacitance value of the charge remover capacitor CCR of the touch driving circuit SRIC, the capacitance value of the feedback capacitor CFB of the touch driving circuit SRIC, and the charge remover voltage VCR supplied to the touch driving circuit SRIC from the power controller160. Accordingly, in the lookup table of the touch controller150, the charge remover capacitance (CRC) compensation value, the charge remover voltage (CRV) compensation value, and the gain compensation values are set to be relatively low when the average data value is a low grayscale, and the charge remover capacitance (CRC) compensation value, the charge remover voltage (CRV) compensation value, and the gain compensation values are set to be relatively high when the average data value is a high grayscale. A method of driving the touch sensing display device according to one or more embodiments of the present invention configured as described above will be described as follows. First, the touch controller150stores the charge remover capacitance (CRC) compensation value, the charge remover voltage (CRV) compensation value, and the gain compensation value according to the input average data value in the lookup table. The charge remover capacitance (CRC) compensation value, the charge remover voltage (CRV) compensation value, and the gain compensation values are set to be relatively low when the average data value is a low grayscale, and are set to be relatively high when the average data value is a high grayscale. The timing controller140is supplied with the image data Vdata and the timing signals such as the vertical synchronization signal Vsync, the horizontal synchronization signal Hsync, the data enable signal DE, and the main clock MCLK from the host system (not illustrated). The timing controller140generates the touch synchronization signal Tsync time-divided into 16 display driving periods DP1to DP16and 16 touch driving periods TP1to TP16during one frame, in which the display driving period DP and the touch driving period TP alternate. The timing controller140generates average data values of data supplied to the data driving circuit120in each of the display driving periods DP1to DP16before each of the touch driving periods TP1to TP16. 
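[Equation 3] can be checked with a short numeric sketch. The component and voltage values below are hypothetical and chosen only to show the direction of the effect: raising the charge remover capacitance CCR and the charge remover voltage VCR lowers the output, which is why larger compensation codes are stored for higher average grayscales.

```python
# Numeric check of [Equation 3] with hypothetical values (illustration only).
def v_out_touch(v_lfd, c_finger, c_fb, c_cr, v_cr):
    return v_lfd + (c_finger / c_fb) * v_lfd - (c_cr / c_fb) * v_cr

pF = 1e-12
V_LFD = 1.0        # reference voltage, assumed
C_FINGER = 1 * pF  # finger capacitance, assumed

# Low compensation codes (low average grayscale):
low = v_out_touch(V_LFD, C_FINGER, c_fb=10 * pF, c_cr=2 * pF, v_cr=1.0)
# High compensation codes (high average grayscale):
high = v_out_touch(V_LFD, C_FINGER, c_fb=10 * pF, c_cr=4 * pF, v_cr=1.5)

print(f"low compensation : {low:.2f} V")    # 0.90 V
print(f"high compensation: {high:.2f} V")   # 0.50 V
# Larger CCR / VCR (and gain) codes pull the output down, offsetting the higher
# raw output that high-grayscale content would otherwise produce.
```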
In addition, the timing controller140supplies the generated touch synchronization signal Tsync, the generated average data value, and the vertical synchronization signal Vsync to the touch controller150. The touch controller150receives the average data value in synchronization with the touch synchronization signal Tsync from the timing controller140. The touch controller150reads the charge remover capacitance (CRC) compensation value, the charge remover voltage (CRV) compensation value, and the gain compensation value according to the average data value received from the lookup table. Then, the touch controller150outputs the read charge remover voltage (CRV) compensation value to the power controller160. The power controller160supplies the charge remover voltage VCRcorresponding to the CRV compensation value to the touch driving circuit SRIC130. The touch controller150supplies the read charge remover capacitance (CRC) compensation value and gain compensation values to the touch driving circuit SRIC130. The touch driving circuit SRIC130receives the charge remover voltage VCR. The touch driving circuit SRIC130receives the charge remover capacitance (CRC) compensation value, and varies the capacitance of the variable charge remover capacitor CCRaccording to the received charge remover capacitance (CRC) compensation value. The touch driving circuit SRIC130receives the gain compensation value, and varies the capacitance of the variable feedback capacitor CFBaccording to the received gain compensation value. In addition, the touch driving circuit SRIC130amplifies a touch signal from the touch electrode TE and outputs a touch sensing value, according to the charge remover voltage VCR, the varied capacitance of the charge remover capacitor CCR, and the varied capacitance of the feedback capacitor CFB(ΔVout). FIG.8is a graph illustrating an output voltage of the touch driving circuit SRIC versus a gray level. The X-axis indicates a gray level of average data input from the timing controller140to the touch controller150, and the Y-axis indicates a code value obtained by converting an analog output voltage of the touch driving circuit SRIC into a digital value. InFIG.8, the case where the present invention is applied was compared with the case where the present invention is not applied (a related art case). For example, in the related art to which the present invention is not applied (the related art case), the touch output voltage of the touch driving circuit SRIC is relatively low when low-grayscale image data is displayed and is relatively high when high-grayscale image data is displayed. However, when the present invention is applied, the touch output voltage of the touch driving circuit SRIC is maintained constant when low-grayscale image data is displayed or when high-grayscale image data is displayed. Accordingly, it is possible to improve the touch noise characteristics during moving image driving according to the embodiments of the present invention. The touch display device and the driving method thereof according to the embodiments of the present invention having the above characteristics have the following effects. 
The embodiments of the present invention vary the charge remover voltage, the capacitance of the charge remover capacitor, and the capacitance of the feedback capacitor according to the average value of the data supplied to the data lines in each display driving period before each touch driving period, and amplify and output the touch sensing signal from each touch electrode. Accordingly, during moving image driving, when low-grayscale image data is displayed or when high-grayscale image data is displayed, a touch sensing output voltage can be kept constant. In addition, according to the embodiments of the present invention, since the touch sensing output voltage is kept constant during moving image driving as described above, touch noise is reduced during moving image driving, and when a window needs to be moved while the moving image is driven in a small window, the window can be smoothly moved. It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents. | 29,026 |
11861112 | MODE FOR INVENTION In the detailed description of the present invention described below, reference is made to the accompanying drawings, which illustrate a specific exemplary embodiment in which the present invention may be carried out, as an example. The exemplary embodiment is described in detail sufficient to enable a person skilled in the art to carry out the present invention. It should be understood that various exemplary embodiments of the present invention are different from each other, but need not to be mutually exclusive. For example, specific shapes, structures, and characteristics described herein may be implemented in other exemplary embodiments without departing from the spirit and the scope of the present invention in relation to one exemplary embodiment. Further, it should be understood that a location or disposition of an individual component in each disclosed exemplary embodiment may be changed without departing from the spirit and the scope of the present invention. Accordingly, the detailed description below is not intended to be taken in a limited meaning, and the scope of the present invention, if appropriately described, is limited only by the appended claims along with all scopes equivalent to those claimed by the claims. Like reference numerals in the drawings refer to the same or similar functions over several aspects. Hereinafter, a touch input device according to an exemplary embodiment of the present invention will be described with reference to the accompanying drawings. Hereinafter, a capacitive type touch sensor panel1will be exemplified, but the touch input device1000is also identically/similarly applied to the touch sensor panel1which is capable of detecting a touch position by a predetermined method. FIG.1Ais a schematic diagram illustrating a capacitive type touch sensor10included in the touch sensor panel1of the general touch input device1000and a configuration for an operation of the touch sensor10. Referring toFIG.1A, the touch sensor10includes the plurality of driving electrodes TX1to TXn and the plurality of receiving electrodes RX1to RXm, and a driving unit12which applies a driving signal to the plurality of driving electrodes TX1to TXn for an operation of the touch sensor10, and a detection unit11which receives a detection signal including information on the amount of capacitance changed according to a touch to a touch surface from the plurality of receiving electrodes RX1to RXm and detects the touch and a touch position. As illustrated inFIG.1A, the touch sensor10may include the plurality of driving electrodes TX1to TXn and the plurality of receiving electrodes RX1to RXm.FIG.1Aillustrates the case where the plurality of driving electrodes TX1to TXn and the plurality of receiving electrodes RX1to RXm of the touch sensor10configure an orthogonal array, but the present invention is not limited thereto, and the plurality of driving electrodes TX1to TXn and the plurality of receiving electrodes RX1to RXm may have any number of dimensions and applications arrangements thereof including a diagonal arrangement, a concentric arrangement, and a three-dimensional random arrangement. Herein, n and m are positive integers, and may have the same or different values, and have different sizes depending on an exemplary embodiment. The plurality of driving electrodes TX1to TXn and the plurality of receiving electrodes RX1to RXm may be arranged to cross each other. 
The driving electrode TX may include the plurality of driving electrodes TX1to TXn extending in a first axis direction, and the receiving electrode RX may include the plurality of receiving electrodes RX1to RXm extending in a second axis direction crossing the first axis direction. As illustrated inFIG.1B, the plurality of driving electrodes TX1to TXn and the plurality of receiving electrodes RX1to RXm may be formed on different layers. For example, any one of the plurality of driving electrodes TX1to TXn and the plurality of receiving electrodes RX1to RXm may be formed on an upper surface of a display panel (not illustrated), and the other one may be formed on a lower surface of a cover which is to be described below or inside the display panel (not illustrated). Further, as illustrated inFIGS.1C and1D, the plurality of driving electrodes TX1to TXn and the plurality of receiving electrodes RX1to RXm may be formed on the same layer in the touch sensor100according to the exemplary embodiment of the present invention. For example, the plurality of driving electrodes TX1to TXn and the plurality of receiving electrodes RX1to RXm may be formed on the upper surface of the display panel. The plurality of driving electrodes TX1to TXn and the plurality of receiving electrodes RX1to RXm may be made of a transparent conductive material (for example, indium tin oxide (ITO) or antimony tin oxide (ATO) made of tin oxide (SnO2) and indium oxide (In2O3)). However, this is merely an example, and the driving electrode TX and the receiving electrode RX may also be formed of other transparent conductive materials or an opaque conductive material. For example, the driving electrode TX and the receiving electrode RX may include at least one of silver ink, copper, nano silver, and carbon nanotube (CNT). Further, the driving electrode TX and the receiving electrode RX may be implemented with a metal mesh. The driving unit12according to the exemplary embodiment of the present invention may apply a driving signal to the driving electrodes TX1to TXn. In the exemplary embodiment of the present invention, the driving signal may be sequentially applied to one driving electrode at a time from the first driving electrode TX1to the nth driving electrode TXn. The application of the driving signal may be repeatedly performed. However, this is merely an example, and the driving signal may also be simultaneously applied to the plurality of driving electrodes according to the exemplary embodiment. The detection unit11may detect whether a touch is input and a touch position by receiving, through the receiving electrodes RX1to RXm, a detection signal including information on capacitance (Cm: 14) generated between the driving electrodes TX1to TXn to which the driving signal is applied and the receiving electrodes RX1to RXm. For example, the detection signal may be the signal in which the driving signal applied to the driving electrode TX is coupled by the capacitance (Cm: 14) generated between the driving electrode TX and the receiving electrode RX. As described above, the process of detecting the driving signal applied from the first driving electrode TX1to the nth driving electrode TXn through the receiving electrodes RX1to RXm may be referred to as scanning the touch sensor10. For example, the detection unit11may include a receiver (not illustrated) connected with each of the receiving electrodes RX1to RXm through a switch. 
The switch is turned on in a time period for detecting the signal of the corresponding receiving electrode RX so that the sensing signal from the receiving electrode RX may be detected by the receiver. The receiver may include an amplifier (not illustrated) and a feedback capacitor coupled between a negative (−) input terminal of the amplifier and an output terminal of the amplifier, that is, a feedback path. In this case, a positive (+) input terminal of the amplifier may be connected to ground. Further, the receiver may further include a reset switch connected to the feedback capacitor in parallel. The reset switch may reset a conversion from a current to a voltage performed in the receiver. The negative input terminal of the amplifier may be connected to the corresponding receiving electrode RX and receive a current signal including information on the capacitance (Cm: 14) and then integrate the received current signal and convert the integrated current signal to a voltage. The detection unit11may further include an analog to digital converter (ADC) (not illustrated) which converts the data integrated through the receiver to digital data. Later, the digital data may be input to a processor (not illustrated) and processed so as to obtain touch information for the touch sensor10. The detection unit11may include the ADC and the processor together with the receiver. A control unit13may perform a function of controlling the operations of the driving unit12and the detection unit11. For example, the control unit13may generate a driving control signal and then transmit the generated driving control signal to the driving unit12so that the driving signal is applied to a predetermined driving electrode TX at a predetermined time. Further, the control unit13may generate a detection control signal and then transmit the generated detection control signal to the detection unit11to make the detection unit11receive the detection signal from a predetermined receiving electrode RX at a predetermined time and perform a predetermined function. InFIG.1A, the driving unit12and the detection unit11may configure a touch detecting device (not illustrated) which is capable of detecting whether a touch is input to the touch sensor10and a touch position. The touch detecting device may further include the control unit13. The touch detecting device may be integrated on a touch sensing Integrated Circuit (IC). The driving electrode TX and the receiving electrode RX included in the touch sensor10may be connected to the driving unit12and the detection unit11included in the touch sensing IC through, for example, a conductive trace and/or a conductive pattern printed on a circuit board. The touch sensing IC may be positioned on a circuit board on which a conductive pattern is printed, for example, a touch circuit board (hereinafter, referred to as a touch PCB). According to the exemplary embodiment, the touch sensing IC may be mounted on a main board for operating the touch input device1000. As described above, capacitance (Cm) having a predetermined value is generated at each crossing point of the driving electrode TX and the receiving electrode RX, and when an object, such as a finger, approaches the touch sensor10, the value of the capacitance may be changed. InFIG.1A, the capacitance may represent mutual capacitance (Cm). The detection unit11may detect the electric characteristic to detect whether a touch is input to the touch sensor10and/or a touch position. 
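The scanning sequence described above can be summarized in a short sketch: the driving signal is applied to one driving electrode at a time, every receiving electrode is sampled, and cells whose capacitance deviates from a no-touch baseline are reported. The array size, baseline values, touch-induced change (assumed here to reduce the mutual capacitance), and threshold are all hypothetical.

```python
# Minimal sketch of scanning a mutual-capacitance sensor: drive one TX at a
# time, sample every RX, and report cells whose capacitance change exceeds a
# threshold. Numbers are hypothetical.
import numpy as np

N_TX, N_RX = 4, 3
baseline_cm = np.full((N_TX, N_RX), 2.0)      # pF, no-touch mutual capacitance
touched = baseline_cm.copy()
touched[1, 2] -= 0.3                          # assume a finger reduces Cm at (TX2, RX3)

def scan(measure_cm, threshold=0.1):
    touches = []
    for tx in range(N_TX):                    # driving signal applied to one TX
        for rx in range(N_RX):                # detection signal read on each RX
            delta = baseline_cm[tx, rx] - measure_cm[tx, rx]
            if delta > threshold:
                touches.append((tx, rx, delta))
    return touches

print(scan(touched))    # -> [(1, 2, 0.30...)]
```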
For example, the detection unit11may detect whether a touch is input to the surface of the touch sensor10, which is formed of a two-dimensional plane consisting of a first axis and a second axis, and/or the position of the touch. More particularly, the detection unit11may detect the position of the touch in the direction of the second axis by detecting the driving electrode TX to which the driving signal is applied when the touch to the touch sensor10is generated. Similarly, when the touch to the touch sensor10is input, the detection unit11may detect the position of the touch in the direction of the first axis by detecting a change in the capacitance from the reception signal received through the receiving electrode RX. As illustrated inFIGS.1C and1D, when the driving electrode and the receiving electrode are disposed on the same layer, the number of wires may increase. Accordingly, referring toFIG.2, a touch sensor panel in a form in which the number of wires is decreased will be described, together with the reason why a horizontal split of the touch signal output from such a touch sensor panel occurs. FIG.2is a diagram illustrating an example of the touch sensor panel in which a horizontal split phenomenon of the touch signal occurs, and illustrates the case where the plurality of driving electrodes and the plurality of receiving electrodes are formed in a matrix type. As illustrated inFIG.2, in the typical touch sensor panel, the plurality of driving electrodes TX and the plurality of receiving electrodes RX are arranged on the same layer in the matrix form. More particularly, in the plurality of columns, the plurality of receiving electrodes RX are disposed apart from each other in odd-numbered columns, and the plurality of driving electrodes TX are disposed apart from each other in even-numbered columns. One electrode group G including one receiving electrode RX and the plurality of driving electrodes TX is arranged in plural in a first direction (or a row direction or a left-right direction). Herein, the number of driving electrodes TX included in one electrode group G may be four as illustrated inFIG.2, but the present invention is not limited thereto, and the number of driving electrodes TX may be three or five or more. One electrode group G includes a plurality of unit cells U. Herein, one unit cell U may be configured as one part of one driving electrode TX and the receiving electrode RX adjacent to the driving electrode TX in the first direction (or the row direction or the left-right direction). Accordingly, in case ofFIG.2, one electrode group G may be formed of four unit cells U. In particular, the unit cell U may have a size of approximately 4 mm in the column direction (or vertically) and 4 mm in the row direction (or horizontally). The driving electrode TX is smaller than the receiving electrode RX. For example, the driving electrode TX may have a quadrangular shape of approximately 2 mm in width and 4 mm in length, and the receiving electrode RX may have a quadrangular shape of approximately 2 mm in width and 16 mm in length. The horizontal length and the vertical length may be appropriately changed according to a design.
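As a simple check of the geometry described forFIG.2, the following sketch verifies that one electrode group G (one receiving electrode plus four driving electrodes) occupies the same area as four unit cells U of approximately 4 mm by 4 mm. The dimensions are the approximate values given above; the sketch is only an arithmetic illustration, not a layout description of the actual panel.

    # Illustrative check of the FIG. 2 geometry: one electrode group G (one
    # receiving electrode plus four driving electrodes) covers the same area
    # as four 4 mm x 4 mm unit cells U.  All dimensions are the approximate
    # values given in the description.

    RX_W_MM, RX_L_MM = 2, 16      # receiving electrode: ~2 mm wide, ~16 mm long
    TX_W_MM, TX_L_MM = 2, 4       # driving electrode:  ~2 mm wide, ~4 mm long
    UNIT_CELL_MM = 4              # unit cell: ~4 mm x 4 mm
    TX_PER_GROUP = 4              # one group G contains four driving electrodes

    group_area = RX_W_MM * RX_L_MM + TX_PER_GROUP * (TX_W_MM * TX_L_MM)   # 32 + 32 = 64 mm^2
    unit_cells_area = TX_PER_GROUP * (UNIT_CELL_MM * UNIT_CELL_MM)        # 4 * 16 = 64 mm^2
    assert group_area == unit_cells_area   # the group tiles exactly into four unit cells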
In the touch sensor panel illustrated inFIG.2, the number indicated on each driving electrode TX and each receiving electrode RX means the number of the corresponding driving electrode TX or receiving electrode RX, and the driving electrodes TX of the same number are electrically connected with each other through wires (or conductive trace). The same driving signal is applied to the driving electrodes TX of the same number at the same time. The receiving electrodes RX of the same number may also be electrically connected with each other through wires (or conductive trace). FIGS.3to7are diagrams for describing the horizontal split phenomenon of the output touch signal in the case where a part of the touch sensor panel is touched with a predetermined object (for example, a finger) in the state where the device including the touch sensor panel illustrated inFIG.2is floated. Herein, the state where the device including the touch sensor panel is floated is the state where the device is placed in a Low Ground Mass (LGM) state, and refers to, for example, the state where the device is not gripped by the hand of the user. FIG.3is a diagram illustrating an example of the case where a finger (thumb) touches within a predetermined area A of the touch sensor panel illustrated inFIG.2. InFIG.3, it is assumed that the finger touches the panel with a width of 20 mm in the first direction (or the row direction or the horizontal direction). The touch with the width of 20 mm is a rather extreme situation, but if the problem occurring in this situation can be solved, a problem occurring in another general situation may also be solved. FIG.4is a diagram for describing a final touch signal output through a first reception terminal in the case where, in the situation illustrated inFIG.3, the driving signal is applied to the plurality of first driving electrodes TX1and the plurality of first receiving electrodes RX1output a detection signal. Referring toFIGS.3and4, predetermined capacitance (Cm) is formed between the first driving electrode TX1-alocated at the left side and two first receiving electrodes RX1-aand RX1-blocated at both sides of the first driving electrode TX1-a, but the finger is not in contact with the first driving electrode TX1-aand the two first receiving electrodes RX1-aand RX1-b, so that no amount of capacitance change is generated from the two first receiving electrodes RX1-aand RX1-b. Accordingly, the touch signal output from the two first receiving electrodes RX1-aand RX1-bis 0 diff. In contrast, when the driving signal is applied to the first driving electrode TX1-alocated at the left side, the same driving signal is also simultaneously applied to the first driving electrode TX1-blocated between two third receiving electrodes RX3. Then, coupling capacitance is formed between the first driving electrode TX1-band the finger, and in this case, when the finger is in the LGM state, the driving signal applied to the first driving electrode TX1-bis transmitted to the three first receiving electrodes RX1-b, RX1-c, and RX1-dthat are in contact with the finger. That is, the finger in the LGM state forms a current path. Accordingly, an LGM jamming signal (− diff) having a sign opposite to that of a normal touch signal is output from each of the three first receiving electrodes RX1-b, RX1-c, and RX1-dthat are in contact with the finger.
Herein, the reason why the LGM jamming signal has the sign opposite to that of the normal touch signal is as follows. For the normal touch signal, when the finger comes into contact with the receiving electrodes in the state where predetermined mutual capacitance (Cm) is formed between the driving electrode and the receiving electrode, the mutual capacitance (Cm) is decreased. For the LGM jamming signal, on the other hand, coupling capacitance is generated due to the contact of the finger in the floating state, so that the LGM jamming signal and the normal touch signal have opposite signs. In the meantime, the magnitude of the normal touch signal may be the same as or different from the magnitude of the LGM jamming signal; hereinafter, for convenience of description, it is assumed that the normal touch signal is 1 diff and the LGM jamming signal is −1 diff. The first reception terminal outputs a touch signal by summing all of the signals output from the four first receiving electrodes RX1-a, RX1-b, RX1-c, and RX1-d, and there is no capacitance change in the first receiving electrode RX1-a, so that 0 diff is output, and the LGM jamming signal (−1 diff) is output from each of the three remaining first receiving electrodes RX1-b, RX1-c, and RX1-d, so that a touch signal corresponding to −3 diff is output from the first reception terminal as a result. FIG.5is a diagram for describing a touch signal output through the first reception terminal in the case where, in the situation illustrated inFIG.3, the driving signal is applied to the plurality of ninth driving electrodes TX9-aand TX9-band the plurality of first receiving electrodes RX1-a, RX1-b, RX1-c, and RX1-doutput a detection signal. Referring toFIGS.3and5, predetermined capacitance (Cm) is formed between the ninth driving electrode TX9-alocated at the left side and two first receiving electrodes RX1-cand RX1-dlocated at both sides of the ninth driving electrode TX9-a, and the finger is in contact with the ninth driving electrode TX9-aand the two first receiving electrodes RX1-cand RX1-d, so that the normal touch signal 1 diff is output from each of the two first receiving electrodes RX1-cand RX1-d. In the meantime, when the driving signal is applied to the ninth driving electrode TX9-a, coupling capacitance is formed between the ninth driving electrode TX9-aand the finger, and in this case, when the finger is in the LGM state, the driving signal applied to the ninth driving electrode TX9-ais transmitted to the three first receiving electrodes RX1-b, RX1-c, and RX1-dthat are in contact with the finger. That is, the finger in the LGM state forms a current path. Accordingly, the LGM jamming signal (−1 diff) having a sign opposite to that of the normal touch signal is output from each of the three first receiving electrodes RX1-b, RX1-c, and RX1-dthat are in contact with the finger. The first reception terminal outputs a touch signal by summing all of the signals output from the four first receiving electrodes RX1-a, RX1-b, RX1-c, and RX1-d, and the normal touch signal (1 diff) is output from each of the two first receiving electrodes RX1-cand RX1-dand the LGM jamming signal (−1 diff) is also output from each of the three first receiving electrodes RX1-b, RX1-c, and RX1-d, so that the touch signal corresponding to −1 diff is output from the first reception terminal as a result. FIG.6is a graph and a table in which the situations ofFIGS.4and5and additional situations are synthesized.
Referring to the table ofFIG.6, the first row is the situation ofFIG.4, in which the driving signal is applied to the plurality of first driving electrodes TX1-aand TX1-band the touch signal is output from the plurality of first receiving electrodes RX1-a, RX1-b, RX1-c, and RX1-d, and the third row is the situation ofFIG.5, in which the driving signal is applied to the plurality of ninth driving electrodes TX9-aand TX9-band the touch signal is output from the plurality of first receiving electrodes RX1-a, RX1-b, RX1-c, and RX1-d. In the table ofFIG.6, the fourth row is the case where the driving signal is applied to the plurality of thirteenth driving electrodes TX13, and the sum of touch signals output from the plurality of first receiving electrodes RX1is −2 diff. In the table ofFIG.6, the fifth row is the case where the driving signal is applied to the plurality of thirteenth driving electrodes TX13, and the sum of touch signals output from the plurality of third receiving electrodes RX3is −2 diff. Referring to the graph ofFIG.6, when the driving signal is applied to the thirteenth driving electrode TX13, the final touch signal is indicated by −4 diff, and this is due to the sum in the remap process in which the touch signal in the fourth row of the table ofFIG.6and the touch signal in the fifth row of the table ofFIG.6are mapped. Referring to the graph ofFIG.6, when the driving signal is applied to the thirteenth driving electrode TX13, a difference between the touch signal (diff) output in the state where the device including the touch sensor panel is in the floating state and the touch signal (diff) output in the state where the device including the touch sensor panel is in the gripped state is large. Due to the difference, when a single touch is input to a specific portion of the touch sensor panel including the thirteenth driving electrode TX13in the LGM state, a phenomenon in which the touch signal is split in the left and right direction occurs in the specific portion. FIG.7is a table and a graph representing an actual test of the situation ofFIG.6. Referring toFIG.7, the −diff value of the LGM jamming signal may be calculated from the case where there is only the LGM jamming signal (the first row and the eighth row of the table ofFIG.6) and the number of LGM jamming signals (the number of LGMs). A compensation value by the LGM may be calculated by using the calculated −diff value of the LGM jamming signal, and when the compensation value is compared with a measurement value (the original value that needs to be output) in the state where the device is gripped, it was confirmed that the measurement value in the state where the device is gripped is almost the same as the compensation value by the LGM. As described above, in the touch sensor panel illustrated inFIG.2, when the user makes a single touch on the specific portion in the state where the device including the touch sensor panel is in the floating state, the phenomenon in which the output touch signal is split in the left and right direction may occur. Due to the horizontal split phenomenon of the touch signal, the device including the touch sensor panel may incorrectly recognize one touch of the user as multi-touch and perform an operation that does not match the user's intention.
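The signal bookkeeping behindFIGS.4to6can be reproduced with simple arithmetic. The sketch below assumes, as in the description above, that a normal touch contributes 1 diff and an LGM jamming signal contributes −1 diff per receiving electrode, and that a reception terminal outputs the sum of the contributions of the receiving electrodes connected to it; the function name terminal_output is illustrative only.

    # Reproduces the per-terminal sums described for FIGS. 4 and 5 and the
    # remap sum of FIG. 6.  Each receiving electrode tied to one reception
    # terminal contributes +1 diff for a normal touch, -1 diff for an LGM
    # jamming signal, and 0 diff when nothing changes.

    NORMAL, LGM, NONE = 1, -1, 0

    def terminal_output(contributions):
        """Sum the signals of all receiving electrodes tied to one reception terminal."""
        return sum(contributions)

    # FIG. 4: TX1 driven; RX1-a sees nothing, RX1-b/c/d see only the LGM jamming signal.
    fig4 = terminal_output([NONE, LGM, LGM, LGM])                      # -> -3 diff

    # FIG. 5: TX9 driven; RX1-c/d see a normal touch, RX1-b/c/d also see the LGM signal.
    fig5 = terminal_output([NONE, LGM, NORMAL + LGM, NORMAL + LGM])    # -> -1 diff

    # FIG. 6, TX13 driven: the RX1 terminal and the RX3 terminal each output -2 diff,
    # and the remap process adds the two mapped rows together.
    fig6_remapped = terminal_output([-2, -2])                          # -> -4 diff

    print(fig4, fig5, fig6_remapped)   # -3 -1 -4

The reproduced values of −3 diff, −1 diff, and −4 diff match the sums described forFIGS.4,5, and6, and illustrate how the remap sum around the thirteenth driving electrode TX13becomes strongly negative, which is what makes the touch signal appear split in the left and right direction.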
Hereinafter, the touch sensor panels according to the exemplary embodiment of the present invention, which are capable of improving the phenomenon in which a touch signal output from the touch sensor panel in the floating state is split in the left and right direction, will be described with reference toFIGS.8A to14B. The touch sensor panels illustrated inFIGS.8A to14Bmay improve the phenomenon in which the touch signal output from the touch sensor panel in the floating state is split in the left and right direction. In the touch sensor panels illustrated inFIGS.8A to14B, two second electrodes adjacent to both sides with respect to each first electrode are the same, or two second electrodes adjacent to both sides with respect to each first electrode are different from each other. Herein, the first electrode and the second electrode may be configured oppositely, and any one of the first electrode and the second electrode is a receiving electrode RX and the remaining one is a driving electrode TX. The meaning of 'the same' is that the electrodes are electrically connected with each other through wires (or conductive trace). More particularly, in the touch sensor panels illustrated inFIGS.8A to14B, two driving electrodes TX adjacent to both sides with respect to each receiving electrode RX are the same or two driving electrodes TX adjacent to both sides with respect to each receiving electrode RX are different from each other. Otherwise, two receiving electrodes RX adjacent to both sides with respect to each driving electrode TX are the same or two receiving electrodes RX adjacent to both sides with respect to each driving electrode TX are different from each other. Hereinafter, each touch sensor panel will be described, and it is assumed that each driving electrode and each receiving electrode are disposed while being spaced apart from each other at a predetermined interval, and the receiving electrodes of the same number are electrically connected with each other through wires, and the driving electrodes of the same number are also electrically connected with each other through wires. Herein, two driving electrodes or receiving electrodes consecutive in the column direction have the same number, and two driving electrodes or receiving electrodes consecutive in the column direction may be configured as one electrode according to a design. First Exemplary Embodiment FIG.8Ais an enlarged diagram of only a part of an arrangement structure of driving electrodes and receiving electrodes of a touch sensor panel according to a first exemplary embodiment, andFIG.8Bis an experimental data showing that horizontal split is improved when a conductive rod of 15 phi is in contact with the touch sensor panel in the state where the touch sensor panel illustrated inFIG.8Ais in a floating state. Referring toFIG.8A, the touch sensor panel according to the first exemplary embodiment includes the plurality of driving electrodes TX and the plurality of receiving electrodes RX. The plurality of driving electrodes TX and the plurality of receiving electrodes RX are arranged on the same layer in a matrix form.
The plurality of driving electrodes TX and the plurality of receiving electrodes RX may be made of a transparent conductive material (for example, indium tin oxide (ITO) or antimony tin oxide (ATO) made of tin oxide (SnO2) and indium oxide (In2O3)) and the like. However, this is merely an example, and the driving electrode TX and the receiving electrode RX may also be formed of other transparent conductive materials or an opaque conductive material. For example, the driving electrode TX and the receiving electrode RX may include at least one of silver ink, copper, nano silver, and carbon nanotube (CNT). Wires (or conductive trace) are connected to each of the driving electrodes TX and the receiving electrodes RX. In the following drawings includingFIG.8A, it seems that the wires having various thicknesses are illustrated, but it should be noted that as in the enlarged view of the part illustrated at the top ofFIG.8A, several wires are densely arranged so that wires of various thicknesses appear. As a matter of course, the thicknesses of the wires may also be different from each other depending on a case. Further, the driving electrode TX and the receiving electrode RX may be implemented with a metal mesh. When the driving electrode TX and the receiving electrode RX are implemented with the metal mesh, the wires connected to the driving electrode TX and the receiving electrode RX may also be implemented with the metal mesh, and the driving electrode TX and the receiving electrode RX and the wires may also be integrally implemented with the metal mesh. When the driving electrode TX, the receiving electrode RX, and the wires are integrally implemented with the metal mesh, a dead zone, such as a space between the electrode and the wire and/or a space between the electrode and another electrode, in which a touch position is not detected, is reduced, so that sensitivity of detecting a touch position may be further improved. The touch sensor panel according to the first exemplary embodiment is arranged with respect to the plurality of receiving electrodes RX. Accordingly, hereinafter, the arrangement of the plurality of receiving electrodes RX disposed in columns A1 to A8 will be described first, and an arrangement structure of the plurality of driving electrodes TX will be described. The plurality of receiving electrodes RX is arranged in each of the plurality of columns A1, A2, A3, A4, A5, A6, A7, and A8. Herein, the plurality of driving electrodes TX is arranged in the plurality of columns B1, B2, B3, B4, B5, B6, B7, B8, B9, and B10 formed between the plurality of columns A1, A2, A3, A4, A5, A6, A7, and A8, in which the receiving electrodes RX are arranged, at the external side of the first column A1, and at the external side of the eighth column A8. With respect to each receiving electrode RX of the plurality of receiving electrodes RX, the two driving electrodes TX adjacent to both sides have the same characteristic. That is, the two driving electrodes TX adjacent to both sides with respect to each receiving electrode RX have the same number. Herein, the meaning that the two driving electrodes TX are the same or that the numbers of the two driving electrodes TX are the same is that the two driving electrodes TX are electrically connected through wires. The touch sensor panel according to the first exemplary embodiment includes one or more sets in which the plurality of receiving electrodes RX and the plurality of driving electrodes TX are disposed in a predetermined arrangement. 
The plurality of sets is repeatedly arranged in the row direction and the column direction, so that the touch sensor panel according to the first exemplary embodiment may be formed. One set may include the plurality of different receiving electrodes RX, and for example, one set may include eight receiving electrodes including a 0threceiving electrode RX0to a seventh receiving electrode RX7. The eight receiving electrodes RX0, RX1, RX2, RX3, RX4, RX5, RX6, and RX7may be disposed in a predetermined arrangement. The eight receiving electrodes including the 0threceiving electrode RX0to the seventh receiving electrode RX7are divided and arranged in the four columns A1, A2, A3, and A4 consecutive in the row direction. Accordingly, in each of the four columns, the two receiving electrodes may be disposed from top to bottom. The plurality of receiving electrodes having the consecutive numbers is disposed in each column. Herein, the arrangement order of the odd-numbered columns A1 and A3 and the arrangement order of the even-numbered columns A2 and A4 may be opposite to each other. For example, the receiving electrodes RX0and RX1having the consecutive numbers are sequentially arranged from top to bottom in the first column A1, the receiving electrodes RX2and RX3having the consecutive numbers are sequentially arranged from bottom to top in the second column A2, the receiving electrodes RX4and RX5having the consecutive numbers are sequentially arranged from top to bottom in the third column A3, and the receiving electrodes RX6and RX7having the consecutive numbers are sequentially arranged from bottom to top in the fourth column A4. Herein, although not illustrated in the drawing, the plurality of different receiving electrodes included in one set may not be sequentially arranged in the row or column direction, but may be arranged randomly. In the meantime, the touch sensor panel according to the first exemplary embodiment includes the plurality of driving electrodes TX, and for example, the plurality of driving electrodes TX may include a 0thdriving electrode TX0to a seventh driving electrode TX7. Herein, each driving electrode may be disposed to satisfy the following arrangement condition. The plurality of driving electrodes TX is arranged to satisfy the following conditions. 1) With respect to one receiving electrode RX, four different driving electrodes are arranged at the left side, and four different driving electrodes are arranged at the right side. 2) With respect to each receiving electrode RX, two facing driving electrodes TX have the same number. 3) Five driving electrodes of the same number are consecutively arranged in the row direction. 4) Eight driving electrodes adjacent to both sides of the receiving electrode RX1in the even-numbered row are arranged to be symmetric to eight driving electrodes adjacent to both sides of the receiving electrode RX0in the odd-numbered row. 5) A length (horizontal length) of the driving electrodes TX arranged at both edges of each set is a half the length (horizontal length) of the other driving electrodes. Referring toFIG.8B, when one conductive rod of 15 phi is in contact with the touch sensor panel in the state where the touch sensor panel of the first exemplary embodiment is in the floating state, it can be seen that the horizontal split hardly appears, through a size of the output touch signal (the final amount of capacitance changed).
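The key property of the first exemplary embodiment, namely that the two driving electrodes TX adjacent to both sides of each receiving electrode RX carry the same number, can be checked mechanically. The following sketch is only an illustration of such a check; the three-column example layout in it is hypothetical and does not reproduce the actual numbering ofFIG.8A.

    # A small checker for the property that, for every receiving electrode,
    # the driving electrodes adjacent to its left and right sides carry the
    # same number (i.e. are wired together).  The example strip below is a
    # simplified, hypothetical layout used only to show how the check works.

    def adjacent_tx_match(columns):
        """columns: list of columns, each a list of labels per row, e.g. 'TX0' or 'RX1'.
        Returns True if every RX cell has TX cells of the same number on both sides."""
        for c in range(1, len(columns) - 1):
            for r, label in enumerate(columns[c]):
                if label.startswith("RX"):
                    left, right = columns[c - 1][r], columns[c + 1][r]
                    if not (left.startswith("TX") and left == right):
                        return False
        return True

    # Hypothetical strip: one RX column flanked by two TX columns that repeat
    # the same driving-electrode numbers row by row.
    example = [
        ["TX0", "TX1", "TX2", "TX3"],
        ["RX0", "RX0", "RX1", "RX1"],
        ["TX0", "TX1", "TX2", "TX3"],
    ]
    print(adjacent_tx_match(example))   # True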
In the touch sensor panel of the first exemplary embodiment, since the two driving electrodes TX adjacent to both sides with respect to each receiving electrode RX have the same characteristic, the part where the LGM signal rapidly increases in the remap process is gone, so that it is expected that the horizontal split is improved. Second Exemplary Embodiment FIG.9Ais an enlarged view of only a part of the arrangement structure of the driving electrodes the receiving electrodes of a touch sensor panel according to a second exemplary embodiment, andFIG.9Bis an experimental data showing that horizontal split is improved when a conductive rod of 15 phi is in contact with the touch sensor panel in the state where the touch sensor panel illustrated inFIG.9Ais in a floating state. Referring toFIG.9A, the touch sensor panel according to the second exemplary embodiment includes the plurality of driving electrodes TX and the plurality of receiving electrodes RX. The plurality of driving electrodes TX and the plurality of receiving electrodes RX are arranged on the same layer in a matrix form. The plurality of driving electrodes TX and the plurality of receiving electrodes RX may be made of a transparent conductive material (for example, indium tin oxide (ITO) or antimony tin oxide (ATO) made of tin oxide (SnO2) and indium oxide (In2O3)). However, this is merely an example, and the driving electrode TX and the receiving electrode RX may also be formed of other transparent conductive materials or an opaque conductive material. For example, the driving electrode TX and the receiving electrode RX may include at least one of silver ink, copper, nano silver, and carbon nanotube (CNT). Further, the driving electrode TX and the receiving electrode RX may be implemented with a metal mesh. When the driving electrode TX and the receiving electrode RX are implemented with the metal mesh, the wires connected to the driving electrode TX and the receiving electrode RX may also be implemented with the metal mesh, and the driving electrode TX and the receiving electrode RX and the wires may also be integrally implemented with the metal mesh. When the driving electrode TX, the receiving electrode RX, and the wire are integrally implemented with the metal mesh, a dead zone, such as between the electrode and the wire and/or between the electrode and another electrode, in which a touch position cannot be detected, is reduced, so that sensitivity of detecting a touch position may be further improved. The touch sensor panel according to the second exemplary embodiment is arranged with respect to the plurality of receiving electrodes RX. Accordingly, hereinafter, the arrangement of the plurality of receiving electrodes RX disposed in columns A1 to A8 will be described first, and an arrangement structure of the plurality of driving electrodes TX will be described. The plurality of receiving electrodes RX is arranged in plural in each of the plurality of columns A1, A2, A3, A4, A5, A6, A7, and A8. Herein, the plurality of driving electrodes TX is arranged in plural in the plurality of columns B1, B2, B3, B4, B5, B6, B7, B8, B9, B10, B11, B12, B13, B14, B15, and B16 formed between the plurality of columns A1, A2, A3, A4, A5, A6, A7, and A8, in which the receiving electrodes RX are arranged, at the external side of the first column A1, and at the external side of the eighth column A8. 
With respect to each receiving electrode RX of the plurality of receiving electrodes RX, the two driving electrodes TX adjacent to both sides have the same characteristic. That is, the two driving electrodes TX adjacent to both sides with respect to each receiving electrode RX have the same number. Herein, the meaning that the two driving electrodes TX are the same or that the numbers of the two driving electrodes TX are the same is that the two driving electrodes TX are electrically connected through wires. The touch sensor panel according to the second exemplary embodiment includes one or more sets in which the plurality of receiving electrodes RX and the plurality of driving electrodes TX are disposed in a predetermined arrangement. The plurality of sets is repeatedly arranged in the row direction and the column direction, so that the touch sensor panel according to the second exemplary embodiment may be formed. One set may include the plurality of different receiving electrodes RX, and for example, one set may include eight receiving electrodes including a 0threceiving electrode RX0to a seventh receiving electrode RX7. The eight receiving electrodes RX0, RX1, RX2, RX3, RX4, RX5, RX6, and RX7may be disposed in a predetermined arrangement. The eight receiving electrodes including the 0threceiving electrode RX0to the seventh receiving electrode RX are divided and arranged in the consecutive four columns A1, A2, A3, and A4 in the row direction. Accordingly, in each of the four columns, the two receiving electrodes may be disposed from top to bottom. The plurality of receiving electrodes having the consecutive numbers is disposed in each column. Herein, the arrangement order of the odd-numbered columns A1 and A3 and the arrangement order of the even-numbered columns A2 and A4 may be opposite to each other. For example, the receiving electrodes RX0and RX1having the consecutive numbers are sequentially arranged from top to bottom in the first column A1, the receiving electrodes RX2and RX3having the consecutive numbers are sequentially arranged from bottom to top in the second column A2, the receiving electrodes RX4and RX5having the consecutive numbers are sequentially arranged from top to bottom in the third column A3, and the receiving electrodes RX6and RX7having the consecutive numbers are sequentially arranged from bottom to top in the fourth column A4. Herein, although not illustrated in the drawing, the plurality of different receiving electrodes included in one set may not be sequentially arranged in the row or column direction, but may be arranged randomly. In the meantime, the touch sensor panel according to the second exemplary embodiment includes the plurality of driving electrodes TX, and for example, the plurality of driving electrodes TX may include a 0thdriving electrode TX0to a 15thdriving electrode TX15. Herein, each driving electrode may be disposed to satisfy the following arrangement condition. The plurality of driving electrodes TX may be arranged to satisfy the following conditions. 1) With respect to one receiving electrode RX, four different driving electrodes are arranged at the left side, and four different driving electrodes are arranged at the right side. 2) With respect to each receiving electrode RX, two facing driving electrodes TX have the same number. 3) Each receiving electrode has a size corresponding to 8 times that of the driving electrode TX. 
4) Eight driving electrodes adjacent to the receiving electrode RX1in the even-numbered row are arranged to be symmetric to eight driving electrodes adjacent to the receiving electrode RX0in the odd-numbered row. Referring toFIG.9B, when one conductive rod of 15 phi is in contact with the touch sensor panel in the state where the touch sensor panel of the second exemplary embodiment is in the floating state, it can be seen that the horizontal split does not appear, through a size of the output touch signal (the final amount of capacitance changed). Like the touch sensor panel according to the first exemplary embodiment, in the touch sensor panel of the second exemplary embodiment, since the two driving electrodes TX adjacent to both sides with respect to each receiving electrode RX have the same characteristic, the part where the LGM signal rapidly increases in the remap process is gone, so that it is expected that the horizontal split is improved. Third Exemplary Embodiment FIG.10Ais an enlarged view of only a part of the arrangement structure of driving electrodes and receiving electrodes of a touch sensor panel according to a third exemplary embodiment, andFIG.10Bis an experimental data showing that horizontal split is improved when a conductive rod of 15 phi is in contact with the touch sensor panel in the state where the touch sensor panel illustrated inFIG.10Ais in a floating state. Referring toFIG.10A, the touch sensor panel according to the third exemplary embodiment includes the plurality of driving electrodes TX and the plurality of receiving electrodes RX. The plurality of driving electrodes TX and the plurality of receiving electrodes RX are arranged on the same layer in a matrix form. The plurality of driving electrodes TX and the plurality of receiving electrodes RX may be made of a transparent conductive material (for example, indium tin oxide (ITO) or antimony tin oxide (ATO) made of tin oxide (SnO2) and indium oxide (In2O3)). However, this is merely an example, and the driving electrode TX and the receiving electrode RX may also be formed of other transparent conductive materials or an opaque conductive material. For example, the driving electrode TX and the receiving electrode RX may include at least one of silver ink, copper, nano silver, and carbon nanotube (CNT). Further, the driving electrode TX and the receiving electrode RX may be implemented with a metal mesh. When the driving electrode TX and the receiving electrode RX are implemented with the metal mesh, the wires connected to the driving electrode TX and the receiving electrode RX may also be implemented with the metal mesh, and the driving electrode TX and the receiving electrode RX and the wires may also be integrally implemented with the metal mesh. When the driving electrode TX, the receiving electrode RX, and the wire are integrally implemented with the metal mesh, a dead zone, such as between the electrode and the wire and/or between the electrode and another electrode, in which a touch position cannot be detected, is reduced, so that sensitivity of detecting a touch position may be further improved. The touch sensor panel according to the third exemplary embodiment is arranged with respect to the plurality of receiving electrodes RX. Accordingly, the arrangement structure of the plurality of receiving electrodes RX will be described first, and the arrangement structure of the plurality of driving electrodes TX will be described. 
The plurality of receiving electrodes RX is arranged in each of the plurality of columns A1, A2, A3, A4, A5, A6, A7, and A8. Herein, the plurality of driving electrodes TX is arranged in the plurality of columns B1, B2, B3, B4, B5, B6, B7, B8, B9, B10, B11, and B12 formed between the plurality of columns A1, A2, A3, A4, A5, A6, A7, and A8, in which the receiving electrodes RX are arranged, at the external side of the first column A1, and at the external side of the eighth column A8. With respect to each receiving electrode RX of the plurality of receiving electrodes RX, the two driving electrodes TX adjacent to both sides have the same characteristic. That is, the two driving electrodes TX adjacent to both sides with respect to each receiving electrode RX have the same number. Herein, the meaning that the two driving electrodes TX are the same or that the numbers of the two driving electrodes TX are the same is that the two driving electrodes TX are electrically connected through wires. The touch sensor panel according to the third exemplary embodiment includes one or more sets in which the plurality of receiving electrodes RX and the plurality of driving electrodes TX are disposed in a predetermined arrangement. The plurality of sets is repeatedly arranged in the row direction and the column direction, so that the touch sensor panel according to the third exemplary embodiment may be formed. One set may include the plurality of different receiving electrodes RX, and for example, one set may include eight receiving electrodes including a 0threceiving electrode RX0to a seventh receiving electrode RX7. The eight receiving electrodes RX0, RX1, RX2, RX3, RX4, RX5, RX6, and RX7may be disposed in a predetermined arrangement. The eight receiving electrodes including the 0threceiving electrode RX0to the seventh receiving electrode RX are divided and arranged in the consecutive four columns A1, A2, A3, and A4 in the row direction. Accordingly, in each of the four columns, the two receiving electrodes may be disposed from top to bottom. The plurality of receiving electrodes having the consecutive numbers is disposed in each column. Herein, the arrangement order of the odd-numbered columns A1 and A3 and the arrangement order of the even-numbered columns A2 and A4 may be opposite to each other. For example, the receiving electrodes RX0and RX1having the consecutive numbers are sequentially arranged from top to bottom in the first column A1, the receiving electrodes RX2and RX3having the consecutive numbers are sequentially arranged from top to bottom in the second column A2, the receiving electrodes RX4and RX5having the consecutive numbers are sequentially arranged from bottom to top in the third column A3, and the receiving electrodes RX6and RX7having the consecutive numbers are sequentially arranged from bottom to top in the fourth column A4. Herein, although not illustrated in the drawing, the plurality of different receiving electrodes included in one set may not be sequentially arranged in the row or column direction, but may be arranged randomly. In the meantime, the touch sensor panel according to the third exemplary embodiment includes the plurality of driving electrodes TX, and for example, the plurality of driving electrodes TX may include a 0thdriving electrode TX0to a 15thdriving electrode TX15. Herein, each driving electrode may be disposed to satisfy the following arrangement condition. The plurality of driving electrodes TX may be arranged to satisfy the following conditions. 
1) With respect to one receiving electrode RX, four different driving electrodes are arranged at the left side, and four different driving electrodes are arranged at the right side. 2) With respect to each receiving electrode RX, two facing driving electrodes TX have the same number. 3) Three driving electrodes having the same number are consecutively arranged in the row direction. 4) Eight driving electrodes adjacent to the receiving electrode RX1in the even-numbered row are arranged to be symmetric to eight driving electrodes adjacent to the receiving electrode RX0in the odd-numbered row. 5) A length (horizontal length) of the driving electrodes TX arranged at both edges of each set and the driving electrodes arranged at the center of each set is a half the length (horizontal length) of the other driving electrodes. Referring toFIG.10B, when one conductive rod of 15 phi is in contact with the touch sensor panel in the state where the touch sensor panel of the third exemplary embodiment is in the floating state, it can be seen that the horizontal split does not appear, through a size of the output touch signal (the final amount of capacitance changed). Like the touch sensor panel according to the first exemplary embodiment, in the touch sensor panel of the third exemplary embodiment, since the two driving electrodes TX adjacent to both sides with respect to each receiving electrode RX have the same characteristic, the part where the LGM signal rapidly increases in the remap process is gone, so that it is expected that the horizontal split is improved. Fourth Exemplary Embodiment FIG.11Ais an enlarged view of a part of the arrangement structure of driving electrodes and receiving electrodes of a touch sensor panel according to a fourth exemplary embodiment, andFIG.11Bis an experimental data showing that horizontal split is improved when a conductive rod of 15 phi is in contact with the touch sensor panel in the state where the touch sensor panel illustrated inFIG.11Ais in a floating state. Referring toFIG.11A, the touch sensor panel according to the fourth exemplary embodiment includes the plurality of driving electrodes TX and the plurality of receiving electrodes RX. The plurality of driving electrodes TX and the plurality of receiving electrodes RX are arranged on the same layer in a matrix form. The plurality of driving electrodes TX and the plurality of receiving electrodes RX may be made of a transparent conductive material (for example, indium tin oxide (ITO) or antimony tin oxide (ATO) made of tin oxide (SnO2) and indium oxide (In2O3)). However, this is merely an example, and the driving electrode TX and the receiving electrode RX may also be formed of other transparent conductive materials or an opaque conductive material. For example, the driving electrode TX and the receiving electrode RX may include at least one of silver ink, copper, nano silver, and carbon nanotube (CNT). Further, the driving electrode TX and the receiving electrode RX may be implemented with a metal mesh. When the driving electrode TX and the receiving electrode RX are implemented with the metal mesh, the wires connected to the driving electrode TX and the receiving electrode RX may also be implemented with the metal mesh, and the driving electrode TX and the receiving electrode RX and the wires may also be integrally implemented with the metal mesh. 
When the driving electrode TX, the receiving electrode RX, and the wire are integrally implemented with the metal mesh, a dead zone, such as between the electrode and the wire and/or between the electrode and another electrode, in which a touch position cannot be detected, is reduced, so that sensitivity of detecting a touch position may be further improved. The touch sensor panel according to the fourth exemplary embodiment is arranged with respect to the plurality of receiving electrodes RX. Accordingly, hereinafter, the arrangement structure of the receiving electrodes RX disposed in plural in columns B1 to B8 will be first described, and then the arrangement structure of the plurality of driving electrodes TX will be described. The plurality of receiving electrodes RX is arranged in each of the plurality of columns B1, B2, B3, B4, B5, B6, B7, and B8. Herein, the plurality of driving electrodes TX is arranged in the plurality of columns A1, A2, A3, A4, A5, A6, A7, A8, and A9 formed between the plurality of columns B1, B2, B3, B4, B5, B6, B7, and B8 in which the receiving electrodes RX are arranged, at the external side of the first column B1, and at the external side of the eighth column B8. With respect to each receiving electrode RX of the plurality of receiving electrodes RX, the two driving electrodes TX adjacent to both sides have the same characteristic. That is, the two driving electrodes TX adjacent to both sides with respect to each receiving electrode RX have the same number. Herein, the meaning that the two driving electrodes TX are the same or that the numbers of the two driving electrodes TX are the same is that the two driving electrodes TX are electrically connected through wires. The touch sensor panel according to the fourth exemplary embodiment includes one or more sets in which the plurality of receiving electrodes RX and the plurality of driving electrodes TX are disposed in a predetermined arrangement. The plurality of sets is repeatedly arranged in the column direction, so that the touch sensor panel according to the fourth exemplary embodiment may be formed. However, the receiving electrodes RX of the even-numbered set are disposed to be symmetric to the receiving electrodes of the odd-numbered set. One set may include the plurality of different receiving electrodes RX, and for example, one set may include 16 receiving electrodes including a 0threceiving electrode RX0to a 15threceiving electrode RX15. The 16 receiving electrodes RX0, RX1, RX2, RX3, RX4, RX5, RX6, RX7, RX8, RX9, RX10, RX11, RX12, RX13, RX14, and RX15may be disposed in a predetermined arrangement. The 16 receiving electrodes including the 0threceiving electrode RX0to the 15threceiving electrode RX15are divided and arranged in two rows consecutive in the column direction. Accordingly, the eight receiving electrodes may be disposed in each of the two rows. The receiving electrodes numbered from 0 to 7 are arranged from left to right in the order of RX0, RX1, RX2, RX3, RX4, RX5, RX6, and RX7in a first row, and the receiving electrodes numbered from 8 to 15 are arranged from left to right in the order of RX15, RX14, RX13, RX12, RX11, RX10, RX9, and RX8in a second row. In the meantime, the touch sensor panel according to the fourth exemplary embodiment includes the plurality of driving electrodes TX, and for example, the plurality of driving electrodes TX may include a 0thdriving electrode TX0to a third driving electrode TX3. 
Herein, each driving electrode may be disposed to satisfy the following arrangement condition. The plurality of driving electrodes TX may be arranged to satisfy the following conditions. 1) One driving electrode is disposed at the left side and the right side with respect to two different receiving electrodes RX0and RX15consecutive in the column direction. 2) Two facing driving electrodes TX with respect to the two different receiving electrodes RX0and RX15consecutive in the column direction have the same number. 3) The driving electrodes TX arranged in the column direction have the different numbers, and the driving electrodes TX arranged in the row direction have the same number. 4) A length (horizontal length) of the driving electrodes arranged at both edges of each set is a half the length (horizontal length) of the other driving electrodes. Referring toFIG.11B, when one conductive rod of 15 phi is in contact with the touch sensor panel in the state where the touch sensor panel of the fourth exemplary embodiment is in the floating state, it can be seen that the horizontal split does not appear, through a size of the output touch signal (the final amount of capacitance changed). Like the touch sensor panel according to the first exemplary embodiment, in the touch sensor panel of the fourth exemplary embodiment, since the two driving electrodes TX adjacent to both sides with respect to each receiving electrode RX have the same characteristic, the part where the LGM signal rapidly increases in the remap process is gone, so that it is expected that the horizontal split is improved. Fifth Exemplary Embodiment FIG.12Ais an enlarged view of only a part of the arrangement structure of driving electrodes and receiving electrodes of a touch sensor panel according to a fifth exemplary embodiment, andFIG.12Bis an experimental data showing that horizontal split is improved when a conductive rod of 15 phi is in contact with the touch sensor panel in the state where the touch sensor panel illustrated inFIG.12Ais in a floating state. In the touch sensor panel according to the fifth exemplary embodiment illustrated inFIG.12A, unlike the touch sensor panel according to the fourth exemplary embodiment illustrated inFIG.11A, the arrangement of the receiving electrodes in the even-numbered set is the same as the arrangement of the receiving electrodes in the odd-numbered set. That is, the arrangements of the receiving electrodes of all of the sets are the same. Since the rest is the same as the touch sensor panel according to the fourth exemplary embodiment illustrated inFIG.11A, a detailed description of the rest will be omitted. Referring toFIG.12B, when one conductive rod of 15 phi is in contact with the touch sensor panel in the state where the touch sensor panel of the fifth exemplary embodiment is in the floating state, it can be seen that the horizontal split does not appear, through a size of the output touch signal (the final amount of capacitance changed). Sixth Exemplary Embodiment FIG.13Ais an enlarged view of only a part of the arrangement structure of driving electrodes and receiving electrodes of a touch sensor panel according to a sixth exemplary embodiment, andFIG.13Bis an experimental data showing that horizontal split is improved when a conductive rod of 15 phi is in contact with the touch sensor panel in the state where the touch sensor panel illustrated inFIG.13Ais in a floating state.
Referring toFIG.13A, the touch sensor panel according to the sixth exemplary embodiment includes the plurality of driving electrodes TX and the plurality of receiving electrodes RX. The plurality of driving electrodes TX and the plurality of receiving electrodes RX are arranged in a matrix form. The plurality of driving electrodes TX and the plurality of receiving electrodes RX may be made of a transparent conductive material (for example, indium tin oxide (ITO) or antimony tin oxide (ATO) made of tin oxide (SnO2) and indium oxide (In2O3)). However, this is merely an example, and the driving electrode TX and the receiving electrode RX may also be formed of other transparent conductive materials or an opaque conductive material. For example, the driving electrode TX and the receiving electrode RX may include at least one of silver ink, copper, nano silver, and carbon nanotube (CNT). Further, the driving electrode TX and the receiving electrode RX may be implemented with a metal mesh. When the driving electrode TX and the receiving electrode RX are implemented with the metal mesh, the wires connected to the driving electrode TX and the receiving electrode RX may also be implemented with the metal mesh, and the driving electrode TX and the receiving electrode RX and the wires may also be integrally implemented with the metal mesh. When the driving electrode TX, the receiving electrode RX, and the wire are integrally implemented with the metal mesh, a dead zone, such as between the electrode and the wire and/or between the electrode and another electrode, in which a touch position cannot be detected, is reduced, so that sensitivity of detecting a touch position may be further improved. The touch sensor panel according to the sixth exemplary embodiment is arranged with respect to the plurality of driving electrodes TX. Accordingly, hereinafter, the arrangement structure of the driving electrodes TX disposed in plural in columns B1 to B16 will be first described, and then the arrangement structure of the plurality of receiving electrodes RX will be described. The plurality of driving electrodes TX is arranged in each of the plurality of columns B1, B2, B3, B4, B5, B6, B7, B8, B9, B10, B11, B12, B13, B14, B15, and B16. Herein, the plurality of receiving electrodes RX is arranged in the plurality of columns A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16 formed between the plurality of columns B1, B2, B3, B4, B5, B6, B7, B8, B9, B10, B11, B12, B13, B14, B15, and B16, in which the driving electrodes TX are arranged, at the external side of the first column B1, and at the external side of the 16thcolumn B16. With respect to each driving electrode TX of the plurality of driving electrodes TX, the two receiving electrodes RX adjacent to both sides have the different characteristic. That is, the two receiving electrodes RX adjacent to both sides with respect to each driving electrode TX have the different number. Herein, the meaning that the two receiving electrodes RX are different or the two receiving electrodes RX have different numbers is that the receiving electrodes are not electrically connected through wires. 
The plurality of driving electrodes TX includes a first set set 1 in which 32 driving electrodes including the 0thdriving electrode TX0to the 31stdriving electrode TX31are disposed in a first arrangement, and a second set set 2 in which the 32 driving electrodes including the 0thdriving electrode TX to the 31stdriving electrode TX31are disposed in a second arrangement. The first set set 1 may be provided with two consecutively in the row direction and two in the column direction, and the first set set 1 located in the even-numbered row may be symmetric to the first set set 1 located in the odd-numbered row. The second set set 2 may be provided with two consecutively in the row direction and two in the column direction, and the second set set 2 located in the even-numbered row may be symmetric to the second set set 2 located in the odd-numbered row. Further, the plurality of second sets may be disposed at one side of the plurality of first sets. In the first arrangement of the first set set 1, the 32 driving electrodes including the 0thdriving electrode TX0to the 31stdriving electrode TX31are divided and arranged in four columns consecutively in the row direction, and in the first column, the driving electrodes numbered from 0 to 7 are arranged from top to bottom in the order of TX0, TX1, TX2, TX3, TX4, TX5, TX6, and TX7, in the second column, the driving electrodes numbered from 8 to 15 are arranged from top to bottom in the order of TX15, TX14, TX13, TX12, TX11, TX10, TX9, and TX8, in the third column, the driving electrodes numbered from 16 to 23 are arranged from top to bottom in the order of TX16, TX17, TX18, TX19, TX20, TX21, TX22, and TX23, and in the fourth column, the driving electrodes numbered from 24 to 31 are arranged from top to bottom in the order of TX31, TX30, TX29, TX28, TX27, TX26, TX25, and TX24. In the second arrangement of the second set set 2, the 32 driving electrodes including the 0thdriving electrode TX0to the 31stdriving electrode TX31are divided and arranged in four columns consecutively in the row direction, and in the first column, the driving electrodes numbered from 16 to 23 are arranged from top to bottom in the order of TX16, TX17, TX18, TX19, TX20, TX21, TX22, and TX23, in the second column, the driving electrodes numbered from 24 to 31 are arranged from top to bottom in the order of TX31, TX30, TX29, TX28, TX27, TX26, TX25, and TX24, in the third column, the driving electrodes numbered from 0 to 7 are arranged from top to bottom in the order of TX0, TX1, TX2, TX3, TX4, TX5, TX6, and TX7, and in the fourth column, the driving electrodes numbered from 8 to 15 are arranged from top to bottom in the order of TX15, TX14, TX13, TX12, TX11, TX10, TX9, and TX8. In the meantime, the touch sensor panel according to the sixth exemplary embodiment includes the plurality of receiving electrodes RX, and for example, the plurality of receiving electrodes RX may include a 0threceiving electrode RX0to a 15threceiving electrode RX15. Herein, each receiving electrode may be disposed so as to satisfy the following arrangement condition. The plurality of receiving electrodes RX are disposed so as to satisfy the following arrangement condition. 1) With respect to the eight different driving electrodes TX consecutive in the column direction, One receiving electrode is disposed at the left side and one receiving electrode is disposed at the right side. 
2) With respect to the eight different driving electrodes TX consecutive in the column direction, two facing receiving electrodes RX have different numbers. 3) Two different receiving electrodes RX are arranged in the column direction, and eight different receiving electrodes RX are repeatedly arranged in the row direction. 4) A length (horizontal length) of the receiving electrodes arranged at both edges in the column direction may be a half the length (horizontal length) of the other receiving electrodes as illustrated inFIG.13B, but is not limited thereto, and a length (horizontal length) of the receiving electrodes arranged at both edges in the column direction may be the same as the length (horizontal length) of the other receiving electrodes as illustrated inFIG.13A. Referring toFIG.13B, when one conductive rod of 15 phi is in contact with the touch sensor panel in the state where the touch sensor panel of the sixth exemplary embodiment is in the floating state, it can be seen that the horizontal split does not appear, through a size of the output touch signal (the final amount of capacitance changed). In the touch sensor panel of the sixth exemplary embodiment, since the two receiving electrodes RX adjacent to both sides with respect to each driving electrode TX are different from each other, the specific part where the LGM signal rapidly increases in the remap process is gone, so that it is expected that the horizontal split is improved. Seventh Exemplary Embodiment FIG.14Ais an enlarged view of a part of the arrangement structure of driving electrodes and receiving electrodes of a touch sensor panel according to a seventh exemplary embodiment, andFIG.14Bis an experimental data showing that horizontal split is improved when a conductive rod of 15 phi is in contact with the touch sensor panel in the state where the touch sensor panel illustrated inFIG.14Ais in a floating state. Referring toFIG.14A, the touch sensor panel according to the seventh exemplary embodiment includes the plurality of driving electrodes TX and the plurality of receiving electrodes RX. The plurality of driving electrodes TX and the plurality of receiving electrodes RX are arranged on the same layer in a matrix form. The plurality of driving electrodes TX and the plurality of receiving electrodes RX may be made of a transparent conductive material (for example, indium tin oxide (ITO) or antimony tin oxide (ATO) made of tin oxide (SnO2) and indium oxide (In2O3)). However, this is merely an example, and the driving electrode TX and the receiving electrode RX may also be formed of other transparent conductive materials or an opaque conductive material. For example, the driving electrode TX and the receiving electrode RX may include at least one of silver ink, copper, nano silver, and carbon nanotube (CNT). Further, the driving electrode TX and the receiving electrode RX may be implemented with a metal mesh. When the driving electrode TX and the receiving electrode RX are implemented with the metal mesh, the wires connected to the driving electrode TX and the receiving electrode RX may also be implemented with the metal mesh, and the driving electrode TX and the receiving electrode RX and the wires may also be integrally implemented with the metal mesh.
When the driving electrode TX, the receiving electrode RX, and the wire are integrally implemented with the metal mesh, a dead zone, such as between the electrode and the wire and/or between the electrode and another electrode, in which a touch position cannot be detected, is reduced, so that sensitivity of detecting a touch position may be further improved. The touch sensor panel according to the seventh exemplary embodiment is arranged with respect to the plurality of driving electrodes TX. Accordingly, hereinafter, the arrangement structure of the driving electrodes TX disposed in plural in columns B1 to B16 will be first described, and then the arrangement structure of the plurality of receiving electrodes RX will be described. The plurality of driving electrodes TX is arranged in each of the plurality of columns B1, B2, B3, B4, B5, B6, B7, B8, B9, B10, B11, B12, B13, B14, B15, and B16. Herein, the plurality of receiving electrodes RX is arranged in the plurality of columns A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16 formed between the plurality of columns B1, B2, B3, B4, B5, B6, B7, B8, B9, B10, B11, B12, B13, B14, B15, and B16, in which the driving electrodes TX are arranged, at the external side of the first column B1, and at the external side of the 16thcolumn B16. With respect to each driving electrode TX of the plurality of driving electrodes TX, the two receiving electrodes RX adjacent to both sides have the different characteristic. That is, the two receiving electrodes RX adjacent to both sides with respect to each driving electrode TX have the different number. Herein, the meaning that the two receiving electrodes RX are different or the two receiving electrodes RX have different numbers is that the receiving electrodes are not electrically connected through wires. The plurality of driving electrodes TX includes a set in which 32 driving electrodes including a 0thdriving electrode TX0to a 31stdriving electrode TX31are disposed in a first arrangement. Herein, the set may be repeatedly arranged in plural in the row direction and the column direction. The set located in the even-numbered row may be symmetric to the set located in the odd-numbered row. In the first arrangement of the first set set 1, 32 driving electrodes including a 0thdriving electrode TX0to a 31stdriving electrode TX31are arranged in four columns consecutively in the row direction, and in the first column, the driving electrodes numbered from 0 to 7 are arranged from top to bottom in the order of TX0, TX1, TX2, TX3, TX4, TX5, TX6, and TX7, in the second column, the driving electrodes numbered from 8 to 15 are arranged from top to bottom in the order of TX15, TX14, TX13, TX12, TX11, TX10, TX9, and TX8, in the third column, the driving electrodes numbered from 16 to 23 are arranged from top to bottom in the order of TX16, TX17, TX18, TX19, TX20, TX21, TX22, and TX23, and in the fourth column, the driving electrodes numbered from 24 to 31 are arranged from top to bottom in the order of TX31, TX30, TX29, TX28, TX27, TX26, TX25, and TX24. In the meantime, the touch sensor panel according to the seventh exemplary embodiment includes the plurality of receiving electrodes RX, and for example, the plurality of receiving electrodes may include a 0threceiving electrode RX0to a 31streceiving electrode RX31. Herein, each receiving electrode may be disposed so as to satisfy the following arrangement condition. 
The plurality of receiving electrodes RX is disposed so as to satisfy the following arrangement condition. 1) One receiving electrode is disposed at the left side and one receiving electrode is disposed at the right side with respect to the eight different driving electrodes TX consecutive in the column direction. 2) Two facing receiving electrodes RX have different numbers with respect to the eight different driving electrodes TX consecutive in the column direction. 3) Two different receiving electrodes are arranged in the column direction, and 16 different receiving electrodes are repeatedly arranged in the row direction. 4) A length (horizontal length) of the receiving electrodes arranged at both edges in the column direction may be half the length (horizontal length) of the other receiving electrodes as illustrated in FIG. 14B, but is not limited thereto, and the length (horizontal length) of the receiving electrodes arranged at both edges in the column direction may be the same as the length (horizontal length) of the other receiving electrodes as illustrated in FIG. 14A.

Referring to FIG. 14B, when one conductive rod of 15 phi is in contact with the touch sensor panel in the state where the touch sensor panel of the seventh exemplary embodiment is in the floating state, it can be seen from the magnitude of the output touch signal (the final amount of change in capacitance) that the horizontal split does not appear. Like the touch sensor panel of the sixth exemplary embodiment, in the touch sensor panel of the seventh exemplary embodiment the two receiving electrodes RX adjacent to both sides of each driving electrode TX are different from each other, so the specific portion where the LGM signal rapidly increases in the remap process is eliminated and the horizontal split is expected to be improved. Further, the number of receiving electrodes RX of the touch sensor panel of the seventh exemplary embodiment is larger than that of the touch sensor panel of the sixth exemplary embodiment. Since the number of different reception channels is larger, the touch sensor panel may be less affected by the LGM, so that the horizontal split phenomenon may be further improved.

As illustrated in FIGS. 8A to 14B, the touch sensor panels according to the first to seventh exemplary embodiments have an advantage in that the horizontal split phenomenon is improved. This is because, in the touch sensor panels illustrated in FIGS. 8A to 14B, the two driving electrodes TX adjacent to both sides of each receiving electrode RX are the same, or the two receiving electrodes RX adjacent to both sides of each driving electrode TX are different from each other. Herein, it should be noted that in the touch sensor panels illustrated in FIGS. 8A to 14B the driving electrodes and the receiving electrodes may be configured in reverse. Accordingly, there is an advantage in that the horizontal split phenomenon is improved even in the case where the two driving electrodes TX adjacent to both sides of each receiving electrode RX are different from each other or the two receiving electrodes RX adjacent to both sides of each driving electrode TX are the same.

In the meantime, in the touch sensor panel in the related art illustrated in FIG. 2, a "vertical split phenomenon" may occur in the floating state. The vertical split phenomenon will be described in detail with reference to FIGS. 15A-B.
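Before turning to FIGS. 15A-B, the serpentine column ordering of TX0 to TX31 shared by the first arrangements of the sixth and seventh exemplary embodiments can be summarized in a short sketch. The code below is illustrative only and is not part of the embodiments; the function names and the list representation are assumptions made for the example.

```python
def first_arrangement():
    """Return the first arrangement of TX0-TX31 as four columns (top to bottom).

    The 1st and 3rd columns run top-to-bottom in ascending order, the 2nd and
    4th columns in descending order, matching the ordering recited above.
    """
    cols = []
    for start in (0, 8, 16, 24):
        block = list(range(start, start + 8))   # e.g. 0..7
        if (start // 8) % 2 == 1:               # 2nd and 4th columns
            block = block[::-1]                 # e.g. 15..8
        cols.append([f"TX{n}" for n in block])
    return cols


def second_arrangement():
    """Second arrangement (sixth embodiment): the 16-23/24-31 column pair
    precedes the 0-7/8-15 column pair."""
    cols = first_arrangement()
    return cols[2:] + cols[:2]


if __name__ == "__main__":
    for name, cols in (("set 1", first_arrangement()), ("set 2", second_arrangement())):
        print(name)
        for i, col in enumerate(cols, 1):
            print(f"  column {i}: {', '.join(col)}")
```

Running the sketch prints the four column orderings described above for set 1 and set 2, with the second arrangement obtained simply by swapping the two column pairs of the first.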
FIGS.15A-Bare diagrams illustrating a vertical split phenomenon of an output touch signal when a part of the touch sensor panel is touched with a finger in the state where the device including the touch sensor panel illustrated inFIG.2is floated. FIG.15Ais an enlarged diagram of a part of the touch sensor panel illustrated inFIG.2, and illustrates the case where an object (conductive rod) touches a specific part t (touch position), andFIG.15Bis a diagram illustrating an actual output value of a touch signal output after the remap process in the situation illustrated inFIG.15A. In the case where a touch of the object is input to the touch position t illustrated inFIG.15A, a vertical split may occur like a portion “d” ofFIG.15B. The reason why the vertical split occurs is that the sum of the number of normal touch signals and the number of LGMs at the corresponding touch position t is larger than the sum of the number of normal touch signals and the number of LGMs at a previous touch position t′. For example, when the driving signal is applied from the eighth driving electrode TX8at the previous touch position t′, the normal touch signal is one capacitance change value (1diff) between the eighth driving electrode TX8and the first receiving electrode RX1adjacent to the left side and one capacitance change value (1diff) between the eighth driving electrode TX8and the first receiving electrode RX1adjacent to the right side, and a total of two normal touch signals (2diff) are output. In the meantime, the LGM signal is output from each of the four first receiving electrodes RX1, so that the number of LGM signals is four (−4diff). The LGM jamming signal has a sign opposite to that of the normal touch signal, so that the number of final touch signals is 2+(−4), which is −2 (2diff). In the meantime, when the driving signal is applied from the eighth driving electrode TX8at the corresponding touch position t, the normal touch signal is one mutual capacitance change value (1diff) between the eighth driving electrode TX8and the first receiving electrode RX1adjacent to the left, one mutual capacitance change value (1diff) between the eighth driving electrode TX8and the first receiving electrode RX1adjacent to the right, one mutual capacitance change value (1diff) between the eighth driving electrode TX8and the second receiving electrode RX2adjacent to the left, and one mutual capacitance change value (1diff) between the eighth driving electrode TX8and the second receiving electrode RX2adjacent to the right, so that a total of four normal touch signals (4diff) is output. In the meantime, the LGM signal is output from each of the four first receiving electrodes RX1and each of the four second receiving electrodes RX2, so that the number of LGM signals is eight (−8diff). The LGM jamming signal has a sign opposite to that of the normal touch signal, so that the number of final touch signals is 4+(−8), which is −4 (4diff). As described above, compared to the previous touch position t′, the LGM signal component rapidly increases in the final touch signal output at the corresponding touch position t, so that the touch signal output after the remap process rapidly decreases as indicated with d inFIG.15B. Due to the foregoing phenomenon, the vertical split may occur. 
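The signal accounting just described for FIG. 15A can be reproduced with simple arithmetic. The sketch below only re-computes the counts given in the text (each normal touch signal contributes +1diff, each LGM signal contributes -1diff); the function and variable names are illustrative and not part of the disclosure.

```python
def final_touch_signal(normal_signals: int, lgm_signals: int) -> int:
    """Net touch signal in 'diff' units: normal contributions are positive,
    LGM jamming contributions have the opposite sign."""
    return normal_signals - lgm_signals


# Previous touch position t': two normal signals (RX1 on the left and right)
# and four LGM signals (one from each of the four RX1 cells).
print(final_touch_signal(2, 4))   # -> -2

# Corresponding touch position t: four normal signals (RX1 and RX2 on both
# sides) and eight LGM signals (four RX1 cells plus four RX2 cells).
print(final_touch_signal(4, 8))   # -> -4
```

The larger negative total at the touch position t, compared with the previous position t', is what appears as the dip d in FIG. 15B.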
Further, another reason why the vertical split occurs may also be that the number of same driving electrodes or/and the number of same receiving electrodes included in the corresponding touch position t is larger than the number of driving electrodes or/and the number of same receiving electrodes included in the previous touch position t′. Herein, an area of the driving electrode or an area of the receiving electrode that is in contact with the corresponding touch position t may also be further considered. Among the touch sensor panels of the present invention illustrated inFIGS.8A to14B, in the touch sensor panel according to the first exemplary embodiment illustrated inFIGS.8A and8B, the vertical split occurs at the corresponding touch position t as illustrated inFIG.16. An actual output value of the touch signal output in the portion in which the vertical split occurs is smaller than a reference value (for example, 65) based on which whether a touch is input is determined, so that it is somewhat difficult to overcome the occurring vertical split phenomenon even with software. In order to solve the vertical split phenomenon in the touch sensor panel according to the first exemplary embodiment illustrated inFIGS.8A and8B, as illustrated inFIG.17, in the touch sensor panel according to the exemplary embodiment of the present invention, when a predetermined touch window area w is placed at a certain position on the touch sensor panel, it is desirable to arrange the driving electrodes (or receiving electrodes) so that the same driving electrodes (or receiving electrodes) are not consecutive in the column direction in the touch window area w. Herein, the touch window area w may mean an area covering the first number of first electrodes consecutive in a first direction among the plurality of first electrodes and the second number of second electrodes consecutive in a second direction with respect to each of the first number of first electrodes. Herein, the first electrode may be any one of the driving electrode and the receiving electrode, and the second electrode may be the remaining one. Further, the touch window area w may mean an area covering the first electrodes included in a first length among the plurality of first electrodes and the second electrodes included in a second length among the plurality of second electrodes. Herein, the first length and the second length may be the same or may also be different from each other. For example, inFIG.17, the touch window area w has a circular shape, and may be 15 phi. Referring toFIG.17, even if the touch window area w is moved in any direction on the touch sensor panel, the same driving electrodes are not consecutive in the column direction within the corresponding touch window area w. 
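The condition that the same driving electrodes are not consecutive in the column direction within any placement of the touch window area w can be expressed as a mechanical check over a grid of electrode labels. The sketch below is illustrative only and simplifies the window to a rectangle measured in whole electrode cells rather than a 15 phi circle; the grid contents and function names are hypothetical.

```python
from typing import List


def window_ok(grid: List[List[str]], win_rows: int, win_cols: int) -> bool:
    """Return True if, for every placement of a win_rows x win_cols window on
    the grid of driving-electrode labels, no label is repeated in consecutive
    cells of the same column inside the window."""
    n_rows, n_cols = len(grid), len(grid[0])
    for top in range(n_rows - win_rows + 1):
        for left in range(n_cols - win_cols + 1):
            for c in range(left, left + win_cols):
                for r in range(top, top + win_rows - 1):
                    if grid[r][c] == grid[r + 1][c]:
                        return False
    return True


# Hypothetical 4x4 excerpt of a layout: no label repeats in consecutive cells
# of any column, so every window placement passes the check.
layout = [
    ["TX0", "TX3", "TX4", "TX7"],
    ["TX1", "TX2", "TX5", "TX6"],
    ["TX0", "TX3", "TX4", "TX7"],
    ["TX1", "TX2", "TX5", "TX6"],
]
print(window_ok(layout, win_rows=2, win_cols=2))  # True
```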
More particularly, in order to improve the vertical split, as illustrated inFIG.17, in the touch sensor panel including the plurality of first electrodes RX0, RX1, RX2, RX3, RX4, RX5, RX6, and RX7arranged in plural in a first direction (or the row direction) and a second direction (or the column direction) and the plurality of second electrodes TX0, TX3, TX4, and TX7arranged in plural in the first direction (or the row direction) and the second direction (or the column direction) on the same layer, the plurality of first electrodes includes at least a first-a electrode RX0and a first-b electrode RX1arranged in the second direction, the first-a electrode RX0and the first-b electrode RX1are connected to independent wires, respectively, and are electrically separated from each other, the plurality of second electrodes includes the plurality of different second-a electrodes TX0, TX3, TX4, and TX7arranged while being adjacent to the first-a electrode RX0and the plurality of different second-b electrodes TX0, TX3, TX4, TX7arranged while being adjacent to the first-b electrode RX1, the second-a electrodes TX0, TX3, TX4, and TX7are electrically connected with the second-b electrodes TX0, TX3, TX4, TX7through wires so as to correspond one to one to the second-b electrodes TX0, TX3, TX4, TX7, mutual capacitance is generated between each of the second-a electrodes TX0, TX3, TX4, and TX4and the first-a electrode RX0, the mutual capacitance is generated between each of the second-b electrodes TX0, TX3, TX4, TX7and the first-b electrode RX1, the touch window area w is configured to cover the first number of first electrodes RX0, RX3, RX4, and RX7consecutive in the first direction among the plurality of first electrodes on the touch sensor panel and the second number of second electrodes TX0, TX3, TX4, and TX7consecutive in the second direction with respect to each of the first number of first electrodes RX0, RX3, RX4, and RX7, and it is desirable that the plurality of second electrodes is arranged to satisfy a condition that the same second electrodes are not consecutively disposed in the second direction in the touch window area w. 
More particularly, in order to improve the vertical split, as illustrated in FIG. 17, in the touch sensor panel including the plurality of first electrodes RX0, RX1, RX2, RX3, RX4, RX5, RX6, and RX7 arranged in plural in a first direction (or the row direction) and a second direction (or the column direction) and the plurality of second electrodes TX0, TX3, TX4, and TX7 arranged in plural in the first direction (or the row direction) and the second direction (or the column direction) on the same layer, the plurality of first electrodes includes at least a first-a electrode RX0 and a first-b electrode RX1 arranged in the second direction, and the first-a electrode RX0 and the first-b electrode RX1 are connected to independent wires, respectively, and are electrically separated from each other. The plurality of second electrodes includes the plurality of different second-a electrodes TX0, TX3, TX4, and TX7 arranged adjacent to the first-a electrode RX0 and the plurality of different second-b electrodes TX0, TX3, TX4, and TX7 arranged adjacent to the first-b electrode RX1, and the second-a electrodes TX0, TX3, TX4, and TX7 are electrically connected with the second-b electrodes TX0, TX3, TX4, and TX7 through wires so as to correspond one to one to the second-b electrodes TX0, TX3, TX4, and TX7. Mutual capacitance is generated between each of the second-a electrodes TX0, TX3, TX4, and TX7 and the first-a electrode RX0, and mutual capacitance is generated between each of the second-b electrodes TX0, TX3, TX4, and TX7 and the first-b electrode RX1. The touch window area w is configured to cover the first electrodes RX0, RX3, RX4, and RX7 included in the first length among the plurality of first electrodes on the touch sensor panel and the second electrodes TX0, TX3, TX4, and TX7 included in the second length among the plurality of second electrodes, and it is desirable that the plurality of second electrodes is arranged to satisfy a condition that the same second electrodes are not consecutively disposed in the direction of the second length in the touch window area w.

FIG. 18 is a diagram illustrating modified examples of the touch window area w illustrated in FIG. 17. As illustrated in the left drawing of FIG. 18, a touch window area w1 may have a quadrangular shape. For example, the touch window area w1 may have a square shape. In this case, a length of one side may have a range of 15 mm to 20 mm. In the meantime, as illustrated in the right drawing of FIG. 18, a touch window area w2 may have an elliptical shape. A long axis may have a length of 15 mm to 24 mm, and a short axis may have a length of 15 mm to 20 mm.

In the meantime, in the touch sensor panel according to the fourth exemplary embodiment illustrated in FIGS. 11A and 11B, the vertical split occurs at the corresponding touch position t as illustrated in FIG. 19. Further, an actual output value of the touch signal output in the portion in which the vertical split occurs is smaller than the reference value (for example, 65) based on which whether a touch is input is determined, so that it is difficult to overcome the occurring vertical split phenomenon with software.
In order to release or prevent the vertical split phenomenon of the touch sensor panel according to the fourth exemplary embodiment illustrated in FIGS. 11A and 11B, the touch sensor panel may have a disposition in which the same receiving electrodes are not consecutive in the column direction within the touch window area w, like the touch sensor panel according to the fifth exemplary embodiment illustrated in FIGS. 12A and 12B. However, in the touch sensor panel according to the fifth exemplary embodiment illustrated in FIGS. 12A and 12B, the vertical split still occurs at the corresponding touch position t, as illustrated in FIG. 20.

The reason why the vertical split phenomenon occurs in the touch sensor panel according to the fifth exemplary embodiment illustrated in FIGS. 12A and 12B, like the touch sensor panel according to the fourth exemplary embodiment illustrated in FIGS. 11A and 11B, is that the number of receiving electrodes RX adjacent to one driving electrode TX is only two, which is small compared to the case of FIG. 17 (in which four driving electrodes TX are disposed adjacent to one receiving electrode RX). In particular, as the size of the touch window area increases, the vertical split phenomenon becomes more severe. Accordingly, in order to release or prevent the vertical split phenomenon, it is desirable to add a condition that the number of other receiving electrodes (or driving electrodes) adjacent to one driving electrode (or receiving electrode) is at least two or more, in addition to the condition that the same receiving electrodes are not consecutive in the column direction within the touch window area w even when the touch window area w is moved.

In the meantime, the vertical split phenomenon also occurs in the touch sensor panel according to the second exemplary embodiment illustrated in FIGS. 9A and 9B, the touch sensor panel according to the third exemplary embodiment illustrated in FIGS. 10A and 10B, the touch sensor panel according to the sixth exemplary embodiment illustrated in FIGS. 13A and 13B, and the touch sensor panel according to the seventh exemplary embodiment illustrated in FIGS. 14A and 14B, but the actual output value of the touch signal output after the remap process is equal to or larger than the reference value based on which whether the touch is input is determined, so that it is possible to overcome the vertical split phenomenon with software (SW). As a matter of course, even in the case of the touch sensor panels according to the second, third, sixth, and seventh exemplary embodiments illustrated in FIGS. 9A and 9B, 10A and 10B, 13A and 13B, and 14A and 14B, when the same driving electrodes are not consecutively disposed in the column direction within the touch window area w, the vertical split phenomenon is expected to be further released or prevented.
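The additional condition proposed above (at least two different receiving electrodes, or driving electrodes, adjacent to each electrode of the other kind) can be checked in the same mechanical way. A minimal sketch, assuming the adjacency of each electrode has already been extracted from the layout; the example data and names are hypothetical.

```python
from typing import Dict, Set


def adjacency_condition_met(neighbours: Dict[str, Set[str]], minimum: int = 2) -> bool:
    """True if every electrode is adjacent to at least `minimum` different
    electrodes of the other kind, the extra condition suggested for releasing
    or preventing the vertical split."""
    return all(len(others) >= minimum for others in neighbours.values())


# Hypothetical adjacency extracted from a layout: each TX faces two different
# RX, and RX0 faces four different TX, so the condition holds.
neighbours = {
    "TX0": {"RX0", "RX3"},
    "TX3": {"RX0", "RX3"},
    "RX0": {"TX0", "TX3", "TX4", "TX7"},
}
print(adjacency_condition_met(neighbours))  # True
```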
In the meantime, in the touch sensor panel according to the second exemplary embodiment illustrated in FIGS. 9A and 9B, the touch sensor panel according to the third exemplary embodiment illustrated in FIGS. 10A and 10B, the touch sensor panel according to the sixth exemplary embodiment illustrated in FIGS. 13A and 13B, and the touch sensor panel according to the seventh exemplary embodiment illustrated in FIGS. 14A and 14B, in the floating state, the actual output value of the touch signal output after the remap process is equal to or larger than the reference value based on which whether the touch is input is determined, and this is due to the LGM jamming signal being reduced compared to the touch sensor panel in the related art, the touch sensor panel according to the first exemplary embodiment, and the touch sensor panel according to the fourth exemplary embodiment. Hereinafter, this will be described in detail with reference to FIGS. 21A-F.

FIG. 21A is a portion covered by a touch window area having a predetermined size in the touch sensor panel in the related art illustrated in FIG. 2, FIG. 21B is a portion covered by the touch window area having the predetermined size in the touch sensor panel according to the first exemplary embodiment illustrated in FIG. 8A, FIG. 21C is a portion covered by the touch window area having the predetermined size in the touch sensor panel according to the fourth exemplary embodiment illustrated in FIG. 11A, FIG. 21D is a portion covered by the touch window area having the predetermined size in the touch sensor panel according to the second exemplary embodiment illustrated in FIG. 9A, FIG. 21E is a portion covered by the touch window area having the predetermined size in the touch sensor panel according to the third exemplary embodiment illustrated in FIG. 10A, and FIG. 21F shows portions covered by the touch window areas having the predetermined size in the touch sensor panels according to the sixth and seventh exemplary embodiments illustrated in FIGS. 13A and 14A.

In FIGS. 21A-F, the touch window area having the predetermined size may be defined as an area larger than the touch area of the other fingers, such as the touch area of a thumb. In particular, the predetermined size (or area) of the touch window area may be implemented to be about 15 mm×15 mm or more and about 20 mm×20 mm or less, but preferably may be implemented with a size of about 16 mm×16 mm.

In FIGS. 21A-F, when the result value obtained by multiplying the number of unit cells configuring the same driving electrodes TX by the number of unit cells configuring the same receiving electrodes RX disposed in the touch window area is minimized, it is possible to reduce the effect of the LGM jamming signal. Herein, the area of one unit cell may be defined as 4 mm×2 mm. In FIG. 21A, the number of unit cells configuring the same driving electrodes TX disposed in the touch window area is 1 and the number of unit cells configuring the same receiving electrodes RX is 16, so that the result value of multiplying the two numbers is 16. In FIG. 21B, the number of unit cells configuring the same driving electrodes TX disposed in the touch window area is 4 and the number of unit cells configuring the same receiving electrodes RX is 4, so that the result value of multiplying the two numbers is 16.
In FIG. 21C, the number of unit cells configuring the same driving electrodes TX disposed in the touch window area is 8 and the number of unit cells configuring the same receiving electrodes RX is 2, so that the result value of multiplying the two numbers is 16. In the meantime, in FIG. 21D, the number of unit cells configuring the same driving electrodes TX disposed in the touch window area is 2 and the number of unit cells configuring the same receiving electrodes RX is 4, so that the result value of multiplying the two numbers is 8. In FIG. 21E, the number of unit cells configuring the same driving electrodes TX disposed in the touch window area is 2 and the number of unit cells configuring the same receiving electrodes RX is 4, so that the result value of multiplying the two numbers is 8. In FIG. 21F, the number of unit cells configuring the same driving electrodes TX disposed in the touch window area is 1 and the number of unit cells configuring the same receiving electrodes RX is 4, so that the result value of multiplying the two numbers is 4. Herein, in the case of FIG. 21F, the two receiving electrodes RX0 and RX3 located at both sides of one driving electrode TX0 are different from each other, and four LGM signals are included in each of the receiving electrodes RX0 and RX3, so that the final result value after the remap process is 8.

In the cases of FIGS. 21A-C, the result value of the multiplication of the number of unit cells configuring the same driving electrodes TX and the number of unit cells configuring the same receiving electrodes RX disposed in the touch window area is 16, but in the cases of FIGS. 21D-F, the corresponding result value is 8, a reduction by half. When the result value is decreased in this way, the size of the LGM signal is also decreased by half. As a result, the effect of the LGM jamming signal is reduced by decreasing the number of same driving electrodes and/or same receiving electrodes included in the touch window area and, at the same time, keeping the result value of the multiplication of the number of unit cells configuring the same driving electrodes TX and the number of unit cells configuring the same receiving electrodes RX disposed in the touch window area below 16 (a predetermined value). However, the predetermined value (16) is merely an example of the present invention, the scope of the present invention is not limited thereto, and the predetermined value may be defined with various numerical values.

The characteristics, structures, effects, and the like described in the exemplary embodiments above are included in at least one exemplary embodiment of the present invention and are not necessarily limited to only one exemplary embodiment. Further, the characteristics, structures, effects, and the like described in each exemplary embodiment may be carried out in other exemplary embodiments through combination or modification by those skilled in the art to which the exemplary embodiments pertain. Accordingly, it shall be construed that contents relating to such combination and modification are included in the scope of the present invention.
In addition, although the exemplary embodiments have been described above, these are only examples, and do not limit the present invention, and those skilled in the art will know that various modifications and applications which are not exemplified above are possible within the scope without departing from the essential characteristics of the present exemplary embodiment. For example, each component specifically presented in the exemplary embodiment may be modified and implemented. Further, it should be interpreted that the differences in relation to the modification and the application are included in the scope of the present invention defined in the accompanying claims. | 97,159 |
11861113 | DESCRIPTION OF EMBODIMENTS A contactless touchscreen interface 100 may comprise an application computer 111 displaying information in a user interface 118 on a digital display 119. The digital display 119 may interface with the application computer 111 via HDMI or a similar video interface. The application computer 111 may be for various applications, including ATMs, medical equipment, point-of-sale systems, airline check-in interfaces, and the like.

The interface 100 may comprise an interface controller 101 which interfaces with a proximity detector 115 overlaid on the digital display 119. The interface controller 101 comprises a processor 110 for processing digital data. In operable communication with the processor 110 across a system bus 109 is a memory device 108. The memory device 108 is configured for storing digital data including computer program code instructions which may be logically divided into various computer program code controllers 107 and associated data 103. In use, the processor 110 fetches these computer program code instructions from the memory device 108 for interpretation and execution for the implementation of the functionality described herein. The controllers 107 may comprise an image processing controller 106, a parallax adjustment controller 105, and a human input device (HID) controller 104. The interface controller 101 may comprise an I/O interface 124 for interfacing with the proximity detector 115.

With reference to FIG. 2, the proximity detector 115 detects user interaction at a virtual touch intersection plane 122. The virtual touch intersection plane 122 is set at a distance d from the digital display 119. In embodiments, the distance d may be controlled according to an offset setting 102 within the data 103 of the interface controller 101.

In one form, the proximity detector 115 may take the form of a screen bezel which has light beam interrupt sensors casting an orthogonal arrangement of horizontal and vertical parallel light beams thereacross. The light beams may be infrared. Interruption of an orthogonal pair of light beams is detected by the sensors to determine XY offset-plane interaction coordinates 120 at the XY virtual touch intersection plane 122. If more than two parallel light beams are interrupted, the proximity detector 115 may determine the centre thereof.

In another form, the proximity detector 115 takes the form of a transparent capacitive sensitive overlay configured to detect capacitive coupling of a user's hand or finger when near the capacitive sensitive overlay. The capacitive sensitive overlay may comprise a matrix of transparent conductive plates, each of which acts as a capacitive plate to detect capacitive coupling with a user's hand or finger. The XY offset-plane interaction coordinates 120 may be determined by a region of capacitive plates having the greatest capacitive coupling. The capacitive plates may be coupled to an operational amplifier wherein the gain thereof may be used to virtually adjust the distance d from the digital display 119 to the virtual touch intersection plane 122.

In further embodiments, the proximity detector 115 may detect proximity using visual sensing using at least one image sensor 116. The image sensor 116 may capture visible image data of the user's hand in relation to the digital display 119 to determine the relative positioning thereof. In embodiments, the proximity detector 115 is configured to determine a plurality of relative spatial points (a point cloud) lying on contours or extremities of a user's hand or finger to determine the XY offset-plane interaction coordinates 120.
For example, the proximity detector 115 may use the image processing controller 106 to map the visual boundaries of the user's hand or finger to determine the plurality of relative spatial points. The most extreme point, indicative of the position of the user's fingertip, may be determined therefrom.

In embodiments, the image sensor 116 may be a stereoscopic image sensor 116, wherein the plurality of relative spatial points is determined from differential comparison of stereoscopic image data obtained from the stereoscopic image sensor 116. In accordance with this embodiment, the plurality of relative spatial points may further map the contours of the hand. Using a stereoscopic image sensor 116 may allow the utilisation of a single image sensor 116 to determine the relative position of the user's hand or finger.

As shown in FIGS. 4 and 5, the image sensor 116 of the proximity detector 115 may be located right at an edge 130 of the digital display 119. In embodiments, the digital display 119 may be surrounded by a screen bezel, wherein the image sensor 116 is located on or within the bezel. As is further illustrated in FIGS. 4 and 5, so as to be able to obtain a field of view across the entire surface of the digital display 119, the proximity detector 115 may comprise a plurality of image sensors 116 around the digital display 119. As is further shown in FIGS. 4 and 5, to account for the limited field of view of the image sensors 116 within the confines of the bezel, the image sensors 116 may capture image data from opposite regions 129. For example, the plurality of image sensors 116 may comprise a first image sensor 116A operable to detect interactions at a first region 129A opposite the first image sensor 116A. Furthermore, a second image sensor 116B opposite the first image sensor 116A is operable to detect interactions at a second region 129B of the digital display 119 opposite the second image sensor 116B.

The interface 100 further comprises a gaze determining imaging system to determine a gaze relative offset with respect to the digital display 119. In the embodiment shown in FIG. 1, the gaze determining imaging system comprises an image sensor 116 which captures facial image data of a user's face in front of the digital display 119, wherein the image processing controller 106 determines the gaze relative offset using the facial image data. The image processing controller 106 may use facial detection to detect a position of a face within the field of view of the image sensor 116. The relative gaze offset may be calculated in accordance with a centroid of a facial area detected by the image processing controller 106 or, in further embodiments, the image processing controller 106 may further recognise the locations of the eyes within a facial region. The determination of a facial area centroid or of the locations of the eyes may require less processing power as compared to detecting the actual orientation of the eyes whilst still providing a relatively accurate parallax adjustment.

The parallax adjustment controller 105 is configured to convert the offset-plane interaction coordinates 120 Xi and Yi to on-screen apparent coordinates 123 Xa and Ya. In embodiments, the gaze determining imaging system may comprise an image sensor 116 at the top of the bezel whereas the proximity detector 115 may comprise a pair of image sensors 116 at either side of the bezel.
The superior location of the gaze determining image system image sensor116allows unobstructed view of the user's face to determining the gaze relative offset whereas the side-by-side location of the image sensors116of the proximity detector115allows for comprehensive coverage of the surface of the display119within the confines of a tight bezel therearound. In other embodiments, the image sensor116may be an infrared image sensor to detect a heat signature of the hand to determine the relative positioning thereof. In embodiments, the infrared image sensor116may locate behind the digital display119to detect infrared through the digital display. The controllers107may comprise a human input device (HID) controller104which converts the on-screen apparent coordinates123to an HID input via the USB interface112or other HID input of the application computer111. As such, in effect, the HID controller104may emulate a mouse input device from the perspective of the application computer101. As such, the application computer111may display a mouse cursor117or other interaction indication at the calculated on-screen apparent coordinates123. With reference toFIGS.2and3, there is shown the digital display119and the virtual touch intersection plane122offset a distance d from the digital display119. As alluded to above, the distance d may be physically set by the construction of the proximity detector115(such as the positioning of the light beam interrupts with respect to the digital display119) or alternatively virtually adjusted, such as by adjusting the gain of an operational amplifier interfacing the aforedescribed capacitive touch sensitive overlay according to the offset setting102of the interface controller101. In embodiments, the interface controller101may dynamically adjust the offset setting102. For example, in one manner, for colder temperatures, the interface controller101may increase the offset setting102to account for when users wear bulkier gloves in cold weather. In alternative embodiments, the interface controller101may dynamically adjust the offset setting102according to user specific interactions. For example, using the aforedescribed capacitive touch sensor, the interface controller101may detect an offset at which the user prefers to virtually tap the virtual touch intersection plane122, such as by determining peak values of measured capacitive coupling of the capacitive plates. For example, some users may inherently prefer to tap the virtual touch intersection plane closer to the digital display119as compared to others. As such, in this way, the interface controller101dynamically adjusts the positioning of the virtual touch intersection plane according to the specific user behaviour. FIGS.2and3show the image sensor116capturing image data of the user's face125wherein the image processing controller106may determine relative angles thereof both in the horizontal and vertical plane.FIG.3illustrates these angles being resolved into gaze relative offset Xg and Yg. The gaze relative offsets may be determined with respect to a reference point of the digital display119, such as a bottom left-hand corner thereof. Furthermore,FIGS.2and3show the interaction point coordinates120Xi and Yi, being the position at which the user's forefinger intersects the virtual touch intersection plane122. 
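One way to picture the conversion performed by the parallax adjustment controller 105 is as a ray from the estimated eye position through the intersection point (Xi, Yi) on the plane 122, continued until it meets the display surface. The sketch below is only a plausible geometric model, not the controller's actual implementation: it assumes that the gaze relative offset (Xg, Yg) locates the eye in display coordinates and that an estimated eye-to-display distance Dg is available, which the text does not specify.

```python
def apparent_coordinates(xi: float, yi: float, xg: float, yg: float,
                         d: float, dg: float) -> tuple:
    """Project the finger position (xi, yi) on the virtual touch intersection
    plane (distance d from the display) onto the display plane, along the line
    from an eye assumed to sit at (xg, yg, dg) in display coordinates."""
    if dg <= d:
        raise ValueError("eye must be farther from the display than the plane")
    t = dg / (dg - d)          # ray parameter where the eye->finger line meets z = 0
    xa = xg + t * (xi - xg)
    ya = yg + t * (yi - yg)
    return xa, ya


# Finger intersects the plane 30 mm in front of the display at (100, 80) mm;
# eye estimated at an offset of (160, 300) mm, 500 mm from the display.
print(apparent_coordinates(100.0, 80.0, 160.0, 300.0, d=30.0, dg=500.0))
```

When d is small relative to the assumed eye distance, the correction reduces to a small shift of (Xi, Yi) in the direction away from the estimated eye position, consistent with the offset adjustment described here.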
Where the digital display 119 is in a relatively low position, the interaction point coordinates 120 may be beneath the gaze of the user at the virtual touch intersection plane 122, given that the trajectory of the user's finger and the gaze are nonparallel but coincident at the on-screen apparent coordinates 123. The parallax adjustment controller 105 may adjust the offset between the interaction point coordinates 120 and the on-screen apparent coordinates 123 depending on the angle of the user's gaze. FIGS. 2 and 3 furthermore display the on-screen apparent coordinates 123 Xa and Ya calculated by the parallax adjustment controller 105.

In embodiments, the interface 100 comprises a feedback interface 114 to provide feedback when the user's forefinger intersects the virtual touch intersection plane 122. The feedback interface 114 may generate an audible output, such as a beep sound, every time the user's forefinger intersects the virtual touch intersection plane 122. Alternatively, the feedback interface 114 may display a pointer indication 117 within the user interface 118 when the user's forefinger intersects the virtual touch intersection plane 122. In further embodiments, the feedback interface 114 generates haptic feedback. In embodiments, the feedback interface 114 may comprise a plurality of ultrasonic transducers 128 which emit ultrasound that induces mid-air tactile feedback at the user's finger when intersecting the virtual touch intersection plane 122.

As shown in FIG. 6, the ultrasonic transducers 128 may be located at an edge of the digital display 119. In this regard, the ultrasonic transducers 128 may be located at or within a bezel of the digital display 119. Furthermore, the ultrasonic transducers 128 may be orientated in towards the digital display 119 to direct ultrasound inwardly. As shown in FIG. 6, the ultrasonic transducers 128 may comprise ultrasonic transducers 128B which are recessed beneath a surface plane of the digital display 119, being advantageous for flush-mounted applications. However, for enhanced ultrasound transmission, the ultrasonic transducers 128 may comprise ultrasonic transducers 128A which extend above the surface plane of the digital display 119. The ultrasonic transducers 128 may emit ultrasound at between 20 kHz and 60 kHz, preferably at approximately 40 kHz.

As is further shown in FIG. 6, the ultrasonic transducers 128 may be located at opposite edges of the digital display 119 so that ultrasound emitted thereby coincides from opposite directions. In embodiments, the feedback interface 114 may control the timing or phase of the operation of the ultrasonic transducers 128 so that ultrasound emitted thereby coincides substantially simultaneously at a focal point at the XY offset-plane interaction coordinates 120. Specifically, the ultrasonic transducers may comprise a first set of ultrasonic transducers 128A located at a first side of the digital display 119 and a second set of ultrasonic transducers 128B located at an opposite side of the digital display 119. The feedback interface 114 may control the timing of the operation of the transducers 128A and 128B, taking into account the speed of sound, so that ultrasonic pressure waves 131 generated by the transducers 128 coincide at a focal point at the XY offset-plane interaction coordinates 120. For example, with reference to the orientation of FIG. 6, the first set of ultrasonic transducers 128A on the left would fire just before the ultrasonic transducers 128B on the right so as to coincide simultaneously at the XY offset-plane interaction coordinates 120.
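The timing control just described amounts to firing the transducer that is farther from the focal point first, so that the pressure waves arrive together. A minimal sketch, assuming known transducer positions, a focal point at the offset-plane interaction coordinates, and a nominal speed of sound; the coordinates, units, and names below are illustrative and are not taken from FIG. 6.

```python
import math

SPEED_OF_SOUND = 343.0e3  # millimetres per second, approximate, at room temperature


def firing_delays(transducers, focal_point):
    """Per-transducer delays (seconds) so that ultrasound emitted from the given
    (x, y, z) positions arrives at focal_point simultaneously.  The farthest
    transducer fires at t = 0; nearer ones wait."""
    distances = [math.dist(t, focal_point) for t in transducers]
    farthest = max(distances)
    return [(farthest - dist) / SPEED_OF_SOUND for dist in distances]


# Two transducers on opposite edges of a 300 mm wide display; focal point at the
# interaction coordinates (200, 100), 20 mm above the display surface.
left, right = (0.0, 100.0, 0.0), (300.0, 100.0, 0.0)
print(firing_delays([left, right], (200.0, 100.0, 20.0)))
```

For this focal point the left transducer is the farther one, so it fires first and the right transducer is delayed by roughly 0.3 ms, in line with the left-before-right example given above.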
Alternatively, the feedback interface 114 may control the phase of the ultrasound generated by the transducers 128A and 128B so that their maximum amplitudes coincide at the focal point at the XY offset-plane interaction coordinates 120. In embodiments, the feedback interface 114 controls the frequency of the ultrasound generated by the ultrasonic transducers 128 to create a standing wave at the XY offset-plane interaction coordinates 120. For example, the feedback interface 114 may generate a 40 kHz signal and a 60 kHz signal which coincide to generate a standing wave at the XY offset-plane interaction coordinates.

In embodiments, the feedback interface 114 may provide continuous feedback whilst the user's finger penetrates the virtual touch intersection plane 122. Furthermore, the feedback interface 114 may generate different types of haptic feedback depending on the on-screen gestures. For example, a mouse click may be signified by a high-amplitude ultrasonic pulse whereas a drag gesture may be signified by a continuous train of lower-amplitude pulses. Further haptic feedback may be provided to signify key click gestures and the like.

In embodiments, the interface may comprise an interaction depth indicator indicating to a user whether the user is interacting with the intersection plane 122 at an appropriate depth. In accordance with this embodiment, the digital display may comprise a visual indicator, such as one located at a bezel of the digital display, which, for example, may display green when the user is interacting at the appropriate depth. If the proximity detector detects continuous intersection of the intersection plane 122, the depth indicator may indicate to the user that the user is interacting too closely. Conversely, if the proximity detector detects intermittent interaction with the virtual touch intersection plane, the depth indicator may indicate to the user that the user is interacting at the appropriate depth.

In embodiments, the interface 100 may utilise a redundant touch interface, such as the aforedescribed capacitive touch interface and/or a haptic overlay which detects physical touches on the digital display 119, in the event that the proximity detector 115 is non-functional, or the redundant touch interface may be used in combination with the detection of the proximity detector 115.

The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that the specific details are not required in order to practise the invention. Thus, the foregoing descriptions of specific embodiments of the invention are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, as obviously many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to best utilise the invention and the various embodiments with the various modifications as are suited to the particular use contemplated. It is intended that the following claims and their equivalents define the scope of the invention. The term "approximately" or similar as used herein should be construed as being within 10% of the value stated unless otherwise indicated. | 17,194 |
11861114 | DETAILED DESCRIPTION Exemplary embodiments, examples of which are illustrated in the accompanying drawings, are elaborated below. The following description may refer to the accompanying drawings, in which identical or similar elements in two drawings are denoted by identical reference numerals unless indicated otherwise. Implementations set forth in the following exemplary embodiments do not represent all implementations in accordance with the subject disclosure. Rather, they are mere examples of the apparatus (i.e., device) and method in accordance with certain aspects of the subject disclosure as recited in the accompanying claims. The exemplary implementation modes may take on multiple forms, and should not be taken as being limited to examples illustrated herein. Instead, by providing such implementation modes, embodiments herein may become more comprehensive and complete, and comprehensive concept of the exemplary implementation modes may be delivered to those skilled in the art. Implementations set forth in the following exemplary embodiments do not represent all implementations in accordance with the subject disclosure. Rather, they are merely examples of the apparatus and method in accordance with certain aspects herein as recited in the accompanying claims. A term used in an embodiment herein is merely for describing the embodiment instead of limiting the subject disclosure. A singular form “a” and “the” used in an embodiment herein and the appended claims may also be intended to include a plural form, unless clearly indicated otherwise by context. Further note that a term “and/or” used herein may refer to and contain any combination or all possible combinations of one or more associated listed items. Note that although a term such as first, second, third may be adopted in an embodiment herein to describe various kinds of information, such information should not be limited to such a term. Such a term is merely for distinguishing information of the same type. For example, without departing from the scope of the embodiments herein, the first information may also be referred to as the second information. Similarly, the second information may also be referred to as the first information. Depending on the context, a “if” as used herein may be interpreted as “when” or “while” or “in response to determining that.” In addition, described characteristics, structures or features may be combined in one or more implementation modes in any proper manner. In the following descriptions, many details are provided to allow a full understanding of embodiments herein. However, those skilled in the art will know that the technical solutions of embodiments herein may be carried out without one or more of the details, alternatively, another method, component, device, option, etc., may be adopted. Under other conditions, no detail of a known structure, method, device, implementation, material or operation may be shown or described to avoid obscuring aspects of embodiments herein. A block diagram shown in the accompanying drawings may be a functional entity which may not necessarily correspond to a physically or logically independent entity. Such a functional entity may be implemented in form of software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices. 
In addition, a term such as “first”, “second”, and the like, may serve but for description purposes and should not be construed as indication or implication of relevancy, or implication of a quantity of technical features under consideration. Accordingly, a feature with an attributive “first”, “second”, etc., may expressly or implicitly include at least one such feature. Herein by “multiple”, it may mean two or more unless indicated otherwise expressly. FIG.1is a diagram of electronic equipment100according to an exemplary embodiment herein. As shown inFIG.1, the electronic equipment100includes a display101, an ultrasound emitter102, an ultrasound receiver103, and a processor104. The ultrasound emitter102is adapted to emitting a first ultrasound signal into at least a space the display faces. The ultrasound receiver103is adapted to receiving a second ultrasound signal. The second ultrasound signal is an echo of the first ultrasound signal reflected by an object. The processor104is connected respectively to the ultrasound emitter and the ultrasound receiver. The processor is adapted to acquiring a floating touch signal by locating the object in three-dimensional space according to charactering information charactering the first ultrasound signal and the second ultrasound signal, and executing an instruction corresponding to the floating touch signal. Electronic equipment100may send an ultrasound signal. Electronic equipment100may receive an ultrasound signal. An ultrasound signal is an acoustic signal of a frequency higher than 20,000 Hz. Electronic equipment100) may include a mobile User Equipment (UE), a fixed UE, etc., including a display. For example, electronic equipment100may include a mobile phone, a tablet computer, a television, and the like. An ultrasound emitter102may be an existing device in electronic equipment100, such as a speaker capable of playing an audio. An ultrasound receiver103may also be an existing device in electronic equipment100, such as a microphone capable of collecting an audio. A processor140may include an Application Processor (AP), a microprocessor unit (MCU), etc. Of course, an ultrasound emitter102, an ultrasound receiver103, etc., may be provided separately in electronic equipment. Embodiments herein are not limited thereto. A space a display101faces may refer to a space in a direction a display surface of a screen of the display101faces. An ultrasound emitter102may emit a first ultrasound signal into a space a display101faces. Then, an ultrasound receiver103may detect a second ultrasound signal. The second ultrasound signal may be an echo of the first ultrasound signal reflected by an object. An object may be any object within a range covered by the first ultrasound signal. For example, the object may include, but is not limited to, a finger of a user, a stylus held by a user, and the like. A processor104may be connected respectively to an ultrasound emitter102and an ultrasound receiver103. A processor104may locate an object in three-dimensional space according to charactering information that characters a first ultrasound signal and a second ultrasound signal. Charactering information that characters a first ultrasound signal and a second ultrasound signal may include at least a time of emitting the first ultrasound signal, a time of receiving the second ultrasound signal, etc. 
The charactering information that characters the first ultrasound signal and the second ultrasound signal may further include an angle of emitting the first ultrasound signal, an angle of receiving the second ultrasound signal, and the like. A processor104may locate an object in three-dimensional space according to charactering information that characters a first ultrasound signal and a second ultrasound signal. That is, any object within a spatial range coverable by a first ultrasound signal may be located. Further, the processor104may acquire a floating touch signal by locating the object in three-dimensional space. The processor may execute an instruction corresponding to the floating touch signal. A floating touch signal may be acquired by locating an object in three-dimensional space as follows. A location of a projection of an object on a display may be determined according to a location of the object in three-dimensional space. A floating touch signal corresponding to the location of the projection of the object on the display may be acquired. For example, a first correspondence between a location of a projection of an object on a display and an application may be stored in electronic equipment. The electronic equipment may determine the location of the projection of an object on the display. The electronic equipment may acquire, based on the first correspondence, a floating touch signal for opening an application corresponding to the location of the projection of the object on the display. The electronic equipment may open the application. A floating touch signal may be acquired by locating an object in three-dimensional space as follows. A trajectory of an object may be determined according to a location of the object in three-dimensional space within a preset duration. A floating touch signal corresponding to the trajectory may be acquired. For example, a second correspondence between a trajectory of an object and an instruction for displaying a page via a display may be stored in electronic equipment. The electronic equipment may locate the object in three-dimensional space within a predetermined period. Then, the electronic equipment may determine a trajectory of the object. The electronic equipment may acquire, based on the second correspondence, a floating touch signal for controlling display of a page. For example, when a trajectory of the object is moving away from the bottom of a display, a floating touch signal for controlling display of a page sliding up may be acquired. When a trajectory of the object is approaching a left edge of a display gradually, a floating touch signal for controlling display of switching to a next page may be acquired. Electronic equipment locates an object in three-dimensional space based on charactering information that characters a first ultrasound signal emitted and a second ultrasound signal. Then, the electronic equipment acquires a floating touch signal according to a location of the object in three-dimensional space, thereby implementing floating control. On one hand, this allows a user to control electronic equipment without having to touch a display of the electronic equipment. This avoids failure of a user to control the display effectively due to a wet finger as may occur with contact-based control, improving user experience. 
On the other hand, compared to a mode in which a gesture of a user is identified according to an ultrasound signal and then floating touch is performed, with embodiments herein, location-based control at a more fine-rained level may be implemented by floating touch based on locating an object in three-dimensional space. The ultrasound receiver103and the ultrasound emitter102may be provided as separate pieces. There may be N ultrasound receivers103. At least two ultrasound receivers103of the N ultrasound receivers may be located along different edges of the display. The N may be a positive integer no less than 3. The N ultrasound receivers103may be adapted to acquiring N second ultrasound signals by receiving the second ultrasound signal respectively. The processor104may be adapted to locating the object in three-dimensional space according to a time of emitting the first ultrasound signal, a time of receiving each of the N second ultrasound signals, and information on locations of the N ultrasound receivers. A processor104may determine a difference between a time of emitting a first ultrasound signal and a time of receiving a second ultrasound signal by an ultrasound receiver103. The processor may determine a distance between an object and the ultrasound receiver103according to the difference and a speed at which an ultrasound signal propagates. For example, an ultrasound emitter102may emit a first ultrasound signal at a time T0. An ultrasound receiver103of N ultrasound receivers103may receive a second ultrasound signal at time T1. The second ultrasound signal may be an echo reflected by an object. It is known that an ultrasound travels at a speed V. Then, a distance L between the object and the ultrasound receiver103may be determined using a formula (1) as follows. L=(T1−T0)*V(1) A first location of an object determined by a processor104may refer to distances between the object and the respective N ultrasound receivers103, as determined by the processor104. The processor104may determine the location of the object in three-dimensional space by determining distances between the object and the respective N ultrasound receivers103. At least two ultrasound receivers103of the N ultrasound receivers may be located along different edges of the display101. Accordingly, a polyhedron may be formed by connecting any two ultrasound receivers103of the known N ultrasound receivers103and connecting an object and each of the ultrasound receivers103. A length of an edge of the polyhedron may be the distance between any two of the ultrasound receivers103, or the distance between the object and an ultrasound receiver103. The location of the object in three-dimensional space may be determined based on a fixed structure of the polyhedron. Accordingly, electronic equipment may acquire a floating touch signal by locating the object in three-dimensional space, implementing floating touch. The ultrasound emitter102may be provided along a first edge of the display101. At least one ultrasound receiver103may be provided along a second edge of the display101. The second edge may be opposite the first edge. Alternatively, the second edge may be adjacent to the first edge. An ultrasound emitter102may radiate a first ultrasound signal into a space a display101faces. Accordingly, the first ultrasound signal radiated may be reflected by an object, sending back multiple second ultrasound signals. 
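Formula (1) above can be applied to each of the N receivers to turn the emission time and the arrival times into distances. The following is a minimal sketch; the timestamps and the nominal speed of sound are illustrative values, and, as in the text, the result is treated as the object-to-receiver distance.

```python
SPEED_OF_SOUND = 343.0  # metres per second, approximate


def distances_from_echo_times(t0, arrival_times, v=SPEED_OF_SOUND):
    """Apply formula (1), L = (T1 - T0) * V, to each receiver's arrival time."""
    return [(t1 - t0) * v for t1 in arrival_times]


# Hypothetical timestamps in seconds: emission at t0, echoes at three receivers.
t0 = 0.000000
echo_times = [0.000950, 0.001100, 0.000870]
print(distances_from_echo_times(t0, echo_times))  # distances in metres
```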
Accordingly, by providing at least one ultrasound receiver103of multiple ultrasound receivers103and an ultrasound emitter102respectively along different edges, a second ultrasound signal may be received within a greater range. That is, the location of an object within a greater three-dimensional space may be determined. Accordingly, electronic equipment may support floating touch within a greater range. If the second edge is opposite the first edge, the second edge may be identical to the first edge in length. If the second edge is adjacent to the first edge, the second edge may be greater than the first edge in length. A display101of electronic equipment may be of a rectangular shape. If the second edge is opposite the first edge, the second edge may be identical to the first edge in length. The second edge may be less than an adjacent edge in length. A display101of electronic equipment may be of a rectangular shape, for example. When a second edge of the display is opposite a first edge of the display, the first edge and the second edge are short edges of the rectangle. That is, an ultrasound emitter102provided along the first edge may be far away from at least one ultrasound receiver103provided along the second edge. Accordingly, when a first ultrasound signal is radiated using the ultrasound emitter102and reflected by an object, a second ultrasound signal may be sent back at any angle. Compared to a case where an ultrasound receiver103and an ultrasound emitter102are provided at one location, second ultrasound signals may be received at a wider range of angles of reflection. Accordingly, electronic equipment may support floating touch within a greater range. For example,FIG.2is a diagram of layout of an ultrasound emitter and an ultrasound receiver applicable to a mobile phone according to an exemplary embodiment herein. As shown inFIG.2, the mobile phone may include a display101A, a speaker102A, a microphone103A, a microphone103B, and a microphone103C. The speaker102A may be the ultrasound emitter102according to one or more embodiments herein. The microphone103A, the microphone103B, and the microphone103C may be the N ultrasound receivers103according to one or more embodiments herein. In this example, the N may be 3. Note that the N is not limited to 3. The speaker102A and the microphone103C may be provided along a top edge, i.e., a first edge, of the mobile phone. The microphones103A and103B may be provided along a bottom edge, i.e., a second edge of the mobile phone. With this layout, second ultrasound signals in a wider range may be detected. Based onFIG.2,FIG.3is a diagram of emitting an ultrasound signal by an ultrasound emitter according to an exemplary embodiment herein.FIG.3shows a side view of a mobile phone. A speaker102A located at the top of the mobile phone may radiate a first ultrasound signal into a space a display101A faces. Based onFIG.2andFIG.3,FIG.4Ais a diagram of emitting an ultrasound signal by an ultrasound emitter and receiving an ultrasound signal by an ultrasound receiver according to an exemplary embodiment herein.FIG.4Bis a diagram of emitting an ultrasound signal by an ultrasound emitter and receiving an ultrasound signal by an ultrasound receiver according to an exemplary embodiment herein. As shown inFIG.4AandFIG.4B, a finger may hang above a display101A. An echo signal of a first ultrasound signal emitted by a speaker102A reflected by the finger may be received respectively by a microphone103A, a microphone103B, and a microphone103C.
The echo signal may be a second ultrasound signal. As shown inFIG.4AandFIG.4B, a microphone may receive a second ultrasound signal. There may be 3 second ultrasound signals. For example, based on the layout shown inFIG.2and the formula (1), a processor in a mobile phone may determine a distance L1between a finger and a microphone103A, a distance L2between the finger and a microphone103B, and a distance L3between the finger and a microphone103C, respectively. The location of the finger in three-dimensional space may be computed based on a tetrahedron formed by L1, L2, L3and a distance between any two of the microphones. A floating touch signal may be determined according to the location of the finger in three-dimensional space, achieving floating control. An ultrasound emitter102may be adapted to periodically radiating a first ultrasound signal into a space a display101faces. For example, an ultrasound emitter102may emit first ultrasound signals at multiple angles at a first moment. The ultrasound emitter may emit first ultrasound signals at multiple angles again after an interval T. The interval T may depend on a preset distance within which electronic equipment supports floating touch. For example, the interval may increase with the maximum floating distance at which floating touch is supported. The greater a maximum detection distance within which electronic equipment detects floating touch, the greater the interval T may be. By radiating a first ultrasound signal at intervals, a second ultrasound signal may be received within a greater range and with an improved reception rate. Accordingly, electronic equipment may support floating touch within a greater range. Effective floating touch may be ensured. The ultrasound receiver103and the ultrasound emitter102may be provided as one piece. The ultrasound emitter102may be adapted to emitting the first ultrasound signal into the space the display101faces by scanning at a predetermined angular step. The ultrasound receiver103may be adapted to receiving the second ultrasound signal at an angle of emitting the first ultrasound signal. The processor104may be adapted to locating the object in three-dimensional space according to the angle of emitting the first ultrasound signal, a time of emitting the first ultrasound signal, and a time of receiving the second ultrasound signal. An ultrasound emitter102may emit a first ultrasound signal into a space a display faces. An ultrasound receiver103and the ultrasound emitter102may be provided as one piece, such as in the middle at the bottom of the display101. The ultrasound receiver103may receive the second ultrasound signal at an angle of emitting the first ultrasound signal by the ultrasound emitter102. For example,FIG.5is a plan view of sending an ultrasound signal and receiving an ultrasound signal in electronic equipment according to an exemplary embodiment herein. As shown inFIG.5, an (ultrasound) emitter102B and an (ultrasound) receiver103D may be provided as one piece. The emitter102B may emit a first ultrasound signal at an angle B with respect to a direction X. The receiver103D may receive a second ultrasound signal also at the angle B. The second ultrasound signal may be an echo of the first ultrasound signal reflected by a finger. The electronic equipment may be large in size. For example, the electronic equipment may be a television.
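The tetrahedron-based computation described above can be illustrated with a trilateration sketch: given the three distances L1, L2, L3 and the known receiver positions, recover the 3D location of the reflecting object. The microphone coordinates and distances below are hypothetical, and formula (1) is assumed to yield one distance per receiver; of the two mirror-image solutions, the one in front of the display is kept.

```python
import numpy as np

# Minimal sketch (microphone coordinates and distances are hypothetical, not from the
# patent): trilateration of the object from its distances to three receivers at known
# positions, i.e., one way to realize the "tetrahedron" computation in the description.

def trilaterate(p1, p2, p3, r1, r2, r3):
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)          # local x axis along receivers 1 -> 2
    i = np.dot(ex, p3 - p1)
    ey = p3 - p1 - i * ex
    ey /= np.linalg.norm(ey)                           # local y axis toward receiver 3
    ez = np.cross(ex, ey)                              # local z axis, out of the display
    d = np.linalg.norm(p2 - p1)
    j = np.dot(ey, p3 - p1)
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2) / (2 * j) - (i / j) * x
    z_sq = r1**2 - x**2 - y**2
    if z_sq < 0:
        raise ValueError("distances are inconsistent with the receiver layout")
    z = np.sqrt(z_sq)  # keep the solution in front of the display (positive z)
    return p1 + x * ex + y * ey + z * ez

# Hypothetical layout (metres): microphones 103A and 103B along the bottom edge, 103C near the top.
mic_a, mic_b, mic_c = (0.00, 0.00, 0.0), (0.07, 0.00, 0.0), (0.06, 0.15, 0.0)
print(trilaterate(mic_a, mic_b, mic_c, r1=0.06, r2=0.07, r3=0.15))
```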
The ultrasound emitter may be adapted to emitting the first ultrasound signal by maintaining a first angle of a first degree of freedom of the space the display faces, while scanning by varying a second angle of a second degree of freedom of the space the display faces. The first degree of freedom may be orthogonal to the second degree of freedom. An ultrasound emitter102B may emit a first ultrasound signal into a space a display101faces by maintaining a constant angle of a first degree of freedom. The ultrasound emitter may vary an angle of emitting the first ultrasound signal by varying a second angle of a second degree of freedom orthogonal to the first degree of freedom. By adjusting the angle of emitting the first ultrasound signal in the second degree of freedom, the first ultrasound signal may cover the entire range of the second degree of freedom. Accordingly, electronic equipment may support floating touch within a greater range of the second degree of freedom. The first degree of freedom may refer to a dimension of latitude. The second degree of freedom may refer to a dimension of longitude. A latitude line and a longitude line are two lines on a sphere that intersect each other at a right angle at an intersection. A latitude line extends along the dimension of latitude. A longitude line extends along the dimension of longitude. An ultrasound emitter102may maintain a first angle of the dimension of latitude of a space a display101faces, and adjust a second angle of the dimension of longitude by a predetermined angular step in a range of 0 degrees to 360 degrees. For example, the ultrasound emitter102may maintain an angle of 5 degrees of the dimension of latitude, and adjust an angle of emitting the first ultrasound signal in the dimension of longitude by an angular step of 30 degrees, until the ultrasound emitter102completes a 360-degree emission scan. The ultrasound emitter102may be adapted to varying the first angle of the first degree of freedom by an angular step. An ultrasound emitter102may vary a first angle of a first degree of freedom by an angular step. Accordingly, a first ultrasound signal may cover an entire detection space within one emission cycle. That is, the entire range of the first degree of freedom and the second degree of freedom may be covered. The detection space may be a subspace of a three-dimensional space a display101faces. Accordingly, electronic equipment may support floating touch within a greater range of the first degree of freedom and the second degree of freedom. Once the first angle of the dimension of latitude has been adjusted, emission at the first angle of the dimension of latitude may be maintained. The angle of emitting the first ultrasound signal may be adjusted by varying the second angle of the dimension of longitude. Exemplarily, the angle of emission by the ultrasound emitter102in the dimension of latitude may be adjusted from 5 degrees to 10 degrees. The angle of emitting the first ultrasound signal may be adjusted by an angular step of 30 degrees in the dimension of longitude, until the ultrasound emitter102completes a 360-degree emission scan. When an ultrasound emitter102emits a first ultrasound signal into a space a display101faces at various angles, the first ultrasound signal may be emitted directionally by a serpentine scan along rows and columns. When an ultrasound emitter102and an ultrasound receiver103are provided as one piece, the way the emitter102emits a first ultrasound signal is not limited to embodiments herein.
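One possible way to enumerate the scan angles described above (hold the latitude angle, step the longitude angle through a full circle, then advance the latitude) is sketched below. The step sizes and limits are illustrative and not prescribed by the disclosure.

```python
# Minimal sketch (step sizes are illustrative): enumerate emission angles for one scan
# cycle, holding a latitude angle fixed while stepping the longitude angle through
# 0..360 degrees, then advancing the latitude angle by its own step.

def scan_angles(lat_start=5.0, lat_stop=90.0, lat_step=5.0, lon_step=30.0):
    lat = lat_start
    while lat <= lat_stop:
        lon = 0.0
        while lon < 360.0:
            yield (lat, lon)  # (first degree of freedom, second degree of freedom)
            lon += lon_step
        lat += lat_step

for latitude, longitude in scan_angles():
    pass  # steer the emitter toward (latitude, longitude) and listen for the echo
```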
An ultrasound emitter102may emit a first ultrasound signal into a space a display101faces at various angles.FIG.6is a diagram of a three-dimensional structure according to an exemplary embodiment herein in a spherical coordinate system, which illustrates sending an ultrasound signal and receiving an ultrasound signal as shown inFIG.5. As shown inFIG.6, a projection of a finger on a display101is at a point C. A line connecting the finger and the (ultrasound) emitter102B may form an angle A0with the display101. A line connecting the point C and the (ultrasound) emitter102B or the (ultrasound) receiver103D may form an angle B0with the axis X. The angle A0may be taken as the first angle of the dimension of latitude to be maintained. The angle B0may be taken as the second angle of the dimension of longitude to be rotated. Based onFIG.6, a location of an object with respect to the emitter102B or the receiver103D may be computed using the formula (2) as follows.

X0 = L′ * cos A0 * cos B0   (2)
Y0 = L′ * cos A0 * sin B0
Z0 = L′ * sin A0

The L′ may refer to a distance between the object and the emitter102B or the receiver103D, computed by the formula (1). As shown inFIG.6, the emitter102B or the receiver103D may be taken as the origin. The X0may refer to a coordinate value of the projection of the object on the axis X of the display101B. The Y0may refer to a coordinate value of the projection of the object on the axis Y of the display101B. The Z0may refer to a height of the object with respect to the display101B. A location of an object in three-dimensional space may be determined based on the location of the object determined with respect to an ultrasound emitter102or an ultrasound receiver103. Accordingly, floating touch may be implemented on electronic equipment. Locations of the ultrasound emitter102or the ultrasound receiver103on the display101are known. Accordingly, a location of the projection of an object on the display101may be computed, thereby implementing floating touch. To improve accuracy of touch control, floating touch may be executed when a detected height of an object with respect to the display101is less than a preset distance threshold. For example, floating touch may be executed when the Z0is less than the preset distance threshold. The electronic equipment may further include a temperature detector105. The temperature detector105may be adapted to detecting an ambient temperature of the electronic equipment. The processor104may be adapted to controlling the ultrasound emitter102to emit the first ultrasound signal in response to determining that the ambient temperature is within a preset temperature range. A speed at which ultrasound propagates in the air tends to be affected by temperature. For example, under an excessively high temperature, an ultrasound signal may propagate at an excessively high speed. Accordingly, the distance of an object with respect to the ultrasound receiver103as determined using the formula (1) may turn out to be excessively large. Under an excessively low temperature, an ultrasound signal may propagate at an excessively low speed. Accordingly, the distance of an object with respect to the ultrasound receiver103as determined using the formula (1) may turn out to be excessively small. A distance detected under an extreme temperature may fail to reflect a true distance of an object with respect to the ultrasound receiver103. Accordingly, with one or more embodiments herein, a temperature detector105may be provided in electronic equipment.
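Formula (2) above can be illustrated with a short sketch that converts the measured distance L′ and the emission angles A0 and B0 into the projection coordinates and hovering height, and then applies a preset distance threshold before accepting a floating touch. The threshold value and the example inputs are assumptions, not values from the disclosure.

```python
import math

# Minimal sketch of formula (2): variable names mirror the description; the threshold
# value and inputs are assumed. A0 is the latitude angle above the display plane and
# B0 is the longitude angle measured from the X axis.

def locate_from_angles(L, A0_deg, B0_deg):
    a, b = math.radians(A0_deg), math.radians(B0_deg)
    x0 = L * math.cos(a) * math.cos(b)   # projection coordinate on the X axis
    y0 = L * math.cos(a) * math.sin(b)   # projection coordinate on the Y axis
    z0 = L * math.sin(a)                 # hovering height above the display
    return x0, y0, z0

HOVER_THRESHOLD_M = 0.05  # assumed preset distance threshold for accepting floating touch

x0, y0, z0 = locate_from_angles(L=0.12, A0_deg=30.0, B0_deg=45.0)
if z0 < HOVER_THRESHOLD_M:
    print("floating touch at projection", (round(x0, 3), round(y0, 3)))
```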
The temperature detector may activate floating touch under a suitable temperature to control the ultrasound emitter102to emit the first ultrasound signal, improving precision in floating touch of the electronic equipment. A display101of electronic equipment may serve for both display and detecting a contact-based touch operation. That is, the display101may also serve as a touch screen. The electronic equipment may support contact-based touch control when floating touch is deactivated. When floating touch is activated, the electronic equipment may output a reminder message to remind a user of the electronic equipment to use floating touch as well. After floating touch is activated, contact-based touch control may be disabled. Alternatively, contact-based touch control may remain activated, which is not limited herein. The electronic equipment may further include a living object detector106. The living object detector106may be adapted to detecting whether there is a living object within a preset distance. The processor104may be adapted to, in response to determining that there is a living object within the preset distance, controlling the ultrasound emitter102to emit the first ultrasound signal. A first ultrasound signal may be reflected by any object. By providing a living object detector106in electronic equipment, an ultrasound emitter102may be controlled to emit the first ultrasound signal only when an object is determined as a living object, effectively reducing unintended, accidental operation. A living object detector106may include a camera installed in electronic equipment opposite a display101. A processor104in the electronic equipment may control the camera to collect an image of a space the display101faces within a preset distance. If it is determined that the collected image includes a moving/varying object, such as a face with a varying expression, it may be determined that there is a living object within the preset distance. Floating touch may be activated when it is determined that there is a living object within a preset distance, to control an ultrasound emitter102to emit a first ultrasound signal. FIG.7is a flowchart of a method for controlling electronic equipment according to an exemplary embodiment herein. The method is applicable to electronic equipment100provided herein. As shown inFIG.7, the method may include a step as follows. In step S201, a first ultrasound signal is emitted via an ultrasound emitter. In step S202, a second ultrasound signal is received via an ultrasound receiver. The second ultrasound signal is an echo of the first ultrasound signal reflected by an object. In step S203, a floating touch signal is acquired by locating the object in three-dimensional space according to characterizing information characterizing the first ultrasound signal and the second ultrasound signal. In step S204, an instruction corresponding to the floating touch signal is executed. The first ultrasound signal may be emitted as follows. A first ultrasound signal may be emitted by scanning at a predetermined angular step. The second ultrasound signal may be received as follows. The second ultrasound signal may be received at an angle of emitting the first ultrasound signal. The floating touch signal may be acquired by locating the object in three-dimensional space according to the characterizing information characterizing the first ultrasound signal and the second ultrasound signal as follows.
The object may be located in three-dimensional space according to the angle of emitting the first ultrasound signal, a time of emitting the first ultrasound signal, and a time of receiving the second ultrasound signal. The first ultrasound signal may be emitted by scanning at the predetermined angular step as follows. The first ultrasound signal may be emitted by maintaining a first angle of a first degree of freedom while scanning by varying a second angle of a second degree of freedom. The first degree of freedom may be orthogonal to the second degree of freedom. The first ultrasound signal may be emitted by scanning at the predetermined angular step as follows. The first angle of the first degree of freedom may be varied by an angular step. There may be N ultrasound receivers. At least two ultrasound receivers of the N ultrasound receivers may be located along different edges of the electronic equipment. The N may be a positive integer no less than 3. The second ultrasound signal may be received as follows. N second ultrasound signals as echoes of the first ultrasound signal reflected by the object may be received. The floating touch signal may be acquired by locating the object in three-dimensional space according to the characterizing information characterizing the first ultrasound signal and the second ultrasound signal as follows. The floating touch signal may be acquired by locating the object in three-dimensional space according to a time of emitting the first ultrasound signal, a time of receiving each of the N second ultrasound signals, and information on locations of the N ultrasound receivers. The first ultrasound signal may be emitted as follows. The first ultrasound signal may be radiated periodically. The method may further include a step as follows. An ambient temperature of the electronic equipment may be detected. S201may be as follows. The first ultrasound signal may be emitted in response to determining that the ambient temperature is within a preset temperature range. The method may further include a step as follows. It may be detected whether there is a living object within a preset distance of the electronic equipment. S201may be as follows. The first ultrasound signal may be emitted in response to determining that there is a living object within the preset distance. A step of the method according to at least one embodiment herein may be performed in a manner elaborated in at least one embodiment of the device herein, which will not be repeated here. FIG.8is a block diagram of electronic equipment800according to an exemplary embodiment. For example, the electronic equipment800may be a mobile phone, a computer, a TV, digital broadcast UE, messaging equipment, a gaming console, tablet equipment, medical equipment, fitness equipment, a personal digital assistant, and the like. Referring toFIG.8, the electronic equipment800may include at least one of a processing component802, memory804, a power supply component806, a multimedia component808, an audio component810, an Input/Output (I/O) interface812, a sensor component814, a communication component816, and the like. The processing component802may generally control an overall operation of the electronic equipment800, such as operations associated with display, a telephone call, data communication, a camera operation, a recording operation, and the like. The processing component802may include one or more processors820to execute instructions so as to complete all or a part of an aforementioned method.
For example, the electronic equipment800may be the electronic equipment100, and the processor820may be the processor104. In addition, the processing component802may include one or more modules to facilitate interaction between the processing component802and other components. For example, the processing component802may include a multimedia portion to facilitate interaction between the multimedia component808and the processing component802. The memory804may be adapted to storing various types of data to support the operation at the electronic equipment800. Examples of such data may include instructions of any application or method adapted to operating on the electronic equipment800, contact data, phonebook data, messages, pictures, videos, etc. The memory804may be realized by any type of transitory or non-transitory storage equipment or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, a compact disk, and the like. The power supply component806may supply electric power to various components of the electronic equipment800. The power supply component806may include a power management system, one or more power sources, and other components related to generating, managing, and distributing electricity for the electronic equipment800. The multimedia component808may include a screen that provides an output interface between the electronic equipment800and a user. The screen may include a Liquid Crystal Display (LCD), a Touch Panel (TP), and the like. If the screen includes a TP, the screen may be realized as a touch screen to receive a signal input by a user. The TP may include one or more touch sensors for sensing touch, slide, and gestures on the TP. The one or more touch sensors not only may sense the boundary of a touch or slide move, but also detect the duration and pressure related to the touch or slide move. The multimedia component808may include at least one of a front camera or a rear camera. When the electronic equipment800is in an operation mode such as a photographing mode or a video mode, at least one of the front camera or the rear camera may receive external multimedia data. Each of the front camera or the rear camera may be a fixed optical lens system or may have a focal length and be capable of optical zooming. The audio component810may be adapted to outputting and/or inputting an audio signal. For example, the audio component810may include a microphone (MIC). When the electronic equipment800is in an operation mode, such as a call mode, a recording mode, a voice recognition mode, and the like, the MIC may be adapted to receiving an external audio signal. The received audio signal may be further stored in the memory804or may be sent via the communication component816. The audio component810may further include a loudspeaker adapted to outputting the audio signal. The I/O interface812may provide an interface between the processing component802and a peripheral interface portion. Such a peripheral interface portion may be a keypad, a click wheel, a button, and the like. Such a button may include but is not limited to at least one of a homepage button, a volume button, a start button, or a lock button. The sensor component814may include one or more sensors for assessing various states of the electronic equipment800. 
For example, the sensor component814may detect an on/off state of the electronic equipment800and relative positioning of components such as the display and the keypad of the electronic equipment800. The sensor component814may further detect a change in the position of the electronic equipment800or of a component of the electronic equipment800, whether there is contact between the electronic equipment800and a user, the orientation or acceleration/deceleration of the electronic equipment800, a change in the temperature of the electronic equipment800, etc. The sensor component814may include a proximity sensor adapted to detecting existence of a nearby object without physical contact. The sensor component814may further include an optical sensor such as a Complementary Metal-Oxide-Semiconductor (CMOS) or a Charge-Coupled-Device (CCD) image sensor used in an imaging application. The sensor component814may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, a temperature sensor, etc. The communication component816may be adapted to facilitating wired or wireless communication between the electronic equipment800and other equipment. The electronic equipment800may access a wireless network based on any communication standard, such as Wi-Fi, 2G, 3G . . . , or a combination thereof. The communication component816may broadcast related information or receive a broadcast signal from an external broadcast management system via a broadcast channel. The communication component816may include a Near Field Communication (NFC) module for short-range communication. For example, the NFC module may be based on technology such as Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra-Wideband (UWB) technology, Bluetooth (BT), etc. In an exemplary embodiment, the electronic equipment800may be realized by one or more electronic components such as an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, and the like, to implement the method. In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as memory804including instructions, may be provided. The instructions may be executed by the processor820of the electronic equipment800to implement an aforementioned method. For example, the non-transitory computer-readable storage medium may be Read-Only Memory (ROM), Random Access Memory (RAM), Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, optical data storage equipment, and the like. A non-transitory computer-readable storage medium has stored therein instructions which, when executed by a processor of electronic equipment, allow the electronic equipment to implement a method for controlling electronic equipment. The electronic equipment includes a display, an ultrasound emitter, an ultrasound receiver, and a processor. The ultrasound emitter is adapted to emitting a first ultrasound signal into at least a space the display faces. The ultrasound receiver is adapted to receiving a second ultrasound signal. The second ultrasound signal is an echo of the first ultrasound signal reflected by an object. The processor is connected respectively to the ultrasound emitter and the ultrasound receiver. 
The processor is adapted to acquiring a floating touch signal by locating the object in three-dimensional space according to characterizing information characterizing the first ultrasound signal and the second ultrasound signal, and executing an instruction corresponding to the floating touch signal. Other implementations of the subject disclosure will be apparent to a person having ordinary skill in the art who has considered the specification and/or practiced the subject disclosure. The subject disclosure is intended to cover any variation, use, or adaptation of the subject disclosure following the general principles of the subject disclosure and including such departures from the subject disclosure as come within common knowledge or customary practice in the art. The specification and the embodiments are intended to be exemplary only, with a true scope and spirit of the subject disclosure being indicated by the appended claims. It should be understood that the subject disclosure is not limited to the exact construction that has been described above and illustrated in the accompanying drawings, and that various modifications and changes can be made to the subject disclosure without departing from the scope of the subject disclosure. It is intended that the scope of the subject disclosure is limited only by the appended claims.
11861115 | DETAILED DESCRIPTION In the following description of various examples, reference is made to the accompanying drawings which form a part hereof, and in which it is shown by way of illustration specific examples that can be practiced. It is to be understood that other examples can be used and structural changes can be made without departing from the scope of the various examples. This relates to acoustic touch and/or force sensing systems and methods for acoustic touch and/or force sensing. The position of an object touching a surface can be determined using time-of-flight (TOF) techniques, for example. Acoustic touch and/or force sensing can utilize transducers, such as piezoelectric transducers, to transmit ultrasonic waves along a surface and/or through the thickness of one or more materials (e.g., a thickness of an electronic device housing). As the wave propagates along the surface and/or through the thickness of the one or more materials, an object (e.g., finger, stylus, etc.) in contact with the surface can interact with the transmitted wave, causing a reflection of at least a portion of the transmitted wave. Portions of the transmitted wave energy after interaction with the object can be measured to determine the touch location of the object on the surface of the device. For example, one or more transducers (e.g., acoustic transducers) coupled to a surface of a device can be configured to transmit an acoustic wave along the surface and/or through the thickness of the one or more materials and can receive a portion of the wave reflected back when the acoustic wave encounters a finger or other object touching the surface. The location of the object can be determined, for example, based on the amount of time elapsing between the transmission of the wave and the detection of the reflected wave. Acoustic touch sensing can be used instead of, or in conjunction with, other touch sensing techniques, such as resistive, optical, and/or capacitive touch sensing. In some examples, the acoustic touch sensing techniques described herein can be used on a metal housing surface of a device, which may be unsuitable for capacitive or resistive touch sensing due to interference (e.g., of the housing with the capacitive or resistive sensors housed in the metal housing). In some examples, the acoustic touch sensing techniques described herein can be used on a glass surface of a display or touch screen. In some examples, an acoustic touch sensing system can be configured to be insensitive to contact on the device surface by water, and thus acoustic touch sensing can be used for touch sensing in devices that may become wet or fully submerged in water. Additionally or alternatively, a force applied by the object on the surface can also be determined using TOF techniques. For example, one or more transducers can transmit ultrasonic waves through the thickness of a deformable material, and reflected waves from the opposite edge of the deformable material can be measured to determine a TOF or a change in TOF. The TOF, or change in TOF (ΔTOF), can correspond to the thickness of the deformable material (or changes in thickness) due to force applied to the surface. Thus, the TOF or change in TOF (or the thickness or change in thickness) can be used to determine the applied force. In some examples, using acoustic touch and force sensing can reduce the complexity of the touch and force sensing system by reducing the sensing hardware requirements (e.g., transducers, sensing circuitry/controllers, etc.
can be integrated/shared). FIGS.1A-1Gillustrate exemplary systems with touch screens that can include acoustic sensors for detecting contact between an object (e.g., a finger or stylus) and a surface of the system according to examples of the disclosure. Detecting contact can include detecting a location of contact and/or an amount of force applied to a touch-sensitive surface.FIG.1Aillustrates an exemplary mobile telephone136that includes a touch screen124and can include an acoustic touch and/or force sensing system according to examples of the disclosure.FIG.1Billustrates an example digital media player140that includes a touch screen126and can include an acoustic touch and/or force sensing system according to examples of the disclosure.FIG.1Cillustrates an example personal computer144that includes a touch screen128and a track pad146, and can include an acoustic touch and/or force sensing system according to examples of the disclosure.FIG.1Dillustrates an example tablet computing device148that includes a touch screen130and can include an acoustic touch and/or force sensing system according to examples of the disclosure.FIG.1Eillustrates an example wearable device150(e.g., a watch) that includes a touch screen152and can include an acoustic touch and/or force sensing system according to examples of the disclosure. Wearable device150can be coupled to a user via strap154or any other suitable fastener.FIG.1Fillustrates another example wearable device, over-ear headphones160, that can include an acoustic touch and/or force sensing system according to examples of the disclosure.FIG.1Gillustrates another example wearable device, in-ear headphones170, that can include an acoustic touch and/or force sensing system according to examples of the disclosure. It should be understood that the example devices illustrated inFIGS.1A-1Gare provided by way of example, and other types of devices can include an acoustic touch and/or force sensing system for detecting contact between an object and a surface of the device. Additionally, although the devices illustrated inFIGS.1A-1Einclude touch screens, in some examples, the devices may have a non-touch-sensitive display (e.g., the devices illustrated inFIGS.1F and1G). Acoustic sensors can be incorporated in the above described systems to add acoustic touch and/or force sensing capabilities to a surface of the system. For example, in some examples, a touch screen (e.g., capacitive, resistive, etc.) can be augmented with acoustic sensors to provide a touch and/or force sensing capability for use in wet environments or under conditions where the device may get wet (e.g., exercise, swimming, rain, washing hands) or for use with non-conductive or partially-conductive touch objects (e.g., gloved or bandaged fingers) or poorly grounded touch objects (e.g., objects not in contact with the system ground of the device). In some examples, an otherwise non-touch sensitive display screen can be augmented with acoustic sensors to provide a touch and/or force sensing capability. In such examples, a touch screen can be implemented without the stack-up required for a capacitive touch screen. In some examples, the acoustic sensors can be used to provide touch and/or force sensing capability for a non-display surface. 
For example, the acoustic sensors can be used to provide touch and/or force sensing capabilities for a track pad (e.g., trackpad146of personal computer144), a button, a scroll wheel, part or all of the housing or any other surfaces of the device (e.g., on the front, rear or sides). For example, acoustic sensors can be integrated into over-ear headphones160(e.g., in exterior circular region162, interior circular region164, and/or over-head band166) or in-ear headphones170(e.g., in earbud172or protrusion174) to provide touch and/or force input (e.g., single-touch or multi-touch gestures including tap, hold and swipe). The acoustic sensing surfaces for acoustic touch and/or force sensing can be made of various materials (e.g., metal, plastic, glass, etc.) or a combination of materials. FIG.2illustrates an exemplary block diagram of an electronic device including an acoustic touch and/or force sensing system according to examples of the disclosure. In some examples, housing202of device200(e.g., corresponding to devices136,140,144,148, and150above) can be coupled (e.g., mechanically) with one or more acoustic transducers204. In some examples, transducers204can be piezoelectric transducers, which can be made to vibrate by the application of electrical signals when acting as a transmitter, and generate electrical signals based on detected vibrations when acting as a receiver. In some examples, transducers204can be formed from a piezoelectric ceramic material (e.g., PZT or KNN) or a piezoelectric plastic material (e.g., PVDF or PLLA). Similarly, transducers204can produce electrical energy as an output when vibrated. In some examples, transducers204can be bonded to housing202by a bonding agent (e.g., a thin layer of stiff epoxy). In some examples, transducers204can be deposited on one or more surfaces (e.g., a cover glass of touch screen208and/or a deformable material as described in more detail below) through processes such as deposition, lithography, or the like. In some examples, transducers204can be bonded to the one or more surfaces using electrically conductive or non-conductive bonding materials. When electrical energy is applied to transducers204it can cause the transducers to vibrate, the one or more surfaces in contact with the transducers can also be caused to vibrate, and the vibrations of the molecules of the surface material can propagate as an acoustic wave through the one or more surfaces/materials. In some examples, vibration of transducers204can be used to produce ultrasonic acoustic waves at a selected frequency over a broad frequency range (e.g., 500 kHz-10 MHz) in the medium of the surface of the electronic device which can be metal, plastic, glass, wood, or the like. It should be understood that other frequencies outside of the exemplary range above can be used while remaining within the scope of the present disclosure. In some examples, transducers204can be partially or completely disposed on (or coupled to) a portion of a touch screen208. For example, touch screen208(e.g., capacitive) may include a glass panel (cover glass) or a plastic cover, and a display region of the touch screen may be surrounded by a non-display region (e.g., a black border region surrounding the periphery of the display region of touch screen208). 
In some examples, transducers204can be disposed partially or completely in the black mask region of touch screen208(e.g., on the back side of the glass panel behind the black mask) such that the transducers are not visible (or are only partially visible) to a user. In some examples, transducers204can be partially or completely disposed on (or coupled to) a portion of a deformable material (not shown). In some examples, the deformable material can be disposed between touch screen208and a rigid material (e.g., a portion of housing202). In some examples, the deformable material can be silicone, rubber or polyethylene. In some examples, the deformable material can also be used for water sealing of the device. Device200can further include acoustic touch and/or force sensing circuitry206, which can include circuitry for driving electrical signals to stimulate vibration of transducers204(e.g., transmit circuitry), as well as circuitry for sensing electrical signals output by transducers204when the transducer is stimulated by received acoustic energy (e.g., receive circuitry). In some examples, timing operations for acoustic touch and/or force sensing circuitry206can optionally be provided by a separate acoustic touch and/or force sensing controller210that can control timing of and other operations by acoustic touch and/or force sensing circuitry206. In some examples, touch and/or force sensing controller210can be coupled between acoustic touch and/or force sensing circuitry206and host processor214. In some examples, controller functions can be integrated with acoustic touch and/or force sensing circuitry206(e.g., on a single integrated circuit). In particular, examples integrating touch and force sensing circuitry and controller functionality into a single integrated circuit can reduce the number of transducers (sensor elements) and electronic chipsets for a touch and force sensing device. Output data from acoustic touch and/or force sensing circuitry206can be output to a host processor214for further processing to determine a location of and a force applied by an object contacting the device as will be described in more detail below. In some examples, the processing for determining the location of and a force applied by the contacting object can be performed by acoustic touch and/or force sensing circuitry206, acoustic touch and/or force sensing controller210or a separate sub-processor of device200(not shown). In addition to acoustic touch and/or force sensing, device200can include additional touch circuitry212and optionally a touch controller (not shown) that can be coupled to the touch screen208. In examples including a touch controller, the touch controller can be disposed between touch circuitry212and host processor214. Touch circuitry212can, for example, be capacitive or resistive touch sensing circuitry, and can be used to detect contact and/or hovering of objects (e.g., fingers, styli) in contact with and/or in proximity to touch screen208, particularly in the display region of the touch screen. Thus, device200can include multiple types of sensing circuitry (e.g., touch circuitry212and acoustic touch and/or force sensing circuitry206) for detecting objects (and their positions and/or applied force) in different regions of the device and/or for different purposes, as will be described in more detail below. 
Although described herein as including a touch screen, it should be understood that touch circuitry212can be omitted, and in some examples, touch screen208can be replaced by an otherwise non-touch-sensitive display (e.g., but-for the acoustic sensors). Host processor214can receive acoustic or other touch outputs (e.g., capacitive) and/or force outputs and perform actions based on the touch outputs and/or force outputs. Host processor214can also be connected to program storage216and touch screen208. Host processor214can, for example, communicate with touch screen208to generate an image on touch screen208, such as an image of a user interface (UI), and can use touch sensing circuitry212and/or acoustic touch and/or force sensing circuitry206(and, in some examples, their respective controllers) to detect a touch on or near touch screen208and/or an applied force, such as a touch input and/or force input to the displayed UI. The touch input and/or force input can be used by computer programs stored in program storage216to perform actions that can include, but are not limited to, moving an object such as a cursor or pointer, scrolling or panning, adjusting control settings, opening a file or document, viewing a menu, making a selection, executing instructions, operating a peripheral device connected to the host device, answering a telephone call, placing a telephone call, terminating a telephone call, changing the volume or audio settings, storing information related to telephone communications such as addresses, frequently dialed numbers, received calls, missed calls, logging onto a computer or a computer network, permitting authorized individuals access to restricted areas of the computer or computer network, loading a user profile associated with a user's preferred arrangement of the computer desktop, permitting access to web content, launching a particular program, encrypting or decoding a message, and/or the like. Host processor214can also perform additional functions that may not be related to touch and/or force processing. Note that one or more of the functions described herein can be performed by firmware stored in memory and executed by touch circuitry212and/or acoustic touch and/or force sensing circuitry206(or their respective controllers), or stored in program storage216and executed by host processor214. The firmware can also be stored and/or transported within any non-transitory computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a “non-transitory computer-readable storage medium” can be any medium (excluding a signal) that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device. 
The non-transitory computer-readable storage medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, a portable computer diskette (magnetic), a random access memory (RAM) (magnetic), a read-only memory (ROM) (magnetic), an erasable programmable read-only memory (EPROM) (magnetic), a portable optical disc such as a CD, CD-R, CD-RW, DVD, DVD-R, or DVD-RW, or flash memory such as compact flash cards, secured digital cards, USB memory devices, memory sticks, and the like. The firmware can also be propagated within any transport medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a "transport medium" can be any medium that can communicate, propagate or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The transport readable medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic or infrared wired or wireless propagation medium. It is to be understood that device200is not limited to the components and configuration ofFIG.2, but can include other or additional components in multiple configurations according to various examples. Additionally, the components of device200can be included within a single device, or can be distributed between multiple devices. Additionally, it should be understood that the connections between the components are exemplary and different unidirectional or bidirectional connections can be included between the components depending on the implementation, irrespective of the arrows shown in the configuration ofFIG.2. FIG.3Aillustrates an exemplary process300for acoustic touch and/or force sensing of an object in contact with a touch and/or force sensitive surface according to examples of the disclosure.FIG.3Billustrates an exemplary system310, which can perform an exemplary process300for acoustic touch and/or force sensing of an object in contact with a touch and/or force sensitive surface, according to examples of the disclosure. At302, acoustic energy can be transmitted (e.g., by one or more transducers204) along a surface and/or through the thickness of a material in the form of an ultrasonic wave, for example. For example, as illustrated inFIG.3B, transducer314can generate a transmit ultrasonic wave322in cover glass312(or other material capable of propagating an ultrasonic wave). In some examples, the wave can propagate as a compressive wave, a guided wave such as a shear horizontal wave, a Rayleigh wave, a Lamb wave, a Love wave, a Stoneley wave, or a surface acoustic wave. Other propagation modes for the transmitted acoustic energy can also exist based on the properties of the surface material, geometry and the manner of energy transmission from the transducers to the surface of the device. In some examples, the surface can be formed from glass, plastic, or sapphire crystal (e.g., touch screen208, cover glass312) or the surface can be formed from metal, ceramics, plastic, or wood (e.g., housing202).
Transmitted energy can propagate along the surface (e.g., cover glass312) and/or through the thickness until a discontinuity in the surface is reached (e.g., an object, such as a finger320, in contact with the surface), which can cause a portion of the energy to reflect. In some examples, a discontinuity can occur at edges (e.g., edge330) of the surface material (e.g., when the ultrasonic wave propagates to the edge of the surface opposite the transducer). When the transmitted energy reaches one of the discontinuities described above, some of the energy can be reflected, and a portion of the reflected energy (e.g., object-reflected wave326, edge-reflected wave328) can be directed to one or more transducers (e.g., transducers204,314). In some examples, water or other fluids in contact with the surface of the device (e.g., device200) will not act as a discontinuity to the acoustic waves, and thus the acoustic touch sensing process can be effective for detecting the presence of an object (e.g., a user's finger) even in the presence of water drops (or other low-viscosity fluids) on the surface of the device or even while the device is fully submerged. At304, returning acoustic energy can be received, and the acoustic energy can be converted to an electrical signal by one or more transducers (e.g., transducers204). For example, as illustrated inFIG.3B, object-reflected wave326and edge-reflected wave328can be received by transducer314and converted into an electrical signal. At306, the acoustic sensing system can determine whether one or more objects is contacting the surface of the device, and can further detect the position of one or more objects based on the received acoustic energy. In some examples, a distance of the object from the transmission source (e.g., transducers204) can be determined from a time-of-flight between transmission and reception of reflected energy, and a propagation rate of the ultrasonic wave through the material. In some examples, baseline reflected energy from one or more intentionally included discontinuities (e.g., edges) can be compared to a measured value of reflected energy corresponding to the one or more discontinuities. The baseline reflected energy can be determined during a measurement when no object (e.g., finger) is in contact with the surface. Deviations of the reflected energy from the baseline can be correlated with a presence of an object touching the surface. Although process300, as described above, generally refers to reflected waves received by the same transducer(s) that transmitted the waves, in some examples, the transmitter and receiver functions can be separated such that the transmission of acoustic energy at302and receiving acoustic energy at304may occur at different co-located transducers (e.g., one transducer in a transmit configuration and one transducer in a receive configuration). In some examples, the acoustic energy can be transmitted along and/or through the surface (e.g., cover glass312) by one or more transducers (e.g., transducer314) and received on an opposite edge (e.g., edge330) of the surface by one or more additional transducers (not shown). The attenuation of the received acoustic energy can be used to detect the presence of and/or identify the position of one or more objects (e.g., finger320) on the surface (e.g., cover glass312). Exemplary device configurations and measurement timing examples that can be used to implement process300will be described in further detail below. 
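The baseline comparison described above, in which deviations of edge-reflected energy from a no-touch baseline indicate the presence of an object, might be realized along the lines of the following sketch. The tolerance and the per-edge energy values are assumptions, not values from the disclosure.

```python
import numpy as np

# Minimal sketch (threshold and data shapes are assumptions): detect a touch by comparing
# the currently measured edge-reflected energy against a stored no-touch baseline. An
# object on the surface absorbs or redirects energy, so the edge echo deviates from baseline.

BASELINE_TOLERANCE = 0.15  # assumed fractional deviation that counts as a touch

def touch_detected(measured_edge_energy: np.ndarray, baseline_edge_energy: np.ndarray) -> bool:
    deviation = np.abs(measured_edge_energy - baseline_edge_energy) / (baseline_edge_energy + 1e-12)
    return bool(np.any(deviation > BASELINE_TOLERANCE))

baseline = np.array([1.00, 0.98, 1.02])   # per-edge baseline captured with no object present
measured = np.array([1.01, 0.70, 1.01])   # an energy dip on one edge suggests a touch in its path
print(touch_detected(measured, baseline))
```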
In some examples, the transmitted acoustic energy from transducer314can be received at the transmitting transducer and also received at one or more other non-transmitting transducers located in different positions (e.g., at different edges of the surface (e.g., cover glass312)). Energy can reflect from one or more objects at multiple angles, and the energy received at all of the receiving transducers can be used to determine the position of the one or more objects. In some examples, the non-transmitting transducers can be free of artifacts that can be associated with transmitting acoustic energy (e.g., ringing). In some examples, the energy can be received at two transducers perpendicular to the transmitting transducer. In some examples, the acoustic energy transmitted and received through a deformable material can be used to determine changes in the thickness of the deformable material and/or an applied force. For example, at302, acoustic energy can be transmitted (e.g., by transducer314) through the thickness of deformable material316in the form of a transmit ultrasonic wave324. Transmitted energy can propagate through the deformable material316until it reaches a discontinuity at the rigid material318(e.g., at the opposite edge of the deformable material316). When the transmitted energy reaches the discontinuity, some of the energy can be reflected, and a portion of the reflected energy can be directed back to transducer314. At304, returning acoustic energy can be received, and the acoustic energy can be converted to an electrical signal by transducer314. At306, the acoustic sensing system can determine an amount of force applied by one or more objects contacting the surface (e.g., cover glass312) based on the received acoustic energy. In some examples, a thickness of deformable material316can be determined from a time-of-flight between transmission and reception of reflected energy, and a propagation rate of the ultrasonic wave through the material. Changes in the thickness of the deformable material (or the time-of-flight through the deformable material) can be used to determine an amount of applied force, as described in more detail below. FIG.4illustrates an exemplary configuration of an acoustic touch and/or force sensing circuit400according to examples of the disclosure. Acoustic touch and/or force sensing circuit400can include transmit circuitry (also referred to herein as Tx circuitry or transmitter)402, switching circuitry404, receive circuitry (also referred to herein as Rx circuitry or receiver)408and input/output (I/O) circuit420(which together can correspond to acoustic touch and/or force sensing circuitry206) and acoustic scan control logic422(which can correspond to acoustic touch and/or force sensing controller210). Transmitter402, switching circuitry404, receiver408, I/O circuit420and/or acoustic scan control logic422can be implemented in an application specific integrated circuit (ASIC) in some examples. In some examples, acoustic touch and/or force sensing circuit400can also optionally include transducers406(which can correspond to transducers204). In some examples, a transmitter402can generate an electrical signal for stimulating movement of one or more of a plurality of transducers406. In some examples, the transmitted signal can be a differential signal, and in some examples, the transmitted signal can be a single-ended signal.
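The force determination described above, in which the time-of-flight through deformable material316tracks its thickness, could be sketched as follows. The propagation speed and effective stiffness are assumed constants chosen for illustration, not values from the disclosure; the round trip is assumed to traverse the layer twice.

```python
# Minimal sketch (material constants are assumptions): estimate applied force from the
# change in round-trip time of flight through the deformable layer. Compressing the layer
# shortens the TOF; the thickness change is converted to force with an assumed stiffness.

SPEED_IN_LAYER_M_PER_S = 1000.0   # assumed ultrasound speed in the deformable material
LAYER_STIFFNESS_N_PER_M = 5.0e4   # assumed effective spring constant of the layer

def thickness_from_tof(round_trip_tof_s: float) -> float:
    # The round trip traverses the layer twice.
    return round_trip_tof_s * SPEED_IN_LAYER_M_PER_S / 2.0

def force_from_tof(baseline_tof_s: float, measured_tof_s: float) -> float:
    compression = thickness_from_tof(baseline_tof_s) - thickness_from_tof(measured_tof_s)
    return max(0.0, compression * LAYER_STIFFNESS_N_PER_M)

print(force_from_tof(baseline_tof_s=2.0e-6, measured_tof_s=1.9e-6))  # about 2.5 N under these assumptions
```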
In some examples, transmitter402can be a simple buffer, and the transmitted signal can be a pulse (or burst of pulses at a particular frequency). In some examples, transmitter402can include a digital-to-analog converter (DAC)402A and an optional filter402B that can be optionally used to smooth a quantized output of DAC402A. In some examples, characteristics of the transducer itself can provide a filtering property and filter402B can be omitted. DAC402A can be used to generate transmit waveform (e.g., any transmit waveform suitable for the touch and/or force sensing operations discussed herein). In some examples, the transmit waveform output can be pre-distorted to equalize the channel. In some examples, the characteristics of each channel, such as the properties of the surface material (and/or deformable material) coupled to transducers406, the discontinuities in the surface material and/or deformable material, and the reflection characteristics of an edge of the device or deformable material can be measured and stored. In some examples, the channel characteristics can be measured as a manufacturing step (or factory calibration step), and in other examples the characteristics can be measured as a periodic calibration step (i.e., once a month, once a year, etc. depending on how quickly the channel characteristics are expected to change). In some examples, the channel characteristics can be converted to a transfer function of the channel, and the transmit waveform can be configured using the inverse of the channel transfer function such that the returning signal is equalized (e.g., returning signal can be detected as a pulse or a burst of pulses despite the transmitted waveform having a seemingly arbitrary waveform). In some examples, a single differential pulse can be used as a transmit waveform. For example, a bipolar square pulse (where the voltage applied to the transducer can be both positive and negative) can be used as the transmit waveform, and the bipolar square pulse can be implemented using a single-ended or differential implementation. In some examples, an energy recovery architecture can be used to recover some of the energy required for charging and discharging the transducer. Switching circuitry404can include multiplexers (MUXs) and/or demultiplexers (DEMUXs) that can be used to selectively couple transmitter402and/or receiver408to one of transducers406that can be the active transducer for a particular measurement step in a measurement cycle. In a differential implementation, switching circuitry404can include two MUXs and two DEMUXs. In some examples, a DEMUX can have a ground connection, and the non-selected DEMUX outputs can be shorted, open, or grounded. In some examples, the same transducer406can be coupled to transmitter402by switching circuitry404(e.g., DEMUXs) during the drive mode and coupled to receiver408by switching circuitry404(e.g., MUXs) during the receive mode. Thus, in some examples, a single transducer406can be used both for transmitting and receiving acoustic energy. In some examples, a first transducer can be coupled to transmitter402by switching circuitry404(e.g. DEMUXs) and a second transducer can be coupled by switching circuitry404(e.g., MUXs) to receiver408. 
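The transmit-waveform pre-distortion described above, which shapes the drive signal with the inverse of a measured channel transfer function so the returning signal approximates a desired pulse, might be approximated as in the following sketch. The regularization constant, the desired pulse, and the stand-in channel response are all assumptions for illustration.

```python
import numpy as np

# Minimal sketch (regularization and signals are illustrative, not the product's method):
# pre-distort a desired receive pulse by a regularized inverse of a measured channel
# impulse response, computed in the frequency domain.

def predistort(desired_rx_pulse: np.ndarray, channel_impulse_response: np.ndarray,
               eps: float = 1e-3) -> np.ndarray:
    n = len(desired_rx_pulse)
    H = np.fft.rfft(channel_impulse_response, n)
    D = np.fft.rfft(desired_rx_pulse, n)
    # Regularized inverse filter avoids blowing up where the channel response is weak.
    X = D * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(X, n)

desired = np.zeros(256); desired[10:14] = 1.0                              # target: a short burst at the receiver
channel = np.exp(-np.arange(256) / 20.0) * np.cos(0.5 * np.arange(256))    # stand-in measured channel response
tx_waveform = predistort(desired, channel)                                 # drive the DAC with this waveform
```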
For example, the transmitting transducer and the receiving transducer can be discrete piezoelectric elements, where the transmitting transducer can be designed for being driven by higher voltages (or currents) to produce sufficient motion in transducer406to generate an acoustic wave in the surface of a device (e.g., device200above), and the receiving transducer can be designed for receiving smaller amplitude reflected energy. In such a configuration, the transmit-side circuitry (e.g., transmitter402and DEMUXs of switching circuitry404) can be optionally implemented on a high voltage circuit, and the receive-side circuitry (e.g., receiver408and MUXs of switching circuitry404) can be optionally implemented on a separate low voltage circuit. In some examples, switching circuitry404(MUXs and DEMUXs) can also be implemented on the high voltage circuit to properly isolate the remaining receive-side circuitry (e.g., receiver408) during transmission operations by transmit side circuitry. Additionally or alternatively, in some examples, the transmit circuit can include an energy recovery architecture that can be used to recover some of the energy required for charging and discharging the transducer. It should be understood that for a single-ended implementation, switching circuitry404can include a single DEMUX and MUX. In such a configuration, transmitter402and receiver408can be single-ended as well. Differential implementations, however, can provide improved noise suppression over a single-ended implementation. Receiver408can include an amplifier410such as a low-noise amplifier (LNA) configured to sense the transducer. Receiver408can also include a gain and offset correction circuit412. The gain and offset correction circuit can include a programmable gain amplifier (PGA) configured to apply gain to increase (or in some cases decrease) the amplitude of the signals received from LNA. The PGA can also be configured to filter (e.g., low pass) the signals received from the LNA to remove high frequency components. Additionally, the PGA circuit can also be configured to perform baselining (offset correction). In some examples, the output of gain and offset correction circuit412can optionally be coupled to one or more analog processing circuits. In some examples, the output of gain and offset correction circuit412can be coupled to a demodulation circuit414configured to demodulate the received signals (e.g., by I/Q demodulation). In some examples, the output of the gain and offset correction circuit412can be coupled to an envelope detection circuit415configured to perform envelope detection on the received signals. In some examples, the output of gain and offset correction circuit412can be filtered at filter416. In some examples, these blocks/circuits can be placed in a different order. In some examples, the processing of one or more of these analog processing circuits can be performed in the digital domain. The received signals, whether raw or processed by one or more of demodulation circuit414, envelope detection circuit415or filter416, can be passed to an analog-to-digital converter (ADC)418for conversion to a digital signal. In some examples, an input/output (I/O) circuit420can be used to transmit received data for processing. In some examples, the output of I/O circuit420can be transferred to a host processor of the device, or to an auxiliary processor (sub-processor) separate from the host processor. 
For example, as illustrated, the output of I/O circuit420can be coupled to a processor system-on-chip (SoC)430, which can include one or more processors. In some examples, processor SoC430can include a host processor432(e.g., an active mode processor) and an auxiliary processor434(e.g., a low power processor). In some examples, some digital signal processing can be performed (e.g., by acoustic touch and/or force sensing circuit400) before transmitting the data to other processors in the system (e.g., processor SoC430). In some examples, the I/O circuit420is not only used for data transfer to processor SoC430(e.g., host processor432), but also is used for writing the control registers and/or firmware download from processor SoC430. The components of receiver circuitry408described above can be implemented to detect touch (e.g., presence and location of a touch on a surface). In some examples, receiver408can also include a force detection circuit424to detect applied force (e.g., of the touch on the surface). In some examples, the force detection circuit424can include the same or similar components as described above (e.g., amplifier, gain and offset correction, etc.). In some examples, the function of force detection circuit424can be performed using the same components described above that are used to determine time-of-flight for touch detection. In some examples, a low-power time gating circuit can be used to determine time-of-flight for force detection. Data from force detection circuit424can be transferred to I/O circuit420and/or processor SoC430for further processing of force data in a similar manner as described above for touch data. In some examples, the same circuitry for touch detection can be used to detect force. A control circuit, acoustic scan control circuit422, can be used to control timing and operations of the circuitry of acoustic touch and/or force sensing circuit400. Acoustic scan control circuit422can be implemented in hardware, firmware, software or a combination thereof. In some examples, acoustic scan control circuit422can include digital logic and timing control. Digital logic can provide the various components of acoustic touch and/or force sensing circuit400with control signals. A timing control circuit can generate timing signals for acoustic touch and/or force sensing circuit400and generally sequence the operations of acoustic touch and/or force sensing circuit400. In some examples, the acoustic touch and/or force sensing circuit400can receive a master clock signal from an external source (e.g., clock from the host processor, crystal oscillator, ring oscillator, RC oscillator, or other high-performance oscillator). In some examples, an on-chip oscillator can be used to generate the clock. In some examples, a master clock signal can be generated by an on-chip phase locked loop (PLL), included as part of acoustic touch and/or force sensing circuit400, using an external clock as the input. In some examples, a master clock signal can be routed to the acoustic touch sensing circuit from processor SoC430. The appropriate master clock source can be determined based on a tradeoff between area, thickness of the stack-up, power and electromagnetic interference. It is to be understood that the configuration ofFIG.4is not limited to the components and configuration ofFIG.4, but can include other or additional components (e.g., memory, signal processor, etc.) in multiple configurations according to various examples.
Additionally, some or all of the components illustrated inFIG.4can be included in a single circuit, or can be divided among multiple circuits while remaining within the scope of the examples of the disclosure. As described herein, various acoustic sensing techniques can be used to determine the position of an object touching a surface and/or its applied force on the surface. In some examples, one or more time-of-flight measurements can be performed using one or more acoustic transducers to determine boundaries of the position of the contacting object.FIGS.5A-5Cillustrate exemplary system configurations and timing diagrams for acoustic touch sensing to determine position using time-of-flight measurements according to examples of the disclosure.FIG.5Aillustrates an exemplary acoustic touch sensing system configuration using four acoustic transducers502A-D mounted along (or otherwise coupled to) four edges of a surface500(e.g., corresponding to cover glass312). Transducers502A-D can be configured to generate acoustic waves (e.g., shear horizontal waves) and to receive the reflected acoustic waves. Propagation of shear horizontal waves can be unaffected by water on surface500because low viscosity fluids and gases (such as water and air) have a very low shear modulus, and therefore do not perturb the boundary conditions that affect wave propagation. Shear horizontal waves can be highly directional waves such that the active detection region (or active area)504can be effectively defined based on the position and dimensions of the acoustic transducers502A-D. It should be understood, however, that active area can change based on the directionality property of the acoustic waves and the size and placement of acoustic transducers502A-D. Additionally, it should be understood that although illustrated as transmit and receive transducers (i.e., transceivers), in some examples, the transmit and receive functions can be divided (e.g., between two transducers in proximity to one another, rather than one transmit and receive transducer). The position of a touch506from an object in contact with surface500can be determined by calculating TOF measurements in a measurement cycle using each of acoustic transducers502A-D. For example, in a first measurement step of the measurement cycle, acoustic transducer502A can transmit an acoustic wave and receive reflections from the acoustic wave. When no object is present, the received reflection can be the reflection from the acoustic wave reaching the opposite edge of surface500. However, when an object is touching surface500(e.g., corresponding to touch506), a reflection corresponding to the object can be received before receiving the reflection from the opposite edge. Based on the received reflection corresponding to the object received at transducer502A, the system can determine a distance to the edge (e.g., leading edge) of touch506, marked by boundary line510A. Similar measurements can be performed by transducers502B,502C and502D to determine a distance to the remaining edges of touch506, indicated by boundary lines510B,510C and510D. Taken together, the measured distances as represented by boundary lines510A-510D can form a bounding box508. In some examples, based on the bounding box, the acoustic touch sensing system can determine the area of the touch (e.g., the area of the bounding box). Based on the bounding box, the acoustic touch sensing system can determine position of touch506(e.g., based on a centroid and/or area of the bounding box). 
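As a rough, non-limiting sketch of the bounding-box approach described above, the following Python fragment converts the four per-transducer distances (corresponding to boundary lines510A-510D) into a bounding box, its area, and a centroid. The panel dimensions and distances are hypothetical values chosen only for illustration and are not taken from the disclosure.

```python
# Illustrative sketch only: distances are measured from the left, top, right,
# and bottom transducers to the nearest edge of the touch; the panel width and
# height are assumed example dimensions (e.g., millimeters).
def touch_from_distances(d_left, d_top, d_right, d_bottom, width, height):
    x0, x1 = d_left, width - d_right      # horizontal bounds of the touch
    y0, y1 = d_top, height - d_bottom     # vertical bounds of the touch
    area = max(x1 - x0, 0.0) * max(y1 - y0, 0.0)
    centroid = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)
    return (x0, y0, x1, y1), centroid, area

# Example: distances of 30, 40, 20 and 15 mm on a 70 mm x 140 mm panel.
bbox, centroid, area = touch_from_distances(30.0, 40.0, 20.0, 15.0, width=70.0, height=140.0)
print(bbox, centroid, area)   # (30.0, 40.0, 50.0, 125.0) (40.0, 82.5) 1700.0
```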
The acoustic touch sensing scan described with reference toFIG.5Acan correspond to the acoustic touch detection described above with reference toFIGS.3A and3B. Acoustic waves transmitted and received along or through cover glass312can be used to determine the position/location of an object touching the surface of cover glass312. FIG.5Billustrates an exemplary timing diagram560for an acoustic touch sensing scan described inFIG.5Aaccording to examples of the disclosure. As illustrated inFIG.5B, each of the transducers can transmit acoustic waves and then receive reflected waves in a series of measurement steps. For example, from t0 to t1 a first transducer (e.g., acoustic transducer502A) can be stimulated, and reflections at the first transducer can be received from t1 to t2. From t2 to t3 a second transducer (e.g., acoustic transducer502B) can be stimulated, and reflections at the second transducer can be received from t3 to t4. From t4 to t5 a third transducer (e.g., acoustic transducer502C) can be stimulated, and reflections at the third transducer can be received from t5 to t6. From t6 to t7 a fourth transducer (e.g., acoustic transducer502D) can be stimulated, and reflections at the fourth transducer can be received from t7 to t8. Although the transmit (Tx) and receive (Rx) functions are shown back-to-back inFIG.5Bfor each transducer, in some examples, gaps can be included between Tx and Rx functions for a transducer (e.g., to minimize capturing portions of the transmitted wave at the receiver), and/or between the Tx/Rx functions of two different transducers (such that acoustic energy and the transients caused by multiple reflections from a scan by one transducer does not impact a scan by a second transducer). In some examples, unused transducers can be grounded (e.g., by multiplexers/demultiplexers in switching circuitry404). The distance between an object touching the surface and a transducer can be calculated based on TOF principles. The acoustic energy received by transducers can be used to determine a timing parameter indicative of a leading edge of a touch. The propagation rate of the acoustic wave through the material forming the surface provides a known relationship between distance and time. Taken together, the known relationship between distance and time and the timing parameter can be used to determine distance.FIG.5Cillustrates an exemplary timing diagram according to examples of the disclosure.FIG.5Cillustrates the transducer energy output versus time. Signal550can correspond to the acoustic energy at the transducer from the generation of the acoustic wave at a first edge of the surface. Signal552can correspond to the acoustic energy at the transducer received from the wave reflected off of a second edge opposite the first edge of the surface. Due to the known distance across the surface from the first edge to the opposite second edge and the known or measured propagation rate of the acoustic signal, the reflection off of the opposite edge of the surface occurs at a known time. Additionally, one or more objects (e.g., fingers) touching the surface can cause reflections of energy in the time between the generation of the wave and the edge reflection (i.e., between signals550and552). For example, signals554and556can correspond to reflections of two objects touching the surface (or a leading and trailing edge of one object). It should be understood that signals550-556are exemplary and the actual shape of the energy received can be different in practice.
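The sketch below illustrates, under assumed values, how a timing parameter and the known propagation rate described above can be combined to estimate the distance to a reflecting object: the first sample whose amplitude crosses a threshold is taken as the arrival of the leading-edge reflection, and the round-trip time is converted to a one-way distance. The sampling rate, propagation rate, and threshold are assumptions for illustration only.

```python
import numpy as np

FS = 10e6                      # assumed receiver sampling rate (Hz)
PROPAGATION_RATE = 3000.0      # assumed wave speed in the surface material (m/s)

def distance_from_reflection(rx: np.ndarray, threshold: float) -> float:
    """Distance (m) to the first reflection whose amplitude crosses the threshold."""
    crossings = np.nonzero(np.abs(rx) >= threshold)[0]
    if crossings.size == 0:
        return float("nan")                   # no reflection detected in this record
    tof_s = crossings[0] / FS                 # round-trip time-of-flight (t=0 at transmit)
    return PROPAGATION_RATE * tof_s / 2.0     # halve for the round trip

rx = np.zeros(4096)
rx[200:232] = 0.8                             # toy echo arriving 20 us after transmission
print(distance_from_reflection(rx, threshold=0.5))   # ~0.03 m (30 mm) with these values
```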
In some examples, the timing parameter can be a moment in time that can be derived from the reflected energy. For example, the time can refer to that time at which a threshold amplitude of a packet of the reflected energy is detected. In some examples, rather than a threshold amplitude, a threshold energy of the packet of reflected energy can be detected, and the time can refer to that time at which a threshold energy of the packet is detected. The threshold amplitude or threshold energy can indicate the leading edge of the object in contact with the surface. In some examples, the timing parameter can be a time range rather than a point in time. To improve the resolution of a TOF-based sensing scheme, the frequency of the ultrasonic wave and sampling rate of the receivers can be increased (e.g., so that receipt of the reflected wave can be localized to a narrower peak that can be more accurately correlated with a moment in time). In some examples (e.g., as illustrated inFIG.5B), transducers502A-D can operate in a time multiplexed manner, such that each transducer transmits and receives an acoustic wave at a different time during a measurement cycle so that the waves from one transducer do not interfere with waves from another transducer. In other examples, the transducers can operate in parallel or partially in parallel in time. The signals from the respective transducers can then be distinguished based on different characteristics of the signals (e.g., different frequencies, phases and/or amplitudes). Although four transducers are illustrated inFIG.5A, in some examples, fewer transducers can be used. For example, when using an input object with known dimensions (e.g., stylus or a size-characterized finger), as few as two transducers mounted along two perpendicular edges can be used. Based on the known dimensions of an object, a bounding box518can be formed by adding the known dimensions of the object to the first and second distances, for example. Additionally, althoughFIG.5Aillustrates detection of a single object (e.g., single touch), in some examples, the acoustic touch sensing system can use more transducers and be configured to detect multiple touches (e.g., by replacing each of transducers502A-D with multiple smaller transducers). TOF schemes described with reference toFIGS.5A-5Ccan provide for touch sensing capability using a limited number of transducers (e.g., as compared with a number of electrodes/touch nodes of a capacitive touch sensing system) which can simplify the transmitting and receiving electronics, and can reduce time and memory requirements for processing. AlthoughFIGS.5A-5Cdiscuss using a bounding box based on TOF measurements to determine position of an object, in other examples, different methods can be used, including applying matched filtering to a known transmitted ultrasonic pulse shape, and using a center of mass calculation on the filtered output (e.g., instead of a centroid). In some examples, a time-of-flight measurement can be performed using one or more acoustic transducers to determine an amount of force applied by an object touching a surface.FIGS.6A-6Dillustrate exemplary system configurations and timing diagrams for acoustic force sensing to determine an amount of applied force using a time-of-flight measurement according to examples of the disclosure.FIG.6Aillustrates an exemplary acoustic force sensing system stack-up600including a deformable material604in between two rigid surfaces. 
One of the rigid surfaces can be a cover glass601(e.g., corresponding to cover glass312). The second of the rigid surfaces can be a portion of a device housing, for example (e.g., corresponding to housing202). An acoustic transducer602(e.g., corresponding to transducer314) can be mounted to (or otherwise coupled to) the deformable material604. For example, as illustrated inFIG.6A, transducer602can be disposed between cover glass601and deformable material604. Transducer602can be configured to generate acoustic waves (e.g., shear horizontal waves) and to receive the reflected acoustic waves from the discontinuity at the edge between deformable material604and rigid material606. It should be understood that although illustrated as transmit and receive transducers (i.e., transceivers), in some examples, the transmit and receive functions can be divided (e.g., between two transducers in proximity to one another, rather than one transmit and receive transducer). Shear horizontal waves can be highly directional waves such that the time of flight can effectively measure the thickness of the deformable material. A baseline thickness (or time-of-flight) can be determined for a no-force condition, such that changes in thickness (Δd) (or time-of-flight) can be measured. Changes in thickness or time-of-flight can correspond to the amount of applied force. For example, plot630ofFIG.6Dillustrates an exemplary relationship between time-of-flight (or thickness) and applied force according to examples of the disclosure. For example, in a steady state condition, where there is no change in time-of-flight across the deformable material604, the applied force can be zero. As the time-of-flight varies (e.g., decreases), the applied force can vary as well (e.g., increase). Plot630illustrates a linear relationship between TOF and force, but in some examples, the relationship can be non-linear. The relationship between TOF and applied force can be empirically determined (e.g., at calibration) using a correlation. In some examples, the calibration can include linearizing the inferred applied force and normalizing the measurements (e.g., removing gain and offset errors). In some examples, the Young's modulus of the deformable material can be selected below a threshold to allow a small applied force to introduce a detectable normal deformation. FIG.6Billustrates another exemplary acoustic force sensing system stack-up610including a deformable material614in between two rigid surfaces (e.g., between cover glass611and rigid material618). An acoustic transducer612can be mounted to (or otherwise coupled to) one side of deformable material614, and a second acoustic transducer616can be mounted to (or otherwise coupled to) a second side (opposite the first side) of deformable material614. For example, as illustrated inFIG.6B, transducer612can be disposed between cover glass611and deformable material614and transducer616can be disposed between rigid material618and deformable material614. Transducer612can be configured to generate acoustic waves (e.g., shear horizontal waves) and transducer616can be configured to receive the acoustic waves. The configuration of transducers in stack-up610can be referred to as a "pitch-catch" configuration in which one transducer on one side of a material transmits acoustic waves to a second transducer on an opposite side, rather than relying on a reflected acoustic wave.
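A minimal sketch of the linear mapping suggested by plot630, assuming hypothetical calibration constants: the decrease in round-trip time-of-flight through the deformable material relative to a no-force baseline is scaled by a calibrated slope to yield an applied force. A non-linear fit or lookup table could be substituted where the relationship is not linear.

```python
# Assumed calibration constants for illustration (not values from the disclosure).
BASELINE_TOF_S = 600e-9        # no-force round-trip time-of-flight through the material
NEWTONS_PER_SECOND = 5e7       # calibrated slope: force per unit decrease in TOF

def force_from_tof(measured_tof_s: float) -> float:
    """Applied force inferred from a measured round-trip TOF (compression shortens the TOF)."""
    delta_tof = BASELINE_TOF_S - measured_tof_s
    return max(NEWTONS_PER_SECOND * delta_tof, 0.0)

print(force_from_tof(580e-9))   # a 20 ns decrease maps to 1.0 N with these constants
```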
The time-of-flight between the time of transmission and the time of receipt of the acoustic wave can be measured to determine the amount of applied force in a similar manner as discussed above with respect toFIG.6D. FIG.6Cillustrates an exemplary timing diagram640according to examples of the disclosure.FIG.6Cillustrates the transducer energy output versus time. Signal620can correspond to the acoustic energy at transducer602from the generation of the acoustic wave at a first edge of the deformable material604. Signal622can correspond to the acoustic energy at transducer602received from a first wave reflected off of a second edge, opposite the first edge, of the deformable material604. Due to the known distance across the deformable material604from the first edge to the opposite, second edge (under steady-state) and the known or measured propagation rate of the acoustic signal, the reflection off of the opposite edge of the deformable material604occurs at a known time. In some examples, rather than using the first reflection, a different reflection of the acoustic energy can be used to determine time of flight. For example, signal624can refer to the acoustic energy at transducer602received from a second wave reflected off of the second edge of deformable material604(e.g., signal622can reflect off of the first side of deformable material604and reflect a second time off of the second edge of deformable material604). In some examples, signal626can correspond to a reflection received after an integer number of repeated reflections between the two edges of deformable material604. It should be understood that signals620-626are exemplary and the actual shape of the energy received can be different in practice. In some examples, the choice of which reflection to use for the time-of-flight calculation for force sensing can be a function of the thickness of the material and the frequency of the transmitted wave. In some examples, rather than using time-of-flight measurements to determine thickness of the deformable material, other methods can be used. For example, transducer602can stimulate the deformable material604with ultrasonic waves at a resonant frequency. As the deformable material604changes in thickness due to applied force, the resonant frequency can shift. The change in resonant frequency can be measured to determine the applied force. Using a resonant frequency can result in better signal-to-noise ratio (SNR) performance and better accuracy as compared with the time-of-flight method. As described above with reference toFIGS.3A-3B, in some examples acoustic touch and force sensing can both be performed. In some examples, the two operations can be time-multiplexed. Transducers502A-D (e.g., one of which can correspond to transducer314) can generate transmit waveforms and receive reflections to determine a location/position of touch on a surface (e.g., cover glass312) as described with reference to timing diagram560during an acoustic touch sensing phase. Transducer602(e.g., corresponding to transducer314) can generate a transmit waveform and receive a reflection to determine an amount of force applied to the surface (e.g., cover glass312) as described with reference to timing diagram640during an acoustic force sensing phase. In some examples, the acoustic touch and force sensing can be performed using transmit waveforms generated at the same time.FIG.7illustrates a timing diagram700for acoustic touch and force sensing according to examples of the disclosure.
Signal702can correspond to a transmit waveform generated by a transducer (e.g., transducer314) to simultaneously propagate in deformable material316and in cover glass312. Signal704can correspond to a reflection (e.g., a first reflection) from the boundary between deformable material316and rigid material318. Signal706can correspond to a reflection from an object (e.g., a finger) on the surface of cover glass312. Signal708can correspond to a reflection from the opposite edge of cover glass312. Based on the timing of signal704, the acoustic touch and force sensing circuitry can measure a time-of-flight across the deformable material. Based on the timing of signals706and/or708, the acoustic touch and force sensing circuitry can measure the time-of-flight along the surface of cover glass312to an object (or an edge when no object is contacting the cover glass). The time-of-flight measurements for touch can be repeated for each transducer502A-D (e.g., four times) to determine the location/position of the object. The time-of-flight measurements can optionally be repeated (e.g., for each of transducers502A-D) to measure force applied to the cover glass312. In some examples, an average force measurement can be determined from repeated force measurements. In some examples, the repeated measurements can indicate relative force applied to different edges of the cover glass. In some examples, the measurements at different edges of the cover glass can be combined to determine an applied force. Performing acoustic touch and force sensing using one or more shared transducers can provide for both touch and force information with one set of ultrasonic transducers (e.g.,502A-D) and one sensing circuit (e.g., acoustic touch and/or force sensing circuit400). As a result, the touch and force sensing systems can potentially be reduced in size, in complexity and in power consumption. Performance of ultrasonic touch and force sensing using ultrasonic waves transmitted into deformable material316and cover glass312at the same time can depend, in some examples, on the separation between the transmitted ultrasonic waves for touch and for force. For example,FIG.7illustrates signals704and706corresponding to force and touch reflections, respectively, that can be well separated in time (e.g., such that the force reflections arrive in a dead zone for touch reflections). In practice, an integration of acoustic touch and force sensing can subject each measurement (touch/force) to noise/interference from the other measurement (force/touch). In some examples, interference between ultrasonic waves in the deformable material and the cover glass can be reduced or eliminated based on the design of the deformable material. For example, the deformable material can be selected to have an ultrasonic attenuation property above a threshold, such that the signal in the deformable material can be damped before reflections in the cover glass are received. In some examples, the thickness of the deformable material can be selected to allow for one or more reflections through the deformable material to be received before reflections from the cover glass. In some examples, the reflection (e.g., first, second, nth) through the deformable material can be selected such that the reflection of interest arrives between reflections received from the cover glass. In some examples, an absorbent material can be coupled to the deformable material to further dampen ringing of ultrasonic signals in the deformable material.
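One way to realize the time separation described above, sketched here under assumed window positions, is simple time gating of the received record: early samples (reflections returning through the thin deformable material) are routed to force processing, and later samples (cover-glass reflections) are routed to touch processing. The window boundaries are illustrative assumptions and would depend on the material thicknesses and propagation rates of a particular design.

```python
import numpy as np

# Assumed sample-index windows; real values depend on stack-up geometry and wave speeds.
FORCE_WINDOW = (50, 200)       # early echo from the deformable material
TOUCH_WINDOW = (200, 2000)     # later reflections traveling along the cover glass

def split_windows(rx: np.ndarray):
    force_samples = rx[FORCE_WINDOW[0]:FORCE_WINDOW[1]]
    touch_samples = rx[TOUCH_WINDOW[0]:TOUCH_WINDOW[1]]
    return force_samples, touch_samples

rx = np.random.default_rng(1).standard_normal(2000)   # stand-in for a received record
force_rx, touch_rx = split_windows(rx)
print(force_rx.size, touch_rx.size)                   # 150 1800
```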
In some examples (e.g., when force and touch ultrasonic waves do not overlap in time), more than one of the transducers (and in some cases all of the transducers) can transmit a wave and receive the reflections at the same time to measure the force applied. Then, individual transducers can transmit waves and receive reflected waves sequentially for touch detection. Processing data from acoustic touch and/or force detection scans can be performed by different processing circuits of an acoustic touch and/or force sensing system. For example, as described above with respect toFIG.4, an electronic device can include an acoustic touch and force sensing circuit400and a processor SoC430(e.g., including a host processor432and an auxiliary processor/sub-processor434). As described in detail below, processing of touch and/or force data can be performed by one or more of these processors/circuits, according to various examples. For example, according to the various examples, the processing of touch and/or force data can be performed by the acoustic touch and force sensing circuit, by the processor SoC, or partially by the acoustic touch and force sensing circuit and partially by the processor SoC. The description of the data processing below first addresses touch data processing and then addresses force data processing. As described below in more detail, in some examples, raw touch sensing data can be transmitted to a processor SoC to be processed by one or more processors of processor SoC (e.g., host processor432and an auxiliary processor/sub-processor434). In some examples, the touch sensing data can be processed in part by analog processing circuits (e.g., as described above with reference toFIG.4) and/or digital processing circuits (e.g., averaging of ADC outputs) of an acoustic touch (and/or force) sensing circuit. The partially processed touch sensing data can be transmitted to the processor SoC for further processing. In some examples, an acoustic touch (and/or force) sensing circuit can process the touch sensing data and supply the processor SoC with high level touch information (e.g., the centroid of the touch). The acoustic touch and force sensing circuit can be referred to as an acoustic touch sensing circuit to simplify the description of touch data processing among the various processors and circuits below. In some examples, an auxiliary processor (e.g., auxiliary processor434) can be a low power processor that can remain active even when a host processor (e.g., host processor432) can be idle and/or powered down. An acoustic touch sensing circuit (e.g., corresponding to acoustic touch and force sensing circuit400) can perform acoustic touch sensing scans and generate acoustic touch data. The acoustic touch data can be transferred to the auxiliary processor for processing according to one or more touch sensing algorithms. For example, in a low-power mode, the acoustic touch sensing circuit can perform a low power touch detection scan. The low power touch detection scan can include receiving reflections from a barrier (e.g., surface edge) opposite a transducer for one or more transducers (e.g., from one transducer rather than the four illustrated inFIG.5A). The acoustic touch data corresponding to the received reflections from the barrier(s) can be transmitted to the auxiliary processor via a communication channel and processed by the auxiliary processor to determine the presence or absence of an object touching the sensing surface.
Once an object is detected touching the sensing surface, the system can transition from the low-power mode to an active mode, and the acoustic touch sensing circuit can perform an active mode touch detection scan. Additionally or alternatively, in some examples, a low power force detection scheme (e.g., performed using one transducer) can be used in the low-power mode. The active mode touch detection scan can include, for example, scanning the sensing surface as described above with respect toFIG.5A. The acoustic touch data corresponding to the active mode touch detection scan can be transmitted to the auxiliary processor via a communication channel and processed by the auxiliary processor to determine the location of the object. In some examples, determining the location of the object can include determining the area and/or centroid of the object. The host processor can receive the location of the object touching the surface from the auxiliary processor and perform an action based thereon. In some examples, the acoustic touch sensing circuit can perform some processing before sending acoustic touch data to the auxiliary processor. For example, to reduce the requirements for the data communication channel between the acoustic touch sensing circuit and the auxiliary processor, the acoustic touch sensing circuit can include a digital signal processor which can average samples from the ADC output. Averaging the samples can compress the amount of acoustic touch data to be communicated to the auxiliary processor. The averaging performed by the digital signal processor can be controlled by control circuitry (e.g., acoustic scan control logic422) in the acoustic touch sensing circuit. In some examples, the transmit signal can be coded to allow for averaging without a time penalty. Although averaging is described, in other examples, other forms of processing can be applied to the acoustic touch data before transferring the acoustic touch data. In some examples, the data communication channel between the acoustic touch sensing circuit and the auxiliary processor can be a serial bus, such as a serial peripheral interface (SPI) bus. In addition, the communication channel can be bidirectional so information can also be transmitted from the auxiliary processor to the acoustic touch sensing circuit (e.g., register information used for programming the acoustic touch sensing circuit). Additionally, the acoustic touch sensing circuit can receive one or more synchronization signals from the auxiliary processor configured to synchronize acoustic touch sensing scanning operations by the acoustic touch sensing circuit. Additionally, the acoustic touch sensing circuit can generate an interrupt signal configured to provide for proper acoustic data transfer from the acoustic touch sensing circuit to the auxiliary processor. In some examples, the detection and the processing for the low power touch detection mode can be done on-chip (e.g., by the acoustic touch sensing circuit). In these examples, interrupt signals can be used to indicate (e.g., to the auxiliary processor) when a finger is detected on the surface of the device. In some examples, the acoustic touch sensing circuit can perform acoustic touch sensing scans and generate acoustic touch data. The acoustic touch data can be transferred to the auxiliary processor and/or the host processor for processing according to one or more touch sensing algorithms.
For example, in a low-power mode, the acoustic touch sensing circuit can perform a low power detection scan as described herein. The acoustic touch data can be transmitted to the auxiliary processor via a communication channel and processed by the auxiliary processor to determine the presence or absence of an object touching the sensing surface. Once an object is detected touching the sensing surface, the system can transition from the low-power mode to an active mode, and the acoustic touch sensing circuit can perform an active mode detection scan as described herein. The acoustic touch data corresponding to the active mode detection scan can be transmitted to the host processor via a high-speed communication channel and processed by the host processor to determine the location of the object. In some examples, the data transfer via the high-speed communication channel can be done in a burst mode. In some examples, determining the location of the object can include determining the area and/or centroid of the object. The host processor can perform an action based on the location. In some examples, the high-speed communication channel can provide sufficient bandwidth to transfer raw acoustic touch data to the host processor, without requiring processing by the acoustic touch sensing circuit. In some examples, the high-speed communication channel can include circuitry to serialize the acoustic touch data (e.g., a serializer) and transfer the serialized acoustic touch data using a low-voltage differential signal (LVDS) communication circuit. In some examples, other I/O blocks can be utilized for the data transfer. In some examples, the acoustic touch sensing circuit can perform some processing (e.g., averaging) before sending acoustic touch data to the host processor. In some examples, the amount of data resulting from a low power detection scan can be relatively small (compared with an active mode detection scan) such that the raw acoustic touch data can be transferred to the auxiliary processor without requiring processing by the acoustic touch sensing circuit. In some examples, the acoustic touch sensing circuit can perform some processing (e.g., averaging) before sending acoustic touch data to the host processor. The other aspects of operation (e.g., data transfer from the auxiliary processor to the acoustic touch sensing circuit, synchronization signals and interrupt signals, etc.) can be the same as or similar to the description above. Although described above as processing acoustic touch data from low power detection scans in the auxiliary processor and acoustic touch data from active mode detection scans in the host processor, it should be understood that in some examples, the host processor can perform processing for both low power detection scans and active mode detection scans. In some examples, the acoustic touch sensing circuit can include an acoustic touch digital signal processor (DSP). In some examples, the acoustic touch DSP can be a separate chip coupled between the acoustic touch sensing circuit and the processor SoC. The acoustic touch sensing circuit can perform acoustic touch sensing scans and generate acoustic touch data. The acoustic touch data can be transferred to the acoustic touch DSP for processing according to one or more touch sensing algorithms. For example, in a low-power mode, the acoustic touch sensing circuit can perform a low power detection scan as described herein.
The acoustic touch data can be transmitted to the acoustic touch DSP via a communication channel and processed by the acoustic touch DSP to determine the presence or absence of an object touching the sensing surface. In some examples, the acoustic touch sensing circuit can process the acoustic touch data to determine the presence or absence of the object touching the surface. Once an object is detected touching the sensing surface, the system can transition from the low-power mode to an active mode, and the acoustic touch sensing circuit can perform an active mode detection scan as described herein. The acoustic touch data corresponding to the active mode detection scan can be transmitted to the acoustic touch DSP via a high-speed communication channel and processed by the acoustic touch DSP to determine the location of the object. In some examples, determining the location of the object can include determining the area and/or centroid of the object. The location can be passed to the auxiliary processor and/or the host processor, and the auxiliary processor and/or the host processor can perform an action based on the location. In some examples, the high-speed communication channel can provide sufficient bandwidth to transfer raw acoustic touch data to the acoustic touch DSP, without requiring processing by the acoustic touch sensing circuit. In some examples, the high-speed communication channel can include circuitry to serialize the acoustic touch data (e.g., CMOS serializer) and transfer the serialized acoustic touch data using a low-voltage differential signal (LVDS) communication circuit. In some examples, the acoustic touch sensing circuit can perform some processing (e.g., averaging) before sending acoustic touch data to the acoustic touch DSP. In some examples, the amount of data resulting from a low power detection scan can be relatively small (compared with an active mode detection scan) such that the raw acoustic touch data can be transferred to the acoustic touch DSP without requiring processing by the acoustic touch sensing circuit. In some examples, the data from low power detection scans can also be transferred to the acoustic touch DSP via the high-speed communication channel. Data transfer from the auxiliary processor to the acoustic touch sensing circuit, synchronization signals and interrupt signals can be the same as or similar to the description above, except that, in some examples, the various signals and data can pass through the acoustic touch DSP. In some examples, the acoustic touch sensing circuit can perform acoustic touch sensing scans and generate acoustic touch data. The acoustic touch data (e.g., for a low-power detection scan) can be processed by the acoustic touch sensing circuit to determine the presence or absence of the object touching the surface. Once an object is detected touching the sensing surface, the system can transition from the low-power mode to an active mode, and the acoustic touch sensing circuit can perform an active mode detection scan as described herein. The acoustic touch data corresponding to the active mode detection scan can be processed by the acoustic touch sensing circuit to determine the location of the object. In some examples, determining the location of the object can include determining the area and/or centroid of the object. 
The presence and/or location of the object can be passed to the auxiliary processor and/or the host processor, and the auxiliary processor and/or the host processor can perform an action based on the presence and/or location of the object. In some examples, the amount of post-processing information (e.g., centroid) can be relatively small (compared with raw acoustic touch data) such that the information can be transferred to the auxiliary processor and/or the host processor via a serial communication bus (e.g., SPI), without a high-speed data channel. Data transfer from the auxiliary processor to the acoustic touch sensing circuit, synchronization signals and interrupt signals can be the same as or similar to the description above. In some examples, separate data communication channels can be provided between the acoustic touch sensing circuit and each of the auxiliary processor and the host processor. In some examples, the data communication channel can be a shared bus (e.g., shared SPI bus) between the acoustic touch sensing circuit and each of the auxiliary processor and the host processor. The acoustic touch sensing circuit, as described herein, can be powered down or put in a low power state when not in use. In some examples, the acoustic touch sensing circuit can be on only during acoustic touch detection scans (e.g., during Tx and Rx operations). In some examples, the acoustic touch sensing circuit can be on in a low power state at all times (e.g., running at a low frame rate, performing a low power detection scan), and can transition into an active mode state when an object is detected. In a similar manner, processing force data can be performed by different processing circuits of an acoustic touch and/or force sensing system. For example, as described above with respect toFIG.4, an electronic device can include an acoustic touch and force sensing circuit400and a processor SoC430(e.g., including a host processor432and an auxiliary processor/sub-processor434). In some examples, force detection circuit424can duplicate (or reuse) the touch sensing circuitry ofFIG.4to collect and/or process force data. In some examples, raw force sensing data can be transmitted by a force detection circuit424to a processor SoC to be processed by one or more processors of processor SoC (e.g., host processor432and an auxiliary processor/sub-processor434). In some examples, the force sensing data can be processed in part by analog processing circuits and/or digital processing circuits of an acoustic force (and/or touch) sensing circuit. The partially processed force sensing data can be transmitted to the processor SoC for further processing. In some examples, an acoustic force (and/or touch) sensing circuit can process the force sensing data and supply the processor SoC with force information (e.g., an amount of applied force). Additionally, a low power force detection scan can be used in addition to or in place of a low power touch detection scan described above (e.g., to cause the device to exit a low power or idle mode). The low power force detection scan can include, for example, determining force applied to the surface using fewer than all transducers (e.g., one transducer). In some examples, force detection circuit424can be simplified with respect to touch detection circuitry to reduce power and hardware requirements.FIGS.8A-Cillustrate exemplary circuits for force detection according to examples of the disclosure.
It should be understood that the circuits ofFIGS.8A-Care exemplary, and other circuits can be used for force sensing. Additionally, although the circuits ofFIGS.8A-Ccan be single-ended circuits, partially or fully differential circuits can also be used.FIG.8Aillustrates an exemplary force detection circuit800according to examples of the disclosure. Force detection circuit800can include a gate (or switch)801, a programmable gain amplifier (PGA)802, an analog comparator804, a time-to-digital signal converter806and, optionally, a digital comparator808. A gate timing signal can be used to activate gate801(e.g., close a switch) between the input from the transducer used to measure force and the PGA802. The gate timing signal can also be used to start timing by time-to-digital signal converter806. The output of PGA802can be input into comparator804, which can be used for finding a reliable transition edge of the receive signal. When the comparator transitions, the timing by the time-to-digital signal converter806stops. The digital output (e.g., a digitized number) of the time-to-digital signal converter806, which can be proportional to the applied force, can be sent from the acoustic force (and/or touch) sensing circuit to a processor. In some examples, an optional digital comparator808can be used to transmit force readings exceeding a threshold amount of force. In some examples, a time window can be selected and all or some of the threshold crossing time stamps can be sent from the acoustic force (and/or touch) sensing circuit to the processor SoC, and the time stamps can be used to detect the time-of-flight change (and therefore the force applied). In some examples, the digitized data for a given time window can be sampled at two different times (one time without and one time with the force applied) and the correlation between the two time-of-flight measurements can be used to determine the change in time-of-flight (and therefore applied force). FIG.8Billustrates an exemplary force detection circuit810according to examples of the disclosure. Force detection circuit810can include a gate (or switch)811, a PGA812, a differential-to-single-ended converter circuit812, an analog comparator814, a logical AND gate816, a digital counter818and a clock820. A gate timing signal can be used to activate gate811(e.g., close a switch) between the input from the transducer used to measure force and the differential-to-single-ended converter circuit812. The single-ended output of the differential-to-single-ended converter circuit812can be provided to PGA812. The gate timing signal can also be output to logical AND gate816. When the gate timing signal and the output of analog comparator814can both be high, counter818can start timing based on a clock signal from clock820. The output of PGA812can be input into comparator814, which can be used for finding a reliable transition edge of the receive signal. When the comparator transitions, the timing by the counter818can be stopped. The digital output (e.g., a digitized number) from counter818, which can be proportional to the applied force, can be sent from the acoustic force (and/or touch) sensing circuit to a processor. It should be understood that exemplary force detection circuits800and810can be reconfigured to output the threshold crossing on a rising edge, a falling edge or both edges of the received signal. Force detection circuits800and810as illustrated inFIGS.8A and8Boutput the rising edge threshold crossings after each rising edge of the time gating signal.
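The following sketch models, in software and with assumed parameters, the gated time measurement of force detection circuits800and810: a count starts when the gate opens, stops at the first comparator threshold crossing, and comparing counts taken with and without applied force gives the change in time-of-flight. The clock rate, gate position, echo timing, and threshold are illustrative assumptions rather than parameters from the disclosure.

```python
import numpy as np

CLOCK_HZ = 100e6               # assumed counter clock rate

def gated_tof_count(rx: np.ndarray, gate_start: int, threshold: float) -> int:
    """Clock counts from the gate opening to the first threshold crossing (-1 if none)."""
    crossings = np.nonzero(rx[gate_start:] >= threshold)[0]
    return int(crossings[0]) if crossings.size else -1

def make_echo(arrival: int, length: int = 1000) -> np.ndarray:
    rx = np.zeros(length)
    rx[arrival:arrival + 10] = 1.0        # idealized echo pulse
    return rx

baseline = gated_tof_count(make_echo(440), gate_start=400, threshold=0.5)   # no applied force
pressed = gated_tof_count(make_echo(420), gate_start=400, threshold=0.5)    # compressed material
delta = baseline - pressed
print(delta, "counts =", delta / CLOCK_HZ * 1e9, "ns earlier")              # 20 counts = 200.0 ns
```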
In some examples, threshold crossings can be detected on both rising and falling edges of the input signal.FIG.8Cillustrates an exemplary force detection circuit830according to examples of the disclosure. Force detection circuit830can include a gate (or switch)831, a PGA832, an analog comparator834, a logical inverter836, n-bit D-Flip Flops838and840, a clock842and a digital counter844. A reset signal can be used to reset D-Flip Flops838and840. A time window signal can be used to activate gate831between the input from the transducer used to measure force and PGA832. The time window signal can also enable counter844to start timing based on a clock signal from clock842. The output of PGA832can be input into comparator834, which can be used for finding reliable transition edges of the receive signal. The output of comparator834can be used to clock D-Flip Flops838and840. D-Flip Flop838can be clocked with an inverted version of the comparator output to detect the opposite edge. D-Flip Flops838and840can receive the output of counter844as data inputs, and output the count of counter844for a rising and falling edge transition, respectively. The digital outputs (e.g., digitized numbers) of D-Flip Flops838and840, which can be proportional to the applied force, can be sent from the acoustic force (and/or touch) sensing circuit to a processor. As discussed above, in some examples, the force data can be sampled at two different times (one time without and one time with the force applied) and the correlation between the two time-of-flight measurements can be used to determine the change in time-of-flight (and therefore applied force).FIG.9illustrates an exemplary configuration of an acoustic touch and/or force sensing circuit according to examples of the disclosure. The circuitry illustrated inFIG.9can correspond to the circuitry illustrated inFIG.4, implemented to detect force, for example. UnlikeFIG.4, the acoustic touch and/or force sensing circuitry ofFIG.9can include a correlator950. Correlator950can be a digital correlator configured to correlate force data for a no-applied force case (e.g., baseline) with measured force data that may include an applied force. The correlation can indicate a change in the time of flight (or resonance) in the deformable material, and thereby indicate an applied force. As described above, acoustic touch and force sensing scans performed by an acoustic touch and force sensing circuit can involve stimulating and sensing one or more transducers.FIGS.10A-10Eillustrate exemplary integration of an acoustic touch and force sensing circuit and/or one or more processors (e.g., processor SoC) with transducers mechanically and acoustically coupled to a surface (e.g., glass, plastic, metal, etc.) and/or a deformable material (e.g., silicone, rubber, etc.) according to examples of the disclosure.FIG.10Aillustrates an exemplary acoustic touch and force sensing system configuration1000using four acoustic transducers1004A-D mounted along (or otherwise coupled to) four edges of a surface1002(e.g., underside of a cover glass). Transducers1004A-D can be configured to generate acoustic waves (e.g., shear horizontal waves) and to receive the reflected acoustic waves. Additionally, the acoustic transducers1004A-D can also be mounted over (or otherwise coupled to) a deformable material (e.g., gasket) disposed between the surface1002and a rigid material (e.g., a portion of the housing). One or more acoustic touch and force sensing circuits can be included.
For example,FIG.10Aillustrates a first acoustic touch and force sensing circuit1006positioned proximate to neighboring edges of transducers1004C and1004D. Likewise, a second acoustic touch and force sensing circuit1006′ can be positioned proximate to neighboring edges of transducers1004A and1004B. Placement of acoustic touch and force sensing circuits as illustrated can reduce routing between transducers1004A-D and the respective acoustic touch and force sensing circuits. Processor SoC1008can be coupled to the one or more acoustic touch and force sensing circuits to perform various processing as described herein. In some examples, some or all of the drive circuitry (Tx circuitry) and/or some or all of the receive circuitry (Rx circuitry) of the touch and force sensing circuit can be implemented on different silicon chips. In some examples, transducers1004A-D can be coupled to one or more acoustic touch and force sensing circuits via a flex circuit (e.g., flexible printed circuit board).FIG.10Billustrates a view1010of exemplary acoustic touch and force sensing system configuration1000along view AA ofFIG.10A. As illustrated inFIG.10B, transducer1004D can be coupled to surface1002by a bonding between a bonding material layer1014on an underside of surface1002and a first signal metal layer1012A on one side of transducer1004D. In some examples, the bonding material layer1014can be electrically conductive (e.g., a metal layer). In some examples, the bonding material layer1014can be electrically non-conductive. The first signal metal layer1012A on one side of transducer1004D and a second signal metal layer1012B on a second side of transducer1004D can provide two terminals of transducer1004D to which stimulation signals can be applied and reflections can be received. The first signal metal layer1012A can wrap around from one side of transducer1004D to an opposite side to enable bonding of both signal metal layers of the transducer1004D on one side of transducer1004D. InFIG.10B, acoustic touch and force sensing circuit1006can be coupled to a flex circuit1016and the flex circuit can be respectively bonded to signal metal layers1012A and1012B of transducer1004D (e.g., via bonds1018). Likewise, transducer1004C can be coupled to surface1002(e.g., via bond metal layer/first signal metal layer bonding) and to acoustic touch and force sensing circuit1006by bonding a flex circuit to signal metal layers on the transducer side opposite the surface. Similarly, transducers1004A and1004B can be coupled to surface1002and second acoustic touch and force sensing circuit1006′. Transducers1004A-D can also be coupled to deformable material1003. For example, deformable material1003can be a gasket disposed between the surface1002and a rigid material1007. When assembled, deformable material1003(e.g., gasket) can form a water-tight seal between surface1002(e.g., cover glass) and a rigid material1007(e.g., housing). Transducers1004A-D in contact with deformable material1003can apply stimulation signals to and receive reflections from the deformable material1003. In a similar manner, transducers1004A-D can also be coupled to deformable material1003as illustrated inFIGS.10C-E. In some examples, transducers1004A-D can be coupled to acoustic touch and force sensing circuits via an interposer (e.g., rigid printed circuit board).FIG.10Cillustrates a view1020of exemplary acoustic touch and force sensing system configuration1000along view AA.
Transducers1004C and1004D can be coupled to surface1002as illustrated in and described with respect toFIG.10B. Rather than coupling acoustic touch and force sensing circuit1006to a flex circuit1016and bonding the flex circuit to signal metal layers1012A and1012B of transducer1004D, however, inFIG.10C, an interposer1022can be bonded to signal metal layers1012A and1012B of transducer1004D (e.g., via bonds1024). Acoustic touch and force sensing circuit1006can be bonded or otherwise coupled to interposer1022. Similarly, transducers1004A and1004B can be coupled to surface1002and second acoustic touch and force sensing circuit1006′. In some examples, transducers1004A-D can be directly bonded to acoustic touch and force sensing circuits.FIG.10Dillustrates a view1030of exemplary acoustic touch and force sensing system configuration1000along view AA. Transducers1004C and1004D can be coupled to surface1002as illustrated in and described with respect toFIG.10B. Rather than coupling acoustic touch and force sensing circuit1006to a flex circuit or interposer and bonding the flex circuit/interposer to signal metal layers1012A and1012B of transducer1004D, however, inFIG.10D, an acoustic touch and force sensing circuit1006can be bonded to signal metal layers1012A and1012B of transducer1004D (e.g., via bonds1032). Similarly, transducers1004A and1004B can be coupled to surface1002and second acoustic touch and force sensing circuit1006′. InFIGS.10B-D, signal metal layer1012A was routed away from surface1002and both signal metal layers1012A and1012B were bonded to an acoustic touch and force sensing circuit via bonding on a side of transducer1004D separate from surface1002(e.g., via flex circuit, interposer or direct bond). In some examples, the acoustic touch and force sensing circuits can be bonded to routing on surface1002.FIG.10Eillustrates a view1040of exemplary acoustic touch and force sensing system configuration1000along view AA. Unlike inFIG.10A, for example, transducer1004D can be coupled to surface1002via two separate portions of metal bond layer. A first portion of the metal bond layer1042A can be bonded to a first signal metal layer1044A (using metal-to-metal conductive bonding), and a second portion of the metal bond layer1042B can be bonded to a second signal metal layer1044B (which can optionally be wrapped around transducer1004D). Although not shown, the first and second portions of the metal bond layer1042A and1042B can be routed along the underside of surface1002and bond connections can be made with a flex circuit or interposer including an acoustic touch and force sensing circuit, or directly to the acoustic touch and force sensing circuit. Likewise, transducer1004C can be coupled to surface1002and acoustic touch and force sensing circuit1006via routing on the surface. Similarly, transducers1004A and1004B can be coupled to surface1002and coupled to second acoustic touch and force sensing circuit1006′ via routing on the surface. It should be noted that one advantage of the integration illustrated inFIG.10Eover the integrations ofFIGS.10B-D, can be that the deformable material1003can have a more uniform shape around the perimeter of the device. In contrast, as illustrated inFIGS.10B-D, the deformable material may include a cutout or notch or have different properties (e.g., different thickness) where the acoustic touch and force sensing circuit (and/or flex circuit or interposer) is located.
Alternatively, the transducer can be made thinner in the electrical connection area to accommodate the electrical connection inFIGS.10B-Dwithout a notch or cutout. In some examples, pitch-catch force sensing can be used. In such examples, a receive transducer can be added between the deformable material1003and rigid material1007(e.g., as illustrated inFIG.6B.) It should be understood that the integrations of an acoustic touch and force sensing circuit, transducers and a surface described herein are exemplary and many other techniques can be used. Transducers can be attached to the edge of the cover glass (e.g., on a side of the cover glass) or underneath the cover glass. In some examples, the transducers can be integrated in a notch in the cover glass. In all of the integrations of the transducers and the cover glass, the attachment and the bonding should be done in a way that can allow for the desired acoustic wave to be generated and propagated in the cover glass (or on top of the cover glass). In some examples, matching or backing materials can be added to the transducers to increase their performance as well as the matching to the target surface medium (e.g., cover glass). Likewise, matching or backing materials can be added to the transducers interfacing with deformable material1003to increase performance of force detection as well as the matching to the deformable material medium. In some examples, transducers for touch detection can be implemented on the edges of the cover glass and the transducers for force detection can be implemented on the corners of the cover glass. As described above, in some examples, the transmitter and receiver functions can be separated such that the transmission of acoustic energy at302and the receiving of acoustic energy at304may not occur at the same transducer. In some examples, the transmit transducer and the receive transducer can be made of different materials to maximize the transmit and receive efficiencies, respectively. In some examples, having separate transmit and receive transducers can allow for high voltage transmit circuitry and low voltage receive circuitry to be separated (for touch and/or force sensing circuits).FIG.11illustrates an exemplary configuration of an acoustic touch and force sensing circuit1100according to examples of the disclosure. The configuration ofFIG.11, like the configuration ofFIG.4, can include an acoustic touch and force sensing circuit1100and a processor SoC1130. As described above, processor SoC1130can include a host processor1132(e.g., corresponding to processor432) and an auxiliary processor1134(corresponding to auxiliary processor434). Likewise, acoustic touch and force sensing circuit1100can include transmitter1102(corresponding to transmitter402), transmit switching circuitry1104A (corresponding to demultiplexers of switching circuitry404), receive switching circuitry1104B (e.g., corresponding to multiplexers of switching circuitry404), an amplifier1110(e.g., corresponding to amplifier410), gain and offset correction circuit1112(e.g., corresponding to gain and offset correction circuit412), demodulation circuit, envelope detection circuit, and/or filter1114-1116(e.g., corresponding to demodulation circuit414, envelope detection circuit415, and/or filter416), ADC1118(e.g., corresponding to ADC418) and I/O circuit1120(e.g., corresponding to I/O circuit420). 
Acoustic touch and force sensing circuit1100can also include a force detection circuit1124(e.g., corresponding to force detection circuit424). The operation of these components can be similar to that described above with respect toFIG.4, and is omitted here for brevity. UnlikeFIG.4, which includes transducers406performing both transmit and receive operations, the configuration illustrated inFIG.11can include transducers1106A operating as transmitters and separate transducers1106B operating as receivers. Transducers1106A and1106B can be co-located at the locations where transmit and receive transducers were previously described. For example, transducer502A can be replaced by a first transducer configured to transmit and a second transducer configured to receive. It is to be understood that the configuration ofFIG.11is not limited to the components and configuration ofFIG.11, but can include other or additional components in multiple configurations according to various examples. Additionally, some or all of the components illustrated inFIG.11can be included in a single circuit, or can be divided among multiple circuits while remaining within the scope of the examples of the disclosure. In some examples, some or all of the transmit circuitry1102and transmit switching circuitry1104A can be implemented in one chip and some or all of the receive circuitry and receive switching circuitry1104B can be implemented in a second chip. The first chip including the transmit circuitry can receive and/or generate, via a voltage boosting circuit, a high voltage supply for stimulating the surface. The second chip including the receive circuitry can operate without receiving or generating a high voltage supply. In some examples, more than two chips can be used, and each chip can accommodate a portion of the transmit circuitry and/or receive circuitry. FIGS.12A-12Eillustrate exemplary integration of an acoustic touch and force sensing circuit and/or one or more processors (e.g., processor SoC) with groups of transducers (e.g., one transmitting and one receiving) mechanically and acoustically coupled to a surface (e.g., glass, plastic, metal, etc.) and/or a deformable material (e.g., silicone, rubber, etc.) according to examples of the disclosure.FIG.12Aillustrates an exemplary acoustic touch and force sensing system configuration1200using eight acoustic transducers, including four transmit transducers1204A-D and four receive transducers1205A-D mounted along (or otherwise coupled to) four edges of a surface1202(e.g., cover glass). Transmit transducers1204A-D can be configured to generate acoustic waves (e.g., shear horizontal waves) and receive transducers1205A-D can be configured to receive the reflected acoustic waves. Additionally, the acoustic transducers1204A-D and1205A-D can also be mounted over (or otherwise coupled to) a deformable material (e.g., gasket) disposed between the surface1202and a rigid material (e.g., a portion of the housing). One or more acoustic touch and force sensing circuits can be included. For example,FIG.12Aillustrates a first acoustic touch and force sensing circuit1206positioned proximate to neighboring edges of transmit transducers1204C-D and receive transducers1205C-D. Likewise, a second acoustic touch and force sensing circuit1206′ can be positioned proximate to neighboring edges of transmit transducers1204A-B and receive transducers1205A-B. 
Placement of acoustic touch and force sensing circuits as illustrated can reduce routing between transducers and corresponding acoustic touch and force sensing circuits. Processor SoC1208can be coupled to the one or more acoustic touch and force sensing circuits. In some examples, transducers1204A-D/1205A-D can be coupled to acoustic touch and force sensing circuits via a flex circuit (e.g., flexible printed circuit board).FIG.12Billustrates a view1210of exemplary acoustic touch and force sensing system configuration1200along view AA ofFIG.12A. As illustrated inFIG.12B, receiver transducer1205D can be coupled to surface1202by a bonding between a bond material layer1214on an underside of surface1202and a first signal metal layer1212A on one side of receive transducer1205D. In some examples, the bonding material layer1214can be electrically conductive (e.g., a metal layer). In some examples, the bonding material layer1214can be electrically non-conductive. The first signal metal layer1212A on one side of receive transducer1205D and a second signal metal layer1212B on a second side of receive transducer1205D can provide two terminals of receive transducer1205D from which reflections can be received. The first signal metal layer1212A can wrap around from one side of receive transducer1205D to an opposite side to enable bonding of both signal metal layers of receive transducer1205D on one side of receive transducer1205D. InFIG.12B, acoustic touch and force sensing circuit1206can be coupled to a flex circuit1216and the flex circuit can be respectively bonded to signal metal layers1212A and1212B of receive transducer1205D (e.g., via bonds1218). Similarly, transmit transducer1204D (not shown) can be coupled to surface1202and can provide two terminals to which stimulation signals can be applied. The flex circuit can be bonded to respective signal metal layers of transmit transducer1204D. Likewise, transmit transducer1204C and receive transducer1205C can be coupled to surface1202(e.g., via bond metal layer/first signal metal layer bonding) and to acoustic touch and force sensing circuit1206by bonding the flex circuit to signal metal layers on the side of the transducer opposite the surface. Similarly, transmit transducers1204A-B and receive transducers1205A-B can be coupled to surface1202and second acoustic touch and force sensing circuit1206′. Transducers1204A-D and1205A-D can also be coupled to deformable material1203. For example, deformable material1203can be a gasket disposed between the surface1202and a rigid material1207. When assembled, deformable material1203(e.g., gasket) can form a water-tight seal between surface1202(e.g., cover glass) and a rigid material1207(e.g., housing). Transducers1204A-D and1205A-D in contact with deformable material1203can apply stimulation signals to and receive reflections from the deformable material1203. In a similar manner, transducers1204A-D and/or1205A-D can also be coupled to deformable material1203as illustrated inFIGS.12C-E. In some examples, transmit transducers1204A-D and receive transducers1205A-D can be coupled to acoustic touch and force sensing circuits via an interposer (e.g., rigid printed circuit board).FIG.12Cillustrates a view1220of exemplary acoustic touch and force sensing system configuration1200along view AA. Transmit transducers1204C-D and receive transducers1205C-D can be coupled to surface1202as illustrated in and described with respect toFIG.12B. 
Rather than coupling acoustic touch and force sensing circuit1206to a flex circuit1216and bonding the flex circuit to signal metal layers1212A and1212B of receive transducer1205D, however, inFIG.12C, an interposer1222can be bonded to signal metal layers1212A and1212B of receive transducer1205D (e.g., via bonds1224). Acoustic touch and force sensing circuit1206can be bonded or otherwise coupled to interposer1222. Similarly, the remaining transducers (transmit and receive) can be coupled to surface1202and the first or second acoustic touch and force sensing circuits1206and1206′. In some examples, transmit transducers1204A-D and receive transducers1205A-D can be directly bonded to acoustic touch and force sensing circuits.FIG.12Dillustrates a view1230of exemplary acoustic touch and force sensing system configuration1200along view AA. Transmit transducers1204C-D and receive transducers1205C-D can be coupled to surface1202as illustrated in and described with respect toFIG.12B. Rather than coupling acoustic touch and force sensing circuit1206to a flex circuit or interposer and bonding the flex circuit/interposer to signal metal layers1212A and1212B of receive transducer1205D, however, inFIG.12D, an acoustic touch and force sensing circuit1206can be bonded to signal metal layers1212A and1212B of receive transducer1205D (e.g., via bonds1232). Similarly, the remaining transducers (transmit and receive) can be coupled to surface1202and the first or second acoustic touch and force sensing circuits1206and1206′. InFIGS.12B-D, signal metal layer1212A was routed away from surface1202and both signal metal layers1212A and1212B were bonded to an acoustic touch and force sensing circuit via bonding on a side of receive transducer1205D separate from surface1202(e.g., via flex circuit, interposer or direct bond). In some examples, the acoustic touch and force sensing circuits can be bonded to routing on surface1202instead, similar to the description above with respect toFIG.10E, for example. AlthoughFIG.12Aillustrates transmit transducers1204A-D as being side-by-side with receive transducers1205A-D, in some examples, transmit transducers1204A-D and receiver transducers1205A-D can be stacked on one another.FIG.12Eillustrates a view1240of exemplary acoustic touch and force sensing system configuration1200along view AA. As illustrated inFIG.12E, receiver transducer1205D can be coupled to surface1202by a bonding between a bond metal layer1242on an underside of surface1202and a first signal metal layer1246A on one side of receive transducer1205D. Transmit transducer1204D can be coupled to receive transducer1205D via a common second signal metal layer1244on a second side of receive transducer1205D. A first metal layer1246B can be deposited on the second side of transmit transducer1204D. First signal metal layer1246A and common second signal metal layer1244can provide two terminals of receive transducer1205D from which reflections can be received. First signal metal layer1246B and common second signal metal layer1244can provide two terminals of transmit transducer1204D to which transmit waves can be applied. In some examples, the common signal metal layer can be a common ground for the transmit and receive transducers. In some examples, the metal connections for the transmit and receive transducers can be separated from each other and differential or single ended transmit and receive circuitry can be used. 
Although not shown, routing of signal metal layers1244,1246A and1246B can be placed so that acoustic touch and force sensing circuit1206can be coupled to routing on surface1202or exposed surfaces of transmit transducer1204D and/or receive transducer1205D to enable direct or indirect bonding of the acoustic touch and force sensing circuit to routing on surface1202or on transducers1204D/1205D. In some examples, bond metal layer1242can be bonded to signal metal layer1246A (using a metal to metal conductive bonding). It should be noted that one advantage of the integration illustrated inFIG.12Eover the integrations ofFIGS.12B-D can be that the deformable material1203can have a more uniform shape around the perimeter of the device. In contrast, as illustrated inFIGS.12B-D, the deformable material may include a cutout or have different properties (e.g., different thickness) where the acoustic touch and force sensing circuit (and/or flex circuit or interposer) is located. FIGS.13-19illustrate various configurations for integrating touch and force sensing functionality within an electronic device. Each ofFIGS.13-19includes a cover glass that can correspond to cover glass312above, a display stackup, a housing that can correspond to rigid material318above, a transducer that can correspond to transducer314above, and a deformable material (e.g., that can be included in a force sensing stackup) that can correspond to deformable material316above. In some examples, the display stackup can include a stackup for touch sensing circuitry (e.g., capacitive touch sensing). Each of the different configurations can be used to create a device that has both touch sensing and force sensing capability, as will be described in more detail below. FIG.13illustrates a first exemplary configuration for integrating touch sensing and force sensing circuitry with housing1304and cover glass1302of an electronic device. In some examples, transducer1308can be coupled to a side of the cover glass1302. In some examples, cover glass1302can be disposed over a display stackup1306. In some examples, the display stackup1306can include a touch sensor stackup, e.g., a capacitive touch sensor stackup. In some examples, the transducer1308can have a height in the y-axis dimension that can be close to the thickness in the y-axis dimension of the cover glass1302. In some examples, this can allow the transducer1308to produce a uniform acoustic wave throughout the thickness of the cover glass1302. In some examples, by placing the transducer1308on the side of the cover glass, stimulating the transducer with a voltage or current can produce a horizontal shear wave, Rayleigh wave, Lamb wave, Love wave, Stoneley wave, or surface acoustic wave in the cover glass1302travelling along the x-axis direction. In some examples, more than one transducer1308can be disposed around the perimeter of the cover glass1302to provide touch measurements having two-dimensional coordinates on the cover glass surface (e.g., as described with respect to transducers502A-502D above). The transducer1308can be disposed on a backing material1310that can in turn provide mechanical coupling between the transducer and the housing1304. In some examples, an encapsulant1316can be provided to hide the transducer1308and backing material1310from being visible to a user as well as providing additional mechanical stability. 
In some examples, the encapsulant1316can be a part of the housing1304and in some examples the encapsulant can be a separate material from the housing (e.g., glass, zircon, titanium, sapphire, etc.). In some examples, a force sensor stackup1312can be positioned behind the cover glass1302, and can operate to detect force as described in at leastFIGS.3and6-7above. FIG.14illustrates a second exemplary configuration for integrating touch sensing and force sensing circuitry with housing1404and cover glass1402of an electronic device.FIG.14illustrates a similar configuration toFIG.13showing the transducer1408coupled to a side of the cover glass1402. In some examples, the transducer1408can have a height in the y-axis dimension that can be close to the thickness in the y-axis dimension of the cover glass1402. In some examples, by placing the transducer1408on the side of the cover glass, stimulating the transducer with a voltage or current can produce a horizontal shear wave in the cover glass1402travelling along the x-axis direction. In some examples, more than one transducer1408can be disposed around the perimeter of the cover glass1402to provide touch measurements having two-dimensional coordinates on the cover glass surface (e.g., as described with respect to transducers502A-502D above). In some examples, each transducer1408can produce a shear wave oriented in a different direction. In addition to the encapsulant1416(which can correspond to the encapsulant1316above), a second encapsulant1418can be used to provide a mechanical base for the cover glass1402, transducer1408and backing material1410. Inclusion of the second encapsulant1418can simplify the structure of the housing1404by requiring one less notch in the housing. In some examples, the force sensor stackup1412can be supported directly by the housing1404, and can operate to detect force as described in at leastFIGS.3and6-7above. FIG.15illustrates a third exemplary configuration for integrating touch sensing and force sensing circuitry with housing1504and curved cover glass1502of an electronic device. Unlike the configurations ofFIGS.13and14above, the orientation of the transducer1508does not necessarily need to match with the direction of acoustic wave propagation (e.g., along the x-axis). In the illustrated configuration, the transducer1508can be attached to an edge of the curved cover glass1502and backing material1510can be disposed between the transducer1508and the housing1504. In some examples, the transducer1508and backing material1510can be positioned within a notch or groove in the housing1504as illustrated inFIG.15. In some examples, the acoustic energy produced by the transducer1508can be guided along the curved edge1502′ of the cover glass and can continue to propagate along the surface to perform touch detection as described above with regards toFIGS.2-5. In some examples, a gradual curvature of the cover glass1502can be used to guide the wave along the curved edge1502′ of the cover glass toward the flat surface. Force sensor stackup1512can be supported by the housing1504, and a standoff1514can be coupled to the cover glass1502to transfer a force applied to the cover glass into the force sensor stackup as described in at leastFIGS.3and6-7above. In particular, because the force sensor stackup1512can be located beneath the curved edge1502′ of the cover glass1502, the standoff1514can be included to translate the force onto a flat force sensor stackup. 
FIG.16illustrates a variation of the third configuration ofFIG.15with the addition of an encapsulant material1616(which can correspond to encapsulant materials1316,1416, and1418above) that can be used to mechanically secure the transducer1608and backing1610to the housing1604as well as visually obscure the transducer assembly from a user of the electronic device. Similar toFIG.15, force sensor stackup1612can be located beneath the curved edge1602′ of cover glass1602and a standoff1614can be coupled to the cover glass1602to transfer a force applied to the cover glass into the force sensor stackup. FIG.17illustrates a fourth exemplary configuration for integrating touch sensing and force sensing circuitry with housing1704and cover glass1702. Transducer1708can be disposed on a backing material1710within a cavity formed behind the cover glass1702. An acoustic wave generated by stimulating the transducer1708can approximate the stimulation directly at the side of the cover glass1702as illustrated inFIGS.13and14while maintaining a curved edge1702′ of cover glass surface as illustrated inFIGS.15and16. In other words, the transducer1708can be used to generate a wave that travels along the flat surface of the cover glass1702in the x-axis direction directly, without relying on guiding the wave through the curved edge1702′ of the cover glass. Reflection of the transmitted acoustic energy can be used for touch detection as described above (e.g., with respect toFIGS.2-5). Force sensing stackup1712can be disposed between the cover glass1702and the housing1704to perform force sensing as described in at leastFIGS.3and6-7above. FIG.18illustrates a fifth exemplary configuration for integrating touch sensing and force sensing circuitry with housing1804and cover glass1802. In some examples, transducer1808and backing material1810can be disposed on a back side of the cover glass1802. In some examples, acoustic energy from the transducer1808can begin propagating along the y-axis direction, can reflect from the curved edge1802′ of the cover glass1802, and can travel along the x-axis direction as in the examples described above. In some examples, the amount of curvature of the curved edge1802′ can determine the dispersion of the reflected acoustic energy. In some examples, this dispersion can lead to dispersion in the measured time of flight for reflected acoustic energy and can have an effect on touch detection as described inFIGS.2-5above. Force sensor stackup1812can be coupled to the housing1804to perform force sensing as described inFIGS.6-7above. FIGS.19A and19Billustrate exemplary configurations for integrating touch sensing and force sensing circuitry with shared elements with housing1904and cover glass1902of an electronic device. In some examples, the illustrations ofFIGS.19A and19Bcan be implementations for integrating the touch sensing and force sensing as described inFIGS.2-7above, with particular reference toFIGS.3B,5A,6A, and6B.FIGS.19A and19Bdiffer in the shape of the cover glass1902. InFIG.19A, the illustrated cover glass1902can have a flat back side, and the transducer1908can be disposed directly on the back side of the cover glass. InFIG.19B, the illustrated cover glass1902can have a downwardly extending portion at edges of the cover glass, and the transducer1908can be disposed on the downwardly extending portion of the cover glass. In other examples, the transducer1908can be attached to a curved cover glass1902such as those illustrated inFIGS.15-17above. 
Similar to the configuration described forFIG.18, acoustic energy from the transducer1908can begin propagating along the y-axis direction, can reflect from the bezel portion1902′ of the cover glass1902, and can travel along the x-axis direction. In the illustrated examples ofFIGS.19A and19B, the bezel1902′ is drawn as a perfectly formed 45 degree angle, which can produce a 90 degree change in orientation of the acoustic energy from the reflection at the bezel. It should be understood that the same principles apply to the curved cover glass1802ofFIG.18, and that acceptable performance can be obtained in the presence of a non-flat bezel1902′, such as a curved edge1802′ above. The illustrated flat bezel1902′ could be used to provide a desirable reflection, but can result in a sharp edge that could be unpleasant for a user to touch. In some examples, a portion of the bezel1902′ can be flat, while sharp edges of the bezel can be avoided by rounding of the edges. In some examples, the length (e.g., x-axis dimension) of the transducer1908can be made equal to or nearly equal to the thickness (e.g., y-axis dimension) of the cover glass1902so that a uniform acoustic wave1920can be transmitted throughout the thickness of the cover glass material. Using the principles described above inFIGS.2-5, the transducer1908can be used to detect the touch position of object1922on the cover glass. As should be understood,FIGS.19A and19Billustrate how the configuration ofFIG.3Bcan be integrated into an electronic device cover glass for performing touch sensing. In addition, by placing a deformable material1910behind the transducer (e.g., as a backing material), the force sensing described inFIGS.3-7above can simultaneously be performed using the same transducer1908. For example, as compared toFIG.6A, the cover glass1902, transducer1908, deformable material1910, and housing1904can correspond to cover glass601, transducer602, deformable material604, and rigid material606respectively. Also, although not shown, a second transducer can be included between the deformable material1910and the housing1904to match the configuration illustrated inFIG.6B. Therefore, according to the above, some examples of the disclosure are directed to an electronic device, comprising: a cover surface; a deformable material disposed between the cover surface and a housing of the electronic device; an acoustic transducer coupled to the cover surface and the deformable material and configured to produce a first acoustic wave in the cover surface and a second acoustic wave in the deformable material. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the deformable material and cover surface are further configured such that the first acoustic wave is capable of being propagated in a first direction and the second acoustic wave is capable of being propagated in a second direction, different from the first direction. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the first acoustic wave is incident upon a bezel portion of the cover glass in a third direction and reflected by the bezel portion of the cover glass in the first direction, different from the third direction. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the first and third directions are opposite to one another. 
Additionally or alternatively to one or more of the examples disclosed above, in some examples, the first and third directions are orthogonal. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the deformable material is included in a gasket positioned between the housing and a first side of the cover surface. Some examples of the disclosure are directed to a touch and force sensitive device. The device can comprise: a surface, a deformable material disposed between the surface and a rigid material, such that force on the surface causes a deformation of the deformable material, a plurality of transducers coupled to the surface and the deformable material, and processing circuitry coupled to the plurality of transducers. The processing circuitry can be capable of: stimulating the plurality of transducers to transmit ultrasonic waves to the surface and the deformable material, receiving, from the plurality of transducers, reflected ultrasonic waves from the surface and the deformable material, determining a location of a contact by an object on the surface based on the reflected ultrasonic waves propagating in the surface received at the plurality of transducers, and determining an applied force by the contact on the surface based on one or more reflected ultrasonic waves propagating in the deformable material received from one or more of the plurality of transducers. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the surface can comprise an external surface of the device. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the rigid material can comprise a portion of a housing of the device. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the deformable material can form a gasket between the portion of the housing and the external surface of the device. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the plurality of transducers can comprise at least four transducers bonded to the surface. Each of the four transducers can be disposed proximate to a different one of four respective edges of the surface and can be disposed over a portion of the gasket proximate to a respective edge of the housing of the device. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the processing circuitry can comprise one or more acoustic touch and force sensing circuits. The acoustic touch and force sensing circuit can be coupled to the plurality of transducers via direct bonding between the plurality of transducers and the one or more acoustic touch and force sensing circuits, via bonding between the plurality of transducers and a flexible circuit board coupled to the one or more acoustic touch and force sensing circuits, or via bonding between the plurality of transducers and a rigid circuit board coupled to the one or more acoustic touch and force sensing circuits. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the device can further comprise routing deposited on the surface proximate to the plurality of transducers. The processing circuitry can comprise one or more acoustic touch and force sensing circuits. 
The one or more acoustic touch and force sensing circuits can be coupled to the plurality of transducers via coupling of the one or more acoustic touch and force sensing circuits to the routing deposited on the surface. Additionally or alternatively to one or more of the examples disclosed above, in some examples, stimulating the plurality of transducers to transmit ultrasonic waves to the surface and the deformable material and receiving, from the plurality of transducers, reflected ultrasonic waves from the surface and the deformable material can comprise: stimulating a first transducer of the plurality of transducers to transmit a first ultrasonic wave to the surface and receiving a first reflected ultrasonic wave from the first transducer from the surface in response to the transmitted first ultrasonic wave; stimulating a second transducer of the plurality of transducers to transmit a second ultrasonic wave to the surface and receiving a second reflected ultrasonic wave from the second transducer from the surface in response to the transmitted second ultrasonic wave; stimulating a third transducer of the plurality of transducers to transmit a third ultrasonic wave to the surface and receiving a third reflected ultrasonic wave from the third transducer from the surface in response to the transmitted third ultrasonic wave; and stimulating a fourth transducer of the plurality of transducers to transmit a fourth ultrasonic wave to the surface and receiving a fourth reflected ultrasonic wave from the fourth transducer from the surface in response to the transmitted fourth ultrasonic wave. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the first ultrasonic wave, second ultrasonic wave, third ultrasonic wave and fourth ultrasonic wave can be transmitted in series to reduce interference between the plurality of transducers. Additionally or alternatively to one or more of the examples disclosed above, in some examples, determining the location of the contact by the object on the surface can be based on the first reflected ultrasonic wave, the second reflected ultrasonic wave, the third reflected ultrasonic wave and the fourth reflected ultrasonic wave. 
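By way of illustration only, the location determination described above can be sketched in Python as follows. The sketch assumes a rectangular surface with one transducer per edge, a known acoustic wave velocity in the surface, and that each time of flight corresponds to the round trip between an edge and the leading edge of the object nearest that edge; the velocity, surface dimensions, and all names in the sketch are hypothetical values chosen for illustration and are not taken from the examples of the disclosure.

# Minimal sketch: estimating a touch location from four edge time-of-flight
# measurements (one transducer per edge). Assumed, not taken from the disclosure:
# the wave velocity, the surface dimensions, and all function/variable names.

SHEAR_WAVE_VELOCITY_M_S = 3_000.0   # assumed acoustic wave speed in the surface
SURFACE_WIDTH_M = 0.07              # assumed surface dimensions
SURFACE_HEIGHT_M = 0.15

def edge_distance(time_of_flight_s: float) -> float:
    """Distance from an edge to the leading edge of the object.

    The measured time of flight covers the round trip (edge -> object -> edge),
    so the one-way distance is half the total path length.
    """
    return SHEAR_WAVE_VELOCITY_M_S * time_of_flight_s / 2.0

def estimate_touch_position(tof_left_s, tof_right_s, tof_top_s, tof_bottom_s):
    """Combine the four per-edge distances into (x, y) coordinates.

    Opposite edges give two estimates of the same coordinate (e.g., the left
    edge distance and the width minus the right edge distance); averaging them
    roughly locates the center of a contact of finite width.
    """
    d_left = edge_distance(tof_left_s)
    d_right = edge_distance(tof_right_s)
    d_top = edge_distance(tof_top_s)
    d_bottom = edge_distance(tof_bottom_s)

    x = (d_left + (SURFACE_WIDTH_M - d_right)) / 2.0
    y = (d_top + (SURFACE_HEIGHT_M - d_bottom)) / 2.0
    return x, y

# Example: reflections arriving after ~20 us and ~23 us from the left/right
# edges and ~40 us / ~53 us from the top/bottom edges.
print(estimate_touch_position(20e-6, 23e-6, 40e-6, 53e-6))

Averaging the estimates from opposite edges is only one simple way to combine the four measurements; other combinations, such as reporting a bounding box of the contact, could equally be used.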
Additionally or alternatively to one or more of the examples disclosed above, in some examples, stimulating the plurality of transducers to transmit ultrasonic waves to the surface and the deformable material and receiving, from the plurality of transducers, reflected ultrasonic waves from the surface and the deformable material can further comprise: stimulating the first transducer of the plurality of transducers to transmit a fifth ultrasonic wave to the deformable material and receiving a fifth reflected ultrasonic wave from the first transducer from the deformable material in response to the transmitted fifth ultrasonic wave; stimulating the second transducer of the plurality of transducers to transmit a sixth ultrasonic wave to the deformable material and receiving a sixth reflected ultrasonic wave from the second transducer from the deformable material in response to the transmitted sixth ultrasonic wave; stimulating the third transducer of the plurality of transducers to transmit a seventh ultrasonic wave to the deformable material and receiving a seventh reflected ultrasonic wave from the third transducer from the deformable material in response to the transmitted seventh ultrasonic wave; and stimulating the fourth transducer of the plurality of transducers to transmit an eighth ultrasonic wave to the deformable material and receiving an eighth reflected ultrasonic wave from the fourth transducer from the deformable material in response to the transmitted eighth ultrasonic wave. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the fifth ultrasonic wave, the sixth ultrasonic wave, the seventh ultrasonic wave and the eighth ultrasonic wave can be transmitted in series to reduce interference between the plurality of transducers. Additionally or alternatively to one or more of the examples disclosed above, in some examples, determining the applied force by the contact on the surface can be based on the fifth reflected ultrasonic wave, the sixth reflected ultrasonic wave, the seventh reflected ultrasonic wave and the eighth reflected ultrasonic wave. Additionally or alternatively to one or more of the examples disclosed above, in some examples, determining the applied force by the contact on the surface can comprise averaging time of flight measurements corresponding to the fifth reflected ultrasonic wave, sixth reflected ultrasonic wave, seventh reflected ultrasonic wave and eighth reflected ultrasonic wave. 
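As a rough illustration of the averaging described above, the following Python sketch averages the four round-trip time-of-flight values measured through the deformable material and maps the resulting change in gasket thickness to a force through an assumed linear stiffness. The baseline time of flight, the sound speed in the gasket, and the stiffness constant are placeholders introduced for illustration; a practical force detection circuit could instead apply a stored calibration or a non-linear model.

# Minimal sketch: estimating applied force from time-of-flight measurements
# through a deformable gasket. The baseline time of flight, sound speed in the
# gasket, and effective stiffness below are assumed values for illustration only.

GASKET_SOUND_SPEED_M_S = 1_000.0      # assumed acoustic speed in the gasket
BASELINE_TOF_S = 1.0e-6               # assumed round-trip time with no force applied
EFFECTIVE_STIFFNESS_N_PER_M = 2.0e5   # assumed force per unit of compression

def gasket_compression(tof_s: float) -> float:
    """Compression of the gasket inferred from one round-trip time of flight.

    A round trip shorter than the baseline means the gasket is thinner,
    i.e. it has been compressed by the force applied to the surface.
    """
    delta_tof = BASELINE_TOF_S - tof_s
    return max(0.0, GASKET_SOUND_SPEED_M_S * delta_tof / 2.0)

def estimate_force(tof_measurements_s) -> float:
    """Average the per-transducer time-of-flight values and map them to force."""
    avg_tof = sum(tof_measurements_s) / len(tof_measurements_s)
    return EFFECTIVE_STIFFNESS_N_PER_M * gasket_compression(avg_tof)

# Example: four transducers report slightly shortened round trips under a press.
print(estimate_force([0.96e-6, 0.95e-6, 0.97e-6, 0.96e-6]))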
Additionally or alternatively to one or more of the examples disclosed above, in some examples, stimulating the plurality of transducers to transmit ultrasonic waves to the surface and the deformable material and receiving, from the plurality of transducers, reflected ultrasonic waves from the surface and the deformable material can comprise: stimulating a first transducer of the plurality of transducers to simultaneously transmit a first ultrasonic wave to the surface and to the deformable material; receiving a first reflected ultrasonic wave from the surface from the first transducer in response to the first ultrasonic wave transmitted to the surface and a first reflected ultrasonic wave from the deformable material from the first transducer in response to the first ultrasonic wave transmitted to the deformable material; stimulating a second transducer of the plurality of transducers to simultaneously transmit a second ultrasonic wave to the surface and to the deformable material; receiving a second reflected ultrasonic wave from the surface from the second transducer in response to the second ultrasonic wave transmitted to the surface and a second reflected ultrasonic wave from the deformable material from the second transducer in response to the second ultrasonic wave transmitted to the deformable material; stimulating a third transducer of the plurality of transducers to simultaneously transmit a third ultrasonic wave to the surface and to the deformable material; receiving a third reflected ultrasonic wave from the surface from the third transducer in response to the third ultrasonic wave transmitted to the surface and a third reflected ultrasonic wave from the deformable material from the third transducer in response to the third ultrasonic wave transmitted to the deformable material; and stimulating a fourth transducer of the plurality of transducers to simultaneously transmit a fourth ultrasonic wave to the surface and to the deformable material; receiving a fourth reflected ultrasonic wave from the surface from the fourth transducer in response to the fourth ultrasonic wave transmitted to the surface and a fourth reflected ultrasonic wave from the deformable material from the fourth transducer in response to the fourth ultrasonic wave transmitted to the deformable material. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the first ultrasonic wave, the second ultrasonic wave, the third ultrasonic wave and the fourth ultrasonic wave can be transmitted in series to reduce interference between the plurality of transducers. Additionally or alternatively to one or more of the examples disclosed above, in some examples, determining the location of the contact by the object on the surface can be based on the first reflected ultrasonic wave from the surface, the second reflected ultrasonic wave from the surface, the third reflected ultrasonic wave from the surface and the fourth reflected ultrasonic wave from the surface. Additionally or alternatively to one or more of the examples disclosed above, in some examples, determining the applied force by the contact on the surface can be based on the first reflected ultrasonic wave from the deformable material, the second reflected ultrasonic wave from the deformable material, the third reflected ultrasonic wave from the deformable material and the fourth reflected ultrasonic wave from the deformable material. 
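Where a single transducer simultaneously transmits toward the surface and into the deformable material, as in the example above, the two reflections can be separated in one received record because they arrive at different times (the reflection from the thin deformable material generally returns before the reflection from across the surface). The following Python sketch splits one digitized envelope into two assumed time windows and reports a threshold-crossing arrival time in each; the window boundaries, sample rate, and threshold are assumptions made for illustration rather than values from the disclosure.

# Minimal sketch: pulling two times of arrival out of one receive capture by
# time gating. Window boundaries, sample rate, and threshold are assumptions.

SAMPLE_RATE_HZ = 10_000_000.0          # assumed ADC sample rate (10 MS/s)
GASKET_WINDOW = (0.5e-6, 2.0e-6)       # assumed gate for the gasket reflection
SURFACE_WINDOW = (5.0e-6, 60.0e-6)     # assumed gate for the surface reflection
THRESHOLD = 0.2                        # assumed envelope threshold (normalized)

def arrival_time(envelope, window_s, sample_rate_hz=SAMPLE_RATE_HZ,
                 threshold=THRESHOLD):
    """Return the first time the envelope crosses the threshold inside a gate,
    or None if no reflection is detected in that gate."""
    start = int(window_s[0] * sample_rate_hz)
    stop = int(window_s[1] * sample_rate_hz)
    for i in range(start, min(stop, len(envelope))):
        if envelope[i] >= threshold:
            return i / sample_rate_hz
    return None

def split_reflections(envelope):
    """Separate one simultaneous-transmit capture into the two times of flight:
    one for the deformable material (force) and one for the surface (touch)."""
    return (arrival_time(envelope, GASKET_WINDOW),
            arrival_time(envelope, SURFACE_WINDOW))

# Example: a synthetic envelope with an early gasket echo and a later surface
# echo (all other samples near zero).
record = [0.0] * 600
record[10] = 0.8     # ~1 us: reflection from the gasket/rigid-material boundary
record[300] = 0.6    # ~30 us: reflection from the touching object on the surface
print(split_reflections(record))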
Additionally or alternatively to one or more of the examples disclosed above, in some examples, the processing circuitry can comprise a force detection circuit. The force detection circuit can be configured to use time gating to detect one or more transitions in a reflected ultrasonic wave to determine a time of arrival of the reflected ultrasonic wave. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the processing circuitry can comprise one or more acoustic touch and force sensing circuits. Each of the one or more acoustic touch and force sensing circuits can comprise an acoustic touch sensing circuit implemented on a first integrated circuit and an acoustic force sensing circuit implemented on a second integrated circuit, separate from the first integrated circuit. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the processing circuitry can comprise one or more acoustic touch and force sensing circuits. Each of the one or more acoustic touch and force sensing circuits can comprise an acoustic transmit circuit and an acoustic receive circuit. The acoustic transmit circuit can be implemented on a first integrated circuit and the acoustic receive circuit can be implemented on a second integrated circuit, separate from the first integrated circuit. Some examples of the disclosure are directed to a non-transitory computer readable storage medium. The non-transitory computer readable storage medium can store instructions, which when executed by a device comprising a surface, a deformable material, a plurality of acoustic transducers coupled to the surface and the deformable material, and processing circuitry, cause the processing circuitry to: for each of the plurality of acoustic transducers: simultaneously transmit an ultrasonic wave in the surface toward an opposite edge of the surface and transmit an ultrasonic wave through the deformable material; receive an ultrasonic reflection from the deformable material in response to the ultrasonic wave transmitted through the deformable material traversing the thickness of the deformable material; receive an ultrasonic reflection from the surface; determine a first time-of-flight between the ultrasonic wave transmitted through the deformable material and the ultrasonic reflection from the deformable material; and determine a second time-of-flight between the ultrasonic wave transmitted in the surface and the ultrasonic reflection from the surface. The instructions can further cause the processing circuitry to determine a position of an object on the surface based on respective second time-of-flight measurements corresponding to the plurality of transducers; and determine an amount of applied force by the object on the surface based on respective first time-of-flight measurements corresponding to the plurality of transducers. Some examples of the disclosure are directed to a method for determining a position of an object on a surface and an amount of applied force by the object on the surface. 
The method can comprise: for each of a plurality of acoustic transducers: transmitting a first ultrasonic wave in the surface toward an opposite edge of the surface; receiving a first ultrasonic reflection from the surface; and determining a first time-of-flight between the first ultrasonic wave transmitted in the surface and the first ultrasonic reflection from the surface; determining the position of the object on the surface based on respective first time-of-flight measurements corresponding to the plurality of transducers. The method can further comprise: for each of a plurality of acoustic transducers: transmitting a second ultrasonic wave through the deformable material; receiving a second ultrasonic reflection from the deformable material in response to the second ultrasonic wave transmitted through the deformable material traversing the thickness of the deformable material; and determining a second time-of-flight between the second ultrasonic wave transmitted through the deformable material and the second ultrasonic reflection from the deformable material. The method can further comprise determining the amount of applied force by the object on the surface based on respective second time-of-flight measurements corresponding to the plurality of transducers. Some examples of the disclosure are directed to a touch and force sensitive device. The device can comprise: a surface, a deformable material disposed between the surface and a rigid material, such that force on the surface causes a deformation of the deformable material, one or more transducers coupled to the surface and the deformable material and configured to transmit ultrasonic waves to and receive ultrasonic waves from the surface and the deformable material, and a processor. The processor can be capable of determining a location of a contact by an object on the surface based on ultrasonic waves propagating in the surface and determining an applied force by the contact on the surface based on ultrasonic waves propagating in the deformable material. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the surface can comprise a glass or sapphire external surface of the device, the rigid material can comprise a portion of a metal housing of the device, and the deformable material can form a gasket between the metal housing and the surface. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the one or more transducers can comprise at least a first transducer coupled to the deformable material. The first transducer can be configured to transmit an ultrasonic wave through the thickness of the deformable material. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the first transducer can also be configured to receive one or more ultrasonic reflections from a boundary between the deformable material and the rigid material. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the one or more transducers can comprise at least a second transducer coupled between the deformable material and the rigid material. The second transducer can be configured to receive the ultrasonic wave transmitted through the thickness of the deformable material. 
Additionally or alternatively to one or more of the examples disclosed above, in some examples, the one or more transducers can comprise at least one transducer configured to simultaneously transmit an ultrasonic wave in the surface and an ultrasonic wave through the deformable material. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the one or more transducers can comprise four transducers. Each of the four transducers can be disposed proximate to a respective edge of the surface. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the device can further comprise an ultrasonic absorbent material coupled to the deformable material. The ultrasonic absorbent material can be configured to dampen ultrasonic ringing in the deformable material. Additionally or alternatively to one or more of the examples disclosed above, in some examples, determining the location of the contact by the object on the surface can comprise: determining a first time-of-flight of an ultrasonic wave propagating between a first edge of the surface and a first leading edge of the object proximate to the first edge, determining a second time-of-flight of an ultrasonic wave propagating between a second edge of the surface and a second leading edge of the object proximate to the second edge, determining a third time-of-flight of an ultrasonic wave propagating between a third edge of the surface and a third leading edge of the object proximate to the third edge, and determining a fourth time-of-flight of an ultrasonic wave propagating between a fourth edge of the surface and a fourth leading edge of the object proximate to the fourth edge. Additionally or alternatively to one or more of the examples disclosed above, in some examples, determining the applied force by the contact on the surface can comprise determining a time-of-flight of an ultrasonic wave propagating from a first side of the deformable material and reflecting off of a second side, opposite the first side, of the deformable material. Some examples of the disclosure are directed to a method. The method can comprise transmitting ultrasonic waves in a surface, receiving ultrasonic reflections from the surface, transmitting ultrasonic waves through a deformable material, receiving ultrasonic reflections from the deformable material, determining a position of an object in contact with the surface from the ultrasonic reflections received from the surface, and determining a force applied by the object in contact with the surface from the ultrasonic reflections received from the deformable material. Additionally or alternatively to one or more of the examples disclosed above, in some examples, at least one of the ultrasonic waves transmitted in the surface and at least one of the ultrasonic waves transmitted in the deformable material are transmitted simultaneously. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the at least one of the ultrasonic waves transmitted in the surface and the at least one of the ultrasonic waves transmitted in the deformable material are transmitted by a common transducer. 
Additionally or alternatively to one or more of the examples disclosed above, in some examples, the method can further comprise determining a time-of-flight through the deformable material based on a time difference between transmitting an ultrasonic wave through the deformable material and receiving an ultrasonic reflection from the deformable material. The force applied by the object can be determined based on the time-of-flight through the deformable material. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the ultrasonic reflection from the deformable material can result from the ultrasonic wave transmitted through the deformable material reaching a boundary between the deformable material and a rigid material. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the ultrasonic reflection from the deformable material can be received before the ultrasonic reflection from the surface. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the method can further comprise determining a time-of-flight in the surface based on a time difference between transmitting an ultrasonic wave in the surface and receiving an ultrasonic reflection from the surface corresponding to the object in contact with the surface. Determining the position of the object can comprise determining a distance from an edge of the surface to a leading edge of the object proximate to the edge of the surface based on the time-of-flight in the surface. Some examples of the disclosure are directed to a non-transitory computer readable storage medium. The non-transitory computer readable storage medium can store instructions, which when executed by a device comprising a surface, a plurality of acoustic transducers coupled to edges of the surface, an acoustic touch and force sensing circuit, and one or more processors, cause the acoustic touch and force sensing circuit and the one or more processors to: for each of the plurality of acoustic transducers: simultaneously transmit an ultrasonic wave in the surface toward an opposite edge of the surface and transmit an ultrasonic wave through a deformable material; receive an ultrasonic reflection from the deformable material in response to the ultrasonic wave transmitted through the deformable material traversing the thickness of the deformable material; receive an ultrasonic reflection from the surface; determine a first time-of-flight between the ultrasonic wave transmitted through the deformable material and the ultrasonic reflection from the deformable material; and determine a second time-of-flight between the ultrasonic wave transmitted in the surface and the ultrasonic reflection from the surface. The instructions can further cause the acoustic touch and force sensing circuit and the one or more processors to determine a position of an object on the surface based on respective second time-of-flight measurements corresponding to the plurality of transducers and determine an amount of applied force by the object on the surface based on respective first time-of-flight measurements corresponding to the plurality of transducers. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the ultrasonic wave transmitted in the surface and the ultrasonic wave transmitted through the deformable material can comprise shear waves. 
Additionally or alternatively to one or more of the examples disclosed above, in some examples, the ultrasonic reflection from the deformable material can be received before the ultrasonic reflection from the surface. Although examples of this disclosure have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of examples of this disclosure as defined by the appended claims. | 141,946 |
11861116 | DETAILED DESCRIPTION OF THE EMBODIMENTS Embodiments of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings. Like reference numerals may refer to like elements throughout the accompanying drawings. It will be understood that when a component such as a film, a region, a layer, etc., is referred to as being “on”, “connected to”, “coupled to”, or “adjacent to” another component, it can be directly on, connected, coupled, or adjacent to the other component, or intervening components may be present. It will also be understood that when a component is referred to as being “between” two components, it can be the only component between the two components, or one or more intervening components may also be present. It will also be understood that when a component is referred to as “covering” another component, it can be the only component covering the other component, or one or more intervening components may also be covering the other component. Other words used to describe the relationships between components should be interpreted in a like fashion. It will be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element discussed below could be termed a second element without departing from the teachings of the present disclosure. Similarly, the second element could also be termed the first element. It should be understood that descriptions of features or aspects within each embodiment should typically be considered as available for other similar features or aspects in other embodiments, unless the context clearly indicates otherwise. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Spatially relative terms, such as “beneath”, “below”, “lower”, “under”, “above”, “upper”, etc., may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” or “under” other elements or features would then be oriented “above” the other elements or features. Thus, the example terms “below” and “under” can encompass both an orientation of above and below. Herein, when two or more elements or values are described as being substantially the same as or about equal to each other, it is to be understood that the elements or values are identical to each other, the elements or values are equal to each other within a measurement error, or if measurably unequal, are close enough in value to be functionally equal to each other as would be understood by a person having ordinary skill in the art. For example, the term “about” as used herein is inclusive of the stated value and means within an acceptable range of deviation for the particular value as determined by one of ordinary skill in the art, considering the measurement in question and the error associated with measurement of the particular quantity (e.g., the limitations of the measurement system). 
For example, “about” may mean within one or more standard deviations as understood by one of the ordinary skill in the art. Further, it is to be understood that while parameters may be described herein as having “about” a certain value, according to embodiments, the parameter may be exactly the certain value or approximately the certain value within a measurement error as would be understood by a person having ordinary skill in the art. Other uses of these terms and similar terms to describe the relationships between components should be interpreted in a like fashion. FIG.1is a view showing a configuration of a touch input system according to an embodiment of the present disclosure.FIG.2is a block diagram of a touch input device and a display device shown inFIG.1according to an embodiment of the present disclosure. Referring toFIGS.1and2, a display device10may be employed by portable electronic devices such as, for example, a mobile phone, a smartphone, a tablet PC, a mobile communications terminal, an electronic notebook, an electronic book, a portable multimedia player (PMP), a navigation device and an ultra-mobile PC (UMPC). In an embodiment, the display device10may be used as a display unit of a television, a laptop computer, a monitor, an electronic billboard, or an Internet of Things (IOT) device. In an embodiment, the display device10may be applied to wearable devices such as a smartwatch, a watch phone, a glasses-type display, and a head-mounted display (HMD) device. The display device10includes a display panel100, a display driver200, a touch driver400, a main processor500, and a communications unit600. A touch input device20includes a code detector21, a piezoelectric sensor22, a code processor23, a communications module24, and a memory25. The display device10uses the touch input device20as a touch input tool. The display panel100of the display device10may include a display unit DU displaying images, and a touch sensing unit TSU sensing a touch input such as, for example, a part of a human body such as a finger and the touch input device20. The display unit DU of the display panel100may include a plurality of pixels and may display images through the plurality of pixels. The touch sensing unit TSU of the display panel100may be formed on the front side of the display panel100. The touch sensing unit TSU may include a plurality of touch electrodes to sense a user's touch by capacitive sensing. Code patterns are formed on some of the plurality of touch electrodes, so that the code patterns are sensed by the touch input device20. The code patterns of the display panel100are formed of light-blocking members that form a predetermined plane code shape by covering some of the plurality of touch electrodes with a predetermined area. Accordingly, the code patterns are sensed by the touch input device according to the plane code shape of the light-blocking members and the size of the plane code. Light-blocking dummy patterns are formed on the front surfaces of some of the touch electrodes other than the touch electrodes on which the code patterns are formed. For example, code patterns in the plane code shape are formed and disposed on a part of the front surfaces of some of the touch electrodes at a predetermined spacing. In addition, light-blocking dummy patterns that block infrared or ultraviolet light without being sensed by the touch input device20as code patterns are formed on the front surfaces of the touch electrodes on which the code patterns are not formed. 
The light-blocking dummy patterns may cover the front surfaces of the touch electrodes so that the front surfaces are not exposed, thereby reducing the reflective characteristics and reflectance of the touch electrodes. In this manner, it is possible to reduce the influence of reflected light by the touch electrodes to thereby increase the recognition rate and the accuracy of the code patterns of the touch input device20. The structures of the code patterns and the light-blocking dummy patterns as well as the touch sensing unit TSU of the display panel100will be described in more detail below with reference to the accompanying drawings. The display driver200may output signals and voltages for driving the display unit DU. The display driver200may supply data voltages to data lines. The display driver200may apply a supply voltage to a voltage line and may supply gate control signals to the gate driver. The touch driver400may be connected to the touch sensing unit TSU. The touch driver400may supply a touch driving signal to a plurality of touch electrodes of the touch sensing unit TSU and may sense a change in the capacitance between the plurality of touch electrodes. The touch driver400may determine whether a user's touch is input and may find the coordinates of the touch based on the amount of the change in the capacitance between the touch electrodes. The main processor500may control all of the functions of the display device10. In an embodiment, the main processor500may apply digital video data to the display driver200so that the display panel100displays images. For example, the main processor500may receive touch data from the touch driver400to determine the coordinates of the user's touch, and then may generate digitizer video data based on the coordinates or may execute an application indicated by the icon displayed at the coordinates of the user's touch. In an embodiment, the main processor500may receive coordinate data from the touch input device20to determine the coordinates of the touch input device20, and then may generate digitizer video data based on the coordinates or may execute an application indicated by the icon displayed at the touch coordinates of the touch input device20. The communications unit600may conduct wired/wireless communications with an external device. For example, the communications unit600may transmit/receive communication signals to/from the communications module24of the touch input device20. The communications unit600may receive coordinate data composed of data codes from the touch input device20and may provide the coordinate data to the main processor500. The touch input device20may be used as a touch input tool and may be implemented as, for example, an electronic pen such as a smart pen. The touch input device20is an electronic pen that optically senses display light of the display panel100or light reflected off of the display panel100. The touch input device20may detect code patterns included in the display panel100based on the sensed light and generate coordinate data. The touch input device20may be, but is not limited to, an electronic pen in the shape of a writing tool. The code detector21of the touch input device20is disposed adjacent to the pen tip of the touch input device20to sense code patterns included in the display panel100. 
To this end, the code detector21includes at least one light-outputting portion21(a) (also referred to as a light-emitting portion) for outputting infrared light using at least one infrared light source, and at least one light-receiving portion21(b) for detecting infrared light reflected off the code patterns with an infrared camera. The at least one infrared light source included in the light-outputting portion21(a) may be configured as an infrared LED array in a matrix pattern. The infrared camera of the light-receiving portion21(b) may include a filter that transmits infrared rays and blocks wavelength ranges other than infrared rays, a lens system for focusing the infrared rays having transmitted the filter, and an optical image sensor that converts the optical image formed by the lens system into an electrical image signal and outputs it, etc. The optical image sensor is configured as an array in a matrix pattern like the infrared LED array, and may provide shape data of the code patterns to the code processor23according to the infrared shape reflected from the code patterns. In this manner, the code detector21of the touch input device20continuously detects code patterns included in some regions of the touch sensing unit TSU according to the user's control and motion, and may continuously generate the shape data of the code patterns to provide it to the code processor23. The code processor23may continuously receive shape data of code patterns from the code detector21. For example, the code processor23may continuously receive shape data for the code patterns, and may identify the arrangement structure and shape of the code patterns. The code processor23may extract or generate data codes corresponding to the arrangement structure and shape of the code patterns, and may combine the data codes to extract or generate coordinate data corresponding to the combined data codes. The code processor23may transmit the generated coordinate data to the display device10through the communications module24. For example, the code processor23receives the shape data of the code patterns and generates data codes corresponding to the code patterns to convert them, so that coordinate data can be quickly generated without complicated calculation or correction. The communications module24may conduct wired/wireless communications with an external device. For example, the communications module24may transmit/receive communication signals to/from the communications unit600of the display device10. The communications module24may receive coordinate data composed of data codes from the code processor23and may provide the coordinate data to the communications unit600. The memory25may store data necessary for driving the touch input device20. The memory25stores shape data of the code patterns and data codes respectively corresponding to the shape data and the code patterns. In addition, the memory25stores data codes and coordinate data according to the combination of data codes. The memory25shares with the code processor23the data codes corresponding to respective shape data and code patterns, and coordinate data according to the combination of data codes. Accordingly, the code processor23may combine the data codes through the data codes and the coordinate data stored in the memory25, and may extract or generate coordinate data corresponding to the combined data codes. 
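The lookup-based decoding performed by the code processor23can be pictured with a short sketch. The following Python fragment is only an illustration under assumptions: the shape names, the contents of the code table, and the way the combined code word is split into x and y halves are hypothetical, since the disclosure only specifies that shape data are mapped to data codes and that combined data codes are mapped to the coordinate data stored in the memory25.

from typing import List, Tuple

# Hypothetical mapping from a recognized code-pattern shape to a data code,
# standing in for the shape-data/data-code table stored in the memory25.
CODE_TABLE = {
    "diamond-small": 0b00,
    "diamond-large": 0b01,
    "bar-horizontal": 0b10,
    "bar-vertical": 0b11,
}

def decode_frame(shapes: List[str]) -> Tuple[int, int]:
    """Convert one frame of detected code-pattern shapes into (x, y) coordinate data.

    `shapes` stands in for the shape data supplied by the code detector21; the
    split of the combined code word into x and y halves is an assumption.
    """
    bits = 0
    for shape in shapes:                       # combine the data codes
        bits = (bits << 2) | CODE_TABLE[shape]
    half = len(shapes)                         # each shape contributes 2 bits
    x = bits >> half                           # upper half -> x coordinate (assumed)
    y = bits & ((1 << half) - 1)               # lower half -> y coordinate (assumed)
    return x, y

# Example: four detected shapes -> one coordinate pair that would be sent to the
# display device10through the communications module24.
print(decode_frame(["diamond-small", "bar-vertical", "diamond-large", "bar-horizontal"]))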
FIG.3is a perspective view showing the configuration of the display device shown inFIG.2according to an embodiment of the present disclosure.FIG.4is a cross-sectional view showing the configuration of the display device shown inFIG.2according to an embodiment of the present disclosure. Referring toFIGS.3and4, the display device10may have a shape similar to a quadrangle when viewed from the top (e.g., in a plan view). For example, the display device10may have a shape similar to a quadrangle having shorter sides in the x-axis direction and longer sides in the y-axis direction when viewed from the top. The corners where the shorter sides in the x-axis direction and the longer sides in the y-axis direction meet may be rounded to have a predetermined curvature or may be formed at a right angle. The shape of the display device10when viewed from the top is not limited to a quadrangular shape, but may be formed in a shape similar to, for example, other polygonal shapes, a circular shape, or an elliptical shape. The display panel100may include a main area MA and a subsidiary area SBA. The main area MA may include a display area DA having pixels for displaying images, and a non-display area NDA located around the display area DA. The display area DA may emit light from a plurality of emission areas or a plurality of opening areas. For example, the display panel100may include a pixel circuit including switching elements, a pixel-defining layer that defines the emission areas or the opening areas, and a self-light-emitting element. The non-display area NDA may be disposed on the outer side of the display area DA. The non-display area NDA may be defined as the edge area of the main area MA of the display panel100. The non-display area NDA may include a gate driver that applies gate signals to gate lines, and fan-out lines that connect the display driver200with the display area DA. The subsidiary area SBA may extend from one side of the main area MA. The subsidiary area SBA may include a flexible material that can be bent, folded, or rolled. For example, when the subsidiary area SBA is bent, the subsidiary area SBA may overlap the main area MA in the thickness direction (z-axis direction). The subsidiary area SBA may include pads connected to the display driver200and the circuit board300. In an embodiment, the subsidiary area SBA may be eliminated, and the display driver200and the pads may be disposed in the non-display area NDA. The display driver200may be implemented as an integrated circuit (IC) and may be attached on the display panel100using, for example, a chip-on-glass (COG) technique, a chip-on-plastic (COP) technique, or ultrasonic bonding. In an embodiment, the display driver200may be disposed in the subsidiary area SBA and may overlap the main area MA in the thickness direction (z-axis direction) as the subsidiary area SBA is bent. In an embodiment, the display driver200may be mounted on the circuit board300. The circuit board300may be attached on the pads of the display panel100using an anisotropic conductive film (ACF). Lead lines of the circuit board300may be electrically connected to the pads of the display panel100. The circuit board300may be, for example, a flexible printed circuit board (FPCB), a printed circuit board (PCB), or a flexible film such as a chip-on-film (COF). The touch driver400may be mounted on the circuit board300. The touch driver400may be implemented as an integrated circuit (IC).
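Before describing the touch driver400further, its capacitance-based scanning can be pictured with a brief sketch. The Python fragment below is purely illustrative: the array sizes, baseline values, detection threshold and the sampling helper are hypothetical stand-ins, since the disclosure only states that the touch driver supplies a touch driving signal to the touch electrodes and locates a touch from the amount of change in the capacitance between them.

import random                     # stands in for readings from a real analog front end
from typing import Optional, Tuple

NUM_TE, NUM_RE = 8, 6             # assumed numbers of driving and sensing electrodes
BASELINE = [[100.0] * NUM_RE for _ in range(NUM_TE)]   # capacitance with no touch
THRESHOLD = 5.0                   # assumed minimum capacitance drop for a touch

def sample_mutual_capacitance(te: int, re_: int) -> float:
    """Hypothetical stand-in for driving electrode `te` and reading electrode `re_`."""
    touched = (te, re_) == (3, 2)  # pretend a finger sits over this intersection
    return BASELINE[te][re_] - (8.0 if touched else random.uniform(0.0, 0.5))

def scan() -> Optional[Tuple[int, int]]:
    """Return the (TE, RE) intersection with the strongest capacitance drop, if any."""
    best, best_delta = None, 0.0
    for te in range(NUM_TE):                   # supply the touch driving signal per TE
        for re_ in range(NUM_RE):              # read the touch sensing signal per RE
            delta = BASELINE[te][re_] - sample_mutual_capacitance(te, re_)
            if delta > THRESHOLD and delta > best_delta:
                best, best_delta = (te, re_), delta
    return best

print(scan())                     # expected to report the touched intersection (3, 2)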
As described above, the touch driver400may supply a touch driving signal to a plurality of touch electrodes of the touch sensing unit TSU and may sense a change in the capacitance between the plurality of touch electrodes. The touch driving signal may be a pulse signal having a predetermined frequency. The touch driver400may determine whether there is touch by a part of a user's body such as, for example, a finger, and may find the coordinates of the touch, if any, based on the amount of the change in the capacitance between the touch electrodes. Referring toFIG.4, the display panel100may include a display unit DU, a touch sensing unit TSU, and a polarizing film. The display unit DU may include a substrate SUB, a thin-film transistor layer TFTL, an emission material layer EML and an encapsulation layer TFEL. The substrate SUB may be a base substrate or a base member. The substrate SUB may be a flexible substrate that can be bent, folded, or rolled. In an embodiment, the substrate SUB may include, but is not limited to, a glass material or a metal material. In an embodiment, the substrate SUB may include a polymer resin such as polyimide PI. The thin-film transistor layer TFTL may be disposed on the substrate SUB. The thin-film transistor layer TFTL may include a plurality of thin-film transistors forming pixel circuits of pixels. The thin-film transistor layer TFTL may include gate lines, data lines, voltage lines, gate control lines, fan-out lines for connecting the display driver200with the data lines, lead lines for connecting the display driver200with the pads, etc. When the gate driver is formed on one side of the non-display area NDA of the display panel100, the gate driver may include thin-film transistors. The thin-film transistor layer TFTL may be disposed in the display area DA, the non-display area NDA and the subsidiary area SBA. The thin-film transistors in each of the pixels, the gate lines, the data lines and the voltage lines in the thin-film transistor layer TFTL may be disposed in the display area DA. The gate control lines and the fan-out lines in the thin-film transistor layer TFTL may be disposed in the non-display area NDA. The lead lines of the thin-film transistor layer TFTL may be disposed in the subsidiary area SBA. The emission material layer EML may be disposed on the thin-film transistor layer TFTL. The emission material layer EML may include a plurality of light-emitting elements in each of which a first electrode, an emissive layer and a second electrode are stacked on one another sequentially to emit light, and a pixel-defining layer for defining the pixels. The plurality of light-emitting elements in the emission material layer EML may be disposed in the display area DA. The emissive layer may be an organic emissive layer containing an organic material. The emissive layer may include, for example, a hole transporting layer, an organic light-emitting layer and an electron transporting layer. When the first electrode receives a voltage and the second electrode receives a cathode voltage through the thin-film transistors on the thin-film transistor layer TFTL, the holes and electrons may move to the organic light-emitting layer through the hole transporting layer and the electron transporting layer, respectively, such that they combine in the organic light-emitting layer to emit light. For example, the first electrode may be an anode electrode while the second electrode may be a cathode electrode. 
However, embodiments of the present disclosure are not limited thereto. In an embodiment, the plurality of light-emitting elements may include quantum-dot light-emitting diodes including a quantum-dot emissive layer or inorganic light-emitting diodes including an inorganic semiconductor. The encapsulation layer TFEL may cover the upper and side surfaces of the emission material layer EML, and can protect the emission material layer EML. The encapsulation layer TFEL may include at least one inorganic layer and at least one organic layer for encapsulating the emission material layer EML. The touch sensing unit TSU may be disposed on the encapsulation layer TFEL. The touch sensing unit TSU may include a plurality of touch electrodes for sensing a user's touch by capacitive sensing, and touch lines connecting the plurality of touch electrodes with the touch driver400. For example, the touch sensing unit TSU may sense a user's touch by self-capacitance sensing or mutual capacitance sensing. In an embodiment, the touch sensing unit TSU may be disposed on a separate substrate disposed on the display unit DU. In such a case, the substrate supporting the touch sensing unit TSU may be a base member encapsulating the display unit DU. The plurality of touch electrodes of the touch sensing unit TSU may be disposed in a touch sensor area overlapping the display area DA. The touch lines of the touch sensing unit TSU may be disposed in a touch peripheral area overlapping the non-display area NDA. The subsidiary area SBA of the display panel100may extend from one side of the main area MA. The subsidiary area SBA may include a flexible material that can be bent, folded, or rolled. For example, when the subsidiary area SBA is bent, the subsidiary area SBA may overlap the main area MA in the thickness direction (z-axis direction). The subsidiary area SBA may include pads connected to the display driver200and the circuit board300. FIG.5is a plan view showing a display unit of a display device according to an embodiment of the present disclosure. Referring toFIG.5, the display area DA of the display unit DU may display images and may be defined as a central area of the display panel100. The display area DA may include a plurality of pixels SP, a plurality of gate lines GL, a plurality of data lines DL and a plurality of voltage lines VL. Each of the plurality of pixels SP may be defined as the minimum unit that outputs light. The plurality of gate lines GL may supply the gate signals received from the gate driver210to the plurality of pixels SP. The plurality of gate lines GL may extend in the x-axis direction and may be spaced apart from one another in the y-axis direction crossing the x-axis direction. The plurality of data lines DL may supply the data voltages received from the display driver200to the plurality of pixels SP. The plurality of data lines DL may extend in the y-axis direction and may be spaced apart from one another in the x-axis direction. The plurality of voltage lines VL may supply the supply voltage received from the display driver200to the plurality of pixels SP. The supply voltage may be at least one of a driving voltage, an initialization voltage, and a reference voltage. The plurality of voltage lines VL may extend in the y-axis direction and may be spaced apart from one another in the x-axis direction. The non-display area NDA of the display unit DU may surround the display area DA. The non-display area NDA may include the gate driver210, fan-out lines FOL, and gate control lines GCL.
The gate driver210may generate a plurality of gate signals based on the gate control signal, and may sequentially supply the plurality of gate signals to the plurality of gate lines GL in a predetermined order. The fan-out lines FOL may extend from the display driver200to the display area DA. The fan-out lines FOL may supply the data voltage received from the display driver200to the plurality of data lines DL. A gate control line GCL may extend from the display driver200to the gate driver210. The gate control line GCL may supply the gate control signal received from the display driver200to the gate driver210. The subsidiary area SBA may include the display driver200, the display pad area DPA, and first and second touch pad areas TPA1and TPA2. The display driver200may output signals and voltages for driving the display panel100to the fan-out lines FOL. The display driver200may supply data voltages to the data lines DL through the fan-out lines FOL. The data voltages may be applied to the plurality of pixels SP, and the luminance of the plurality of pixels SP may be determined based on the data voltages. The display driver200may supply a gate control signal to the gate driver210through the gate control line GCL. The display pad area DPA, the first touch pad area TPA1and the second touch pad area TPA2may be disposed on or near the edge of the subsidiary area SBA. The display pad area DPA, the first touch pad area TPA1and the second touch pad area TPA2may be electrically connected to the circuit board300using a low-resistance, high-reliability material such as, for example, an anisotropic conductive film and an SAP. The display pad area DPA may include a plurality of display pads DPP. The plurality of display pads DPP may be connected to the main processor500through the circuit board300. The plurality of display pads DPP may be connected to the circuit board300to receive digital video data and may supply digital video data to the display driver200. FIG.6is a plan view showing a touch sensing unit of a display device according to an embodiment of the present disclosure. Referring toFIG.6, the touch sensing unit TSU may include a touch sensor area TSA that senses a user's touch, and a touch peripheral area TPA disposed around the touch sensor area TSA. The touch sensor area TSA may overlap the display area DA of the display unit DU, and the touch peripheral area TPA may overlap the non-display area NDA of the display unit DU. The touch sensor area TSA may include a plurality of touch electrodes SEN and a plurality of dummy electrodes DE. The plurality of touch electrodes SEN may form mutual capacitance or self-capacitance to sense a touch of an object or person. The plurality of touch electrodes SEN may include a plurality of driving electrodes TE and a plurality of sensing electrodes RE. The plurality of driving electrodes TE may be arranged in the x-axis direction and the y-axis direction. The plurality of driving electrodes TE may be spaced apart from one another in the x-axis direction and the y-axis direction. The driving electrodes TE adjacent in the y-axis direction may be electrically connected through a plurality of connection electrodes CE. The plurality of driving electrodes TE may be connected to first touch pads TP1through driving lines TL. The driving lines TL may include lower driving lines TLa and upper driving lines TLb. 
For example, some of the driving electrodes TE disposed on the lower side of the touch sensor area TSA may be connected to the first touch pads TP1through the lower driving lines TLa, and some others of the driving electrodes TE disposed on the upper side of the touch sensor area TSA may be connected to the first touch pads TP1through the upper driving lines TLb. The lower driving lines TLa may extend to the first touch pads TP1beyond the lower side of the touch peripheral area TPA. The upper driving lines TLb may extend to the first touch pads TP1via the upper side, the left side and the lower side of the touch peripheral area TPA. The first touch pads TP1may be connected to the touch driver400through the circuit board300. The connection electrodes CE may be bent at least once. Although the connection electrodes CE are illustrated as having the shape of an angle bracket “<” or “>”, as shown inFIG.6, the shape of the connection electrodes CE when viewed from the top (e.g., in a plan view) is not limited thereto. The driving electrodes TE adjacent to one another in the y-axis direction may be electrically connected by the plurality of connection electrodes CE. Even if one of the connection electrodes CE is disconnected, the driving electrodes TE can be stably connected through the remaining connection electrodes CE. The driving electrodes TE adjacent to each other may be connected by two connection electrodes CE. However, the number of connection electrodes CE is not limited thereto. The connection electrodes CE may be disposed on a different layer from the plurality of driving electrodes TE and the plurality of sensing electrodes RE. The sensing electrodes RE adjacent to one another in the x-axis direction may be electrically connected to one another through connection portions disposed on the same layer as the plurality of driving electrodes TE or the plurality of sensing electrodes RE. For example, the plurality of sensing electrodes RE may extend in the x-axis direction and may be spaced apart from one another in the y-axis direction. The plurality of sensing electrodes RE may be arranged in the x-axis direction and the y-axis direction, and the sensing electrodes RE adjacent to one another in the x-axis direction may be electrically connected through the connection portions. The driving electrodes TE adjacent to one another in the y-axis direction may be electrically connected through the connection electrodes CE disposed on a different layer from the plurality of driving electrodes TE or the plurality of sensing electrodes RE. The connection electrodes CE may be formed on the rear layer (or the lower layer) of the layer on which the driving electrodes TE and the sensing electrodes RE are formed. The connection electrodes CE are electrically connected to the driving electrode TE through a plurality of contact holes. Accordingly, even though the connection electrodes CE overlap the plurality of sensing electrodes RE in the z-axis direction, the plurality of driving electrodes TE and the plurality of sensing electrodes RE are insulated from each other. Mutual capacitance may be formed between the driving electrodes TE and the sensing electrodes RE. The plurality of sensing electrodes RE may be connected to second touch pads TP2through sensing lines RL. For example, some of the sensing electrodes RE disposed on the right side of the touch sensor area TSA may be connected to the second touch pads TP2through the sensing lines RL. 
The sensing lines RL may extend to the second touch pads TP2through the right side and the lower side of the touch peripheral area TPA. The second touch pads TP2may be connected to the touch driver400through the circuit board300. Each of the plurality of dummy electrodes DE may be surrounded by the driving electrode TE or the sensing electrode RE. Each of the plurality of dummy electrodes DE may be spaced apart from and insulated from the driving electrode TE or the sensing electrode RE. Accordingly, the dummy electrodes DE may be electrically floating. Code patterns in the plane code shape are formed at predetermined spacing on some regions of the front surface of at least one of the plurality of driving electrodes TE, the plurality of sensing electrodes RE, and the plurality of dummy electrodes DE. In addition, the light-blocking dummy patterns are formed on the front surfaces of the touch electrodes except the code patterns. The display pad area DPA, the first touch pad area TPA1and the second touch pad area TPA2may be disposed on or near the edge of the subsidiary area SBA. The display pad area DPA, the first touch pad area TPA1and the second touch pad area TPA2may be electrically connected to the circuit board300using a low-resistance, high-reliability material such as, for example, an anisotropic conductive film and an SAP. The first touch pad area TPA1may be disposed on one side of the display pad area DPA and may include a plurality of first touch pads TP1. The plurality of first touch pads TP1may be electrically connected to the touch driver400disposed on the circuit board300. The plurality of first touch pads TP1may supply touch driving signals to the plurality of driving electrodes TE through the plurality of driving lines TL. The second touch pad area TPA2may be disposed on the opposite side of the display pad area DPA and may include a plurality of second touch pads TP2. The plurality of second touch pads TP2may be electrically connected to the touch driver400disposed on the circuit board300. The touch driver400may receive a touch sensing signal through the plurality of sensing lines RL connected to the plurality of second touch pads TP2, and may sense a change in the capacitance between the driving electrodes TE and the sensing electrodes RE. In an embodiment, the touch driver400may supply a touch driving signal to each of the plurality of driving electrodes TE and the plurality of sensing electrodes RE, and may receive a touch sensing signal from each of the plurality of driving electrodes TE and the plurality of sensing electrodes RE. The touch driver400may sense a change in the amount of charges in each of the plurality of driving electrodes TE and the plurality of sensing electrodes RE based on the touch sensing signal. FIG.7is an enlarged view showing the code patterns and the light-blocking patterns formed in area B1ofFIG.6according to an embodiment of the present disclosure.FIG.8is an enlarged view showing area B1in which the code patterns and the light-blocking patterns are disposed according to an embodiment of the present disclosure. Referring toFIGS.7and8, a plurality of driving electrodes TE, a plurality of sensing electrodes RE and a plurality of dummy electrodes DE may be disposed on the same layer and may be spaced apart from one another. The plurality of driving electrodes TE may be arranged in the x-axis direction and the y-axis direction.
The plurality of driving electrodes TE may be spaced apart from one another in the x-axis direction and the y-axis direction. The driving electrodes TE adjacent in the y-axis direction may be electrically connected through a plurality of connection electrodes CE. The plurality of sensing electrodes RE may extend in the x-axis direction and may be spaced apart from one another in the y-axis direction. The plurality of sensing electrodes RE may be arranged in the x-axis direction and the y-axis direction, and the sensing electrodes RE adjacent to one another in the x-axis direction may be electrically connected. For example, the sensing electrodes RE may be electrically connected through connection portions, and the connection portions may be disposed within the shortest distance between the driving electrodes TE adjacent to each other. The connection electrodes CE may be disposed on a different layer from the plurality of driving electrodes TE and the plurality of sensing electrodes RE, e.g., a rear surface layer. Each of the connection electrodes CE may include a first portion CEa and a second portion CEb. For example, the first portion CEa of the connection electrode CE may be connected to the driving electrode TE disposed on one side through a first contact hole CNT1and may extend in the third direction DR3. The second portion CEb of the connection electrode CE may be bent from the first portion CEa where it overlaps the sensing electrode RE to be extended in the second direction DR2, and may be connected to the driving electrode TE disposed on the other side through the first contact hole CNT1. In the following description, the first direction DR1may be a direction between the x-axis direction and the y-axis direction, the second direction DR2may be a direction between the direction opposite to the y-axis direction and the x-axis direction, the third direction DR3may be the direction opposite to the first direction DR1, and a fourth direction DR4may be the direction opposite to the second direction DR2. Accordingly, each of the plurality of connection electrodes CE may connect the adjacent driving electrodes TE in the y-axis direction. Each of the pixels groups PG may include first to third sub-pixels or first to fourth sub-pixels. The first to fourth sub-pixels may include first to fourth emission areas EA1, EA2, EA3, and EA4, respectively. For example, the first emission area EA1may emit light of a first color or red light, the second emission area EA2may emit light of a second color or green light, and the third emission area EA3may emit light of a third color or blue light. In addition, the fourth emission area EA4may emit light of the fourth color or light of one of the first to third colors. However, embodiments of the present disclosure are not limited thereto. One pixel group PG may represent a black-and-white or grayscale image through the first to third emission areas EA1to EA3or the first to fourth emission areas EA1to EA4. Grayscales of various colors, such as white, may be represented by combinations of light emitted from the first to third emission areas EA1to EA3or the first to fourth emission areas EA1to EA4. According to the arrangement structure of the first to third sub-pixels or the first to fourth sub-pixels, the plurality of driving electrodes TE, the plurality of sensing electrodes RE and the plurality of dummy electrodes DE are formed in a mesh structure or a net structure when viewed from the top (e.g., when viewed in a plan view). 
The plurality of driving electrodes TE, the plurality of sensing electrodes RE and the plurality of dummy electrodes DE may be disposed between and surround the first to third emission areas EA1to EA3or the first to fourth emission areas EA1to EA4, forming a pixel group PG when viewed from the top (e.g., when viewed in a plan view). Accordingly, in embodiments of the present disclosure, the plurality of driving electrodes TE, the plurality of sensing electrodes RE and the plurality of dummy electrodes DE do not overlap the first to fourth emission areas EA1to EA4. In embodiments of the present disclosure, the plurality of connection electrodes CE do not overlap the first to fourth emission areas EA1to EA4. Accordingly, the display device10can prevent the luminance of the light exiting from the first to fourth emission areas EA1, EA2, EA3and EA4from being lowered by the touch sensing unit TSU. Each of the plurality of driving electrodes TE may include a first portion TEa extended in the first direction DR1and a second portion TEb extended in the second direction DR2, so that in embodiments of the present disclosure, the plurality of driving electrodes TE do not overlap the first to fourth emission areas EA1to EA4. Each of the plurality of sensing electrodes RE may include a first portion REa extended in the first direction DR1and a second portion REb extended in the second direction DR2, so that in embodiments of the present disclosure, the plurality of sensing electrodes RE do not overlap the first to fourth emission areas EA1to EA4. In embodiments of the present disclosure, the plurality of dummy electrodes DE do not overlap the first to fourth emission areas EA1to EA4. Thus, in embodiments of the present disclosure, each of the touch electrodes TE and RE includes at least one portion (e.g., TEa, TEb of TE, and REa, REb of RE) disposed between adjacent emission areas among the plurality of emission areas EA1, EA2, EA3and EA4. Code patterns CP and light-blocking dummy patterns DP are formed on the front surfaces of a plurality of dummy electrodes DE, a plurality of driving electrodes TE, and a plurality of sensing electrodes RE. The code patterns CP and the light-blocking dummy patterns DP are formed via the same process. The code patterns CP are formed at predetermined spacing (e.g., about 300 μm) in some regions of the front surfaces of the plurality of dummy electrodes DE, the plurality of driving electrodes TE, and the plurality of sensing electrodes RE. When the code patterns CP are formed, the light-blocking dummy patterns DP are formed on the regions of the front surfaces of the plurality of dummy electrodes DE, the plurality of driving electrodes TE, and the plurality of sensing electrodes RE other than the regions where the code patterns CP are formed. Each of the code patterns CP is formed by covering some regions of the front surface of at least one of the plurality of driving electrodes TE, the plurality of sensing electrodes RE and the plurality of dummy electrodes DE with the plane code shape of a predetermined size. The code patterns CP may be formed by covering not only some regions of the front surface of each of the electrodes, but also by covering at least one side surface along with the front surface. The code patterns CP may reduce the reflectance of infrared light by blocking and reflecting the infrared light applied from the touch input device20. The code patterns may be recognized by the touch input device20according to the code shape with reduced reflectance of infrared light.
To this end, the code patterns CP may be formed to have a height or a thickness larger than the height or thickness of the light-blocking dummy patterns DP, and inclined surfaces of a predetermined inclination may be formed on the side and front surfaces of the code patterns CP. Herein, the terms “height” and “thickness” may be used interchangeably, and may refer to the thickness of the described component in the thickness direction (z-axis direction) of the device. The plane code shape of each of the code patterns CP may be formed in a closed loop shape such as, for example, a rectangle, a square, a circle and a diamond. Alternatively, the flat code shape of each of the code patterns CP may be formed in an open loop shape that surrounds only a part of one emission area. In addition, the plane code shape of each of the code patterns CP may be formed in a straight line shape or curved line shape having a predetermined length. When the code patterns CP are disposed between and surround the plurality of emission areas instead of one emission area, the shape of the code patterns CP may be formed in a mesh structure and a net structure when viewed from the top (e.g., when viewed in a plan view). Hereinafter, an example where the code patterns CP are formed in a diamond shape forming a closed loop when viewed from the top will be described. The light-blocking dummy patterns DP may be formed together with the code patterns CP when the code patterns CP are formed, and may be formed in regions other than the regions where the code patterns CP are formed. For example, the light-blocking dummy patterns DP may be disposed and formed so that they cover the entire front surfaces of the plurality of dummy electrodes DE, the plurality of driving electrodes TE and the plurality of sensing electrodes RE on which the code patterns CP are not formed. Since the light-blocking dummy patterns DP and the code patterns CP are formed via the same process, the light-blocking dummy patterns DP and the code patterns CP may be formed of the same material. The code patterns CP and the light-blocking dummy patterns DP are formed of light-blocking members made of a material that absorbs light. The code patterns CP and the light-blocking dummy pattern DP may be formed via a patterning process using a half tone mask. The light-blocking dummy patterns DP are formed within a predetermined thickness or height so that they can block light and are not recognized as code patterns. For example, the light-blocking dummy patterns DP are formed within a predetermined thickness so that they can reduce the reflectance of the plurality of driving electrodes TE, the plurality of sensing electrodes RE and the plurality of dummy electrodes DE, and are not recognized as code patterns by the touch input device20. The thickness or height of the light-blocking dummy patterns DP may be predetermined and formed differently depending on the use environments such as the outside brightness of the display device10, the intensity and the wavelength band of infrared light of the touch input device20. Alternatively, each of the code patterns CP may be higher or thicker than the thickness of the light-blocking dummy patterns DP by a predetermined height or thickness or more, and may have a greater light-blocking rate than the light-blocking dummy patterns DP so that the code patterns CP may be recognized as the code patterns by the touch input device20. 
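One way to picture how the code detector21could tell the code patterns CP apart from the light-blocking dummy patterns DP is by their contrast in the captured infrared image: because the thickness Ch is greater than the thickness Ph, the code patterns CP block more infrared light and appear darker. The Python sketch below is only illustrative; the intensity values and the threshold are hypothetical, as the disclosure specifies only the relative light-blocking rates.

IR_IMAGE = [  # hypothetical 4x6 infrared intensity map (0 = dark, 255 = bright)
    [210, 120, 115, 118, 210, 212],
    [208,  40,  35, 117, 209, 211],
    [207,  38, 118, 116, 208, 210],
    [209, 119, 117, 119, 207, 209],
]
CODE_PATTERN_THRESHOLD = 60  # assumed: only code patterns CP fall below this intensity

def code_pattern_pixels(image):
    """Return pixel positions dark enough to be treated as code patterns CP."""
    return [
        (row, col)
        for row, line in enumerate(image)
        for col, value in enumerate(line)
        if value < CODE_PATTERN_THRESHOLD
    ]

# Dummy-pattern pixels (~115-120) and emission areas (~210) stay above the
# threshold, so only the code-pattern pixels are reported to the code processor23.
print(code_pattern_pixels(IR_IMAGE))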
The light-blocking dummy patterns DP may be formed in a substantially straight line or curved line shape having a predetermined length, or may be formed in an open loop shape that is bent to surround a part of at least one emission area. Alternatively, the light-blocking dummy patterns DP may be formed so that they cover the entire front surfaces of the plurality of dummy electrodes DE, the plurality of driving electrodes TE and the plurality of sensing electrodes RE to increase the overall light-blocking effect. In such a case, the overall shape of the light-blocking dummy patterns DP may be formed in a mesh structure and a net structure when viewed from the top. The light-blocking dummy patterns DP that block infrared and ultraviolet lights without being recognized as a code shape are formed on the front surfaces of the touch electrodes SEN and the dummy electrodes DE on which the code patterns CP are not formed, so that the exposed areas of the touch electrodes SEN and the dummy electrodes DE can be reduced. Accordingly, by reducing the reflective characteristics and reflectance of the touch electrodes SEN and the dummy electrodes DE, the recognition rate and accuracy of the code patterns CP of the touch input device20can be increased. FIG.9is an enlarged view showing the code patterns and the light-blocking patterns formed in area B1shown inFIG.6in further detail according to an embodiment of the present disclosure. Referring toFIG.9, code patterns CP may be formed at predetermined spacing of about 300 μm on the front surfaces of the driving electrodes TE and the sensing electrodes RE, as well as the dummy electrodes DE. In addition, the light-blocking dummy patterns DP are formed on a part of the front surfaces or on the entire front surfaces of the electrodes DE, TE and RE where the code patterns CP are not formed. The width in at least one direction, the size and the length in at least one direction of the code patterns CP may be set and formed so that they correspond to the size, sensing area, arrangement, etc. of the light-receiving portion21(b) or the optical image sensor included in the code detector21of the touch input device20. The code patterns CP have a thickness greater than the thickness of the nearby light-blocking dummy patterns DP by a predetermined thickness or more, so that the code patterns CP are darker and have a higher light-blocking rate than the nearby light-blocking dummy patterns DP. Accordingly, the code detector21of the touch input device20may recognize the width, size and length of the code patterns CP with a clearer contrast than the nearby light-blocking dummy patterns DP to sense the code shape of the code patterns CP. The light-blocking dummy patterns DP may be formed in a mesh structure of a predetermined size according to the shape of the electrodes DE, TE and RE in regions other than the regions where the code patterns CP are formed. For example, the light-blocking dummy patterns DP are formed to be disposed between and surround the border of the entire emission areas EA1to EA4, so that the overall shape may be formed in a mesh structure when viewed from the top (e.g., when viewed in a plan view). The overall width and length of the light-blocking dummy patterns DP in the mesh structure are greater or longer than the width and length of the code patterns CP, and the amount of reflected infrared light and the intensity of the reflected infrared light are greater than those of the code patterns CP. 
Therefore, in embodiments of the present disclosure, the light-blocking dummy patterns DP are not recognized as a code shape by the code detector21. FIG.10is a cross-sectional view showing the cross-sectional structure taken along line I-I′ ofFIG.9according to an embodiment of the present disclosure.FIG.11is a cross-sectional view showing the cross-sectional structure taken along line I-I ofFIG.10as simple blocks according to an embodiment of the present disclosure. Referring toFIGS.10and11, a barrier layer BR may be disposed on the substrate SUB. The substrate SUB may include an insulating material such as a polymer resin. For example, the substrate SUB may include polyimide. The substrate SUB may be a flexible substrate that can be bent, folded, or rolled. The barrier layer BR is a layer that protects the thin-film transistors of the thin-film transistor layer TFTL and an emissive layer172of the emission material layer EML. The barrier layer BR may be formed of multiple inorganic layers stacked alternately on one another. For example, the barrier layer BR may include multiple layers in which one or more inorganic layers of, for example, a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer and an aluminum oxide layer are alternately stacked on one another. Thin-film transistors ST1may be disposed on the barrier layer BR. Each of the thin-film transistors ST1includes an active layer ACT1, a gate electrode G1, a source electrode S1, and a drain electrode D1. The active layer ACT1, the source electrode S1and the drain electrode D1of each of the thin-film transistors ST1may be disposed on the barrier layer BR. The active layer ACT1of each of the thin-film transistors ST1includes, for example, polycrystalline silicon, single crystalline silicon, low-temperature polycrystalline silicon, amorphous silicon, or an oxide semiconductor. A part of the active layer ACT1overlapping the gate electrode G1in the third direction (z-axis direction), that is, the thickness direction of the substrate SUB, may be defined as a channel region. The source electrode S1and the drain electrode D1are regions that do not overlap the gate electrode G1in the third direction (z-axis direction), and may have conductivity by doping ions or impurities into a silicon semiconductor or an oxide semiconductor. A gate insulator130may be disposed on the active layer ACT1, the source electrode S1and the drain electrode D1of each of the thin-film transistors ST1. The gate insulator130may be formed of an inorganic layer such as, for example, a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer, or an aluminum oxide layer. The gate electrode G1of each of the thin-film transistors ST1may be disposed on the gate insulator130. The gate electrode G1may overlap the active layer ACT1in the third direction (z-axis direction). The gate electrode G1may include a single layer or multiple layers of one of, for example, molybdenum (Mo), aluminum (Al), chromium (Cr), gold (Au), titanium (Ti), nickel (Ni), neodymium (Nd) and copper (Cu) or an alloy thereof. A first interlayer dielectric layer141may be disposed on the gate electrode G1of each of the thin-film transistors ST1. The first interlayer dielectric layer141may be formed of an inorganic layer such as, for example, a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer, or an aluminum oxide layer. 
The first interlayer dielectric layer141may include a plurality of inorganic layers. A capacitor electrode CAE may be disposed on the first interlayer dielectric layer141. The capacitor electrode CAE may overlap the gate electrode G1of the first thin-film transistor ST1in the third direction (z-axis direction). Since the first interlayer dielectric layer141has a predetermined dielectric constant, a capacitor can be formed by the capacitor electrode CAE, the gate electrode G1, and the first interlayer dielectric layer141disposed therebetween. The capacitor electrode CAE may include a single layer or multiple layers of one of, for example, molybdenum (Mo), aluminum (Al), chromium (Cr), gold (Au), titanium (Ti), nickel (Ni), neodymium (Nd) and copper (Cu) or an alloy thereof. A second interlayer dielectric layer142may be disposed over the capacitor electrode CAE. The second interlayer dielectric layer142may be formed of an inorganic layer such as, for example, a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer, or an aluminum oxide layer. The second interlayer dielectric layer142may include a plurality of inorganic layers. A first anode connection electrode ANDE1may be disposed on the second interlayer dielectric layer142. The first anode connection electrode ANDE1may be connected to the drain electrode D1of the thin-film transistor ST1through a first connection contact hole ANCT1that penetrates the gate insulator130, the first interlayer dielectric layer141and the second interlayer dielectric layer142. The first anode connection electrode ANDE1may include a single layer or multiple layers of one of, for example, molybdenum (Mo), aluminum (Al), chromium (Cr), gold (Au), titanium (Ti), nickel (Ni), neodymium (Nd) and copper (Cu) or an alloy thereof. A first planarization layer160may be disposed over the first anode connection electrode ANDE1and may provide a flat surface over level differences due to the thin-film transistor ST1. The first planarization layer160may be formed of an organic layer such as, for example, an acryl resin, an epoxy resin, a phenolic resin, a polyamide resin and a polyimide resin. A second anode connection electrode ANDE2may be disposed on the first planarization layer160. The second anode connection electrode ANDE2may be connected to the first anode connection electrode ANDE1through a second connection contact hole ANCT2penetrating the first planarization layer160. The second anode connection electrode ANDE2may include a single layer or multiple layers of one of, for example, molybdenum (Mo), aluminum (Al), chromium (Cr), gold (Au), titanium (Ti), nickel (Ni), neodymium (Nd) and copper (Cu) or an alloy thereof. A second planarization layer180may be disposed on the second anode connection electrode ANDE2. The second planarization layer180may be formed as an organic layer such as, for example, an acryl resin, an epoxy resin, a phenolic resin, a polyamide resin and a polyimide resin. Light-emitting elements LEL and a bank190may be disposed on the second planarization layer180. Each of the light-emitting elements LEL includes a pixel electrode171, an emissive layer172, and a common electrode173. The pixel electrode171may be disposed on the second planarization layer180. The pixel electrode171may be connected to the second anode connection electrode ANDE2through a third connection contact hole ANCT3penetrating the second planarization layer180. 
In the top-emission structure in which light exits from the emissive layer172toward the common electrode173, the pixel electrode171may include a metal material having a high reflectivity such as, for example, a stack structure of aluminum and titanium (Ti/Al/Ti), a stack structure of aluminum and indium tin oxide (ITO) (ITO/Al/ITO), an APC alloy and a stack structure of APC alloy and ITO (ITO/APC/ITO). The APC alloy is an alloy of silver (Ag), palladium (Pd) and copper (Cu). The bank190may partition the pixel electrode171on the second planarization layer180to define the first to third emission areas EA1to EA3. The bank190may cover the edges of the pixel electrodes171. The bank190may be formed of an organic layer such as, for example, an acryl resin, an epoxy resin, a phenolic resin, a polyamide resin and a polyimide resin. In each of the first to third emission areas EA1, EA2and EA3, the pixel electrode171, the emissive layer172and the common electrode173are stacked sequentially on one another, so that holes from the pixel electrode171and electrons from the common electrode173are combined with each other in the emissive layer172to emit light. The emissive layer172may be disposed on the pixel electrode171and the bank190. The emissive layer172may include an organic material to emit light of a certain color. For example, the emissive layer172may include a hole transporting layer, an organic material layer, and an electron transporting layer. The common electrode173may be disposed on the emissive layer172. The common electrode173may cover the emissive layer172. The common electrode173may be a common layer formed commonly across the first emission area EA1, the second emission area EA2, and the third emission area EA3. A capping layer may be formed on the common electrode173. In the top-emission organic light-emitting diode, the common electrode173may be formed of a transparent conductive material (TCP) such as, for example, ITO and IZO that can transmit light, or a semi-transmissive conductive material such as, for example, magnesium (Mg), silver (Ag) and an alloy of magnesium (Mg) and silver (Ag). When the common electrode173is formed of a semi-transmissive metal material, the light extraction efficiency can be increased by using microcavities. An encapsulation layer TFEL may be disposed on the common electrode173. The encapsulation layer TFEL includes at least one inorganic layer, which may prevent permeation of oxygen or moisture into the emission material layer EML. In addition, the encapsulation layer TFEL includes at least one organic layer, which may protect the light-emitting element layer EML from foreign substances such as dust. For example, the encapsulation layer TFEL includes a first inorganic encapsulation layer TFE1, an organic encapsulation layer TFE2and a second inorganic encapsulation layer TFE3. The first inorganic encapsulation layer TFE1may be disposed on the common electrode173, the organic encapsulation layer TFE2may be disposed on the first inorganic encapsulation layer TFE1, and the second inorganic encapsulation layer TFE3may be disposed on the organic encapsulation layer TFE2. The first inorganic encapsulation layer TFE1and the second inorganic encapsulation layer TFE3may include multiple layers in which one or more inorganic layers of, for example, a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer and an aluminum oxide layer are alternately stacked on one another. 
The organic encapsulation layer TFE2may be an organic layer such as, for example, an acryl resin, an epoxy resin, a phenolic resin, a polyamide resin, a polyimide resin, etc. The touch sensing unit TSU may be disposed on the encapsulation layer TFEL. The touch sensing unit TSU includes a first touch insulating layer TINS1, connection electrodes CE, a second touch insulating layer TINS2, the driving electrodes TE, the sensing electrodes RE, and a third touch insulating layer TINS3. The first touch insulating layer TINS1may be formed of an inorganic layer such as, for example, a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer, or an aluminum oxide layer. The connection electrodes CE may be disposed on the first touch insulating layer TINS1. The connection electrode CE may include a single layer or multiple layers of one of, for example, molybdenum (Mo), aluminum (Al), chromium (Cr), gold (Au), titanium (Ti), nickel (Ni), neodymium (Nd) and copper (Cu) or an alloy thereof. The second touch insulating layer TINS2is disposed on the first touch insulating layer TINS1including the connection electrodes CE. The second touch insulating layer TINS2may be formed of an inorganic layer such as, for example, a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer, or an aluminum oxide layer. Alternatively, the second touch insulating layer TINS2may be formed of an organic layer such as, for example, an acryl resin, an epoxy resin, a phenolic resin, a polyamide resin and a polyimide resin. The driving electrodes TE and the sensing electrodes RE may be disposed on the second touch insulating layer TINS2. In addition to the driving electrodes TE and the sensing electrodes RE, the dummy electrodes DE, the driving lines TL and the sensing lines RL shown inFIG.6may be disposed on the second touch insulating layer TINS2. The driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE may be implemented as conductive metal electrodes, and may include one of, for example, molybdenum (Mo), aluminum (Al), chromium (Cr), gold (Au), titanium (Ti), nickel (Ni), neodymium (Nd) and copper (Cu) or an alloy thereof. The driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE are formed in a mesh structure or a net structure so that they do not overlap the emission areas EA1to EA4. Each of the driving electrodes TE and the sensing electrodes RE may overlap some of the connection electrodes BE1in the third direction (z-axis direction). The driving electrodes TE may be connected to the connection electrodes CE through touch contact holes TCNT1penetrating through the second touch insulating layer TINS2. A light-blocking member is applied to the front surface of the second touch insulating layer TINS2including the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE. Then, the applied light-blocking member is patterned into the shape of the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE, and the predetermined plane code shape. For example, the light-blocking member may be formed into code patterns CP and light-blocking dummy patterns DP via exposure and patterning processes using a half-tone mask.
To this end, the half-tone mask may include light-shielding portions in line with the positions and shape of the code patterns CP, semi-transmissive portions in line with the shape and positions of the light-blocking dummy patterns DP, and transmissive portions in line with emission areas EA1to EA4. By the patterning process using the halftone mask, the code patterns CP are formed on the front surfaces of the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE, and the light-blocking dummy patterns DP are formed at substantially the same time. The thickness Ph (or height) of the light-blocking dummy patterns DP formed in line with the semi-transmissive portions of the halftone mask is formed within such a predetermined thickness that the light-blocking dummy patterns DP can block light, but are not recognized as a pattern by the touch input device20. The thickness Ph of the light-blocking dummy patterns DP that allows the light-blocking dummy patterns DP to block the light may be determined based on the use environments such as, for example, the outside brightness of the display device10, the intensity and the wavelength band of infrared light of the touch input device20. The width of the code patterns CP may be about equal to the width of the light-blocking dummy patterns DP depending on the width of the touch electrodes SEN. The thickness Ch (or height) of the code patterns CP is larger than the thickness Ph of the light-blocking dummy patterns DP by a predetermined thickness or more. Accordingly, the code patterns CP have a greater light-blocking rate than the light-blocking dummy patterns DP and are recognized as the code patterns CP by the touch input device20. The light-blocking member forming the code patterns CP and the light-blocking dummy patterns DP may be formed of a material including an infrared or ultraviolet absorbing material. For example, the light-blocking members may include a material including an inorganic or organic pigment. Herein, the inorganic pigment may be a pigment containing at least one compound of, for example, carbon black, cyanine, polymethine, anthraquinone, and phthalocyanine-based compounds. The organic pigment may include, but is not limited to, at least one of lactam black, perylene black, and aniline black. The third touch insulating layer TINS3is formed on the driving electrodes TE and the sensing electrodes RE including the code patterns CP and the light-blocking dummy patterns DP. The third touch insulating layer TINS3may provide a flat surface over the driving electrodes TE, the sensing electrodes RE and the connection electrodes BE1, which have different heights. To this end, the third touch insulating layer TINS3may be formed of an inorganic layer such as, for example, a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer, or an aluminum oxide layer. Alternatively, the third touch insulating layer TINS3may be formed of an organic layer such as, for example, an acryl resin, an epoxy resin, a phenolic resin, a polyamide resin and a polyimide resin. A plurality of color filter layers CFL1, CFL3and CFL4may be formed on the touch sensing unit TSU. For example, a plurality of color filter layers CFL1, CFL3and CFL4may be disposed on the third touch insulating layer TINS3in the form of a plane. 
Each of the color filter layers may be formed on the third touch insulating layer TINS3so that the color filter layers overlap the first to fourth emission areas EA1to EA4, or may be formed on the second touch insulating layer TINS2including the driving electrodes TE and the sensing electrodes RE so that the color filter layers overlap the first to fourth emission areas EA1to EA4. The first color filter CFL1may be disposed on the first emission area EA1emitting light of the first color, the second color filter may be disposed on the second emission area EA2emitting light of the second color, and the third color filter CFL3may be disposed on the third emission area EA3emitting light of the third color. In addition, the second color filter may be disposed also on the fourth emission area EA4that emits light of the second color. FIG.12is a cross-sectional view illustrating processing steps of a method of fabricating the code patterns and the light-blocking patterns shown inFIGS.10and11according to an embodiment of the present disclosure. Referring toFIG.12, a light-blocking member CPL containing inorganic or organic pigments may be applied on the front surface of the second touch insulating layer TINS2including the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE. A halftone mask HMK is placed on the front side of the substrate SUB on which the light-blocking member CPL is applied, which has transmissive portions TM, first semi-transmissive portions SM1, and light-shielding portions SH. The light-shielding portions SH formed in the halftone mask HMK are in line with the shape and positions of the code patterns CP. The first semi-transmissive portions SM1formed in the halftone mask HMK are in line with the shape and positions of the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE. The transmissive portions TM formed in the halftone mask HMK are in line with the shape and positions of the emission areas EA1to EA4. While the halftone mask HMK is placed, an exposure process is performed for a predetermined period of time. A dry or wet etching process may be performed after the exposure process. FIG.13is a cross-sectional view illustrating a processing step of a method of patterning the code patterns and the light-blocking patterns shown inFIGS.11and12according to an embodiment of the present disclosure. Referring toFIG.13, after the light-blocking member CPL is exposed and etched by the halftone mask HMK, light-blocking dummy patterns DP and code patterns CP are formed on the front surfaces of the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE. For example, the light-blocking dummy patterns DP are formed on some positions of the front surfaces of the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE in line with the first semi-transmissive portions SM1of the halftone mask HMK, with the thickness Ph determined by the light transmittance of the first semi-transmissive portions SM1. The thickness Ph (or height) of the light-blocking dummy patterns DP formed in line with the first semi-transmissive portions SM1may be within such a predetermined thickness that the light-blocking dummy patterns DP can block light, but are not recognized as a pattern by the touch input device20. 
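The dependence of the remaining pattern thickness on the mask-region transmittance, for the light-blocking dummy patterns DP described here and for the code patterns CP described next, can be illustrated with a minimal sketch. This is a toy model under assumed values, not a process recipe from the embodiment: the transmittance figures, the applied thickness, the recognition threshold and the linear dose-to-thickness helper are hypothetical, and the real thicknesses Ph and Ch would be set by the exposure dose, the light-blocking material and the etching conditions, as well as the use environment of the touch input device 20.

```python
# Minimal illustrative sketch (not from the embodiment): a simplified model of how
# the transmittance of each halftone-mask region could determine the thickness of
# the light-blocking member CPL remaining after exposure and etching.
# All numeric values, names, and the linear dose model are assumptions.

APPLIED_THICKNESS_UM = 2.0      # assumed as-coated thickness of the light-blocking member CPL
RECOGNITION_THRESHOLD_UM = 1.2  # assumed minimum thickness the touch input device resolves as a pattern

# Assumed transmittance of each halftone-mask region (0 = opaque, 1 = fully transmissive)
MASK_REGIONS = {
    "light_shielding_SH": 0.05,      # aligned with the code patterns CP
    "semi_transmissive_SM1": 0.55,   # aligned with the electrodes (light-blocking dummy patterns DP)
    "transmissive_TM": 1.0,          # aligned with the emission areas EA1 to EA4
}

def remaining_thickness(transmittance: float) -> float:
    """Toy linear dose model: the more light a region passes, the more material is removed."""
    return max(0.0, APPLIED_THICKNESS_UM * (1.0 - transmittance))

if __name__ == "__main__":
    for region, t in MASK_REGIONS.items():
        h = remaining_thickness(t)
        recognized = h >= RECOGNITION_THRESHOLD_UM
        print(f"{region:24s} transmittance={t:.2f} -> thickness={h:.2f} um, "
              f"{'recognized as pattern (Ch)' if recognized else 'blocks light only (Ph) or removed'}")
```

With these assumed numbers, the region in line with the light-shielding portions SH leaves a thickness above the threshold (corresponding to the code patterns CP), the region in line with the first semi-transmissive portions SM1 leaves a thinner layer that blocks light without being recognized (the light-blocking dummy patterns DP), and the fully transmissive regions TM are cleared so that the emission areas EA1 to EA4 are exposed.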
The code patterns CP are formed on some positions of the front surfaces of the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE in line with the light-shielding portions SH of the halftone mask HMK, with the thickness Ch determined by the light transmittance of the light-shielding portions SH. The thickness Ch (or height) of the code patterns CP formed in line with the light-shielding portions SH may be greater than the thickness Ph of the light-blocking dummy patterns DP because the light-shielding portions SH have a lower light transmittance. The code patterns CP may have a higher light-blocking rate than the light-blocking dummy patterns DP and may be recognized as the code patterns CP by the touch input device 20 by lowering the reflectance of infrared light. The light-blocking member CPL in line with the transmissive portions TM of the half-tone mask HMK is completely removed during an etching process after the exposure. After the light-blocking member CPL is completely removed from the front surfaces of the emission areas EA1 to EA4 in line with the transmissive portions TM, the emission areas EA1 to EA4 are exposed on the front side. The third touch insulating layer TINS3 may be formed on the driving electrodes TE and the sensing electrodes RE including the code patterns CP and the light-blocking dummy patterns DP. The plurality of color filter layers CFL1, CFL3 and CFL4 may be disposed in a plane shape on the third touch insulating layer TINS3. FIG. 14 is a cross-sectional view showing in detail code patterns and light-blocking patterns according to an embodiment of the present disclosure. FIG. 15 is an enlarged, cross-sectional view showing a cross-sectional structure of a code pattern shown in area C1 of FIG. 14 according to an embodiment of the present disclosure. Referring to FIGS. 14 and 15, by patterning the light-blocking member CPL applied on the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE, the code patterns CP and light-blocking dummy patterns DP are formed on the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE. The thickness Ch (or height) of the code patterns CP is larger than the thickness Ph (or height) of the light-blocking dummy patterns DP, and thus, the code patterns CP have a greater light-blocking rate than the light-blocking dummy patterns DP, so that the code patterns CP are recognized as the code patterns CP by the touch input device 20. At least one side surface of each of the code patterns CP may be inclined by a first inclination aθ (also referred to as an angle of inclination) with respect to the second touch insulating layer TINS2. The front surface of each of the code patterns CP may be inclined by a second inclination bθ (also referred to as an angle of inclination) with respect to the second touch insulating layer TINS2. As can be seen from the cross-sectional structure shown in FIG. 15 along the shorter axis of the code patterns CP, the side surfaces and front surfaces of the code patterns CP have slopes of different first and second inclinations aθ and bθ, respectively. Accordingly, infrared light incident from the front side of the code patterns CP may be refracted and diffusely reflected toward the lateral sides of the code patterns CP. As a result, the diffuse reflectance of the code patterns CP may be higher than that of the light-blocking dummy patterns DP, and the amount of infrared light reflected from the code patterns CP to the touch input device 20 may be reduced.
As such, the code patterns CP may have a greater refractive index toward the sides than the light-blocking dummy patterns DP. In addition, the code patterns CP may have a greater diffuse reflectance than the light-blocking dummy patterns DP and may be recognized as the code patterns CP by the touch input device20. As a result, as the diffuse reflectance of the code patterns CP increases, the recognition rate of the code patterns CP recognized by the touch input device20can also be increased. FIG.16is a cross-sectional view illustrating processing steps of a method of fabricating the code patterns and the light-blocking patterns shown inFIG.14according to an embodiment of the present disclosure. Referring toFIG.16, a light-blocking member CPL containing inorganic or organic pigments may be applied on the front surface of the second touch insulating layer TINS2including the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE. A halftone mask HMK is placed on the front side of the substrate SUB on which the light-blocking member CPL is applied, which has transmissive portions TM, first semi-transmissive portions SM1, second semi-transmissive portions SM2and light-shielding portions SH. The second semi-transmissive portions SM2formed in the halftone mask HMK are located on the both sides of each of the light-shielding portions SH. The second semi-transmissive portions SM2located on the both sides of each of the light-shielding portions SH may be in line with the shape and the positions of the code patterns CP. The first semi-transmissive portions SM1formed in the halftone mask HMK are in line with the shape and positions of the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE. The transmissive portions TM formed in the halftone mask HMK are in line with the shape and positions of the emission areas EA1to EA4. While the halftone mask HMK is placed, an exposure process is performed for a predetermined period of time. A dry or wet etching process may be performed after the exposure process. FIG.17is a cross-sectional view illustrating processing steps of a method of patterning the code patterns and the light-blocking patterns shown inFIG.16according to an embodiment of the present disclosure. Referring toFIG.17, the light-blocking member CPL is exposed to light and etched by the halftone mask HMK shown inFIG.16, so that light-blocking dummy patterns DP and code patterns CP are formed on the front surfaces of the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE. For example, the light-blocking dummy patterns DP are formed on some positions of the front surfaces of the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE in line with the first semi-transmissive portions SM1of the halftone mask HMK, with the thickness Ph determined by the light transmittance of the first semi-transmissive portions SM1. The thickness Ph (or height) of the light-blocking dummy patterns DP formed in line with the first semi-transmissive portions SM1may be within such a predetermined thickness that the light-blocking dummy patterns DP can block light but are not recognized as a pattern by the touch input device20. 
Code patterns CP are formed on some positions of the front surfaces of the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE in line with the light-shielding portions SH and the second semi-transmissive portions SM2 located on both sides of each of the light-shielding portions SH, with the thickness Ch determined by the light transmittance of the light-shielding portions SH and the second semi-transmissive portions SM2. The thickness Ch (or height) of the code patterns CP formed in line with the light-shielding portions SH may be greater than the thickness Ph of the light-blocking dummy patterns DP because the light-shielding portions SH have a lower light transmittance. In addition, since the light transmittance of the second semi-transmissive portions SM2 is higher than the light transmittance of the light-shielding portions SH and lower than the light transmittance of the first semi-transmissive portions SM1, slopes having different first and second inclinations aθ and bθ are formed on the side surfaces and the front surface of each of the code patterns CP. Accordingly, the code patterns CP may have a higher diffuse reflectance than the light-blocking dummy patterns DP and may be recognized as the code patterns CP by the touch input device 20 by lowering the reflectance of infrared light on the front side. The light-blocking member CPL in line with the transmissive portions TM of the half-tone mask HMK is completely removed during an etching process after the exposure. After the light-blocking member CPL is completely removed from the front surfaces of the emission areas EA1 to EA4 in line with the transmissive portions TM, the emission areas EA1 to EA4 are exposed on the front side. The third touch insulating layer TINS3 may be formed on the driving electrodes TE and the sensing electrodes RE including the code patterns CP and the light-blocking dummy patterns DP. The plurality of color filter layers CFL1, CFL3 and CFL4 may be disposed in a plane shape on the third touch insulating layer TINS3. FIG. 18 is a cross-sectional view showing in detail code patterns and light-blocking patterns according to an embodiment of the present disclosure. FIG. 19 is a cross-sectional view illustrating processing steps of a method of fabricating the code patterns and the light-blocking patterns shown in FIG. 18 according to an embodiment of the present disclosure. Referring to FIGS. 18 and 19, a light-blocking member CPL including a transparent or semi-transparent pigment may be applied on the front surface of the second touch insulating layer TINS2 including the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE. The transparent or semi-transparent pigment may include at least one compound of, for example, cyanine, polymethine, anthraquinone, and phthalocyanine compounds. Accordingly, the light-blocking member CPL including the transparent or semi-transparent pigment is patterned to form the light-blocking dummy patterns DP and the code patterns CP, and thus, the light-blocking dummy patterns DP and the code patterns CP may be transparent or semi-transparent, including transparent or semi-transparent pigments. To expose the light-blocking dummy patterns DP and the code patterns CP to light, a halftone mask HMK, which includes transmissive portions TM, first semi-transmissive portions SM1, second semi-transmissive portions SM2 and light-shielding portions SH, is placed on the front side of the substrate SUB on which the light-blocking member CPL is applied.
The second semi-transmissive portions SM2formed in the halftone mask HMK are located on the both sides of each of the light-shielding portions SH. The second semi-transmissive portions SM2located on the both sides of each of the light-shielding portions SH may be in line with the shape and the positions of the code patterns CP. The first semi-transmissive portions SM1formed in the halftone mask HMK are in line with the shape and positions of the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE. The transmissive portions TM formed in the halftone mask HMK are in line with the shape and positions of the emission areas EA1to EA4. While the halftone mask HMK is placed, an exposure process is performed for a predetermined period of time. A dry or wet etching process may be performed after the exposure process. FIG.20is a cross-sectional view illustrating processing steps of a method of patterning the code patterns and the light-blocking patterns shown inFIG.18according to an embodiment of the present disclosure. Referring toFIG.20, the light-blocking member CPL is exposed to light and etched by the halftone mask HMK shown inFIG.19, so that light-blocking dummy patterns DP and code patterns CP are formed on the front surfaces of the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE. For example, the light-blocking dummy patterns DP are formed on some positions of the front surfaces of the driving electrodes TE, the sensing electrodes RE and the dummy electrodes DE in line with the first semi-transmissive portions SM1of the halftone mask HMK, with the thickness Ph determined by the light transmittance of the first semi-transmissive portions SM1. Code patterns CP are formed on some positions of the front surfaces of the driving electrodes TE, the sensing electrodes RE and the dummy electrode DE in line with the light-shielding portions SH and second semi-transmissive portions SM2located on the both sides of each of the light-shielding portions SH. In this instance, since the light transmittance of the second semi-transmissive portions SM2is higher than the light transmittance of the light-shielding portions SH and lower than the light transmittance of the first semi-transmissive portions SM1, slopes having different first and second inclinations aθ and bθ are formed on the side surfaces AH and the front surface FH of each of the code patterns CP. The light-blocking member CPL in line with the transmissive portions TM of the half-tone mask HMK is completely removed during an etching process after the exposure. After the light-blocking member CPL is completely removed from the front surfaces of the emission areas EA1to EA4in line with the transmission portions TM, the emission areas EA1to EA4are exposed on the front side. The third touch insulating layer TINS3may be formed on the driving electrodes TE and the sensing electrodes RE including the code patterns CP and the light-blocking dummy patterns DP. The plurality of color filter layers CFL1, CFL3and CFL4may be disposed in a plane shape on the third touch insulating layer TINS3. FIG.21is an enlarged plan view showing the arrangement of the code patterns and the light-blocking patterns shown inFIGS.18and20according to an embodiment of the present disclosure. 
As shown in FIG. 21, slopes having different first and second inclinations aθ and bθ are formed on the side surfaces and the front surface of each of the code patterns CP, so that the code patterns CP have a higher diffuse reflectance than the light-blocking dummy patterns DP due to the slopes having the different first and second inclinations aθ and bθ. As a result, the code patterns CP having the slopes with the first and second inclinations aθ and bθ are recognized as being dark by the touch input device 20 because the front reflectance of infrared light is low, and can be recognized as the code patterns CP. For example, even though the light-blocking dummy patterns DP and the code patterns CP are formed to be transparent or semi-transparent as they contain transparent or semi-transparent pigments, the code patterns CP have a higher diffuse reflectance than the light-blocking dummy patterns DP due to the slopes having the first and second inclinations aθ and bθ. Therefore, the code patterns CP can be recognized by the touch input device 20 as the code patterns CP since they have a lower front reflectance of infrared light. FIG. 22 is a cross-sectional view showing the cross-sectional structure taken along line I-I′ of FIG. 9 according to an embodiment of the present disclosure. Referring to FIG. 22, a plurality of color filter layers CFL1, CFL3 and CFL4 may be formed directly on each of the driving electrodes TE and the sensing electrodes RE including the code patterns CP and the light-blocking dummy patterns DP. For example, a plurality of color filter layers CFL1, CFL3 and CFL4 in a plane shape may be disposed directly on the driving electrodes TE and the sensing electrodes RE including the code patterns CP and the light-blocking dummy patterns DP, without any additional insulating layer being disposed for planarization. The color filter layers are formed on the second touch insulating layer TINS2 including the driving electrodes TE and the sensing electrodes RE so that they overlap the first to fourth emission areas EA1 to EA4, respectively. The first color filter CFL1 may be disposed on the first emission area EA1 emitting light of the first color, and the second color filter may be disposed on the second emission area EA2 emitting light of the second color. The third color filter CFL3 may be disposed on the third emission area EA3 emitting light of the third color, and the second color filter may be disposed also on the fourth emission area EA4 emitting light of the second color. According to an embodiment of the present disclosure as described above, the display device and the touch input system including the same can generate the touch coordinate data of the touch input device 20 without complicated calculation or correction by using the code patterns CP of the display panel 100, and can perform touch input by the touch input device 20. For example, according to embodiments of the present disclosure, touch input features can be performed based on accurate input coordinates, manufacturing costs can be saved, power consumption can be reduced, and the driving process can become more efficient. In addition, in the display device according to an embodiment and the touch input system including the same, the recognition rate of the code patterns CP can be increased based on the configuration of the thickness and inclination of the code patterns CP.
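As a purely hypothetical illustration of the recognition step, the sketch below shows how a pen-type touch input device might treat the code patterns CP as dark cells in a captured infrared frame, since their front reflectance of infrared light is lower than that of the light-blocking dummy patterns DP. The threshold value, the grid pitch, the frame contents and the helper functions are assumptions for illustration only; the actual decoding of code patterns into touch coordinate data used by the embodiment is not reproduced here.

```python
# Hypothetical sketch (not the disclosed algorithm): locating code patterns CP in a
# captured infrared frame by treating them as dark spots, because the sloped code
# patterns reflect less IR toward the sensor than the light-blocking dummy patterns DP.
# Threshold and grid spacing are assumed values.

from typing import List, Tuple

IR_DARK_THRESHOLD = 0.3   # assumed normalized intensity below which a cell counts as "dark"
GRID_PITCH_UM = 300.0     # assumed spacing between candidate code-pattern positions

def detect_code_cells(ir_frame: List[List[float]]) -> List[Tuple[int, int]]:
    """Return (row, col) indices of grid cells whose reflected-IR intensity is low."""
    return [
        (r, c)
        for r, row in enumerate(ir_frame)
        for c, intensity in enumerate(row)
        if intensity < IR_DARK_THRESHOLD
    ]

def cells_to_offsets_um(cells: List[Tuple[int, int]]) -> List[Tuple[float, float]]:
    """Convert detected cells into physical offsets within the captured patch."""
    return [(r * GRID_PITCH_UM, c * GRID_PITCH_UM) for r, c in cells]

if __name__ == "__main__":
    # Toy 4x4 normalized IR frame: low values where code patterns CP absorb/scatter IR.
    frame = [
        [0.8, 0.8, 0.2, 0.8],
        [0.8, 0.1, 0.8, 0.8],
        [0.8, 0.8, 0.8, 0.2],
        [0.1, 0.8, 0.8, 0.8],
    ]
    cells = detect_code_cells(frame)
    print("dark cells:", cells)
    print("offsets (um):", cells_to_offsets_um(cells))
```

Mapping the detected dark cells to absolute screen coordinates would additionally require the code-pattern encoding scheme of the display panel 100, which this sketch does not model.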
In addition, as the light-blocking dummy patterns DP are added on the touch electrodes SEN of the display panel100, the light reflectance on the display panel100may be reduced. As a result, the recognition rate and accuracy of the code patterns CP and code information can be increased. FIGS.23and24are perspective views showing a display device according to an embodiment of the present disclosure. In the example shown inFIGS.23and24, a display device10is a foldable display device that is folded in the first direction (x-axis direction). The display device10may remain folded as well as unfolded. The display device10may be folded inward (in-folding manner) such that the front surface is located inside. When the display device10is bent or folded in the in-folding manner, a part of the front surface of the display device10may face the other part of the front surface. Alternatively, the display device10may be folded outward (out-folding manner) such that the front surface is located outside. When the display device10is bent or folded in the out-folding manner, a part of the rear surface of the display device10may face the other part of the rear surface. The first non-folding area NFA1may be disposed on one side, for example, the right side of the folding area FDA. The second non-folding area NFA2may be disposed on the opposite side, for example, the left side of the folding area FDA. The touch sensing unit TSU according to an embodiment of the present disclosure may be formed and disposed on each of the first non-folding area NFA1and the second non-folding area NFA2. The first folding line FOL1and the second folding line FOL2may extend in the second direction (y-axis direction), and the display device10may be folded in the first direction (x-axis direction). As a result, the length of the display device10in the first direction (the x-axis direction) may be reduced to about half, increasing the portability of the display device10. The direction in which the first folding line FOL1and the second folding line FOL2are extended is not limited to the second direction (y-axis direction). For example, the first folding line FOL1and the second folding line FOL2may extend in the first direction (x-axis direction), and the display device10may be folded in the second direction (y-axis direction) according to an embodiment. In such a case, the length of the display device10in the second direction (y-axis direction) may be reduced to about half. Alternatively, the first folding line FOL1and the second folding line FOL2may extend in a diagonal direction of the display device10between the first direction (x-axis direction) and the second direction (y-axis direction) according to an embodiment. In such a case, the display device10may be folded in a triangle shape. When the first folding line FOL1and the second folding line FOL2extend in the second direction (y-axis direction), the length of the folding area FDA in the first direction (x-axis direction) may be smaller than the length in the second direction (y-axis direction). In addition, the length of the first non-folding area NFA1in the first direction (x-axis direction) may be larger than the length of the folding area FDA in the first direction (x-axis direction). The length of the second non-folding area NFA2in the first direction (x-axis direction) may be larger than the length of the folding area FDA in the first direction (x-axis direction). The first display area DA1may be disposed on the front side of the display device10. 
The first display area DA1may overlap the folding area FDA, the first non-folding area NFA1, and the second non-folding area NFA2. Therefore, when the display device10is unfolded, images may be displayed on the front side of the folding area FDA, the first non-folding area NFA1and the second non-folding area NFA2of the display device10. The second display area DA2may be disposed on the rear side of the display device10. The second display area DA2may overlap the second non-folding area NFA2. Therefore, when the display device10is folded, images may be displayed on the front side of the second non-folding area NFA2of the display device10. Although the through hole TH where a component such as, for example, camera SDA is formed is located in the first non-folding area NFA1inFIGS.23and24, embodiments of the present disclosure are not limited thereto. For example, in embodiments, the through hole TH or the camera SDA may be located in the second non-folding area NFA2or the folding area FDA. FIGS.25and26are perspective views showing a display device according to an embodiment of the present disclosure. In the example shown inFIGS.25and26, a display device10is a foldable display device that is folded in the second direction (y-axis direction). The display device10may remain folded as well as unfolded. The display device10may be folded inward (in-folding manner) such that the front surface is located inside. When the display device10is bent or folded in the in-folding manner, a part of the front surface of the display device10may face the other part of the front surface. Alternatively, the display device10may be folded outward (out-folding manner) such that the front surface is located outside. When the display device10is bent or folded in the out-folding manner, a part of the rear surface of the display device10may face the other part of the rear surface. The display device10may include a folding area FDA, a first non-folding area NFA1, and a second non-folding area NFA2. The display device10can be folded at the folding area FDA, and cannot be folded at the first non-folding area NFA1and the second non-folding area NFA2. The first non-folding area NFA1may be disposed on one side, for example, the lower side of the folding area FDA. The second non-folding area NFA2may be disposed on the other side, for example, the upper side of the folding area FDA. The touch sensing unit TSU according to an embodiment of the present disclosure may be formed and disposed on each of the first non-folding area NFA1and the second non-folding area NFA2. The folding area FDA may be an area bent with a predetermined curvature over the first folding line FOL1and the second folding line FOL2. Therefore, the first folding line FOL1may be a boundary between the folding area FDA and the first non-folding area NFA1, and the second folding line FOL2may be a boundary between the folding area FDA and the second non-folding area NFA2. The first folding line FOL1and the second folding line FOL2may extend in the first direction (x-axis direction) as shown inFIGS.25and26, and the display device10may be folded in the second direction (y-axis direction). As a result, the length of the display device10in the second direction (the y-axis direction) may be reduced to about half, thus, increasing the portability of the display device10. The direction in which the first folding line FOL1and the second folding line FOL2are extended is not limited to the first direction (x-axis direction). 
For example, the first folding line FOL1and the second folding line FOL2may extend in the second direction (y-axis direction), and the display device10may be folded in the first direction (x-axis direction). In such a case, the length of the display device10in the first direction (x-axis direction) may be reduced to about half. Alternatively, the first folding line FOL1and the second folding line FOL2may extend in a diagonal direction of the display device10between the first direction (x-axis direction) and the second direction (y-axis direction). In such a case, the display device may be folded in a triangle shape. When the first folding line FOL1and the second folding line FOL2extend in the first direction (x-axis direction) as shown inFIGS.25and26, the length of the folding area FDA in the second direction (y-axis direction) may be smaller than the length in the first direction (x-axis direction). In addition, the length of the first non-folding area NFA1in the second direction (y-axis direction) may be larger than the length of the folding area FDA in the second direction (y-axis direction). The length of the second non-folding area NFA2in the second direction (y-axis direction) may be larger than the length of the folding area FDA in the second direction (y-axis direction). The first display area DA1may be disposed on the front side of the display device10. The first display area DA1may overlap the folding area FDA, the first non-folding area NFA1, and the second non-folding area NFA2. Therefore, when the display device10is unfolded, images may be displayed on the front side of the folding area FDA, the first non-folding area NFA1and the second non-folding area NFA2of the display device10. The second display area DA2may be disposed on the rear side of the display device10. The second display area DA2may overlap the second non-folding area NFA2. Therefore, when the display device10is folded, images may be displayed on the front side of the second non-folding area NFA2of the display device10. Although the through hole TH where the camera SDA (or another component) is disposed is located in the second non-folding area NFA2inFIGS.25and26, embodiments of the present disclosure are not limited thereto. For example, in embodiments, the through hole TH may be located in the first non-folding area NFA1or the folding area FDA. As is traditional in the field of the present disclosure, embodiments are described, and illustrated in the drawings, in terms of functional blocks, units and/or modules. Those skilled in the art will appreciate that these blocks, units and/or modules are physically implemented by electronic (or optical) circuits such as logic circuits, discrete components, microprocessors, hard-wired circuits, memory elements, wiring connections, etc., which may be formed using semiconductor-based fabrication techniques or other manufacturing technologies. In the case of the blocks, units and/or modules being implemented by microprocessors or similar, they may be programmed using software (e.g., microcode) to perform various functions discussed herein and may optionally be driven by firmware and/or software. Alternatively, each block, unit and/or module may be implemented by dedicated hardware, or as a combination of dedicated hardware to perform some functions and a processor (e.g., one or more programmed microprocessors and associated circuitry) to perform other functions. 
While the present disclosure has been particularly shown and described with reference to embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present disclosure as defined by the following claims.